I’m trying to type the word angst into my phone, and autocomplete, even as far along as angs, suggests ands. I am sure that as recently as today you have been typing on an iPhone or Android smartphone and experienced this type of angst.
That’s why I am so pleased with SwiftKey, an Android app that for $3.99 replaces the default Android keyboard with its own. Grant it permission to access your communications and social accounts–SMS, email, Facebook, and Twitter–and the app (and the cloud-based service behind it) learns how you communicate, then uses that knowledge to predict what you might type in a message. On our phones we blend words with photos, stickers, and emoji characters, so why should a smartphone keyboard mimic a desktop layout? It shouldn’t. SwiftKey even learns how you type–say you always hit R when aiming for E–and quietly reshapes the keyboard’s touch targets to make your typing more accurate. By marrying machine learning with natural language processing, SwiftKey makes dealing with emails and text messages a breeze–and in the process, it has become one of the top paid apps in the Google Play store. (Sorry, iPhone users: Apple doesn’t let developers change its default keyboard.)
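To make the idea concrete, here is a toy sketch–emphatically not SwiftKey’s actual model, just an illustration of the principle–of personalized next-word prediction: count which word tends to follow which in a user’s own messages, then suggest the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy personalized next-word predictor (illustrative only).
# It tallies, per word, which words this user typically types next.
class NextWordPredictor:
    def __init__(self):
        self.followers = defaultdict(Counter)

    def learn(self, message):
        words = message.lower().split()
        # Count each adjacent word pair in the user's own messages.
        for prev, nxt in zip(words, words[1:]):
            self.followers[prev][nxt] += 1

    def predict(self, prev_word):
        counts = self.followers[prev_word.lower()]
        # Suggest the most common follower, or nothing if unseen.
        return counts.most_common(1)[0][0] if counts else None

p = NextWordPredictor()
p.learn("running late see you soon")
p.learn("running late sorry")
print(p.predict("running"))  # prints "late" -- learned from this user's habits
```

A real system layers far more on top–long-range context, typo correction, cloud-scale training–but the core trade is the same one the article describes: the more of your writing the model sees, the better its guesses get.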
SwiftKey is part of a growing movement to anticipate the computing we want to do–and do it for us. “A new range of tools are required to solve a new class of problems that didn’t exist 10 years ago,” says Ben Medlock, SwiftKey’s cofounder and CTO. What’s driving us toward this new era–in which computers add context to our data and anticipate our next need–is a fundamental change in the Internet itself. We live in a world that is always connected. We’re always on social networks: checking our news feeds, sharing articles, and publishing pictures. We’re digitally collecting personal health data, monitoring our sleep patterns with wearable sensors and our weight with Wi-Fi–connected scales. These information streams, combined with the plummeting cost of crunching all that data, allow our machines to sense our needs.
This idea of anticipatory computing is going to be the next big change in our relationship with computers. And it’s coming more quickly than you realize.
Look around the App Store and there are powerful illustrations emerging. The iPad app MindMeld, made by the startup Expect Labs, listens in on your conference call and starts to display relevant information based on what you’re talking about. When I’m speaking, you might see basic facts about me from my Wikipedia page. When the conversation turns to the latest Audi S4, MindMeld displays car photos and even a map showing the location of the closest dealer. By following along and adding context where it can, MindMeld can make a call more fruitful.
Cover–a brand-new app cofounded by Todd Jackson, who worked on such early experiments in anticipatory computing as Gmail’s Priority Inbox and Facebook’s News Feed–is a simple-looking replacement for your Android smartphone’s lock screen. Its secret is that it adapts based on your location. If you are at the office (which it learns from your Wi-Fi network’s address and location), it shows work-related apps such as Salesforce. If you are at home, ESPN and Netflix populate the launcher. “I am a firm believer that we will no longer have to worry about things we currently spend time trying to make work for us,” Jackson says.
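The underlying logic is easy to picture. Here is a toy sketch–not Cover’s actual code, and the network names and app lists are invented–of location-aware app suggestions: map known Wi-Fi networks to places, and places to the apps most used there.

```python
# Hypothetical mapping of Wi-Fi SSIDs to places (illustrative only).
KNOWN_NETWORKS = {"corp-wifi": "work", "home-wifi": "home"}

# Hypothetical mapping of places to the apps a user favors there.
APPS_BY_PLACE = {
    "work": ["Salesforce", "Email", "Calendar"],
    "home": ["ESPN", "Netflix", "Spotify"],
}

def lock_screen_apps(current_ssid):
    # Fall back to a generic set when the network is unrecognized.
    place = KNOWN_NETWORKS.get(current_ssid, "unknown")
    return APPS_BY_PLACE.get(place, ["Phone", "Camera", "Maps"])

print(lock_screen_apps("corp-wifi"))  # prints ['Salesforce', 'Email', 'Calendar']
```

The interesting part, of course, is that an app like Cover builds those tables itself, by watching which networks you join and which apps you open there–no configuration required.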
With a trend this big, Google and Apple are also spending millions racing toward this future. The Google Now service looks at all of your personal data on Google–the contents of your emails, your calendar, your search history, and more–and predicts what you might need to do. If you’re flying, it fetches flight-status updates to check for delays. It pulls up a map to guide you from one appointment to the next, complete with traffic data and even a rough estimate of how much time to allot to get there. All of this pops up on your screen, as if by magic. Could you get it yourself? Sure. This info lives in dozens of apps you could open, but Google stitches it together without your having to ask. Google Now isn’t perfect, but it points to a future in which it might even replace Google Search as we know it.
Google Now, in fact, shows why Apple’s Siri can at best be described as “cute.” Siri isn’t anticipating our next need. (Maybe that’s why earlier this year Apple spent a rumored $40 million to acquire Cue, a startup that had built an intelligent agent to manage your personal agenda but never quite got traction.) What Apple and Google have that no startup can match is raw processing power and virtually unlimited access to our personal data. Much as I like MindMeld, it would be more useful if it could surface notes from my Dropbox or PowerPoint presentations from Microsoft SkyDrive. (Before year’s end, Expect Labs will allow other applications to use MindMeld’s technology.)
Some of you have read this far and are thinking that things like Google Now sound both useful and creepy. You’re right. But as humans, it seems that we’re falling behind in our ability to deal with all the data we create. A smart algorithm can not only find us amid this data smog but also paint a very accurate picture of what we like and dislike, where we might be at a given time, and what we might want. However much we might want to be private, we are not.
This tension is already the subject of debate as we enter this new age. We have started walking down a road full of sensors, smartphones, and digital devices, and there’s no turning back. We’ll have no choice but to grapple with that inevitability, and I don’t pretend to know how the tension will resolve.
All I know is that there is this one app that reads my Twitter stream, my Facebook updates, and my emails, and tells me which word I might want to use next when replying to my friends. I may be squeamish about giving any company access to my private data. But convenience trumps squeamishness (and angst). You’re likely to have to make that trade, too.