When you walk into a restaurant, why doesn’t your phone silence itself immediately, sensing that you’ll be enjoying the company of the people you’re eating with rather than receiving calls? Or when you get up from the table after two hours, why doesn’t it hail an Uber, knowing that you need a ride home?
Why is it that your phone can’t automatically do things for you based on your context, but instead you have to hunt through pages and pages of applications to find the app you need, then tap, swipe, tap, exit—taking multiple steps within the app to finally get the relevant information? Your phone should be able to understand who you are and be able to predict what you need in the moment. This is called anticipatory computing, and it is the future of search.
While this concept is not new (Marissa Mayer discussed it in 2008), it isn’t pervasive, and it won’t be until anticipatory computing becomes a core component of the majority of systems. Anticipatory computing is reliant on two pieces: data and user experience.
Expect the Unexpected
In today’s world, the data is available. With more and more sensors built into our phones, they have the capability of knowing where we’ve been, the music we listen to, our upcoming events–the list goes on and on. When we pair this information with all of the data we now store online (e.g., photos, documents, posts), there is more than enough information to understand our patterns and predict our needs.
Despite all of this data, the user experience remains a challenge because it has not addressed the unpredictability of human behavior. To better understand what unpredictable behavior might look like, let’s return to the restaurant example. In addition to wanting your phone to go on silent while you’re sitting down to a meal, there are a variety of other things you may want. Maybe you want a suggestion on what to order, want to check in on Foursquare to let your friends know where you are, or want to send your family a photo of the meal.
It’s also quite possible that you want to do something completely unrelated, like respond to email. Each action may be more or less probable based on current context, but it’s very difficult to predict human behavior with 100% certainty. So what method comes closest to providing the information we want just when we need it?
Between Push and Pull
Today, computers, tablets, and phones provide two primary models for accessing information: pull and push. In the first, the user pulls information by opening an app or visiting a website. The pull case happens when users know exactly what they are looking for (e.g., to check in at a restaurant) and therefore open an app like Foursquare to do so.
In the second model, users receive information through push notifications. Push notifications are great for time-sensitive information when the service sending it knows with a very high degree of confidence that the user wants this information. Phone calls and text messages were the first to use the push model on phones. This was a perfect use case for push, as there is a very high likelihood that the user wants to know about a call or text message at the moment it arrives.
The problem with the push model is the high “user cost” of every notification that is sent: unless there is close to 100% certainty that the user wants the information at that exact moment, the cost of being wrong quickly outweighs the benefit of being right. Even when there is information that you do want (e.g., a tip on what to order), if it is pushed to you at the wrong time, like while you are in the middle of a conversation, the push model breaks down. Over 60% of users currently opt out of push notifications, which indicates that users are not satisfied with the current system.
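This cost/benefit trade-off can be sketched as a simple expected-value rule. The function name, probabilities, and cost figures below are illustrative assumptions, not any real notification API:

```python
# Hypothetical sketch of the push-notification trade-off: only notify
# when the expected benefit of being right outweighs the interruption
# cost of being wrong. All names and numbers are illustrative.

def should_push(p_wanted: float, benefit: float = 1.0, cost: float = 5.0) -> bool:
    """Return True if the expected value of pushing is positive.

    p_wanted -- estimated probability the user wants this info right now
    benefit  -- value to the user when the notification is relevant
    cost     -- "user cost" of an irrelevant interruption
    """
    expected_value = p_wanted * benefit - (1 - p_wanted) * cost
    return expected_value > 0

# With the interruption cost five times the benefit, pushing only pays
# off when confidence exceeds cost / (benefit + cost), i.e. about 83%.
```

Under these assumed numbers, a 90%-confident notification is worth sending, while a 50%-confident one is not, which matches the near-100%-certainty bar described above.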
An anticipatory computing system wouldn’t wait for the user to open Foursquare at a restaurant, because it knows by the time that happens it’s too late—you already ordered before you had a chance to see an awesome suggestion from the secret menu that Foursquare revealed. We believe that success in the world of anticipatory computing lives between the pull and push models.
Learn, Predict, Suggest
We need to build the next generation of systems that can better support this world of probabilistic predictions. A few already exist. The Nest thermostat is a great example of a system with anticipatory computing central to its design. It understands your home’s temperature patterns and typical behavior–when you go to work, when you come home–and your current context (i.e. where you live) to make a prediction on the ideal temperature.
Nest’s entirely new user experience relies on neither the push nor the pull model. It does not require that you pull information from it (e.g., walk up to the thermostat to see the prediction it makes), and it does not force an interruption on you (push). The ambient temperature lives between the push and pull models, and when you feel too hot or too cold, a simple interface lets you correct the system in cases where Nest made the wrong prediction. The benefits of Nest being right far outweigh the cost of adjusting it when it is wrong.
A few applications have replaced core system components with versions designed around anticipatory computing. SwiftKey, a predictive keyboard, is one example. SwiftKey lives between the worlds of push and pull: it appears automatically every time you open the keyboard and never needs to send you a push notification. Every time you use SwiftKey, it leverages your historical communications to predict what you’re going to type given your current phrase and keystrokes. It surfaces its best-guess prediction in an unobtrusive way but allows the user to easily correct the suggestion.
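The learn-predict-suggest loop a predictive keyboard embodies can be illustrated with a toy bigram model. This is only a conceptual stand-in; SwiftKey’s actual models are proprietary and far more sophisticated:

```python
# Toy illustration of a predictive keyboard: count word bigrams in
# past messages, then suggest the most likely next word for the
# current phrase. Purely conceptual; not SwiftKey's real algorithm.

from collections import Counter, defaultdict

class NextWordPredictor:
    def __init__(self):
        self.bigrams = defaultdict(Counter)  # previous word -> next-word counts

    def learn(self, message: str) -> None:
        words = message.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, phrase: str):
        words = phrase.lower().split()
        if not words or words[-1] not in self.bigrams:
            return None  # no history for this word; the user types freely
        # Surface a single best guess, easy for the user to override.
        return self.bigrams[words[-1]].most_common(1)[0][0]

predictor = NextWordPredictor()
predictor.learn("see you at the restaurant")
predictor.learn("meet me at the restaurant")
predictor.suggest("I will be at the")  # most likely next word: "restaurant"
```

The key property mirrored here is the one described above: the prediction is offered unobtrusively (a single suggestion, or nothing when history offers no signal), never pushed as an interruption.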
There’s No Place Like Your Homescreen
With phone apps, it’s about surfacing the right app at the right time and then going deeper, pulling up contextually relevant information. The perfect place to deliver that information is the homescreen, which lives between the notification system (push) and users opening an application (pull). We touch the homescreen between 100 and 150 times a day, providing the opportunity to predict what users want to do next without interrupting them with a push notification. At Aviate, we’re bringing a system designed with anticipatory computing at its core to the homescreen of an Android phone, and then using it to address the unpredictability of human behavior. The goal is to connect users with the right information every time they interact with their phones.
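A minimal sketch of context-driven app surfacing, assuming a made-up usage log (this is not Aviate’s actual system; the contexts, app names, and ranking rule are all illustrative):

```python
# Illustrative sketch: score each app by how often it was used in
# similar situations, then rank. The usage log below is fabricated.

from collections import Counter

usage_history = [
    ("restaurant", "Foursquare"),
    ("restaurant", "Camera"),
    ("restaurant", "Foursquare"),
    ("commute", "Maps"),
    ("commute", "Music"),
]

def rank_apps(context: str, history) -> list:
    """Return apps ordered by usage frequency in the given context."""
    counts = Counter(app for ctx, app in history if ctx == context)
    return [app for app, _ in counts.most_common()]

rank_apps("restaurant", usage_history)  # ["Foursquare", "Camera"]
```

Because the ranking is recomputed from context rather than pushed, a wrong guess costs the user almost nothing: the other apps are still one tap away, just as with Nest and SwiftKey above.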
The next generation of systems will be built with anticipatory computing at the heart of their designs, changing the way we access information. Just imagine finding what you need to know without typing. You’ll never search the same way again.