Google Assistant is getting an overhaul on smartphones, and it’s discarding one of 2016’s biggest tech fads in the process.
When Google Assistant arrived a couple of years ago, chatbots were all the rage in products like Facebook Messenger and Microsoft’s Skype. Simulating a back-and-forth conversation was supposed to be more efficient than poking around in traditional apps, and chatbot proponents hyped this model as the future of software design. Google Assistant itself debuted as a feature within Google’s Allo messaging app, so you could exchange text messages with the search giant just like you would with a friend.
Google has since soured on this approach and has “paused” development on Allo. The new version of Google Assistant–available now on Android, and coming to the iPhone in a few weeks–emphasizes visual response cards that you can interact with, and that stay on the screen even as you ask follow-up questions. Meanwhile, the actual transcript has moved from the bottom to the top of the screen, so in most cases you’ll only see the most recent topic of conversation. The overall experience is less like texting and more like, well, using an app.
“When we built the Assistant, you can clearly see inspiration from Allo in what we did, in this chatty back-and-forth model where you’re talking with an intelligent assistant,” says Chris Perry, the Google product manager who leads Assistant on Android. “And we found that was somewhat restrictive of a model for us. It ended up constraining us in a number of different ways.”
“Everyone’s kind of trying to figure out how you should do things”
One major problem with the chatbot approach was that it was too linear, says Ye-Jeong Kim, Google’s user experience manager for Search and Assistant. You might get a visual card when asking about the weather, but if you asked a follow-up about wind chill or a future forecast, the resulting chat transcript would push the original weather card off the screen. This can be disorienting, so now Google will simply update the original card with new information as you ask for it.
“If you think about visual–unlike spoken or written conversation–visual doesn’t have to be so ephemeral,” Kim says. “It’s lingering, and helping to aid a conversation.”
Besides, not every Google Assistant device is conducive to dialog bubbles. When Assistant arrived in 2016, Google was mainly pushing it through Allo and on its new Google Home smart speaker. Now, Google Assistant is available in cars through Android Auto, on kitchen counters with devices like the Lenovo Smart Display, and on televisions through Chromecast and Android TV. A chat-like interface doesn’t make as much sense on those devices.
Even on smartphones, users are doing more than just talking to their virtual assistants. Perry says that about half of interactions with Assistant’s smartphone app still involve touching the screen. By redesigning Assistant with more visuals, Google is setting it up to be more consistent across all devices and situations.
“We wanted to build a framework that can actually expand and be more fluidly adaptive to the various contexts you’re in,” Kim says.
That’s not to say the conversational elements are going away completely. Google Assistant’s responses will still include text at the top of the screen, and will still suggest follow-ups at the bottom. By swiping downward, you can still scroll through previous queries. But overall, the hope is that you won’t be generating lengthy transcripts with each conversation.
“It’s not this back, forth, back, forth, back, forth,” Perry says. “It’s an immersive experience inside the canvas itself.”
Despite the new design, Perry acknowledges that Google–and anyone else working on voice assistants–still has a lot of work to do. Embracing and then abandoning the chatbot model is a sign that no one has quite cracked the code of putting virtual assistants on screens.
“It feels a lot like 2009 for me,” Perry says, “where we’re building apps, we have this new platform, and everyone’s kind of trying to figure out how you should do things.”
Looping in app makers
Even with Assistant’s new design, Google stresses that it’s not trying to replace apps. Don’t expect Google Assistant to show you recent posts from Twitter, or provide its own miniature version of Instagram’s camera. For those sorts of uses, you might as well just go directly to the app. (Google Assistant can, however, open those apps for you.)
Still, it’s likely that the lines between Assistant and standalone apps will blur over time. Perry points to flights as an example. To book a flight today, you might open the app for an airline such as United or an aggregator such as Expedia. But in the future, you might ask Google Assistant for flights to a certain place on a certain date, and it might provide you with its own interface for browsing all the options. At that point, is a standalone app really necessary?
“You look at all the different interactions that you have with your phone, there’s a lot of them that can be made easier,” Perry says.
The big challenge will be getting outside developers on board with the idea. Google is laying the groundwork by letting third-party developers add their own visual responses on phones, so a company like Starbucks can try to upsell you on food items (with pictures!) after you’ve placed a pickup order. Google Assistant’s interface would then act as a mediator of sorts, whisking users around to different third-party skills or apps based on what they’re trying to accomplish.
But while Google has a strong incentive to funnel more smartphone usage through Assistant–more interactions mean more fodder for the company’s search and advertising machine–it’s easier for developers to stick with the apps they already have. The same could be said for users, who’ve spent the last decade getting comfortable with apps as the primary way to interact with smartphones.
Ye-Jeong Kim is hoping the redesign will start to change people’s attitudes. If Google Assistant can become a little more helpful now, users may trust it down the road as it gets better at handling more complex tasks.
“I want users to be able to build a mental model of this relationship with Assistant,” she says, “so they can actually ask it harder things than just ‘Turn on the lights,’ ‘Set the alarm,’ or ‘What’s the weather?'”