Google Hasn’t Cracked The Smart Display’s Complexity Problem

Navigation remains a challenge for the onscreen versions of Google Assistant and Alexa. But it’s tough to solve without turning a simple device into a PC.

Lenovo Smart Display with the Google Assistant [Photos: Zhela912/Wikimedia Commons; courtesy of Lenovo]

Google knows how to give a slick demo. So it was easy to give in to the temptation to be wowed by the “smart displays” that the company was showing off during CES.


In a meeting room this week at the Wynn, away from Google’s temporarily rain-soaked outdoor booth at the Las Vegas Convention Center, the company demoed a tabletop touchscreen device that could display Google Photos albums, bring up directions from Google Maps (and send them to your phone), look up recipes, play music with album art, and of course load videos from YouTube. Google Assistant voice commands controlled most of the action, with the 8-inch touchscreen providing extra control and context.

Google’s software is a work in progress (devices using it won’t ship until later this year), but it gave me the impression that the company is ready to take on touchscreen devices powered by Amazon’s Alexa assistant, such as the Echo Show and Echo Spot. It also revealed an unsolved problem for both companies: How complex should navigation be on a device that’s more like an appliance than a computer? As Amazon and Google are learning, the race toward simplification comes at a cost.

One example: During the demo, Bibo Xu, a Google Assistant product manager, asked for a list of restaurants nearby, then tapped on the screen to select one of the results and bring up its location in Google Maps. But from there, the display offered no way back to the original results list.

“It is a really tricky area, and we’re trying to figure things out,” Xu says. “The logic of exactly what a back [function] looks like is something to be finalized.”

Puzzling over the nature of a back button might seem a bit silly. But once you introduce the concept of going back, you’re dealing with a more complicated hierarchy of menus, and asking users to keep track of where they are as they move between tasks. Suddenly the device is more of a computer than an appliance.

“We’re trying to find what’s the most natural way without having to [include] Back, Home, Star, Favorites, all those things that make computers complex,” says Gummi Hafsteinsson, the product lead for Google Assistant. “You want to keep it as simple as possible.”


The Voice-First Philosophy

The dilemma underscores just how similar Google’s approach is to that of Amazon, which first brought Alexa to touchscreens on the Echo Show last year. As Miriam Daniel, Amazon’s head of product management for Alexa, told me, the company deliberately avoided smartphone concepts such as a home screen, back buttons, and complex menus which might distract from using voice controls.

That’s also the case with Google’s smart displays, which for the most part use visuals as a supplement to the voice-driven Google Assistant. While Google’s home screen is a bit more interactive, with a few widgets for things like reminders and kitchen timers, the emphasis is still on using voice to perform quick, linear tasks.

JBL Link View [Photo: courtesy of Harman]
“Making sure that the experience feels very natural, we’re going to put a lot of work into that,” says Hafsteinsson. “It is an appliance, it’s going to be in people’s kitchens and living rooms and so on and so forth. You want learning a recipe to be really short and simple.”

The biggest distinction between Google and Amazon is that the former isn’t producing its own smart display hardware yet, instead partnering with other vendors to offer 8- or 10-inch screens. Hafsteinsson says this affords users a choice of hardware designs, though he didn’t rule out the possibility of Google making its own hardware.

Google has some time to figure this out. The first smart displays, built by Lenovo, JBL, Sony, and LG, won’t arrive until the summer at the soonest. But there’s also other work to do in the meantime. For instance, Google hasn’t yet released the tools for outside developers to add visual components to their own actions, and Xu says Google and its hardware partners need to do some fine-tuning of audio quality and other features before launch.

My sense, then, is that striking the right balance between simplicity and complexity will be a work in progress even after these devices arrive. And just like with Google’s and Amazon’s smart speakers, the two companies will take cues from each other as they figure this whole voice assistant category out.


About the author

Jared Newman covers apps and technology for Fast Company from his remote outpost in Cincinnati. He also writes for PCWorld and TechHive, and previously wrote for