In 50 to 100 years, I firmly believe that your home could be made of Google.
I know, that sounds weird. But it’s also the ultimate manifestation of Material Design, the design principles Google revealed just two years ago. On the surface, Material Design was a new paradigm for interface design: UI carved out of real objects, casting shadows like real paper. But the philosophy and motivations ran deeper–the design team, at the time, told me that they imagined Material Design as a path to shape-shifting interfaces that could transform to add a room to your home or a screen to your wall. In this future, they suggested, Google could be the intersection of digital infrastructure and physical infrastructure. It wouldn’t have to take over the world anymore because it literally would be the world.
But Material Design wasn’t even an afterthought at the company’s hardware launch this week. Instead, Google recast itself as a distinctly intangible digital assistant with no face. And to support that vision, it introduced three new products, all with Google Assistant at their core. It’s Google as an interaction, rather than an interface–a friend you bring along on a trip just because they’re good with directions and can always find a decent bar, an omnipresent third wheel that’s quick with a GIF.
This shift shows us how the company’s work in AI is leading its products away from screens and materials, and toward conversational interactions with machines themselves.
To be clear, nothing has happened to Material Design. It’s still the reference spec for Android app developers and Google’s own interfaces. And now that Google’s products have been redesigned to these guidelines, they look better across the board, and more cohesive as a single brand.
But the Google Assistant we saw on stage on October 4 is also its polar opposite. It’s Immaterial Design, if you will. It’s not typography or layout or animations. It’s what Google calls “Conversational Actions.” Quite literally, these are conversations pre-scripted by Google (or by third parties using its API). You say “Okay Google, order me an Uber,” and Uber has common questions and responses at the ready–“Where should I pick you up?” “Where are you going?” “Do you need an Uber XL?” With 70 billion data points of its own, Google Assistant can answer all sorts of questions through a good old Google search, too. Ask how many planets are in the solar system, and it’ll have an answer, just like the search bar of yore.
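The pre-scripted flow described above is, at bottom, a script of prompts and slots to fill. Here’s a toy sketch of that idea–hypothetical illustration only, not Google’s actual Actions API:

```python
# Toy model of a pre-scripted conversational action, like the Uber
# example above. Hypothetical code -- not Google's real API.

class ConversationalAction:
    """Walks a user through a fixed script of questions."""

    def __init__(self, questions):
        self.questions = questions   # ordered prompts to ask the user
        self.answers = {}            # prompt -> user's reply
        self.step = 0                # index of the current prompt

    def next_prompt(self):
        """Return the next question, or None when the script is done."""
        if self.step < len(self.questions):
            return self.questions[self.step]
        return None

    def reply(self, text):
        """Record the user's answer to the current prompt and advance."""
        self.answers[self.questions[self.step]] = text
        self.step += 1


# The Uber example as a three-question script.
ride = ConversationalAction([
    "Where should I pick you up?",
    "Where are you going?",
    "Do you need an Uber XL?",
])
```

Each turn, the assistant speaks `next_prompt()`, records the user’s spoken answer with `reply()`, and, once every slot is filled, hands the collected answers off to the service.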
There’s an army of chatbots coming our way. These branded personalities are designed to become our friends and confidants, all while the brands behind them–whether Google, Amazon, or Apple–elbow to ensure that they’re the AI you’re screaming at every time you need to check the weather.
By controlling the conversation, these companies could benefit by being able to subtly suggest–with the demeanor of a friend–what we buy, where we go, and who we trust. Well, maybe. It all depends on whether users actually buy into this vision of human-machine interaction, and it’s not yet clear they will.
The entire Valley is making a major assumption: that we’ll want to talk to our products. Apple’s wireless earbuds will stick Siri right into your ears. Google, too, is betting on this, pushing the Google Assistant toward spoken ubiquity. As hardware chief Rick Osterloh explained on stage during the unveiling this week, the company is “building hardware with Google Assistant at its core.” Aside from the Pixel phone, which you can talk to at any time by saying “Okay Google” or long-pressing the home button, you can also install a Google Home speaker in every room of your house, creating a microphone-and-speaker system that will more or less follow you through your life. It’s an aggressive trend line moving toward enabling voice commands everywhere and anywhere.
There’s just one catch: When’s the last time you talked to Siri? If you use Microsoft Windows, do you spend all day chatting with Cortana? I myself have lived with a Microsoft Kinect controlling my TV and Xbox for several years. I loved it at first! Yet now I use voice only when I literally can’t open an app any other way. Because as it turns out, that third wheel listening in on a conversation is often at odds with the second wheel sitting by your side. (In other words, I’d often bark a command right as my wife started saying something else.)
Google demoed its Home speaker playing music at a party. What party have you ever been to where you could imagine shouting over the music just so a cloud-based service could hear you? Just because Google can stick conversational interfaces–and voice in particular–into every device on earth doesn’t mean we’ll choose to talk to them.
The truth is, the situations where voice and chat interfaces don’t work probably don’t matter so much. Interface design no longer needs to be a zero-sum game. New types of interfaces can take over the world without taking over our lives. The iPhone introduced touch screens to the masses, and yet most of us still use trackpads and keyboards every day. Text messaging has cut into phone calls, but we still connect on voice and video chats. Novel interfaces may tip the scales in how we use software and services, but they don’t cause mass extinctions overnight (unless, okay, your name was BlackBerry). Heck, some people still fax!
Just two years ago, Google’s future as amorphous, programmable goo seemed imminent. But that doesn’t mean Google can’t be other things, too. Our future is not Material Design, or conversational design, or gestures, or computers that read our emotions, or touch screens, or anything else. It’s all of these modalities, available all the time, in every single object worthy of a microchip.
So I believe that in 100 years, your home will still be made of subtle, reactive material interfaces. But that home will also know how to listen and talk. Because Google will be the most powerful infrastructure company of all time when we can tell it to do absolutely anything.