Once again, Google would like to divert some attention away from Amazon’s Alexa.
At Austin’s SXSW conference, the company has erected a “Fun House” on behalf of Google Assistant, with a sock-folding robot, a system for ordering beer from the couch, and a lowrider outside that can find parking spots on demand. It’s similar to the hoopla that Google generated at the CES trade show in January, which included a two-story outdoor booth with a twisting slide and a slew of billboard advertisements around Las Vegas.
There’s also some substance behind the spectacle. Now that Google has sold tens of millions of smart speakers, the pressure is on to build an ecosystem around Assistant that can keep pace with Amazon. To that end, it’s adding a few new features to Google Assistant, mainly for device makers and developers who want to bring voice commands to their products.
“With the Assistant, our mission is to help you get things done, and a large portion of what users need to get done involves third-party services and devices, so I think this is really critical for us,” says Brad Abrams, Google’s group product manager for the Assistant platform.
For device makers who want to put voice controls in their hardware, Google is adding a way to create custom voice commands with less clumsy syntax than the current system. With a connected oven, for instance, you might say, “Hey Google, preheat the oven for chicken,” instead of “Hey Google, ask Geneva Home to preheat the oven for chicken.”
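In effect, the change scopes an action’s grammar to the device the user is addressing, so the Assistant can route an utterance without an explicit invocation phrase. Here’s a minimal sketch of that kind of routing logic in Python; the intent names and matching rules are hypothetical illustrations, not Google’s actual API:

```python
from typing import Optional

# Hypothetical grammars: phrases scoped to a specific device, plus a
# global grammar that applies everywhere. Names are illustrative only.
DEVICE_GRAMMAR = {
    "oven": {
        "preheat the oven": "PreheatIntent",
        "set a timer": "TimerIntent",
    }
}

GLOBAL_GRAMMAR = {
    "what's the weather": "WeatherIntent",
}

def route(utterance: str, device: Optional[str]) -> str:
    """Prefer the grammar of the device the user is talking to,
    falling back to the global grammar, then a fallback intent."""
    text = utterance.lower()
    if device:
        for phrase, intent in DEVICE_GRAMMAR.get(device, {}).items():
            if phrase in text:
                return intent
    for phrase, intent in GLOBAL_GRAMMAR.items():
        if phrase in text:
            return intent
    return "FallbackIntent"

# "Hey Google, preheat the oven for chicken," spoken to the oven,
# resolves without naming the action:
print(route("preheat the oven for chicken", device="oven"))  # PreheatIntent
```

The point of the sketch is the prioritization Abrams describes: the device’s own grammar is checked first, so the “ask Geneva Home to…” framing becomes unnecessary.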
“I don’t want to have that extra triggering [phrase],” Abrams says. “If you’re talking to a particular device, prioritize their grammar for that device.”
It’s a subtle change, but Abrams says it was one of the top requests Google heard from device makers. And in a way, it might help justify building more devices with their own speakers and microphones, even as Google’s own smart speakers get cheap enough to put in every room. That, in turn, furthers Google’s goal of being able to ingest voice commands from practically anywhere.
“We’re moving to a world of ambient computing, where your query to Google is picked up by whatever device is closest to you, and the result is played out on whatever device is best for that,” Abrams says.
More Nags (If You Want Them)
Google also has a plan to stop people from abandoning third-party voice actions over time. On the smartphone version of Google Assistant, users will be able to subscribe to notifications from those actions, the same way they can already ask for daily weather, traffic, or news updates from Google itself.
“One of the things that we’ve noticed is, to be a really successful assistant in the real world, you can’t wait for your client to ask you,” Abrams says. “You have to reach out sometimes and help.”
Magazine publisher Hearst will be among the first partners to support subscriptions, offering daily “wisdom” (factoids, fashion advice, and so on) at whatever time of day the user specifies. Users can also subscribe to alerts for important events, such as breaking news or a price drop on a stock they’re watching. Google Assistant will suggest these subscriptions as users talk with third-party apps.
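Conceptually, this means an action keeps track of who has opted in to which topic, and at what time. The sketch below models that bookkeeping in Python; the data model and field names are hypothetical, not Google’s notification API:

```python
from dataclasses import dataclass

# Hypothetical subscription record for a third-party action's opt-ins.
@dataclass
class Subscription:
    user_id: str
    topic: str        # e.g. "esquire_daily_wisdom" or "stock_price_alert"
    hour_local: int   # preferred delivery hour for daily updates, 0-23

subscriptions = [
    Subscription("user-1", "esquire_daily_wisdom", 8),
    Subscription("user-2", "breaking_news", 0),
]

def due_now(subs, topic, hour):
    """Return the IDs of users subscribed to `topic` whose preferred
    delivery hour matches the current hour."""
    return [s.user_id for s in subs
            if s.topic == topic and s.hour_local == hour]

# At 8 a.m. local time, the daily-wisdom update goes to user-1:
print(due_now(subscriptions, "esquire_daily_wisdom", 8))  # ['user-1']
```

Event-driven alerts (breaking news, a stock price move) would skip the time check entirely and fan out to every subscriber of the topic when the event fires.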
Sheel Shah, Hearst Magazines Digital Media’s executive director of growth and innovation, says the company is starting by letting users subscribe to notifications from Esquire. If this seems to be increasing engagement, Hearst will consider expanding it to other properties, such as Elle horoscopes, which are already available via the Assistant on a one-off basis.
“The idea behind notifications is, it really fills the gap in an area that voice has been very weak up until this point, across both Amazon and Google,” Shah says.
More To Listen To (Unless It’s Music)
Google is playing a bit of catch-up with Amazon by removing audio playback limits from third-party actions. Previously, Google allowed only short audio snippets, which precluded users from listening to heavy rain sounds or a crackling fireplace on their smart speakers. Now users can listen to these kinds of sounds indefinitely, and can pause, resume, or replay the audio.
“We’ve got a lot of interest from people doing things like meditation sessions, playing relaxing sounds, and news briefs,” Abrams says. The Daily Show also plans to use this feature to play extended audio interviews.
Abrams cautions that the new audio capability isn’t meant for music. Streaming music services are instead supposed to work directly with Google to get on the company’s white list, which currently includes Spotify, Pandora, TuneIn, iHeartRadio, and Deezer. But while Abrams says Google is happy to work with more music providers, getting set up is a time-consuming process.
“It’s a slightly different API because we need inventory data,” Abrams says. “We need to know what music they can play, because it significantly improves our ability to match what a user says with what they can provide.”
In the meantime, it’ll be interesting to see if companies like Plex use the new media playback capabilities to work around Google’s white list, or if local radio stations embrace Google Assistant in the same way that they’ve flocked to Alexa.
None of these new features will have a major impact on their own, but together they should make Google Assistant more hospitable to hardware partners and brands. Laying that groundwork now helps Google stay within range of Amazon, which still sells more Alexa devices and has more product integrations, and makes it harder for companies like Apple, Microsoft, and Samsung to catch up.