
Amazon’s Alexa Has A Data Dilemma: Be More Like Apple Or Google?

Amazon’s virtual assistant is caught between Apple-like privacy safeguards and Google-style openness.

[Photo: Flickr user Crosa]

By Jared Newman

Devices like Amazon Echo could someday become a treasure trove for developers who make voice assistant skills, but first companies have to figure out where to draw the line between data sharing and consumer privacy.

Now that dilemma is heating up: Citing three unnamed sources, The Information reported this week that Amazon is considering whether to provide full conversation transcripts to Alexa developers. This would be a major change from Amazon’s current policy, under which the company provides only basic information, such as the total number of users, the average number of actions they’ve performed, and success and failure rates for voice commands. Amazon declined to comment to The Information on the claims, but the change wouldn’t be unprecedented: Google’s voice assistant platform already provides full transcripts to developers.

The potential move by Amazon underscores how it is caught between two worlds with its Alexa assistant, especially with regard to privacy. By keeping transcripts to itself, Amazon can better protect against the misuse of its customers’ data and avoid concerns about eavesdropping. But because Alexa already gives developers the freedom to build virtually any kind of voice skill, their inability to see what customers are saying becomes a major burden.

Essentially, Amazon must decide whether it wants to be more like Apple or more like Google.

Treasure Trove Vs. Black Box

With Google Assistant, developers can view a transcript for any conversation with their particular skill. Uber, for example, can look at all recorded utterances from the moment you ask for a car until the ride is confirmed. (It can’t, however, see what you’ve said to other apps and services.) Google’s own documentation confirms this, noting that developers can request “keyboard input or spoken input from end user” during a conversation.
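To make that concrete, here is a minimal sketch of what transcript access can look like on the developer’s side. It assumes a Node.js fulfillment webhook receiving requests in the Dialogflow v2 format, where the queryResult.queryText field carries the user’s raw utterance; the endpoint path and reply text are illustrative, not from any official sample.

```typescript
// A minimal sketch of a Google Assistant fulfillment webhook, assuming the
// Dialogflow v2 request format. queryResult.queryText carries the user's raw
// typed or spoken input for this skill; the route and reply are illustrative.
import express from "express";

const app = express();
app.use(express.json());

app.post("/fulfillment", (req, res) => {
  // The raw utterance sent to *this* skill only. Developers can't see what
  // the user said to other apps or services.
  const utterance: string = req.body?.queryResult?.queryText ?? "";
  console.log(`User said: ${utterance}`);

  // Reply with plain fulfillment text.
  res.json({ fulfillmentText: "Okay, requesting your ride now." });
});

app.listen(8080);
```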

For developers, this data can be immensely useful. It lets them find out whether users are commonly using the wrong syntax, or asking for things the developer’s voice skill doesn’t support. When Capital One launched an SMS banking bot in March, several months after releasing an Alexa skill, the company was particularly excited about its ability to get the raw data.

“You can imagine there’s a lot of learnings that can be applied when you’re seeing exactly what the customer asks, and you can build your product around that,” Ken Dodelin, Capital One’s vice president of digital product development, told me at the time.
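As a hypothetical sketch of what those learnings might look like: with raw transcripts in hand, a developer could tally the utterances that fell through to a fallback handler and rank the most common unsupported requests. The function and sample data here are ours for illustration, not any platform’s API.

```typescript
// Hypothetical: rank the raw utterances a skill failed to handle, so the
// most commonly requested missing features surface first. The sample data
// below is invented for illustration.
function rankFallbackUtterances(utterances: string[]): [string, number][] {
  const counts = new Map<string, number>();
  for (const u of utterances) {
    const key = u.trim().toLowerCase();
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  // Most frequent first: strong candidates for new phrasings or features.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

console.log(
  rankFallbackUtterances([
    "Get me a ride to the airport",
    "get me a ride to the airport",
    "split the fare with Alex",
  ])
);
```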

Google appears to be aware of the value. According to The Information, the company has pitched transcripts as a major selling point as it tries to bring more developers onto its platform.

But turning over that data also has a couple of downsides. Outside of Google’s and Amazon’s developer agreements, there’s no way to ensure that third parties will safeguard the conversation data they collect, or to stop unscrupulous developers from using that data in ways that violate users’ privacy. Opening up transcript access could also stoke more fears about eavesdropping on devices like Amazon Echo and Google Home.

Those issues could in turn become strengths for Apple, which has tried to wield privacy as a selling point.


In terms of sharing data with developers, Apple’s Siri voice assistant sits at the opposite end of the spectrum from Google. Developers who work with SiriKit get no usage information from Apple, not even basics like how many people use voice commands to access an app or which voice commands are most commonly used.

“Apple doesn’t provide any type of information about the usage of Siri,” says Enric Enrich, an iOS developer at Doist.

But keep in mind that Siri’s approach to third-party development is entirely different from that of Google and Amazon. Instead of letting developers build any kind of voice application, Apple only supports third-party voice commands in a handful of specific domains, such as photo search, workouts, ride hailing, and messaging. And instead of letting those apps drive the conversation, Apple controls the back-and-forth itself. The apps merely provide the data and some optional on-screen information.

Because these apps don’t communicate with users directly, there’s no need for them to have conversation transcripts in the first place. Instead, Apple can look at what users are trying to accomplish and use that data to expand Siri on its own.

The downside to this approach is that Siri just isn’t as useful as other virtual assistants. This, in turn, means Apple has a harder time using Siri as a selling point, even in voice-driven products like the upcoming HomePod speaker.

With Alexa, Amazon is in the position of trying to split the difference. The company wants to offer endless possibilities to developers, but seems to have realized that the necessary tools require a compromise on privacy.

For now, The Information reports that Amazon is approaching this dilemma with a whitelist, which at least allows trusted third parties to get the data they want. But it’s unclear whether that approach will be sustainable as the virtual assistant wars escalate and companies that don’t yet exist come up with applications no one has dreamed of.



ABOUT THE AUTHOR

Jared Newman covers apps and technology from his remote Cincinnati outpost. He also writes two newsletters, Cord Cutter Weekly and Advisorator.

