
Apple is studying mood detection using iPhone data. Critics say the tech is flawed

A report found that Apple is studying how to ascertain mood through iPhone usage, but emotion AI has a reputation for being pseudoscience.

[Source image: Slim3D/iStock]

BY Ruth Reader | 5 minute read

New information about an ongoing study between UCLA and Apple shows that the iPhone maker is using facial recognition, speech patterns, and an array of other passively tracked behaviors to detect depression. The report, from Rolfe Winkler of The Wall Street Journal, raises concerns about the company’s foray into a field of computing called emotion AI, which some scientists say rests on faulty assumptions.

Apple’s depression study was first announced in August 2020. Previous information about the study suggested the company was using only certain health data points, like heart rate, sleep, and how a person interacts with their phone, to understand their mental health. But The Wall Street Journal reports that researchers will monitor people’s vital signs, movements, speech, sleep, and typing habits, down to the frequency of typos, in an effort to detect stress, depression, and anxiety. Data will come from both the Apple Watch and the iPhone, using the latter’s camera and microphone. Data obtained through Apple’s devices will be compared against mental health questionnaires and cortisol levels, reportedly measured from participants’ hair follicles.
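The study’s actual models and features have not been published, but the basic shape of this kind of research is to check whether passively collected signals track with the self-reported questionnaires. The sketch below is a purely hypothetical illustration of that step, using invented feature names (typo rate, nightly sleep) and made-up numbers correlated against a standard depression questionnaire score such as the PHQ-9; it is not Apple’s or UCLA’s method.

```python
# Hypothetical illustration only: the study's real features, data, and models are not public.
# This sketch checks whether invented, passively collected phone/watch signals move
# together with a self-reported depression questionnaire score (e.g., PHQ-9),
# using a plain Pearson correlation.

import numpy as np

# Synthetic per-participant weekly features (all values invented for demonstration).
typo_rate = np.array([0.02, 0.05, 0.04, 0.08, 0.01, 0.06])   # typos per keystroke
sleep_hours = np.array([7.5, 6.0, 6.8, 5.2, 8.1, 5.9])        # average nightly sleep
phq9_score = np.array([3, 9, 7, 14, 2, 11])                   # self-reported questionnaire score

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

print("typo rate vs PHQ-9:   r =", round(pearson(typo_rate, phq9_score), 2))
print("sleep hours vs PHQ-9: r =", round(pearson(sleep_hours, phq9_score), 2))
```

A correlation like this is only a starting point; turning it into a screening tool is exactly the leap that critics quoted below say the evidence does not yet support.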

Apple is also participating in studies that aim to detect cognitive decline and autism in children based on iPhone and Watch data, according to The Journal. These studies are an extension of Apple’s existing interest in individual health. The company has invested heavily in tracking exercise, sleep, hearing, physical stability, menstruation, diet, and other markers of a person’s daily health. It even integrates medical data and can send a health report to your doctor.

The depression study takes health data a step further—toward making an assumption about your emotional state. It is part of a growing number of efforts that purport to passively gauge how you’re feeling, using what’s called emotion AI or affective computing. The field aims to use various data points, including facial expressions, to understand a person’s emotions, often for commercial purposes. Affective computing was a $20 billion market in 2019, according to Grand View Research.
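To make the critique that follows concrete, here is a deliberately oversimplified, hypothetical sketch of the shape many emotion-AI systems take: measurable facial features go in, and a probability over a fixed set of emotion labels comes out. The feature names, weights, and labels are invented; real products use trained neural networks, and it is precisely this fixed-label premise that researchers dispute.

```python
# Purely illustrative sketch of a generic emotion-AI classifier: map crude facial
# measurements to a fixed set of emotion labels via a softmax over hand-set scores.
# Nothing here reflects Apple's study or any real vendor's model.

import math

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def classify(features):
    """Return a probability for each emotion label from (lip_corner_raise, brow_lower, mouth_open)."""
    weights = {
        "happy":   ( 2.0, -1.0,  0.5),
        "sad":     (-1.5,  0.5, -0.5),
        "angry":   (-0.5,  2.0,  0.3),
        "neutral": ( 0.0,  0.0,  0.0),
    }
    scores = {e: sum(w * f for w, f in zip(weights[e], features)) for e in EMOTIONS}
    z = sum(math.exp(s) for s in scores.values())
    return {e: math.exp(s) / z for e, s in scores.items()}

# A raised lip corner with lowered brows: joy, a grimace, or squinting into the sun?
# The classifier has no way to know, because it never sees the context.
print(classify((0.8, 0.6, 0.2)))
```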

Already, emotion AI is used as part of the recruitment process at major companies like Dunkin’ Donuts, Unilever, Carnival Cruise Lines, and IBM to understand a candidate’s personality. This kind of technology is also being used both experimentally and commercially within cars to detect drowsy drivers, in prison populations to detect stress, and, during the pandemic, in digital classrooms to understand whether online students were struggling with their lessons.

But the technology has faced criticism, not only for collecting vast amounts of sensitive data but also because it’s not very good. “In general the products that we’ve seen that do emotion detection or mental health [detection] have not proven to be super accurate,” says Hayley Tsukayama, legislative activist at the Electronic Frontier Foundation.

Scientists have spoken out about the problems with emotional artificial intelligence, saying that it relies on a flawed idea. Kate Crawford, a researcher at the University of Southern California Annenberg School for Communication and Journalism and author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, calls this “the phrenological impulse: drawing faulty assumptions about internal states and capabilities from external appearances, with the aim of extracting more about a person than they choose to reveal.”

In a 2019 paper published in the academic journal Psychological Science in the Public Interest, a group of researchers laid out the challenges with emotion AI. “Efforts to simply ‘read out’ people’s internal states from an analysis of their facial movements alone, without considering various aspects of context, are at best incomplete and at worst entirely lack validity, no matter how sophisticated the computational algorithms,” they write. In the paper, the researchers called for a deeper study of how people actually move their faces to convey and conceal emotion within an environmental context, and how people infer emotions based on the facial expressions of others.

How effective this technology is has enormous consequences, especially as more consumer tech companies like Apple invest in it. Last August, Amazon debuted a health wearable with a feature that detects the emotional tone of your voice. In an article for the prestigious journal Nature, Crawford called for robust legislation to ensure that the science behind emotional artificial intelligence applications is rigorous and transparent. In it, she cites the damage wrought by polygraph, or lie-detector, tests, which were used for decades and were ultimately found to be unreliable.

Tsukayama says it’s also important that companies are explicit about what their technology can and cannot do. In the case of detecting depression, a company may have to go beyond assuring users that the technology’s assessment does not constitute a diagnosis. She says companies need to take into account how the layperson may interpret the app’s capability. “Do they think they can bypass their doctor? Do they think it’s strong enough to seek their own treatment? That’s where you get into consumer harm,” Tsukayama says.

Tiffany C. Li, a technology attorney and assistant professor at the University of New Hampshire School of Law, says she believes Apple’s research has the potential to be helpful medically. “What’s important is that we have built-in protections,” she says. Her concern about emotional artificial intelligence in general is that it uses an enormous amount of personal data. Without federal privacy legislation that includes protections for people’s biometric data, consumers don’t have much agency over how their data is used, processed, and stored.

Apple is testing out its depression detection in the context of an academic study, which has tight rules around transparency and participant consent. And as The Journal points out in its report, the UCLA-Apple study may never result in a consumer-facing app. The study is on a three-year timeline that began in 2020. In this next phase, it will expand to include 3,000 people. Still, Apple’s research into emotion AI is yet another indication that the field is growing. Critics believe that regulation will be needed as the technology reaches more people.

“You want to make sure if you’re going to use an algorithm like this for making some sort of critical decision that it’s transparent, that you can audit it, that people can dispute those decisions easily,” Tsukayama says. “You want to make sure that there’s basic accountability around it.”



ABOUT THE AUTHOR

Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology.
