Earlier this week, Amazon announced that it had improved the accuracy of its machine learning system Rekognition, expanding its “emotion detection” capabilities. Along with detecting the emotions happy, sad, angry, surprised, disgusted, calm, and confused, Amazon says it has “added a new emotion: fear.”
Technically, the algorithm works by learning how people’s faces usually look when they express fear. Then, when you show it a new image, it can tell you with a certain probability whether that person’s face is communicating the emotion of fear, leaving the human to then decide what to do with the information. The company’s descriptions of the product claim that “Amazon Rekognition detects emotions such as happy, sad, or surprise, and demographic information such as gender from facial images.”
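In practice, Rekognition returns a list of candidate emotions, each with a confidence score, and leaves interpretation to the caller. The sketch below illustrates how a client might consume that output; the `sample_response` is hypothetical data, not real API output, though the field names (`FaceDetails`, `Emotions`, `Type`, `Confidence`) follow the documented response shape, and the `top_emotion` helper and its threshold are assumptions for illustration:

```python
# Hypothetical payload in the shape of a Rekognition DetectFaces response.
sample_response = {
    "FaceDetails": [
        {
            "Emotions": [
                {"Type": "CALM", "Confidence": 62.3},
                {"Type": "FEAR", "Confidence": 21.7},
                {"Type": "CONFUSED", "Confidence": 9.1},
            ]
        }
    ]
}

def top_emotion(face_detail, threshold=50.0):
    """Return the highest-confidence emotion label, or None if no label
    clears the caller-chosen threshold -- a human still has to decide
    what, if anything, the label means."""
    emotions = sorted(face_detail["Emotions"],
                      key=lambda e: e["Confidence"], reverse=True)
    best = emotions[0]
    return best["Type"] if best["Confidence"] >= threshold else None

for face in sample_response["FaceDetails"]:
    print(top_emotion(face))  # -> CALM
```

Note that the API itself makes no decision: whether a 21.7% “FEAR” score is ignored or acted on is entirely up to the customer’s code.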
While Amazon sells Rekognition to advertising and marketing companies, the company has also been marketing the software to police forces and immigration agencies, according to an investigation by the ACLU.
“Facial recognition already automates and exacerbates police abuse, profiling, and discrimination,” said Evan Greer, deputy director of the digital rights advocacy group Fight for the Future, in a statement. “Now Amazon is setting us on a path where armed government agents could make split-second judgements based on a flawed algorithm’s cold testimony. Innocent people could be detained, deported, or falsely imprisoned because a computer decided they looked afraid when being questioned by authorities. The dystopian surveillance state of our nightmares is being built in plain sight—by a profit-hungry corporation eager to cozy up to governments around the world.”
In the past, civil rights groups, AI experts, and even some of Amazon’s own investors have asked the company to stop deploying its facial recognition technology given industry-wide issues with accuracy, particularly when it comes to dark-skinned people who are already more likely to be discriminated against within the criminal justice system. As governments have started to realize the implications of the widespread use of the flawed technology, cities like San Francisco; Oakland, California; and Somerville, Massachusetts, have issued bans—even as many other places embrace the use of Amazon’s technology. Facial recognition is already used in public spaces, airports, and even in schools.
Earlier this week, the ACLU conducted a test of Rekognition in which the nonprofit found that the service falsely matched 20% of California’s state legislators to mugshots in the state’s database of 25,000 public arrest photos. More than half of the falsely identified legislators were people of color, demonstrating some of the algorithm’s bias problems. For the test, the ACLU used the default settings for Rekognition. (After this story was published, Amazon shared the statement it gave in response to the ACLU’s test, which Fast Company published this week, saying that its facial recognition technology can be used for “a long list of beneficial purposes” when used with the recommended 99% confidence threshold.)
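The dispute over “default settings” comes down to a single number: the similarity threshold below which candidate matches are discarded. A minimal sketch, using hypothetical similarity scores in the shape of Rekognition’s face-match results (one `Similarity` score per candidate), shows how the choice of threshold changes what the system reports:

```python
# Hypothetical candidate matches, shaped like Rekognition face-match
# results. The scores and FaceIds are invented for illustration.
matches = [
    {"FaceId": "a1", "Similarity": 99.4},
    {"FaceId": "b2", "Similarity": 93.0},
    {"FaceId": "c3", "Similarity": 81.5},
]

def filter_matches(matches, threshold):
    """Keep only candidates whose similarity meets the threshold."""
    return [m for m in matches if m["Similarity"] >= threshold]

# With a permissive default-style threshold, all three candidates
# would be reported as "matches"; at the 99% threshold Amazon
# recommends for law enforcement, only one survives.
print(len(filter_matches(matches, 80.0)))  # -> 3
print(len(filter_matches(matches, 99.0)))  # -> 1
```

The same gallery and the same probe image can therefore yield three “hits” or one, depending entirely on a parameter the customer sets.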
The ability of any algorithm to accurately measure emotions like anger and fear using facial features is also disputed by scientists. A July paper published in the journal Psychological Science in the Public Interest reviews the evidence for the claim that emotion can be detected from people’s facial expressions alone, and finds that this isn’t necessarily the case. The paper does not specifically cite Amazon within the context of emotion recognition, though it does reference the way that tech companies like Microsoft are building emotion detection software.
“The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance,” reads the article’s abstract. “Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category.”
“So-called emotional expressions are more variable and context-dependent than originally assumed,” the authors write.
The authors also caution that using facial expression as a proxy for emotion is premature: “More generally, tech companies may well be asking a question that is fundamentally wrong,” write the researchers, who hail from Northeastern University and Caltech. “Efforts to simply ‘read out’ people’s internal states from an analysis of their facial movements alone, without considering various aspects of context, are at best incomplete and at worst entirely lack validity, no matter how sophisticated the computational algorithms.”
In any case, assessing the probability that someone looks like they’re afraid based on their face in videos and photos is now the default for customers in regions where Amazon provides AWS services. Amazon declined to comment publicly about this story.