Advocates have pushed for limits or bans on law enforcement's use of flawed facial recognition systems for at least four years. Now, amid a global reckoning over racial injustice spurred by the killing of George Floyd by Minneapolis police, IBM, Amazon, and Microsoft have announced decisions to end or pause sales of their facial recognition products to law enforcement.
The companies’ choices to step away from facial recognition received muted praise from some high-profile activists who’ve fought against the technology’s use in law enforcement and private surveillance. But other advocates for ethical and equitable tech are skeptical, saying the moves look more like pandering than meaningful action.
The parade of announcements from giant tech companies is an attempt “to virtue signal as a company,” says Rashida Richardson, director of policy research at the AI Now Institute.
Over the past several years, police departments’ increasing use of facial recognition has sparked criticism due to the technology’s inaccuracy. Several research studies, including one by the federal government, have shown that facial recognition algorithms misidentify Black and brown faces at far higher rates than white faces. Cities across the country have banned its use by police departments and government agencies.
“Limiting the scope of these [announcements] even to law enforcement is insufficient,” says Safiya Noble, associate professor at UCLA’s Department of Information Studies and author of Algorithms of Oppression. “We need a full-on recall of all of these technologies.”
An opportunistic move
IBM came first. The company sent a letter on June 8 addressed to Congressional Black Caucus members and sponsors of the Justice in Policing Act, introduced the same day. IBM CEO Arvind Krishna recognized the “horrible and tragic deaths of George Floyd, Ahmaud Arbery, Breonna Taylor,” and stated that the company “no longer offers general purpose IBM facial recognition or analysis software.”
The thing is, IBM appears to have already stopped offering its facial analysis and detection technology back in September 2019.
The IBM announcement is “not bad because it’s better than doing nothing, but that said I think it’s completely promotional and opportunistic,” says Richardson.
IBM’s letter got a more welcome reception from MIT researcher Joy Buolamwini and her organization, the Algorithmic Justice League. Buolamwini said in a Medium post that she “commends this decision as a first move forward towards company-side responsibility to promote equitable and accountable AI.” In 2018, Buolamwini and her colleague Dr. Timnit Gebru published seminal research that revealed accuracy disparities for people of color and women in earlier versions of facial recognition software from IBM, Microsoft, and Chinese company Face++.
Amazon, which makes a facial recognition product called Rekognition, swiftly followed on June 10, announcing a “one-year moratorium on police use of Amazon’s facial recognition technology.” At least one law enforcement agency using Rekognition—Oregon’s Washington County Sheriff’s Office—has said it will stop doing so. Amazon declined to comment for this story and did not provide any details about how it will enact and enforce the moratorium.
Buolamwini said in an email to Fast Company: “Given Amazon’s public dismissals of research showing racial and gender bias in their facial recognition and analysis systems, including research I coauthored with Deborah Raji, this is a welcomed though unexpected announcement.”
Even as criticism of police practices reached a crescendo, Amazon’s two-paragraph statement made no mention of police abuse or racial injustice.
“This pause is the bare minimum when it comes to addressing the ways facial recognition has enabled harms and violence against Black people,” data equity group Data for Black Lives said in a statement sent to Fast Company.
The next day, Microsoft emerged with its own statement. Microsoft President Brad Smith told The Washington Post the firm “decided that we will not sell facial recognition technology to police departments in the United States until we have a national law in place grounded in human rights that will govern this technology.”
It was an about-face from an earlier stance. In January, Smith told Seattle’s NPR affiliate KUOW the company did not want a moratorium on facial recognition because “the only way to continue developing it actually is to have more people using it.”
No end for predictive policing or other surveillance tech
Despite limiting law enforcement’s use of their facial recognition products, neither IBM, Amazon, nor Microsoft said it would stop selling the other highly scrutinized predictive policing and surveillance tech it offers. Predictive policing systems in particular have been criticized for relying on historical data that contains inaccurate or racially biased documentation of law enforcement incidents.
Microsoft’s statement was “a dodge,” says Liz O’Sullivan, technology director at the Surveillance Technology Oversight Project. Not only does Microsoft not appear to sell facial recognition to police in the U.S., she said, but it also holds a $10 billion contract with the Pentagon that could lead to its augmented reality headsets and object detection technology being put to military use.
As for IBM, the company said nothing about ending sales of its predictive analytics tools to law enforcement. The company has provided predictive and “near-instant intelligence” to police clients including Rochester, New York; Manchester, New Hampshire; and Edmonton, Canada. IBM did not respond to requests for comment regarding its predictive policing technologies.
Meanwhile, Amazon’s year off from selling facial recognition to police does not limit its law enforcement partners’ use of video surveillance footage from its Ring connected doorbell system. The company feeds video footage into a data hub accessible to hundreds of law enforcement agencies, who use it as part of a warrantless community policing program. For now, Ring does not enable facial recognition.
O’Sullivan says Amazon’s moratorium is a partial victory because the company has actually sold facial recognition to law enforcement. “The reason I think this is a victory is we have a company who has a vested interest in having great relationships with local police departments take a stand and revoke access to something that otherwise they would profit from.”
A push for watered-down federal legislation
Both Amazon and Microsoft say they want federal legislation governing facial recognition, and both have attempted to influence rules for the technology at the state and local level.
In its statement, Amazon said it hoped its moratorium would “give Congress enough time to implement appropriate rules, and we stand ready to help if requested.” The company has already begun attempts to influence federal regulation. In September, its CEO, Jeff Bezos, said the firm’s public policy team was developing a federal facial recognition legislation proposal.
But some actions taken behind closed doors show Amazon does not actually want strict rules against facial recognition use. As recently as December, the company lobbied against a proposed ban in Portland, Oregon, that would bar the use of facial recognition by government agencies, law enforcement, and private entities. Portland officials said the company hoped to stop or at least water down the legislation.
Microsoft also pushed against an ACLU-backed moratorium on government facial recognition use in its home state of Washington. Instead, the company supported a new law with weaker restrictions on the technology. The law was cosponsored by Senator Joseph Nguyen, who is also a senior program manager at Microsoft.
O’Sullivan says Microsoft wants a federal law that preempts tougher regulations such as the California Consumer Privacy Act and facial recognition bans in cities such as Oakland and San Francisco. “It’s a big part of why they’re backing away from this product now,” she says.
Going forward, to prevent technologies embedded with “racialized logic,” AI Now’s Richardson says all three firms should evaluate their hiring processes and incorporate nonwhite communities and employees in product conception.
“We don’t know about what’s in the R&D pipeline,” she says. “I’m sure there’s 10 other technologies we don’t know about that will come out in the next couple of years that use the same data or are embedded with the same problems.”