
Congress demands Jeff Bezos explain Amazon’s face recognition software

As an “alarmed” bipartisan group of 25 Congressmen demanded answers from the Amazon CEO, the company suggested that regulation was needed.

[Photos: Jimmy Gomez, California State Assembly/Wikimedia Commons; Jeff Bezos, Flickr user James N. Mattis]

This time it’s personal. Yesterday, the ACLU demonstrated how Amazon’s face recognition software, Rekognition, could literally mistake members of Congress for criminal suspects. Now folks on the Hill suddenly want to know a lot more about the potentially buggy tech being marketed to US law enforcement. Today a group of 25 members of Congress–Senators and Representatives from both parties–sent a letter to Amazon CEO and richest man in the world Jeff Bezos demanding a meeting to discuss the technology.


Jokes about politicians being crooked aside, if the ACLU’s test is indicative of how well (or poorly) Rekognition works in the real world, there are serious dangers for criminal justice. Aside from being inaccurate overall, Rekognition appeared to be especially flawed when it came to non-white Congress people, a finding that reflects a growing body of research showing racial “bias” in popular face recognition systems.

“Given the results of this test, we are alarmed about the deleterious impact of this tool–if left unchecked without proper, consistent, and rigorous calibration—will have on communities of color,” reads the letter, issued by the office of California Democrat representative Jimmy Gomez. It goes on to list other groups of special concern: immigrants, peaceful protestors, and “any other marginalized group.” The letter also recommends that Amazon test and consult with “diverse stakeholders–especially civil rights and civil liberty experts and advocates.”

“Law enforcement should not use this technology until the onerous civil rights and civil liberties issues are confronted and accuracy is guaranteed,” wrote Rep. John Lewis, the civil rights leader whose face was among those misidentified in the ACLU experiment. “If industry wants to engage in the public sphere, it needs to make the public good, not profit, a top priority. American families should not be collateral damage on the road to technological innovation.”

Today’s letter follows one yesterday from Democratic Senators Ron Wyden, Cory Booker, and Ed Markey asking 39 federal agencies to answer 10 big questions about their possible use of facial recognition tech. The agencies span the departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, Interior, Justice, State, Treasury, and Veterans Affairs. If the agencies use or plan to use the tech, they are asked to explain, among other things, which companies they contract with, which databases they scan, whether and how they use the tech in public, and how they test and audit it.

Amazon: “Reasonable” for government to weigh in

In a blog post Friday, Matt Wood, general manager of AI at Amazon Web Services, claimed that the ACLU used the software improperly, using a threshold for declaring a match at 80% confidence, instead of the 99% recommended for law enforcement purposes. He wrote, “machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it’s applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza.” The ACLU contends it used the software’s default threshold.
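To make the threshold dispute concrete, here is a minimal, purely illustrative sketch (not Amazon's actual code, and the similarity scores are made up): a face-matching system returns candidate matches with confidence scores, and the threshold you apply determines how many of them count as "matches."

```python
# Illustrative sketch of how a confidence threshold changes face-match
# results. The candidate names and scores below are hypothetical.

def filter_matches(candidates, threshold):
    """Keep only candidate matches whose confidence meets the threshold."""
    return [name for name, confidence in candidates if confidence >= threshold]

# Hypothetical similarity scores from comparing one photo against a database.
candidates = [
    ("suspect_a", 99.2),
    ("suspect_b", 85.0),
    ("suspect_c", 81.3),
]

# At the 80% threshold the ACLU says was the default, all three "match".
print(filter_matches(candidates, 80))   # ['suspect_a', 'suspect_b', 'suspect_c']

# At the 99% threshold Amazon recommends for law enforcement, one remains.
print(filter_matches(candidates, 99))   # ['suspect_a']
```

The dispute, in other words, is less about the matching algorithm itself than about which default the software ships with and who is responsible for changing it.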

In May, the ACLU and other civil rights groups launched a campaign demanding that Amazon stop selling its face recognition technology to government agencies. The company’s response has contrasted with that of neighboring tech giant Microsoft, which also sells face recognition software to government customers. In an unusual move, Microsoft president Brad Smith wrote a blog post this month asking the government to step in and regulate the technology.


But Congressional pressure may have already had some effect on Amazon. BuzzFeed’s Davey Alba noticed that on Friday afternoon Amazon reposted its original response to the ACLU, but with a sentence added at the end that appears to also call for regulation.

“It is a very reasonable idea, however, for the government to weigh in and specify what temperature (or confidence levels) it wants law enforcement agencies to meet to assist in their public safety work,” Wood’s post now concludes.

In a statement on Friday afternoon, Jacob Snow, an attorney for the ACLU of Northern California, said that Amazon was in “denial” about its impact on civil rights and underscored the “need for Congress to quickly step in with a moratorium.”

“Amazon should respond to members of Congress,” he wrote. “It should disclose every government agency that has already purchased this technology. And it should heed the calls of organizations and its own customers, employees, and shareholders and stop selling face surveillance to the government.”

While no US laws govern police or private use of face recognition, the faces of over half the country’s adult population are already thought to be stored in databases used by the FBI and police departments. During a hearing on the FBI’s system last March, congressmen from both parties chastised the agency for not better policing the accuracy of the systems it uses, which have also been found to be flawed. “If you are black, you are more likely to be subjected to this technology, and the technology is more likely to be wrong,” said Elijah Cummings, a congressman from Maryland. “That’s one hell of a combination. Just let that sink in.”



Bias in artificial intelligence has become an increasingly prominent problem. Following the “garbage in, garbage out” axiom, algorithms are only as accurate as the data fed to them: biased data yields a biased algorithm. The point was made painfully clear in a 2016 ProPublica report on faulty data and analysis behind a wildly inaccurate risk-assessment algorithm used in sentencing, which mislabeled black defendants as likely repeat offenders almost twice as often as it mislabeled white defendants.

Bias in AI, and its effects on criminal justice and civil liberties, is a big issue that’s only going to get bigger—and more personal—in the coming weeks, months, and years.

This story has been updated to add the statement by Jacob Snow.


About the author

Sean Captain is a Bay Area technology, science, and policy journalist. @seancaptain.
