A recent American Civil Liberties Union (ACLU) test of Amazon’s Rekognition image-recognition tool, which matched photos of 28 members of Congress to other people’s mugshots, has been “misinterpreted,” argued Matt Wood, who manages AI technology at Amazon Web Services, in a blog post on Friday.
Wood argues that the 80% match-confidence threshold the ACLU used is “not the right one for public safety use cases,” since it can produce false positives. Amazon recommends that law enforcement users set a 99% threshold, and reported that its own test comparing congressional photos against a common data set of 850,000 faces produced a 0% false-positive rate at that higher threshold. The ACLU has said it used the 80% threshold because it is the program’s default, and noted that Amazon appears to have shifted its position after an earlier statement to the media suggested 95% was a sufficient confidence level for police use.
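The dispute comes down to how a confidence threshold filters candidate matches: every face comparison returns a similarity score, and the threshold decides which candidates count as matches at all. A minimal sketch, using entirely hypothetical names and scores (Rekognition itself exposes this as a `FaceMatchThreshold` parameter), illustrates why raising the bar from 80 to 99 shrinks the pool of reported matches:

```python
# Hypothetical candidate matches with similarity scores, illustrating
# how a confidence threshold filters results. Names and scores are
# invented for this sketch, not real Rekognition output.
candidates = [
    {"name": "Person A", "confidence": 99.4},
    {"name": "Person B", "confidence": 91.7},
    {"name": "Person C", "confidence": 83.2},
]

def matches_at(threshold, candidates):
    """Keep only candidates at or above the given confidence threshold."""
    return [c for c in candidates if c["confidence"] >= threshold]

# At the 80% default, all three candidates are reported as matches;
# at Amazon's recommended 99%, only the strongest one survives.
print(len(matches_at(80, candidates)))  # prints 3
print(len(matches_at(99, candidates)))  # prints 1
```

A looser threshold surfaces more potential matches, and with them more false positives; a stricter one suppresses marginal candidates, which is why the two tests produced such different results.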
“In a matter of 48 hours, Amazon has gone from its own system default of an 80 percent match rate to saying yesterday it should be 95 percent, and then saying today it should be 99 percent,” said ACLU of Northern California attorney Jacob Snow in a statement. “At no time has Amazon taken any responsibility for the very grave impact that their face surveillance product has on real people.”
If the mugshot database that the ACLU used was “not appropriately representative and therefore is itself skewed,” the accuracy of the test may also have been affected, Wood argues. The ACLU hasn’t said exactly which set of mugshots it used, only that the images were publicly available.
The blog post comes as members of Congress and privacy advocates increasingly question the growing use of Rekognition and other image-matching tools by law enforcement. Snow called for Amazon to respond to inquiries from members of Congress and to disclose which government agencies have used Rekognition.
“And it should heed the calls of organizations and its own customers, employees, and shareholders and stop selling face surveillance to the government,” he said in his statement.
Wood says Rekognition has already been used for good purposes, including to reduce payment fraud, help fight human trafficking and reunite lost children with their families. He also notes that the underlying algorithms continue to improve in accuracy.
“We continue to recommend that customers not use less than 99% confidence levels for law enforcement matches, and then to only use the matches as one input across others that make sense for each agency,” Wood writes. “But, machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it’s applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza.”
This story has been updated.