Privacy groups want a federal facial-recognition ban, but it’s a long shot

The revelations about Clearview AI in the New York Times have reignited a debate that’s unlikely to lead to legislation, at least not this year under this administration.

[Photo: Rob Curran/Unsplash]

Some in tech and law enforcement believe this week’s call by dozens of privacy groups for the U.S. government to ban use of facial recognition tech by federal agencies is an overreaction to the work of one company, Clearview AI.

The New York Times’ Kashmir Hill recently profiled Clearview, whose technology matches facial-recognition scans with images of the same person scraped from social media. The company says that it has a database of more than 3 billion facial images scraped from social sites, including Facebook and YouTube. It claims that 600 federal and state law enforcement agencies are now using its product.

Clearview AI was likely the trigger for the letter to the Privacy and Civil Liberties Oversight Board, an independent federal agency, from the Electronic Privacy Information Center and 40 other privacy groups, calling for a ban on further implementation of facial-recognition technologies funded by the government.

The letter recommends “the suspension of facial recognition systems, pending further review,” saying U.S. citizens shouldn’t be subject to facial recognition surveillance in the course of their daily lives when they’ve done nothing wrong. The privacy groups also point to studies showing that facial recognition often misidentifies people of color in higher percentages. Several cities, including San Francisco, Somerville, Massachusetts, and Oakland, California, have already barred police from using facial-recognition technology.

The Clearview app may have set off alarm bells because of its sweeping scope: it can scan the face of anybody in public and match it against a huge database of images scraped from the web.

But most law enforcement agencies use facial-recognition tech in a more focused way than Clearview AI’s offering, says Jon Gacek, the head of government, legal, and compliance at Veritone, which provides recognition technology to law enforcement agencies in the U.S. and Europe.

“I think what Clearview did was overreach,” he says. “Certainly that’s a capability. Minority Report is possible, but it’s not a pragmatic use of the technology.”

Gacek says that Veritone’s Identify app applies facial-recognition AI only to footage of specific criminal events captured by security or surveillance cameras, matching the relevant faces in that footage against smaller “known suspect” databases. “It’s just using technology to do what police already do, except far faster and at less cost,” he says.
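
Veritone’s system is proprietary, but the general pattern Gacek describes (encode a small set of known faces once, then compare faces pulled from incident footage against that list) can be sketched with the open-source face_recognition library. The file names and suspect list here are hypothetical:

```python
# Illustrative sketch only: Veritone's Identify is proprietary, so this uses
# the open-source face_recognition library to show the general pattern of
# matching faces from incident footage against a small "known suspect" list.
import face_recognition

# Hypothetical file paths for the example.
SUSPECT_FILES = {"suspect_a": "suspect_a.jpg", "suspect_b": "suspect_b.jpg"}

# Encode each known suspect's face once, up front.
known_names, known_encodings = [], []
for name, path in SUSPECT_FILES.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip files where no face was found
        known_names.append(name)
        known_encodings.append(encodings[0])

# Scan a frame pulled from security footage and compare every detected
# face against the known-suspect encodings.
frame = face_recognition.load_image_file("camera_frame.jpg")
for face in face_recognition.face_encodings(frame):
    distances = face_recognition.face_distance(known_encodings, face)
    for name, distance in zip(known_names, distances):
        if distance < 0.6:  # the library's conventional match threshold
            print(f"Possible match: {name} (distance {distance:.2f})")
```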

Gacek also points out that the larger the database of images the AI searches for matches, the greater the risk of the system returning false positives.
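
A bit of back-of-the-envelope math shows why. Assuming, purely for illustration, a false-match rate of one in a million per comparison, the chance that a single search returns at least one false positive climbs from about 1% against a 10,000-face database to near certainty against a Clearview-scale one:

```python
# Back-of-the-envelope illustration of Gacek's point, with a made-up
# per-comparison false-match rate: the odds of at least one false positive
# grow rapidly with the size of the database being searched.
FALSE_MATCH_RATE = 1e-6  # hypothetical: one false match per million comparisons

for gallery_size in (10_000, 1_000_000, 3_000_000_000):
    # P(at least one false match) = 1 - (1 - rate)^N
    p = 1 - (1 - FALSE_MATCH_RATE) ** gallery_size
    print(f"{gallery_size:>13,} faces -> {p:.1%} chance of a false positive")
```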

Amazon Web Services and Microsoft’s Azure both provide services that can detect faces and match them against a database of faces published online, whether in ads or on social media. For instance, the anti-trafficking nonprofit Thorn uses Amazon’s Rekognition in tools that scan online ads for images of children who have been sold into sex trafficking. But these tech giants don’t offer products that capture a face in public and search the open internet for the same face, as Clearview AI does.
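
That design difference is visible in the API itself. A minimal sketch using Rekognition’s boto3 calls (the collection name and image files here are hypothetical): the service searches only a private collection of faces the customer has indexed beforehand, never the open web.

```python
# Minimal sketch of Amazon Rekognition's collection-based design: the
# service searches only a collection of faces you have indexed yourself,
# not the open web. Collection name and image files are hypothetical.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Index a known face into a private collection (done once, ahead of time).
rekognition.create_collection(CollectionId="example-collection")
with open("known_person.jpg", "rb") as f:
    rekognition.index_faces(
        CollectionId="example-collection",
        Image={"Bytes": f.read()},
        ExternalImageId="known_person",
    )

# Later: search that collection, and only that collection, for a probe face.
with open("probe.jpg", "rb") as f:
    response = rekognition.search_faces_by_image(
        CollectionId="example-collection",
        Image={"Bytes": f.read()},
        FaceMatchThreshold=90,  # only return fairly confident matches
        MaxFaces=5,
    )

for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```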

Amazon declined to comment directly on Clearview AI or EPIC’s letter, but a spokeswoman told me that if Amazon sees any of its clients using its facial-recognition services in ways that are harmful to people, it will investigate.

“We know that facial recognition technology, when used irresponsibly, has risks,” Amazon’s general manager of AWS AI, Dr. Matt Wood, wrote in a blog post. “This is true of a lot of technologies, computers included. And, people are concerned about this. We are, too. It’s why we suspend people’s use of our services if we find they’re using them irresponsibly or to infringe on people’s civil rights.”

The political angle

Basing policy decisions on an extreme case like Clearview AI may be a recipe for regulatory overreach, and the privacy organizations’ choice of the company as the basis of their sweeping complaint looks a bit political. Not only does the company’s technology sound scary, but the company is backed by Donald Trump ally Peter Thiel, and one of its cofounders, Richard Schwartz, was a staffer for Rudy Giuliani when he was mayor of New York.

“This isn’t new; I saw a demo of this app last year,” says Garrett Johnson of the Lincoln Network, a conservative tech advocacy group in Washington, D.C. “They’re associated with Peter Thiel, so they provide a good bogeyman.”

Johnson argues that the government shouldn’t move too quickly to restrict emerging technologies. “Ultimately there is a responsibility on both sides,” he says. “The government shouldn’t preemptively kill innovation with heavy-handed regulations when we’re still not quite sure what the downside is (and we should be looking very closely at the downsides) or what the upside is going to be.”

“There’s also a responsibility on the tech side to be transparent about the innovations they’re building, and to communicate those more effectively to the government,” he adds.

On the other side of the debate, some believe that facial-recognition tech is irredeemable, that its benefits can never rise above its dangers.

“There’s no safe way for governments to use facial recognition for surveillance purposes,” said Evan Greer of Fight for the Future in an email to Fast Company. “That’s why there’s growing consensus that governments and law enforcement agencies should be banned outright from using this technology.”

“There should also be strict limits on corporate and private use of this technology, and it should not be allowed in public spaces or institutions like colleges, hotels, or airports,” Greer wrote.

It seems unlikely that the White House would move quickly toward a facial-recognition ban, if it moves at all. The Trump administration rarely strays from its pro-business, hands-off regulatory approach, and it wouldn’t care for the optics of tying law enforcement’s hands when it comes to detecting and identifying criminals. The same stance has led to the administration backing the Justice Department’s efforts to pressure technology companies such as Apple to build encryption “backdoors” into their devices so that law enforcement can access their contents.

Johnson says that he believes there’s little chance of any major technology regulation passing in 2020. But it’s possible that some form of AI regulation could come next year, depending on the results of the election.

About the author

Fast Company Senior Writer Mark Sullivan covers emerging technology, politics, artificial intelligence, large tech companies, and misinformation. An award-winning journalist based in San Francisco, Sullivan has written for Wired, Al Jazeera, CNN, ABC News, CNET, and many other outlets.
