These crowdsourced maps will show exactly where surveillance cameras are watching

Amnesty International will produce a crowdsourced map showing the location of every camera capable of facial recognition in New York City before expanding to other places, including the West Bank and New Delhi.

Amnesty International is producing a map of all the places in New York City where surveillance cameras are scanning residents’ faces.

The project will enlist volunteers to use their smartphones to identify, photograph, and locate government-owned surveillance cameras capable of shooting video that could be matched against people’s faces in a database through AI-powered facial recognition.

The map that will eventually result is meant to give New Yorkers the power of information against an invasive technology whose use and purpose are often not fully disclosed to the public. It’s also meant to put pressure on the New York City Council to write and pass a law restricting or banning facial recognition. Other U.S. cities, such as Boston, Portland, and San Francisco, have already passed such laws.

Facial recognition technology can be developed by scraping millions of images from social media profiles and driver’s licenses without people’s consent, Amnesty says. Software from companies like Clearview AI can then use computer vision algorithms to match those images against facial images captured by closed-circuit television (CCTV) or other video surveillance cameras and stored in a database.
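
To make the matching step concrete, here is a minimal sketch of how this kind of system works in principle: a face captured from a CCTV still is reduced to an embedding vector and compared against a database of vectors built from scraped photos. Commercial systems like Clearview AI are proprietary; the embedding model, threshold, and data below are hypothetical stand-ins for illustration only.

```python
# Illustrative sketch of facial recognition matching in principle.
# Real systems (Clearview AI, DataWorks Plus) are proprietary; the
# embeddings, threshold, and database here are hypothetical stand-ins.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, database: dict[str, np.ndarray],
               threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return identities whose stored embedding is close enough to the probe.

    `probe` would come from a CCTV still run through a face-embedding model;
    `database` maps identities to embeddings built from scraped photos.
    """
    hits = [(name, cosine_similarity(probe, emb))
            for name, emb in database.items()]
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda h: h[1], reverse=True)

# Toy usage with random vectors standing in for real embeddings:
rng = np.random.default_rng(0)
db = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = db["person_a"] + rng.normal(scale=0.1, size=128)  # noisy CCTV capture
print(match_face(probe, db))  # only person_a should clear the threshold
```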

Starting in May, volunteers will be able to use a software tool to identify all the facial recognition cameras within their view—like at an intersection where numerous cameras can often be found. The tool, which runs on a phone’s browser, lets users place a square around any cameras they see. The software integrates Google Street View and Google Earth to help volunteers label and attach geolocation data to the cameras they spot.
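
Amnesty has not published the data format its tool uses, but a report submitted from a volunteer’s browser would plausibly bundle a photo, the square drawn around the camera, and geolocation data. A minimal sketch of what such a record might look like, with every field name an assumption for illustration:

```python
# A sketch of the kind of geotagged report a crowdsourcing tool like this
# might collect per camera. Amnesty has not published its schema; every
# field name here is an assumption for illustration.
from dataclasses import dataclass, asdict
import json

@dataclass
class CameraReport:
    latitude: float           # from the phone's GPS or a Street View pin
    longitude: float
    camera_type: str          # e.g. "dome", "bullet", "unknown"
    mounted_on: str           # e.g. "traffic pole", "building facade"
    bounding_box: tuple[int, int, int, int]  # square drawn around the camera (x, y, w, h)
    photo_url: str
    reported_at: str          # ISO 8601 timestamp

report = CameraReport(
    latitude=40.7580, longitude=-73.9855,
    camera_type="dome", mounted_on="traffic pole",
    bounding_box=(412, 118, 64, 64),
    photo_url="https://example.org/photos/times-square-001.jpg",
    reported_at="2021-05-03T14:22:00-04:00",
)
print(json.dumps(asdict(report), indent=2))
```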

The map is part of a larger campaign called “Ban the Scan” that’s meant to educate people around the world on the civil rights dangers of facial recognition. Research has shown that facial recognition systems aren’t as accurate when it comes to analyzing dark-skinned faces, putting Black people at risk of being misidentified. Even when accurate, the technology exacerbates systemic racism because it is disproportionately used to identify people of color, who are already subject to discrimination by law enforcement officials. The campaign is sponsored by Amnesty in partnership with a number of other tech advocacy, privacy, and civil liberties groups.

In the initial phase of the project, which was announced last Thursday, Amnesty and its partners launched a website that New Yorkers can use to generate public comments on the New York Police Department’s (NYPD’s) use of facial recognition. Later in the campaign, the website will let citizens generate Freedom of Information Act requests to discover where and how facial recognition systems are being used in their neighborhood or borough.

Amnesty International says it hopes to launch crowdsourced mapping projects similar to the one in New York in New Delhi this spring, in the West Bank this summer or early fall, and in Ulaanbaatar, Mongolia, in the fourth quarter of the year.

FACIAL RECOGNITION IN NYC

Following the September 11 attacks, the NYPD spent $350 million in federal grants to develop surveillance infrastructure called the Domain Awareness System (DAS). It’s a sprawling network of license plate scanners, physical sensors, and 18,000 CCTV cameras, and it all feeds into a series of NYPD databases. In the years after its completion, the system was used for both counterterrorism and local law enforcement. It wasn’t immediately paired with facial recognition, but the surveillance hardware was in place.

In 2011 the NYPD launched its new Facial Identification Section (FIS), which could search for matches among images in the DAS database of CCTV and other surveillance footage.

The NYPD can use thumbnail stills from that video and run them through software like DataWorks Plus or Clearview AI, which compares them to other image databases, says Matt Mahmoudi, an AI and human rights researcher at Amnesty International. In the case of Clearview AI at least, images from the surveillance database can also be compared against images scraped from social networks.

The NYPD’s own statistics show that it ran almost 10,000 facial recognition searches against its DAS database in 2019, leading to 2,510 “possible matches.” That figure may be significantly understated: BuzzFeed News reported in February 2020 that the department had run more than 11,000 searches using the controversial Clearview AI technology. Clearview AI says it works with 600 law enforcement agencies in the U.S., and DataWorks Plus’s Face Plus technology is widely used by law enforcement agencies across the country.

In general, Mahmoudi says, the content and variety of the images used for facial recognition depend on which agency owns and operates the database. A federal agency may collect facial recognition thumbnails to compare against a database of driver’s licenses for immigration enforcement. A local law enforcement agency may use both license plate images and facial images from social media in an investigation of criminal suspects.

But in all of these cases, the public doesn’t know that their faces are being scanned for facial recognition when they pass through an intersection in New York.

“The purpose of doing this mapping is to make it clear to New Yorkers, and anyone who lives in a city with a police department that uses this technology, that there’s no way out,” says Michael Kleinman, director of Amnesty International’s Silicon Valley Initiative. If asked whether they would be comfortable knowing that the police had a file on every citizen, “most of us would answer, ‘Of course not, that sounds like something that happened in East Germany,’” Kleinman says.

BUILT-IN BIAS

The NYPD used its facial recognition system last summer to track down Derrick Ingram, a Black Lives Matter activist accused of shouting into a police officer’s ear with a bullhorn during a demonstration. Dozens of police officers, some in riot gear, showed up at Ingram’s door in the Hell’s Kitchen neighborhood of New York on August 7. The event caused a public outcry because it was another example of facial recognition being used indiscriminately against people of color. At least three Black men have been wrongly jailed because of faulty facial recognition tech, and more cases may have gone unreported.

“What we have to remember from the case of Derrick Ingram and the cases of many other Black Lives Matter protesters before and including George Floyd is that there was no due process,” Mahmoudi says. The Black Lives Matter movement of the past year was a reaction to systematic bias in the way police interact with Black people, a problem facial recognition can only exacerbate.

“There were extremely egregious cases of police violence, and the way these cases were managed should make you question whether you want to take police forces that are already struggling with questions around justice and discriminatory policing and equip them with facial recognition techniques,” he says. “That’s the bigger question here—do you want to put that kind of discrimination on steroids?”

Compounding the problem further is the fact that the computer vision AI used in facial recognition systems has shown itself to be less reliable when identifying the faces of people of color. This leads to higher rates of mismatch among Black or brown faces than among white faces.
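
A toy calculation shows why per-group error rates matter: the same matcher, at the same decision threshold, can produce very different false-match rates for different groups. The audit counts below are invented for demonstration, not drawn from any real study.

```python
# Toy illustration of per-group false-match rates. The counts below are
# invented for demonstration, not taken from any real audit.
def false_match_rate(false_matches: int, impostor_comparisons: int) -> float:
    """Fraction of different-person comparisons wrongly declared a match."""
    return false_matches / impostor_comparisons

# Hypothetical audit counts for two demographic groups:
audit = {
    "group_a": {"false_matches": 12, "impostor_comparisons": 100_000},
    "group_b": {"false_matches": 120, "impostor_comparisons": 100_000},
}
for group, counts in audit.items():
    print(f"{group}: false match rate = {false_match_rate(**counts):.5f}")
# A 10x gap like this means group_b faces 10x the risk of being
# misidentified at the same decision threshold.
```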

After the Derrick Ingram incident, Mayor de Blasio’s office said it would “reassess” the police department’s guidelines for using facial recognition. But it’s questionable whether the NYPD can credibly police itself on the judicious use of facial recognition. That’s one reason Amnesty and other civil rights groups believe the city council needs to pass a law restricting use of the technology.

If the New York City Council doesn’t move quickly, the federal government may move first, providing a set of regulations on how law enforcement at all levels can use the technology. A federal bill might also supersede the growing patchwork of local and state laws.

Such a bill, the Facial Recognition and Biometric Technology Moratorium Act of 2020, was introduced in the Senate last June by Senators Ed Markey (D-MA) and Jeff Merkley (D-OR) but failed to advance in 2020. The bill, as written, would prevent federal funds from being spent on facial recognition technology or any other biometric surveillance system. Federal agencies, then, could not contract for surveillance systems for their own use, nor could Congress or federal agencies grant money to cities and states for such systems.

But it’s a new year, with a new president and a new Congress. Markey and Merkley’s bill, or some version of it, could yet see the light of day.

“Democrats control the House and Senate and they campaigned on a message of racial justice,” says Evan Greer, who leads the progressive tech advocacy group Fight for the Future.

“They have absolutely no excuse not to pass the common sense moratorium legislation that stops the ongoing use of this racist, discriminatory technology so that we can have a real debate about what role, if any, it can play in a just society,” she says.

About the author

Fast Company Senior Writer Mark Sullivan covers emerging technology, politics, artificial intelligence, large tech companies, and misinformation. An award-winning journalist based in San Francisco, Sullivan has written for Wired, Al Jazeera, CNN, ABC News, CNET, and many others.
