Facial recognition technology is inevitable—it’s time we make it human-centered

As big tech companies press pause on developing this surveillance tool, others are racing to commercialize it.


By Maelle Gavet

On the back of the Black Lives Matter movement, IBM decided to get out of the facial recognition (FR) business altogether. Amazon announced a one-year moratorium on police use of its FR software, Rekognition, and Microsoft declared that it would not sell its FR technology to police until there is a federal law to regulate it.

This was celebrated by many as an important step, given how rapidly this technology is becoming one of the most powerful surveillance tools ever invented. In addition to the privacy implications of requiring mass databanks of faces for the technology to be effective, it has been shown to reinforce racial and gender bias. The ACLU ran a test of Rekognition in 2018 and concluded that “nearly 40 percent of Rekognition’s false matches in our test were of people of color.” In 2019, a study of Rekognition by researchers Inioluwa Deborah Raji (University of Toronto) and Joy Buolamwini (MIT) concluded that the software misclassified the gender of darker-skinned women 31.4% of the time (versus 0% for lighter-skinned men). And while it’s heartbreaking that it took the murder of George Floyd and massive worldwide unrest for big companies to question how their technology reflects—and contributes to—systemic injustices, later is always better than never. Significantly more needs to be done, though, to course-correct the use of FR.

To start, these companies were never big players in the police facial recognition market. The main providers (NEC, Clearview AI, Idemia, Cognitec, Vigilant Solutions, Rank One Computing) have either remained silent or doubled down. The CEO of Clearview AI, for example, recently explained that the company will stay in the business because he “strongly believe[s] in protecting our communities,” though he said he’d work with policymakers to develop protocols for the technology. There are many other companies working on similar products, in part because the technology itself is relatively easy to develop. The real challenge is access to enough data to train an FR system, and governments and law enforcement control some of the largest available data sets. If Microsoft, Amazon, and IBM don’t want to provide their technology to law enforcement and government, others will—and already do.

Equally troubling is the fact that consumers are giving away this data as they blithely integrate FR technology into their daily lives. When it comes to balancing ease of use with privacy concerns, consumers almost always choose the former: we leave our personal data on social networks, prefer biometric identification systems over passwords, and are slowly but surely warming up to the use of the technology in retail spaces. When it comes to public safety, the picture is even clearer: a 2019 survey by the Center for Data Innovation, a nonprofit, nonpartisan research institute, found that “if it would come at the expense of public safety, then just 18 percent of Americans would agree with limiting surveillance cameras and the same percentage would agree for facial recognition.”

While I would like to see a complete ban of facial recognition for law enforcement agencies around the world, I’m afraid this cat is already out of the bag. A handful of cities have adopted bans, San Francisco and Boston among them, but they will likely be outliers. The pressure to adopt FR technology for law enforcement is just too great.

At the same time, Big Tech’s professed embrace of regulation seems to be less about principle than about avoiding a popular backlash that could lead to more severe bans and hinder the commercialization of the technology. A Microsoft employee, after all, literally wrote Washington’s facial recognition law, SB 6280, which the ACLU has condemned for “threaten[ing] to legitimize the infrastructural expansion of powerful face surveillance technology” and for purporting to put “safeguards around the use of facial recognition technology but do[ing] just the opposite.” It is revealing that none of the major companies involved in FR has signed the pledge put together by the Algorithmic Justice League, committing to address harmful bias, facilitate transparency, and ultimately show value for human life, dignity, and rights.

With this technology inevitably making its way into both public and private hands, it’s time to put certain guardrails in place to protect society.

First, we need a system that protects civil liberties: People need clear information about when and how FR is being used. They also need the option to opt out and to appeal algorithmic harms and bias—or even to hold the maker of the system liable, not just the governmental or commercial entity using it.

Second, we need to quickly improve the representativeness of the data these systems use and the accuracy of their recommendations. Ideally, each company should communicate openly and actively about how it is fighting the inherent biases in its data sets and eliminating false positives and negatives, and the media and society at large should hold them accountable for doing so. Unfortunately, it will likely take governments requiring tech companies to open their code and data to audits for any real transparency to happen.

There is no doubt that FR technology is coming. Many of the tech companies working on it are trying to find the right balance between technological progress, commercial interests, and ethics. More than ever, it is critical that we—as citizens, users, and consumers—remind them to keep the interest of humanity solidly at the center of the technology they’re building and the strategic choices they’re making. We don’t need less tech—we need more empathetic tech.


Maelle Gavet has worked in technology for 15 years. She served as CEO of Ozon, an executive vice president at Priceline Group, and chief operating officer of Compass. She is the author of a forthcoming book, Trampled by Unicorns: Big Tech’s Empathy Problem and How to Fix It.
