
Meet the computer scientist and activist who got Big Tech to stand down

Joy Buolamwini’s research helped persuade Amazon, IBM, and Microsoft to put a hold on facial recognition technology. Through her nonprofit Algorithmic Justice League, she’s now battling AI bias in other realms.


Joy Buolamwini got Jeff Bezos to back down.


In June, Amazon announced that it was placing a moratorium on police use of its controversial facial recognition software, Rekognition, which it had sold to law enforcement for years despite the objections of privacy advocates. The move marked a remarkable retreat for Amazon’s famously stubborn CEO. And he wasn’t alone. IBM pledged that same week to stop developing facial recognition entirely, and Microsoft committed to withholding its system from police until federal regulations were passed.

These decisions occurred amid widespread international protests over systemic racism, sparked by the killing of George Floyd at the hands of Minneapolis police. But the groundwork had been laid four years earlier, when Joy Buolamwini, then a 25-year-old graduate student at MIT’s Media Lab, began looking into the racial, skin type, and gender disparities embedded in commercially available facial recognition technologies. Her research culminated in two groundbreaking, peer-reviewed studies, published in 2018 and 2019, that revealed how systems from Amazon, IBM, Microsoft, and others were unable to classify darker female faces as accurately as those of white men—effectively shattering the myth of machine neutrality. 

Today, Buolamwini is galvanizing a growing movement to expose the social consequences of artificial intelligence. Through her nearly four-year-old nonprofit, the Algorithmic Justice League (AJL), she has testified before lawmakers at the federal, state, and local levels about the dangers of using facial recognition technologies with no oversight of how they’re created or deployed. Since George Floyd’s death, she has called for a complete halt to police use of face surveillance, and is providing activists with resources and tools to demand regulation. Many companies, such as Clearview AI, are still selling facial analysis to police and government agencies. And many police departments are using facial recognition technologies to identify, in the words of the New York Police Department, “individuals that have committed, are committing, or are about to commit crimes.” “We already have law enforcement that is imbued with systemic racism,” Buolamwini says. “The last thing we need is for this presumption of guilt of people of color, of Black people, to be confirmed erroneously through an algorithm.” (This isn’t a hypothetical: The ACLU of Michigan recently filed a complaint against the Detroit Police Department on behalf of a man who was wrongly arrested for shoplifting based on incorrect digital image analysis.)

[Photo: Shaniqwa Jarvis; photographed on location at Windy Films Studio]
For Buolamwini, Big Tech’s pause on developing this technology is not nearly enough. As the Black Lives Matter protests took hold during the summer, she used her platform to call on technology companies to donate at least $1 million each to organizations such as Data for Black Lives and Black in AI that advance racial justice in the tech sector. The AJL released a white paper exploring the concept of an FDA-like authority to oversee facial recognition technologies. And the nonprofit, which has received grants in the past from the Ford and MacArthur Foundations, just received fresh funding from the Sloan and Rockefeller Foundations to create a set of tools to help people report harmful AI systems. “There truly are no safeguards that can guarantee there won’t be an abuse of power of these tools,” Buolamwini says. That’s where the AJL comes in.


Buolamwini was inspired to investigate algorithmic bias when she found herself donning a white mask to code at MIT’s Media Lab. She was creating a whimsical augmented reality project, and the off-the-shelf computer vision technology she was using had trouble detecting her face. When she put on a white mask, obscuring her features entirely, the computer finally recognized her.

The problem was familiar. As an undergraduate at Georgia Tech, Buolamwini had had to “borrow” her roommate’s face in order to teach a robot to play peek-a-boo. Later, at a startup in Hong Kong, she encountered robots with similar issues. At the time, Buolamwini thought that the tech companies would soon fix the problem. At MIT, she realized they didn’t even know there was a problem.


For her master’s thesis, she began testing facial-analysis applications with photos of parliamentarians from Europe and Africa. Though she found gender and racial disparities, she hesitated, at first, to publicize the results. “It’s not easy to go up against some of the biggest tech companies when you know they can deploy all of their resources against you. I am still very aware that I am a young Black woman in America. And in the field of AI, the people I was aiming to evaluate held all the purse strings,” she says. But after the 2016 election, she decided the stakes were too high. 

In 2018, she turned her thesis into her “Gender Shades” paper. Coauthored with Timnit Gebru (who is now part of the Ethical Artificial Intelligence Team at Google), the study showed that the error rates for widely used systems, including those of IBM and Microsoft, were significantly higher for darker-skinned female faces than for lighter-skinned male faces, a gap as large as 34% in IBM’s case. A year later, Buolamwini and fellow researcher Deborah Raji, then an MIT intern and today a fellow at the AI Now Institute, published the “Actionable Auditing” study, which showed some improvement in the algorithms of the companies in “Gender Shades” and revealed troubling flaws in Amazon Rekognition, which misclassified women as men 19% of the time, and darker-skinned women as men 31% of the time.

Within a day of receiving the “Gender Shades” findings, in 2018, IBM committed to addressing AI bias. Microsoft’s chief scientific officer, Eric Horvitz, says, “It was immediately all hands on deck for us.” Amazon, however, responded to “Actionable Auditing” by trying to discredit the study, prompting more than 70 AI researchers to publish a letter last April supporting the research and calling on the company to stop selling the software to law enforcement.

With these studies, Buolamwini essentially helped found a new field of academic research. “She was the first person to realize that this problem exists, to talk about it, and do academic work around it until the powers that be took notice,” says Kade Crockford, the director of the Technology for Liberty Program at the ACLU of Massachusetts, who worked with Buolamwini to advocate for Boston’s recent ban on using facial recognition for surveillance. “Before Joy’s research, people would say with a straight face, ‘Algorithms can’t be racist.’” At the end of 2019, the Commerce Department’s National Institute of Standards and Technology completed its largest audit of algorithmic bias in commercially available facial recognition technology. It tested the algorithms of nearly 100 companies and found false-positive rates for one-to-one matching of Asian and African American faces that were between 10 and 100 times higher than for those of Caucasian faces.

Buolamwini’s research speaks for itself. But by lending her own voice to the cause, she has given it more urgency. She’s aware of the power of her personal story: that of a Rhodes Scholar, Fulbright fellow, and accomplished computer scientist, forced to wear a blank white face mask just to do her work. Indeed, images of her in the mask appear everywhere, from her 2016 TED Talk, in which she introduced the AJL (the talk has now been viewed 1.3 million times), to the new documentary Coded Bias, which traces Buolamwini’s evolution from MIT researcher to digital activist.

“It’s almost a cliche in advocacy work that people don’t necessarily remember what you say, but they always remember how you made them feel,” says Crockford. “I think Joy instinctively understands this. When she talks about her auditing of these algorithms, she always begins by telling the story of how she discovered that there was a problem.” Crockford describes a recent meeting the ACLU had with a law enforcement official in Massachusetts to ask for support for a statewide moratorium on government use of facial recognition technology. According to Crockford, the first words out of the official’s mouth were, “I was just at a talk by this incredible woman who said that this technology couldn’t even see her face and she had to put on a white mask.” The official signed on. 


Raji, who was the lead author of the “Actionable Auditing” report, credits Buolamwini with finding ways to translate academic research for the masses. “In the machine-learning community, even when researchers have findings that are relevant to policy or the public, it can be difficult for them to speak to a broader audience,” she says. She notes the attention that Buolamwini put into developing the slides for their 2019 report and the powerful images she used to illustrate “Gender Shades.” “Even the language that she uses—terms like ‘pale male’ data sets or the ‘undersampled majority’—that’s really poetic and intentionally designed that way,” says Raji.

When Buolamwini testified before the House Oversight Committee last spring, she spoke not just about the problems embedded in algorithms, but also about the people and communities harmed by them, such as a Muslim college student who was misidentified as a terrorist suspect and a group of Brooklyn tenants whose landlord tried to install a facial recognition entry system in their rent-stabilized buildings. Her testimony culminated in an exchange with Congresswoman Alexandria Ocasio-Cortez that succinctly made the connection between algorithmic bias and inequality in the criminal justice system. The clip of their conversation quickly went viral. “Joy has the rare ability to articulate not just the science, but why it matters,” says Coded Bias director Shalini Kantayya.

Buolamwini draws further attention to AI’s pitfalls through spoken-word pieces. Her 2018 video, “AI, Ain’t I a Woman?,” highlights iconic Black women, from Sojourner Truth to Michelle Obama, who are misclassified by AI. The work has more than 50,000 views on YouTube and was part of the AI: More Than Human exhibition that debuted at London’s Barbican Centre last April. In 2019, when the EU Global Tech Panel wanted to warn defense ministers about the dangers of using image-recognition technology to guide lethal autonomous weapons, it played them the video. “Creating art gets the conversation going in a way that might not be achieved with an hour-long talk or a research paper,” Buolamwini says.

She is now expanding her reach. The AJL, which had previously operated as a loose coalition of researchers and advocates, recently hired its first director of policy and partnerships, Aaina Agarwal. And the organization is launching perhaps its most ambitious initiative yet: the Algorithmic Vulnerability Bounty Project, a set of tools that will help people report biases and harms caused by AI, inspired by the kind of bug bounty programs used to find security flaws in software. “We want to minimize the harms of these systems,” says MIT professor Sasha Costanza-Chock, who serves as Senior Research Fellow at the AJL. “But we want that work to be led by individuals and community-based organizations that represent people [who have] historically been harmed by such systems.”

“We have a voice and a choice in the kind of future we have,” Buolamwini says. And when it comes to AI injustices, her voice resonates.