
Why Timnit Gebru wants AI giants to think small

Years after she was ousted from Google, Gebru is still exposing the dangers of building ever-larger language models.

[Photo: Kimberly White/Getty Images; Marcel Strauß/Unsplash; rawpixel; Markus Spiske/Unsplash; Michael Dziedzic/Unsplash]

By Issie Lapowsky · 3-minute read

Timnit Gebru used to believe it was possible to curb tech giants’ worst impulses from the inside out. 

She tried it herself at Microsoft, where in 2018 she coauthored a seminal paper revealing just how badly commercial facial recognition systems—including Microsoft’s—performed on the faces of Black women. The paper did prompt Microsoft to improve its system’s accuracy, but Gebru viewed that as a half-step when the company could have stopped developing facial recognition technology altogether.

She tried bringing that level of accountability to Google, where she went to work in the fall of 2018 as co-lead of the search giant’s ethical AI team. There she cowrote another paper outlining the dangers of large language models—the potential for bias, the environmental costs of training them, the labor exploitation that goes into building them. Google infamously tried to force Gebru to take her name off the paper, a standoff that escalated into her sudden and very public ouster from the company in December 2020.

It was these experiences—witnessing firsthand the limits of driving big change from inside a tech giant—that inspired Gebru to launch her own independent research group, the Distributed Artificial Intelligence Research Institute (DAIR). The institute is sponsored by the public-interest nonprofit Code for Science & Society and has attracted other ex-Googlers, including Alex Hanna, now DAIR’s director of research. Since the institute launched in 2021, Gebru has sought to create a haven where independent AI researchers can honestly and openly scrutinize their industry without worrying about any trillion-dollar company’s bottom line. Over the past year, as companies like OpenAI, Meta, Google, and Microsoft have made ever-greater claims about the capabilities of large language models, Gebru has emerged as a uniquely credible critic of the industry’s overpromises and of the harm already flowing from the recent surge in AI hype.

For Gebru, one of the biggest problems is that these companies are trying to build such enormous language models in the first place, with no specified purpose in mind. “When you’re trying to build something like a one-size-fits-all model for every kind of scenario, you’ve already lost in terms of safety,” she says. “You can’t even ask the question: What is this for? What should it not be used for? What are the inputs? What are the outputs? You’ve not scoped it right.”

Any language model that claims to be all things to all people, Gebru argues, will almost certainly be less effective than a smaller one purpose-built for a given task or community. Machine translation is one example: Gebru says smaller models, trained specifically on a given language, often outperform gigantic models that purport to translate hundreds of languages at once but wind up doing a shoddy job with nondominant ones. “When you know your context and your population and you curate your datasets for that reason, you build small,” says Gebru, who recently coauthored a conference paper poking holes in Meta’s own machine translation claims regarding certain Eritrean and Ethiopian languages.

“Building small,” as Gebru puts it, stands to benefit not just the end users of AI technology, but also the much broader spectrum of companies working on these tools in and for communities around the world. That’s the harm behind the hype: When a select few companies in Silicon Valley promise more than their technology can actually deliver, Gebru argues, it makes it even harder for smaller companies to compete. But it doesn’t have to be that way. 

“I want to show people that there is another path,” Gebru says. “Wherever we are is not inevitable.”


This story is part of AI 20, our monthlong series of profiles spotlighting the most influential people building, designing, regulating, and litigating AI today.


ABOUT THE AUTHOR

Issie Lapowsky is a journalist covering the intersection of tech, politics, and national affairs.

