
Stanford’s Fei-Fei Li is pushing the tech industry to build humanity into AI models

A codirector of the university’s influential Institute for Human-Centered AI, Li has been a key voice for safety and alignment amid this year’s ‘AI boom.’

[Photos: Drew Kelly/Stanford HAI; ThisIsEngineering/Pexels; Michael Dziedzic/Unsplash; Rawpixel]

By Mark Sullivan | 2 minute read

Years before generative AI became the buzziest term in tech, there was a smaller wave of interest that hit around 2012, back when the industry was abuzz about AI image classifiers (that is, models that could recognize and label images). That year, a neural network called AlexNet beat out existing methods of classifying images by a wide margin. AlexNet and its successors were only possible because someone built a massive image dataset to teach them—that was ImageNet, a project started back in 2006 by Fei-Fei Li, then an assistant professor at Princeton. Suddenly everyone was talking about “deep learning.”

Li is now considered one of the brightest minds in AI, mentioned in the same sentence as Geoffrey Hinton, Yann LeCun, and the like. After developing ImageNet at Princeton, Li moved on to Stanford University in 2009, where she would later become director of the school’s Artificial Intelligence Lab. In January 2017, she took a 22-month sabbatical to serve as chief AI/ML scientist at Google Cloud, returning to Stanford in fall 2018 as codirector of its influential Institute for Human-Centered Artificial Intelligence (HAI), where she remains to this day.

“Deep from the bottom of my heart, I’m a scientist slash technologist,” says Li. “So it is still building the technology I love, especially now with my students, that really is the source of my energy. I’m still so curious about AI; it’s such an awesome field and there’s so many unanswered questions.”  

As its name suggests, HAI works closely with the tech industry to champion values of openness, transparency, safety, and explainability in the creation of AI. Promoting those considerations as central, and early, features of AI research and development has grown more urgent over the past two years, as the technology has advanced rapidly and its short-term and long-term risks have become better understood.


“The concerns are real. . . . So I’m not delusional about this at all,” she says. “It’s a very, very powerful technology, just like the inflection points that humanity has experienced in our civilization’s history, whether it’s fire or electricity or the PC—this is that scale and depth.”

Yet Li is in no sense an AI doomer. She doesn’t advocate halting research on large models. Rather, her cause is keeping humans, and human values, at the center of the process—an ideal that doesn’t always take root in the profit-hungry world of Silicon Valley. “I don’t know where we collectively are going to come out,” she says, “but I think it’s so important to focus our energy on human-centered AI.”


This story is part of AI 20, our monthlong series of profiles spotlighting the most influential people building, designing, regulating, and litigating AI today.



ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

