

Watch: Google’s AI boss focuses on ethics in wake of Project Maven controversy

Fei-Fei Li, the founder of AI4ALL, wants to educate the next generation of socially responsible data scientists, but she's dogged by the ethics controversy at Google, where she leads Cloud AI research.


Fei-Fei Li
[Photo: Hubert Vestil/Getty Images for SXSW]

By Sean Captain | 4 minute read

The growing activism among tech workers in Silicon Valley reached a tipping point in June when Google, under intense pressure from employees, announced that it would not renew its contract to provide image-recognition technology to the Pentagon's Project Maven when the deal expires in 2019. The controversial military AI program aims, among other goals, to improve the efficiency of drone attacks.

The tech giant's internal debates over the issue surfaced in emails obtained by publications including the New York Times. They showed that one of the company's top minds, Google Cloud chief AI scientist Fei-Fei Li, had raised alarms about Project Maven early on. But her motives appeared to center on the potential for negative publicity rather than ethical concerns.

Fast Company recently asked Fei-Fei to clarify her views on the controversy but got only a partial answer that didn’t seem to bridge the gap between ethical aspiration and business reality.

[Photo: Sean Captain]

Ethicist on the hot seat

The attitude expressed in the emails seemed particularly disconcerting given Fei-Fei's longstanding commitment to a human-centered approach to new technologies. Holding a second job as head of Stanford University's AI Lab, Fei-Fei has dedicated years to advancing ethical uses of artificial intelligence. Her nonprofit, AI4ALL, aims to increase gender, ethnic, and cultural diversity among the next generation of AI professionals by recruiting high school students for summer training at top universities.

“Anytime humanity goes through a new wave of innovation and technological transformation, there are people who are hurt, and there are issues as large as geopolitical conflicts,” she told a few dozen tech and public service professionals at this week’s Summit on Artificial Intelligence and Its Impact on Communities, in Mountain View, California. The event was cohosted by AI4ALL and Dream Corps, a multifaceted public-service organization founded by political polymath Van Jones. (Fast Company attended the event and also spoke individually with Fei-Fei and Jones.)

But her concerns over Project Maven, as revealed in the emails, appeared to show more concern for optics than harmful effects. “This is red meat to the media to find all ways to damage Google,” she is reported to have written in an email exchange with Scott Frohman and Aileen Black from Google’s defense sales team. (Fei-Fei declined Fast Company‘s request to confirm or deny the veracity of the emails.)

At the summit, I asked Fei-Fei how she squares her concern about harmful technology with her involvement with Google and Project Maven. She declined to address the topic directly, touching on it only briefly before launching into a general discussion about how to change the culture of Silicon Valley.

“It’s very important that academia, industry, nonprofits all join this conversation about AI and take part in responsible research and development of AI,” she said. Fei-Fei then complimented Jones, who had just interviewed her, for pushing the tech industry on issues of ethics and inclusion. “Whether it’s Google or other companies, they’re starting to roll out AI principles and do that self-introspection,” she said, adding, “I personally want to see more of that.”

Google has revised its AI principles to disavow lethal applications of the technology, and better image recognition could also have beneficial effects in drone warfare, such as decreasing the chance of hitting innocent bystanders. But critics are taking a wait-and-see attitude on the company.

One AI expert at the event told me that they had been “very disappointed” by the language in those emails. In one exchange, Fei-Fei is reported to have written that Google Cloud “has been building our theme on democratizing AI in 2017, and Diane [Greene, chief executive of Google Cloud] and I have been talking about Humanistic AI for enterprise,” adding “I’d be super careful to protect these very positive images.”


A hero to many

Fei-Fei's altruism seems like far more than posturing, with a faith in kids' ability to succeed that is rooted in her own inspiring, improbable success story. Coming to the U.S. from China with her parents at age 16, she spent her early years in the country working odd jobs but went on to earn a PhD from Caltech, specializing in image recognition. (She created the ImageNet database and benchmark that have been pivotal in advancing computer vision technology.) The only female faculty member at the Stanford Artificial Intelligence Lab when she joined in 2009, she became its director in 2014.

“Fei-Fei Li is the Rosa Parks of the new millennium,” Van Jones tells me after the event. “I mean, she is the person who is pulling us forward into a world of more inclusion, more wisdom, more appreciation, more understanding.” Fei-Fei began what became AI4ALL as a girls’ AI summer camp at Stanford in 2015. It progressed to a formal nonprofit organization in 2017 and now operates out of AI labs at six universities. “[UC] Berkeley targets the Oakland area school district . . . low-income students,” says Fei-Fei, giving one outreach example. “Princeton targets racial minority schools.” Students have come from as far away as Saudi Arabia.

“One of our alumni is the daughter of farmworkers, grew up in a very low-income family, first-generation Mexican-American,” says Tess Posner, AI4ALL’s CEO, in a conversation she, Fei-Fei, and I had before the summit. “She’s using AI to detect water quality issues, which has been an issue that’s affected her community.”

But AI clearly has harmful uses, too, as Fei-Fei herself acknowledged. “In human civilization, we have seen so many times that tools and technologies have been used in ways that we’re not proud of,” she said at the event. Silicon Valley’s dilemma now is to decide what uses of AI it can feel proud of.


ABOUT THE AUTHOR

Sean Captain is a business, technology, and science journalist based in North Carolina. Follow him on Twitter @seancaptain.

