
As we train more and more AI systems, we need to make sure we mitigate the biases that inevitably creep in.

When It Comes To AI, Facebook Says It Wants Systems That Don’t Reflect Our Biases

[Screenshot: Facebook]

By Daniel Terdiman | 3 minute read

Artificial intelligence is a big part of nearly everything Facebook does–from recognizing images to detecting objectionable content to helping people find jobs.

But with the power of this technology–especially at the scale of Facebook and its 2 billion users–comes responsibility, says Facebook data scientist Isabel Kloumann, and in particular the need to keep bias from creeping into its AI systems. Of course, the need for Facebook to take more responsibility for its technology has been front and center in the wake of its many security- and privacy-related controversies in recent weeks and months.

Taking responsibility for handling AI can’t, and won’t, happen automatically. It will require Facebook, and the countless other companies increasingly relying on AI across many industries, to take proactive steps toward what Kloumann, speaking today at F8, Facebook’s developers conference, described as building fair and unbiased algorithms. And that begins with ensuring that the teams behind those systems are themselves as diverse as possible. “If AI only learned from a small group,” she says, “we will only see a narrow view.”

Even such efforts to increase diversity are difficult, she adds, because everyone has their own unconscious biases. For example, no one can eliminate the way they see other people–their skin color, their weight, their gender, and so on. And whatever associations we attach to those things, they’re no doubt different from those of the people around us. “We need to understand and mitigate our biases,” Kloumann says, “so we don’t pass them on [and] so our AI can do better.”

That’s why Facebook uses an external review process to bring in a multitude of voices that help the company ensure there’s an ethical framework to its AI systems–people with expertise that ranges from technology to social science and beyond. The goal? To make sure Facebook’s AI systems have the most positive impact on people, Kloumann says.

Avatars And Jobs

As it advances its social virtual reality technology, Facebook is trying to build more realistic avatars, and it’s relying on AI to assist with that. But in order to have the most diverse range of possible avatars, it needs to train its AI on a huge diversity of actual faces, she explains.

That means substantial manual work to train the system–adding labels by hand for the many attributes our faces have: hair, skin color, mouth shape, and so on. Facebook needs to make sure those labels are accurate and unbiased.
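Kloumann didn’t share the internals of that labeling pipeline, but the basic sanity check behind her point is straightforward: before training, audit how the hand-applied labels are distributed, so a badly skewed dataset is caught before a model learns from it. Here’s a minimal Python sketch of that idea; the attribute names and records are hypothetical, not Facebook’s actual schema.

```python
from collections import Counter

# Hypothetical hand-labeled face records; the attribute names and values
# are illustrative, not Facebook's actual labeling schema.
labeled_faces = [
    {"skin_tone": "dark", "hair": "curly", "mouth": "wide"},
    {"skin_tone": "light", "hair": "straight", "mouth": "narrow"},
    {"skin_tone": "medium", "hair": "wavy", "mouth": "wide"},
    # ...thousands more hand-labeled examples in practice
]

def audit_labels(records, attribute):
    """Print how often each value of an attribute appears, so a heavily
    skewed training set is flagged before a model learns from it."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    for value, n in counts.most_common():
        print(f"{attribute}={value}: {n} ({n / total:.1%})")

audit_labels(labeled_faces, "skin_tone")
```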

The same is true of the way it applies AI to things like ranking news articles, and many other areas. And that’s why the company has developed what Kloumann calls bias mitigation guidelines–to figure out where bias creeps in and try to keep it from doing so. “You need to ask what has your AI learned from” all the data you feed it, she says.

One area she says is important to get right is job recommendations. The majority of job creation in America is in small businesses, and people use Facebook’s job hunting tools to find employment opportunities at many such businesses. Kloumann says the trick with AI was to make sure that the tools weren’t biased in favor of any demographic groups over others. “We want to ensure our job algorithms are providing opportunity equally,” she says, across all genders, ages, sexual orientations, and other demographics.
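Kloumann didn’t spell out how those checks work under the hood, but the simplest version of the idea is to compare recommendation rates across demographic groups and flag large gaps. Below is a minimal Python sketch under that assumption; the field names and the interaction log are hypothetical, not Facebook’s actual data model.

```python
from collections import Counter

def recommendation_rates(events, group_key):
    """Fraction of users in each demographic group who were shown a job
    recommendation; a large gap between groups is a red flag for bias."""
    shown, total = Counter(), Counter()
    for event in events:
        group = event[group_key]
        total[group] += 1
        if event["was_recommended"]:
            shown[group] += 1
    return {group: shown[group] / total[group] for group in total}

# Hypothetical interaction log, for illustration only.
events = [
    {"gender": "female", "was_recommended": True},
    {"gender": "female", "was_recommended": False},
    {"gender": "male", "was_recommended": True},
    {"gender": "male", "was_recommended": True},
]
print(recommendation_rates(events, "gender"))
# {'female': 0.5, 'male': 1.0} -- a gap this size would warrant a closer look
```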

One thing Facebook has done is build fairness checks into tools available through FBLearner Flow, the system the company’s engineers use to find libraries for their AI projects. That means any engineer can use preexisting tools to evaluate their projects for bias and draw on existing best practices.

But these are still early days, Kloumann says, and it will take much more work, both inside Facebook and elsewhere, to solve these problems. The most important questions that have to be answered sit at the intersection of different communities–mathematics, social science, ethics, and others. Everyone will have to work together to solve them, she says.

AI is a powerful and transformative technology, Kloumann says, and harnessing it for social good requires that everyone work together. “AI isn’t exactly our child,” she says, “but it is our responsibility, and that belongs to all of us. So let’s work together to teach it.”



ABOUT THE AUTHOR

Daniel Terdiman is a San Francisco-based technology journalist with nearly 20 years of experience. A veteran of CNET and VentureBeat, Daniel has also written for Wired, The New York Times, Time, and many other publications.

