What happens when the training data that feeds our artificial intelligence is limited or flawed? We get biased products and services.
Imran Chaudhri, who created the iPhone’s user interfaces and interactions, illustrated this point bluntly onstage at the Fast Company European Innovation Festival in Milan today. “Siri never worked for me,” acknowledged the British-American designer, who speaks with an accent, “and we worked on it.”
Chaudhri, who spent more than 20 years at Apple before cofounding the still-in-stealth startup Humane, was speaking on a panel about the pursuit of inclusive AI with Michael Jones, the senior director of product at Salesforce AI Research. As evidence of AI’s susceptibility to bias increases, the pair agreed that having diverse data sets is essential to creating automated systems that transcend—rather than replicate—the flaws of the real world.
“We have an implicit bias in society today,” Chaudhri said. “And because so much of [AI] is a mimicry of our world, [computers] inherit the same problems of our world.”
As Jones explained, if all your training data for an automatic speech recognition service comes from white men from the Midwest, you’re going to alienate anyone who speaks English as a second language or with an accent. And if you develop a hiring tool to look for résumés that resemble those of your current, mostly male C-suite executives, you’ll end up with a system that automatically penalizes someone who cites her achievements in a “women’s chess club”—an apparent reference to an internal test AI at Amazon that reportedly weeded out female job candidates before it was abandoned in 2015.
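The résumé-screening failure Jones described can be sketched with a toy model. Everything below is invented for illustration (it is not Amazon’s actual system or data): a naive word-scoring screener is fit to historical hiring decisions, and because the word “women’s” happens to appear only in past rejections, the model learns to penalize it outright.

```python
from collections import Counter
import math

# Hypothetical historical hiring data, skewed the way Jones describes:
# past decisions reflect a mostly male pipeline. Purely illustrative.
accepted = [
    "led engineering team captain of chess club",
    "chess club president shipped backend systems",
]
rejected = [
    "captain of women's chess club led engineering team",
    "women's coding society founder shipped backend systems",
]

def word_scores(accepted, rejected, smoothing=1.0):
    """Per-word log-odds of a word appearing in an accepted resume."""
    acc = Counter(w for r in accepted for w in r.split())
    rej = Counter(w for r in rejected for w in r.split())
    vocab = set(acc) | set(rej)
    total_acc = sum(acc.values()) + smoothing * len(vocab)
    total_rej = sum(rej.values()) + smoothing * len(vocab)
    return {
        w: math.log((acc[w] + smoothing) / total_acc)
         - math.log((rej[w] + smoothing) / total_rej)
        for w in vocab
    }

scores = word_scores(accepted, rejected)
# "chess" appears on both sides of the historical data, but "women's"
# appears only in rejections, so it ends up with a negative weight:
# the model has learned to penalize the gendered word itself.
print(scores["women's"] < 0)
print(scores["chess"] > scores["women's"])
```

The point of the sketch is that no one wrote a rule against the word “women’s”; the penalty emerges automatically from skewed historical data, which is exactly why the composition of the training set matters.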
Jones and Chaudhri agreed on the importance of hiring diverse and multidisciplinary design and engineering teams to ensure that companies have people who can think through the potential consequences of what they’re building.
“You have to design [AI] knowing that it’s imperfect, that it’s in a very early stage right now,” said Chaudhri. “What that means is that the training [for an AI system] is very similar to the training we use as humans. The parallel is you training a child: The ignorance that a child has is very similar to the ignorance that a computer has, and a computer will take in a lot of [your] biases.”
There’s also the problem of how AI is deployed. Sometimes, Jones said, a company has to set hard limits on how it allows its tech to be used by others. “A lot of this comes down to the values of your company,” he said.
As details emerge of how ICE is using facial recognition to scour state driver’s license databases for undocumented immigrants, Jones cited Salesforce’s acceptable use policy, which stipulates that customers can’t use the company’s technology for facial recognition applications. “You could argue there are some great applications of facial recognition, perhaps reuniting people who have been separated,” he acknowledged, “but we’ve taken a stance, saying, ‘No, you can’t use our technology to power facial recognition.'”
Jones said this more cautious approach to developing algorithmic tools is a new concept in Silicon Valley. “In Silicon Valley it was always, ‘Can we do it?’ The tide is now turning, and a lot more people in the Valley are asking, ‘Should we do it?'”