
TECH

Why The Military And Corporate America Want To Make AI Explain Itself

Understanding why AI makes certain decisions, and why we should trust them, is beyond most humans' grasp. But experts are developing systems to explain those decisions in simpler terms.


[Photo: Flickr user Gabriel Saldana]

By Steven Melendez | 4 minute read

Modern artificial intelligence is smart enough to beat humans at chess, understand speech, and even drive a car.

But one area where machine-learning algorithms still struggle is explaining to humans how and why they’re making particular decisions. That can be fine if computers are just playing games, but for more serious applications people are a lot less willing to trust a machine whose thought processes they can’t understand.

If AI is being used to make decisions about who to hire or whether to extend a bank loan, people want to make sure the algorithm hasn’t absorbed race or gender biases from the society that trained it. If a computer is going to drive a car, engineers will want to make sure it doesn’t have any blind spots that will send it careening off the road in unexpected situations. And if a machine is going to help make medical diagnoses, doctors and patients will want to know what symptoms and readings it’s relying on.

“If you go to a doctor and the doctor says, ‘Hey, you have six months to live,’ and offers absolutely no explanation as to why the doctor is saying that, that would be a pretty poor doctor,” says Sameer Singh, an assistant professor of computer science at the University of California at Irvine.

Singh is a coauthor of a frequently cited paper published last year that proposes a system for making machine-learning decisions more comprehensible to humans. The system, known as LIME, highlights parts of input data that factor heavily in the computer’s decisions. In one example from the paper, an algorithm trained to distinguish forum posts about Christianity from those about atheism appears accurate at first blush, but LIME reveals that it’s relying heavily on forum-specific features, like the names of “prolific posters.”
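The core idea is approachable even in code form. The sketch below is a heavily simplified, hypothetical version of a LIME-style explanation for a two-class text classifier: it perturbs the input by dropping words, asks the black-box model for predictions, and fits a small weighted linear model whose coefficients suggest which words mattered most. The `predict_proba` function and other names are placeholders, and the actual open-source LIME implementation handles far more detail.

```python
# Minimal sketch of a LIME-style local explanation for a text classifier.
# predict_proba stands in for any black-box model (e.g., an sklearn pipeline);
# the perturbation and weighting are simplified from the real LIME method.
import numpy as np
from sklearn.linear_model import Ridge

def explain_text(text, predict_proba, num_samples=500, top_k=5, seed=0):
    rng = np.random.default_rng(seed)
    words = text.split()
    masks = rng.integers(0, 2, size=(num_samples, len(words)))  # which words to keep
    masks[0] = 1  # always include the original, unperturbed text
    perturbed = [" ".join(w for w, keep in zip(words, m) if keep) for m in masks]
    probs = np.asarray(predict_proba(perturbed))       # black-box predictions, shape (n, 2)
    weights = np.exp(-(1 - masks.mean(axis=1)))        # favor samples close to the original
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, probs[:, 1], sample_weight=weights)  # local linear approximation
    ranked = np.argsort(-np.abs(surrogate.coef_))[:top_k]
    return [(words[i], surrogate.coef_[i]) for i in ranked]   # most influential words
```

Run against the Christianity-versus-atheism classifier from the paper, a surrogate like this is what surfaces the telltale forum-specific words.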

Developing explainable AI, as such systems are frequently called, is more than an academic exercise. It’s of growing interest to commercial users of AI and to the military. Explanations of how algorithms are thinking make it easier for leaders to adopt artificial intelligence systems within their organizations—and easier to challenge them when they’re wrong.

“If they disagree with that decision, they will be way more confident in going back to the people who wrote that and say no, this doesn’t make sense because of this,” says Mark Hammond, cofounder and CEO of AI startup Bonsai.

Last month, the Defense Advanced Research Projects Agency signed formal agreements with 10 research teams in a four-year, multimillion-dollar program designed to develop new explainable AI systems and interfaces for delivering the explanations to humans. Some of the teams will work on systems for operating simulated autonomous devices, like self-driving cars, while others will work on algorithms for analyzing mounds of data, like intelligence reports.

“Each year, we’ll have a major evaluation where we’ll bring in groups of users who will sit down with these systems,” says David Gunning, program manager in DARPA’s Information Innovation Office. Gunning says he imagines that by the end of the program, some of the prototype projects will be ready for further development for military or other use.

Deep learning, loosely inspired by the networks of neurons in the human brain, uses sample data to develop multilayered sets of huge matrices of numbers. Algorithms then harness those matrices to analyze and categorize data, whether they’re looking for familiar faces in a crowd or trying to spot the best move on a chess board. Typically, they process information starting at the lowest level, whether that’s the individual pixels from an image or individual letters of text. The matrices are used to decide how to weight each facet of that data through a series of complex mathematical formulas. While the algorithms often prove quite accurate, the large arrays of seemingly arbitrary numbers are effectively beyond human comprehension.
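To get a feel for why those arrays defy inspection, consider a toy two-layer network. This is a made-up illustration, not any system mentioned in this story: even at this miniature scale, the learned parameters amount to roughly a hundred thousand numbers with no individual meaning.

```python
# Toy illustration: a two-layer network is just matrix multiplies and nonlinearities.
# The weights below are random stand-ins for values a training algorithm would learn.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((784, 128)), np.zeros(128)   # pixels -> hidden features
W2, b2 = rng.standard_normal((128, 10)), np.zeros(10)     # hidden features -> class scores

def classify(pixels):                          # pixels: a flattened 28x28 image
    hidden = np.maximum(0, pixels @ W1 + b1)   # ReLU layer
    scores = hidden @ W2 + b2
    return np.argmax(scores)                   # predicted class

print(W1.size + b1.size + W2.size + b2.size)   # ~100,000 numbers, none individually meaningful
```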


“The whole process is not transparent,” says Xia “Ben” Hu, an assistant professor of computer science and engineering at Texas A&M and leader of one of the teams in the DARPA program. His group aims to produce what it calls “shallow models”: mathematical constructs that behave, at least in certain cases, similarly to deep-learning algorithms while being simple enough for humans to understand.
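That description echoes a broader practice of fitting interpretable surrogates to black-box models. The sketch below shows the general idea rather than Hu's actual method: a small decision tree is trained to mimic a deep model's predictions, and a fidelity score reports how closely it tracks the original. Here `deep_model` and the unlabeled data are assumed placeholders.

```python
# Rough sketch of fitting a "shallow," human-readable surrogate to a black-box model.
# deep_model is assumed to be any trained classifier with a .predict() method;
# this illustrates the general distillation idea, not the DARPA team's technique.
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_shallow_surrogate(deep_model, X_unlabeled, max_depth=3):
    pseudo_labels = deep_model.predict(X_unlabeled)       # ask the black box for labels
    tree = DecisionTreeClassifier(max_depth=max_depth)    # small enough to read
    tree.fit(X_unlabeled, pseudo_labels)                  # mimic the deep model's behavior
    fidelity = tree.score(X_unlabeled, pseudo_labels)     # how often the tree agrees with it
    return tree, fidelity

# print(export_text(tree, feature_names=feature_names))   # human-readable if/then rules
```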

Another team, from the Bay Area research institution SRI International, plans to use what are called generative adversarial networks. Those are pairs of AI systems in which one is trained to produce realistic data in a particular category and the other is trained to distinguish the generated data from authentic samples. The purpose, in this case, is to generate explanations similar to those that might be given by humans.
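That adversarial setup boils down to a short training loop. The sketch below, written with PyTorch and generic placeholder networks rather than SRI's explanation-generating models, shows the core mechanic: the discriminator learns to separate real samples from generated ones, while the generator learns to fool it.

```python
# Minimal GAN training step: a generator learns to produce samples that a
# discriminator cannot tell apart from real data. The architectures here are
# generic placeholders for illustration only.
import torch
from torch import nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch = real_batch.size(0)
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator: label real data 1, generated data 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real_batch), torch.ones(batch, 1)) +
              bce(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call its output real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```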

The team plans to test its approach on a data set called MovieQA, which consists of 15,000 multiple-choice questions about movies, along with supporting data such as scripts and subtitled video clips.

“You have the movie, you have scripts, you have subtitles. You have all this rich data that is time-synched in situations,” says SRI senior computer scientist Mohamed Amer. “The question could be, who was the lead actor in The Matrix?”

Ideally, the system would not only deliver the correct answer but also let users highlight sections of the questions and answers to see which parts of the script and film it drew on to reach that answer.

“You hover over a verb, for example, it will show you a pose of the person, for example, doing the action,” Amer says. “The idea is to kind of bring it down to an interpretable feature the person can actually visualize, where the person is not a machine-learning developer but is just a regular user of the system.”
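In rough outline, that kind of highlighting amounts to mapping per-segment relevance scores, such as a question-answering model's attention weights, back to the source material. The tiny sketch below is purely hypothetical and uses invented names; it simply ranks script lines by whatever relevance scores a model supplies.

```python
# Hypothetical sketch of grounding an answer in its source text: given per-line
# relevance scores from some QA model, surface the script lines that contributed
# most to the answer. All names here are illustrative.
def top_evidence(script_lines, relevance_scores, k=3):
    ranked = sorted(zip(script_lines, relevance_scores), key=lambda pair: -pair[1])
    return [line for line, _score in ranked[:k]]

# Example (hypothetical inputs): show the three script lines the model leaned on most.
# evidence = top_evidence(matrix_script_lines, model_attention_over_lines)
```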


ABOUT THE AUTHOR

Steven Melendez is an independent journalist living in New Orleans.