Technology was supposed to save the world. Instead, it might end it. Our precious artificial intelligence is a racist job thief that’s gotten us addicted to our phones, and may one day turn the world into goo if we aren’t more careful.
“There’s a question not being emphasized enough: How can we make AI useful to benefit society now, or in the very near term–not just [present] in your daily life, but addressing real challenges,” says Fang. “That’s the reason why I started this new course.”
Fang herself is no stranger to the topic of socially beneficial AI. Her research has helped the U.S. Coast Guard leverage principles of game theory to protect N.Y.C. from terrorism. She’s also worked with several wildlife agencies, using AI to predict where tiger poachers will strike next.
These kinds of AI problems aren’t just funded less than their counterparts in the Valley because they often have no immediately profitable goal in sight; they’re also typically much harder to research than, say, teaching a computer to identify certain types of coats and chairs. “For these kinds of problems, researchers need to delve deep into the problem, the challenge,” says Fang. “It’s not like some problems that have a readily available data set, then you apply a publicly available code package, and the problem is solved. They need more in-depth understanding of the challenges.”
Social problems are messy, and often fueled by anecdotal evidence that can be hard to quantify. During her own work in preventing tiger poaching, Fang gathered data from authorities who ran patrol routes in Africa. The problem was that this data was shaped by human error: patrollers might have missed, or simply not recorded, clues that poaching had happened in an area. All of that had to be accounted for in the code. From this data, the team hoped to build a predictive model of where a poacher would strike next. “Even if you can predict some sort of poaching activities, it’s not always good to just go to areas with high predicted poaching activity!” says Fang. “Because as you change your patrolling strategy, the poachers will react.” And finally, the AI had to craft new patrol routes that took all of this logic into account. But those routes, of course, had to be driven by vehicles that couldn’t clear every environmental obstacle, so the optimal routes had to be defined within the limits of the terrain.
“None of these aspects can be addressed with a publicly available commercial tool, or directly addressed by sitting in an office,” says Fang. “That means we need to talk to experts, understand the problem, and propose solutions to it.”
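To get a feel for why a risk map alone isn’t a patrol plan, consider the toy sketch below. It is not Fang’s actual system, and every number and name in it is made up for illustration: hypothetical risk scores for a few map cells, a simple set of cells that vehicles can reach, and risk-weighted random sampling standing in for the real optimization. The point it captures is the one Fang describes: patrols should favor high-risk areas without becoming predictable, and they can only go where the terrain allows.

```python
import random

# Hypothetical predicted poaching risk for a handful of map cells
# (illustrative numbers only, not real field data).
predicted_risk = {
    "ridge": 0.70,
    "river_bend": 0.55,
    "grassland": 0.20,
    "swamp": 0.40,
}

# Terrain constraint: cells that patrol vehicles can actually reach.
reachable = {"ridge", "river_bend", "grassland"}

def pick_patrol_targets(risk, reachable, patrols=2, seed=None):
    """Choose patrol targets by sampling in proportion to predicted risk,
    rather than always visiting the single highest-risk cell. Randomizing
    keeps the strategy harder for poachers to anticipate and exploit."""
    rng = random.Random(seed)
    # Only consider cells the terrain allows patrols to reach.
    candidates = {cell: r for cell, r in risk.items() if cell in reachable}
    targets = []
    for _ in range(min(patrols, len(candidates))):
        cells = list(candidates)
        weights = [candidates[c] for c in cells]
        choice = rng.choices(cells, weights=weights, k=1)[0]
        targets.append(choice)
        del candidates[choice]  # don't send two patrols to the same cell
    return targets

print(pick_patrol_targets(predicted_risk, reachable, patrols=2, seed=42))
```

In Fang’s real work, the allocation would come from something closer to the game-theoretic models she has applied elsewhere, not simple weighted sampling. But even this toy version shows why a patrol plan can’t just be read off a prediction: it has to anticipate how adversaries react and respect what the terrain permits.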
Fang’s approach might sound obvious, but in the burgeoning world of AI, it’s not. Contemporary AIs make decisions that we cannot even understand or deconstruct, and, more and more, these decisions hurt our society. They stop a person from getting a mortgage because of their race. Or they show a woman an ad for a lower-paying job than the one shown to a man. And if we want better AI, we must start by rethinking how AI is taught in schools. As one industry insider put it to me when describing AI coders, “It’s all just math to them.” Clearly, these numbers need a conscience, which means the people programming them need one, too.
When I ask whether that means she’ll be partnering with nonprofits on student-driven pro bono projects, Fang hints that all such ideas are on the table, but reminds me that the class is a new project of its own, one that will need plenty of optimization year after year. “This is the first semester, so we will keep developing the curriculum,” she says. “I hope this course can inspire the students to think big and deep into what we can help address–not just to amuse or entertain people–but address problems society is facing.”