As debate rages over how to regulate artificial intelligence, Google X Captain of Moonshots Astro Teller has some news: “We’re not going to regulate AI,” he said at MIT Technology Review’s EmTech Digital conference on AI in San Francisco this week. “AI is counting. ‘It’s just statistics’ is about the fanciest [description] I can stomach. It’s just counting. It’s just counting.”
It’s a startling comment from a prominent tech leader, less than two weeks after one of Uber’s self-driving cars struck and killed a pedestrian, renewing calls for stricter policing of AI. But Teller’s larger point is that AI can’t be regulated in isolation: it exists within larger systems and must be regulated as part of the systems in which it operates.
“For the most part, machine learning and artificial intelligence are like saying, ‘Hey, we got electricity in our stuff! Hey, we have transistors in our projects!'” he said. “At X . . . we see [AI] as an enabling technology, and not the point of what we’re doing.”
When you understand AI in those terms, you can start to see the logic of adapting the existing regulatory structures of industries that embrace machine learning, such as transportation and healthcare, rather than regulating the AI itself, an approach echoed by others in the field. Self-driving cars offer a compelling example. “What we’re going to regulate, what we have to regulate, is a shift from how we think about testing systems because [their safety measures are] strong enough–think of a car smashing into a brick wall to make sure the crash-test dummy is safe–to testing the smartness of these systems,” Teller said.
But it could be an uphill battle, as Teller himself admitted. “Society overall has some thinking to do about how do we regulate the safety of the system. But it’s going to be at a holistic level. Not going into a specific part of the code and saying, ‘You have too much AI in there, get some of it out.’”