The first fatality involving an autonomous car shocked the industry this week–even though we still don’t know exactly how it happened. And even if the self-driving Uber car that struck and killed pedestrian Elaine Herzberg in Tempe, Arizona, was not at fault (she wasn’t using the crosswalk), car companies could largely avoid such accidents by doing far more rigorous training in virtual environments before hitting the road.
So says Danny Lange, who developed the machine-learning AI platform that Uber uses across its operations, including in its self-driving car division. Lange left Uber in December 2016 to become VP of AI and machine learning at game engine maker Unity. In a conversation with Fast Company, Lange says he can’t comment on the accident in Tempe, since he has no inside knowledge of the circumstances, but one lesson is clear. “I think personally that there’s too much emphasis on getting cars out on the real street–over, let’s drive billions of miles in a simulated environment,” he says.
It’s not that autonomous carmakers aren’t doing simulations. Most or even all of them are using game engines such as Unity or Unreal to run their car AI systems through complex environments. (Imagine if your driving instructor took you to Grand Theft Auto’s Vice City.) “My point is they probably started a little late, and they’re probably not doing enough of it,” says Lange.
Automakers freely acknowledge that they can’t cover nearly as much mileage in real life as they can in virtual environments. But not only are road miles limited, much of that time is useless, says Lange. “When an Uber vehicle drives down the road, 98 to 99 percent of the data is irrelevant,” he says, “because it’s really just following the car in front of it.” Real-life driving just isn’t all that dangerous, which is fortunate for us humans, but not much help for robots that are trying to learn.
In a virtual environment, an AI driver could be bombarded with darting pedestrians, running dogs, falling trees, cars going the wrong way–anything imaginable. “You can experience the most adversarial situations, and not a single human life would be endangered,” says Lange. Real-life training and testing is still needed, but simulations could turn AIs into much better drivers before they face life-and-death situations.
It wouldn’t just be up to humans to imagine virtual testing scenarios. Lange points to recent improvements in adversarial networks and reinforcement learning. “Imagine taking machine learning to the next level, where you use machine learning to generate those episodes of traffic that challenge the self-driving car,” says Lange.
The two AIs would battle it out, one throwing every possible obstacle in the way of the car, and the other trying to steer the car around them. Reinforcement learning rewards the mayhem-creating AI whenever it causes an accident, encouraging it to dream up ever more harrowing scenarios. But such training also encourages the driver AI to become ever more nimble at avoiding accidents.
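The dueling-AI idea Lange describes can be reduced to a toy sketch. In the hypothetical model below (all details are illustrative assumptions, not anything Uber or Unity has published), an “adversary” agent picks how close to the car a hazard appears, a “driver” agent picks a speed, and a crash occurs when the speed exceeds the hazard distance. Each side learns with a simple epsilon-greedy bandit update: the adversary is rewarded for causing crashes, the driver for making progress without crashing.

```python
import random

HAZARD_DISTANCES = [1, 2, 3, 4, 5]  # how far ahead the hazard pops up
SPEEDS = [1, 2, 3]                  # stopping distance equals speed (crude)

def crash(hazard_distance, speed):
    # Crude physics assumption: the car stops in `speed` units of
    # distance, so a hazard closer than that means a collision.
    return speed > hazard_distance

# One Q-value per action for each agent (a two-player bandit game)
q_adv = [0.0] * len(HAZARD_DISTANCES)
q_drv = [0.0] * len(SPEEDS)
alpha, eps = 0.1, 0.2
rng = random.Random(0)

def pick(q):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if rng.random() < eps:
        return rng.randrange(len(q))
    return max(range(len(q)), key=lambda i: q[i])

for episode in range(5000):
    a = pick(q_adv)                     # adversary places the hazard
    d = pick(q_drv)                     # driver chooses a speed
    crashed = crash(HAZARD_DISTANCES[a], SPEEDS[d])
    # Adversary is rewarded for mayhem...
    q_adv[a] += alpha * ((1.0 if crashed else 0.0) - q_adv[a])
    # ...while the driver trades off progress against collisions
    r_drv = 0.1 * SPEEDS[d] - (1.0 if crashed else 0.0)
    q_drv[d] += alpha * (r_drv - q_drv[d])

# The adversary tends to concentrate hazards close to the car, and the
# driver tends to learn the slowest speed that can always stop in time.
best_speed = SPEEDS[max(range(len(SPEEDS)), key=lambda i: q_drv[i])]
```

Real systems would replace the bandit with deep reinforcement learning inside a full physics simulation, but the incentive structure is the same: the generator gets better at finding failures, which forces the driver to get better at avoiding them.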
There’s already precedent for such large-scale systems, says Lange. Last year, Google’s AlphaGo Zero system (the successor to AlphaGo) mastered the ancient Chinese game of Go in three days, learning what humans had taken thousands of years to discover. With massive cloud-computing resources, that’s a pretty doable scenario. It would be possible, says Lange, to reserve servers on Amazon’s AWS cloud service to run 100,000 driving simulations at the same time, at four times the speed of real life. “If you really do this at scale, you can end up driving more than humanity ever has driven,” he says.
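The scale Lange describes is easy to sanity-check with back-of-envelope arithmetic. The average simulated speed below is an assumption for illustration; the simulation count and speedup are the figures he cites.

```python
# Back-of-envelope estimate of simulated mileage at Lange's cited scale.
SIMULATIONS = 100_000   # parallel driving simulations (from the article)
SPEEDUP = 4             # each runs at 4x real time (from the article)
AVG_MPH = 30            # assumed average simulated driving speed

miles_per_real_hour = SIMULATIONS * SPEEDUP * AVG_MPH  # 12,000,000
miles_per_day = miles_per_real_hour * 24               # 288,000,000
print(f"{miles_per_day:,} simulated miles per day")
```

At those assumptions, a single day of wall-clock time yields roughly 288 million simulated miles–orders of magnitude more than any real-world test fleet could log.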
Even the best driver imaginable is still subject to physics, though. A car of a certain weight, traveling at a certain speed, with certain brakes, needs a certain amount of time and distance to stop. Sometimes that’s just not soon enough. “I don’t know anything about the specific case,” says Lange about this week’s accident. “But I can imagine situations, and we did try that at Uber, where there is absolutely nothing that the vehicle can do.”