Recently, the leader of Google’s heralded autonomous-car project, Chris Urmson, suggested that computers can drive better than humans. According to the MIT Technology Review, Urmson told attendees of a robotics conference in Santa Clara, California, that “autonomous cars are safer and smoother when steering themselves than when a human takes the wheel.” He based this conclusion on usage data gathered from Google’s fleet of self-driving Prius and Lexus cars.
If you ask most other humans (outside of Google), however, they will vehemently disagree.
In 2011, Allstate Insurance surveyed 1,000 people about their driving habits. Two-thirds of those interviewed rated themselves as excellent or very good drivers. But the admitted actions of those drivers tell a different story:
- 40% admitted to driving more than 20 miles per hour over the speed limit.
- 5% drove while very tired, to the point of nearly falling asleep.
- 15% got behind the wheel while intoxicated.
- 34% sent text messages or e-mails while driving.
- 53% reported having a speeding ticket or other moving violation on their record. Among these drivers, 44% said they had received three or more citations.
Bottom line: Most humans suck at driving. At the very least, they are inconsiderate and reckless. The truth hurts.
But it’s not about who’s the better driver. It’s about who we think is the better driver. To arrive at a future when we can feel confident enough to take our hands off the wheel and enjoy the morning paper or engage in a video conference while our cars shuttle us to work, school, or our kids’ soccer practice, we have to believe robo-cars are superhuman. Right now, we’re programmed to presume AVs will function a lot like our other gadgets: pretty good but susceptible to occasional bugs and hiccups. The challenge for engineers and designers is to show that they’d never settle for pretty good.
Nothing reminds us how much we rely on technology like an OS crash. When that screen freezes, whether on your smartphone, desktop, or tablet, we freeze. But our computers, big and small, don’t have to shut down entirely to remind us of their fallibility. You feel it when your iPhone takes a few tries to slide open and answer an incoming call, when your tablet lags behind your touch, or when your desktop browser takes forever to load content. Even when everything goes as planned, your devices still probably aren’t delivering responses in actual real time.
“The true definition of a real-time system is one that responds to input immediately and reliably every time,” says Derek Kuhn, vice president of automotive for QNX Software Systems, makers of one of the most reliable and secure real-time computing platforms for the automotive environment. “Most general-purpose operating systems are not real time because they can take a few seconds, or even minutes, to react to input.”
Imagine if your car did that while you were speeding down the highway.
QNX’s real-time operating systems handle tasks such as navigation, in which the computer must react to a steady flow of new information without interruption. They are also used to control active safety equipment or driving aids such as blind-spot monitoring, lane keeping, and adaptive cruise control. They’re the systems making critical decisions in microseconds and milliseconds. If they fail while operating, someone could get hurt or even die.
In the context of autonomous vehicles, the definition of real time goes even deeper. “It’s not necessarily about the exact amount of time it takes to do something, but a deterministic amount of time,” explains Andy Gryc, senior automotive manager for QNX. “You have to know exactly how much time it will take for something to react to a certain situation.” Think about how anti-lock brakes work, for instance. It’s not just how fast the brakes apply during a possible collision that matters, but knowing the precise, consistent number of milliseconds between the moment you mash the brake pedal and the moment the pads begin slowing the turn of the wheels.
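Gryc’s point about determinism can be caricatured in a few lines of code: what matters is not the average response time but a guaranteed worst-case bound. A minimal sketch in Python, purely illustrative (the 15-millisecond budget is an invented number, and a real brake controller runs on a real-time OS, not a general-purpose interpreter that can only make the deadline likely, never certain):

```python
import time

DEADLINE_MS = 15  # invented worst-case budget from pedal press to actuation

def apply_brakes():
    """Stand-in for the actual actuation work; here just trivial computation."""
    return sum(range(1000))

def handle_pedal_press():
    """Respond to a pedal event and report how long the response took."""
    start = time.monotonic()
    apply_brakes()
    elapsed_ms = (time.monotonic() - start) * 1000
    # A deterministic system guarantees elapsed_ms never exceeds the deadline;
    # a general-purpose OS can preempt this code at any moment and blow it.
    return elapsed_ms, elapsed_ms <= DEADLINE_MS

elapsed, met = handle_pedal_press()
print(f"response took {elapsed:.3f} ms, deadline met: {met}")
```

The check at the end is exactly what a general-purpose OS cannot promise: it will usually pass, but nothing in the system forbids it from occasionally failing.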
So while the computing and processing technology used in autonomous vehicles is different from, say, your iPad’s (more capable, more robust, and quicker to react), the terminology we use to describe its capabilities isn’t. This is a problem, especially since real time in a car refers more to “life-critical time” than it does in your office or on the street. When real-time technology is implemented correctly, it feels predictive in a uniquely human way.
Still, Gryc and Kuhn both say computers aren’t smarter or even more intelligent than humans. “If I asked the average person to multiply two really huge numbers, they probably couldn’t do it. A computer could,” says Gryc. But, he adds, “We can also reason and think on our feet and outside the box. Computers need to be trained to do those things.” When a computer surveys a scene (a car 50 feet in front of you, a person walking on the sidewalk, a red light, and so on), it has to examine it pixel by pixel and figure out, say, that’s the road, that’s the edge of the road, that’s an obstacle, the car is this far from the edge of the road, and so on. There are tons and tons of algorithms that go into figuring that stuff out.
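The pixel-by-pixel labeling Gryc describes can be sketched with a toy grid, where each cell stands in for a classified patch of the image. Everything here is invented for illustration (the encoding, the scene, the function names); a real perception stack runs trained models over camera, radar, and lidar data, but the shape of the problem is the same: scan everything, label it, then measure distances between the labels.

```python
# Toy scene: each cell is a classified patch of a camera frame.
# Encoding (invented for illustration): 0 = road, 1 = road edge, 2 = obstacle.
scene = [
    [1, 0, 0, 0, 1],
    [1, 0, 2, 0, 1],
    [1, 0, 0, 0, 1],
]

def find_obstacles(grid):
    """Scan every cell (the 'pixel by pixel' pass) and list obstacle positions."""
    return [(r, c) for r, row in enumerate(grid)
                   for c, cell in enumerate(row) if cell == 2]

def clearance_to_edge(grid, row, col):
    """Count clear road cells between a position and the nearest edge on its row."""
    left = next(c for c in range(col, -1, -1) if grid[row][c] == 1)
    right = next(c for c in range(col, len(grid[row])) if grid[row][c] == 1)
    return min(col - left, right - col) - 1

print(find_obstacles(scene))              # [(1, 2)]
print(clearance_to_edge(scene, 1, 2))     # 1 clear cell to the nearest edge
```

A real system does this across millions of pixels, dozens of object classes, and many frames per second, which is where the “tons and tons of algorithms” come in.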
“A computer that drives a vehicle is programmed to understand as many different situations and combinations of those situations as it can, and then it makes decisions based on the variables in that certain situation,” says QNX’s Kuhn. “Humans, on the other hand, can process the same data and use our judgment to formulate a reaction.” We are more equipped for qualitative analysis and are better at recognizing and adapting to complicated patterns like three-dimensional shapes, both of which are essential to the act of driving. “That is very difficult to synthesize or emulate at this point,” Kuhn says.
That doesn’t mean it can’t happen. So much about the self-driving experience or the future of robotic cars hinges on the piloting software. But when you talk about a car being able to react in life-critical time, the amount of computational horsepower available to that car is just as important, contends Danny Shapiro, director of automotive at Nvidia, a processor manufacturer. It’s simple: The more complicated the software and the larger the amount of data fed to it, the more processing power is needed to pull off the complicated computations that get the car from point A to point B without any hiccups.
“We are experimenting with ways to integrate supercomputer technology, specifically parallel processing, into a very energy-efficient platform and install it in a car,” says Shapiro. Such processors will allow the car to survey its environment a minimum of 30 times a second and deliver life-critical information, collected from multiple sensor points, to the driver or control systems faster than the blink of an eye.
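Shapiro’s “minimum of 30 times a second” translates into a hard per-frame compute budget of roughly 33 milliseconds, inside which every sensor’s data must be processed. A hedged sketch of that budgeting arithmetic (the sensor names and per-frame costs are invented; real numbers depend entirely on the hardware and algorithms):

```python
FRAME_RATE_HZ = 30
FRAME_BUDGET_MS = 1000 / FRAME_RATE_HZ  # ~33.3 ms per environment survey

# Hypothetical per-frame processing costs, in milliseconds, for each sensor feed.
sensor_costs_ms = {
    "camera": 12.0,
    "radar": 5.0,
    "lidar": 9.0,
    "ultrasonic": 2.0,
}

def frame_fits_budget(costs, budget_ms=FRAME_BUDGET_MS):
    """Return the total per-frame cost and whether it fits inside one frame."""
    total = sum(costs.values())
    return total, total <= budget_ms

total, fits = frame_fits_budget(sensor_costs_ms)
print(f"{total:.1f} ms of {FRAME_BUDGET_MS:.1f} ms budget, fits: {fits}")
```

With these assumed numbers, 28 milliseconds of work fits the budget; add a second camera at 12 milliseconds and it no longer would, which is exactly the pressure that drives Shapiro’s interest in parallel processing.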
The big challenge is to teach the car to make complex decisions based on all of the input it senses. “There will be some outlier areas where it will be difficult to get the computer to mimic the same behavior as a human, like thinking outside a certain set of parameters or reasoning out why one course of action is better than another,” says QNX’s Gryc. We’re not there yet. But it’s no small feat that Gryc and his colleagues have isolated what’s involved in engineering trust.
“Autonomous systems give the car superhuman powers,” says Shapiro. “I don’t know any human that can process radar. Computers can see and collect more data than a human, sense it, and process it faster. Thus, the vehicle will be safer.”