Thanks to laser-depth sensors and 360 degrees of camera vision, self-driving cars see better than humans can. They can accelerate with an evenness that no fleshy calf can match. And thanks to software capable of simulating millions of traffic patterns—and accidents—they develop more experience than a person could acquire over several lifetimes.
But that doesn’t always translate to trust from the people riding in them. For the person climbing into the back seat of an autonomous vehicle, should the car drive with the cautious patterns of a human driver, or with the confidence of an omniscient robot? That’s the question posed by researchers from the University of Warwick working with Jaguar Land Rover. Surprisingly, they found that people have no real preference as to whether a car drives like a person or a robot; they just want to know how it thinks.
The study invited over 40 subjects to ride in small AVs inside a miniature indoor city designed to be full of intersections. Half the vehicles mimicked human behavior, creeping up to intersections and pumping the brakes to check for cross traffic. The other half leveraged the full capabilities of an AV, either stopping confidently at an intersection or driving through it at full speed. (In reality, these vehicles weren’t making any decisions at all. They were programmed to run on a loop, much like a Disneyland ride, though this wasn’t disclosed to the subjects.)
After four separate trips, riders rated how much they trusted their robot drivers. Any measured difference in trust between the human-like AVs and the robot-like AVs proved statistically insignificant. In other words, the style of driving ultimately made no difference to riders.
It’s an important finding, and it suggests that humans are more adaptable to automated driving than originally thought. Ironically, some robot driving already mimics human driving anyway, simply because humans often drive in rational ways. Waymo, the world’s first commercial robotaxi service, has programmed its vehicles to inch up to intersections at times, but a spokesperson contends that this behavior really is to let the vehicle see more before proceeding. It’s not just a ruse to make humans feel better.
In any case, what the researchers ultimately learned was that the industry needs to focus not on the way the car drives but on the information the car conveys to the rider. In post-ride interviews, many subjects expressed confusion and offered rationalizations about why and how the cars made the decisions they did. Were the cars networked to one another? Could the vehicles somehow see around corners? The distrust that followed stemmed not from how the car drove but from the gap between the decisions the vehicle was making and how well the rider could understand those decisions.
That’s exactly why Waymo, the most visible commercial comparison, has dedicated so much energy to building a screen UI that depicts and annotates what the vehicle can see, supplemented by occasional voice prompts. Riding in a Waymo myself, I encountered a moment when, during a left turn at an intersection, the car slammed its brakes harder than any human driver I’ve ridden with ever has. And then it slammed them again. As it turned out, there was nothing in our way. But an accompanying onscreen notice of an “object detected” (even though it was a mistake) at least explained the anomaly.
Without that notification, I would still be scratching my head about what happened, unsure if I could trust it for another ride. Instead, I was just left thinking, well, we all make mistakes. Even the robots.