I open an app. I place a dot. Minutes later, a white minivan saunters to the curb to pick me up. In the post-Uber world, this is far from remarkable. Except that this minivan is driving itself.
I’m in Chandler, Arizona, on a crystal clear, 75-degree day, amid strip malls and desert palm trees, as one of the first people to try Waymo One just days before it goes public. It’s the world’s first self-driving taxi service–like Lyft, but automated. I should feel like I’m living in futureworld! A freakin’ four-wheeled robot is driving me through the city! But really, I’m just a grown man sitting in the backseat of a minivan. It’s equal parts comfortable and infantilizing, as riding in the backseat of a minivan always is and always will be.
That is until the van tries to make a routine left turn at an intersection. There’s no one coming, so it eases in and then brakes–hard. I don’t see anything. It eases on the gas again, then brakes, even harder this time, so hard my seat belt wedges itself painfully between my shoulder and collarbone.
That’s when I glance at the screen in the backseat to see what’s really going on. A message with the no-nonsense aesthetic of NYC subway signage reads, “Object Detected.” To be clear, this means a van I have zero control over hallucinated an object in the middle of the street, then made a driving mistake, twice!
It is precisely such outré robot behavior that Waymo’s designers and engineers have spent the past few years trying to humanize. And it’s a big job.
Waymo One is the company’s new autonomous taxi service, born almost 10 years ago out of Google’s highly experimental X lab. In 2014, Google unveiled its Firefly car in Silicon Valley–a cute, some might say dweeby, autonomous vehicle without a steering wheel. Over time, the project grew up, spun off to be run by Alphabet instead of Google, and moved to the suburbs of Phoenix. Today, Waymo is a fleet of more than 600 modified Chrysler Pacificas, retrofitted with proprietary vision cameras, a giant Lidar (laser) scanner that harkens back to 1980s conversion vans, and ever-whirring secondary Lidar sensors, which spin like zoetropes on the front and rear bumpers. It’s a dizzying array of technology that Waymo has worked tirelessly to render downright dull–a design strategy the company believes will be key to dominating the $7 trillion self-driving car industry.
When cars drive themselves
The arrival of self-driving cars doesn’t just mean we’ll one day lease Kia crossovers that can drive the family to Disneyland on their own. Autonomous vehicles are poised to disrupt transportation of people and goods alike into a post-ownership, post-Uber society. Small vehicles will deliver pizzas, groceries, and Ikea dressers. People will have sex in-transit. Cities will be planned differently, with real estate prices shaken, simply because a transit system has no limits, and a cab can appear at your door on a whim. Traffic fatalities will decrease by as much as 90%. Heck, buildings might even go mobile. With all this in mind, it’s almost an afterthought to consider that the automobile industry itself will change, as people may no longer buy cars at all.
But the design of these self-driving technologies–most of which still fall far, far short of actually replacing human drivers–will play a crucial role in how, and how much, they disrupt transportation. Take computers and smartphones. Neither could go mainstream before they adopted the graphical user interfaces of the Macintosh and the iPhone, respectively. So too will self-driving cars have to leverage design to make autonomous driving approachable. But the tactics with which the auto industry has presented self-driving technologies thus far have varied wildly.
Most car manufacturers have offered semi-autonomous tools, like auto-braking or auto-steering that can keep you in a lane, as safety features–all while slowly trying to figure out the best way a driver can hand over control to the car, then take it back at will. Elon Musk put Tesla “Autopiloting” cars on the road like a time machine into our blindingly bright future, but Tesla Autopilot can really only handle parking and highway driving, and the cars require that you keep a hand on the wheel at all times; otherwise, they beep like an alarm and force you to take over. (Tesla’s Autopilot has been involved in three driver fatalities to date.) It’s a model that makes sense for Tesla, since its business model is to sell cars.
Uber and Lyft have both been investing considerably in their own top-secret driving programs to sell a service. Lyft employs 300 engineers–a staff it plans to double shortly, with $200 million in new investments from self-driving technology partner Magna. It also runs the booking platform for 30 vehicles by Aptiv, which circle the Las Vegas Strip offering autonomous rides between the hotels and major attractions. Uber has over 1,000 employees working on self-driving vehicles. Until recently, Uber was even in Arizona; the state has a perfect climate, wide roads, light traffic, and generous regulations, making it a hotbed for the autonomous driving industry. However, Uber pulled out of Tempe earlier this year after one of its vehicles fatally struck a pedestrian at night. Uber has suspended all 200 of its self-driving vehicles while the company resets from the tragedy.
Waymo, meanwhile, is the first to market with what’s dubbed “Level 4” self-driving technology. That means you can say, “pick me up here, and drop me off there,” and the car is smart enough to handle the rest. Because Waymo is the first through the breach during a time autonomous vehicle fatalities are still front-page news, public perception of self-driving cars will largely be shaped by this one company–at least to start. Which may help explain why Waymo is positioning autonomous vehicles not as exciting and futuristic, but as a nonthreatening public utility here to prevent 1.25 million deaths in auto accidents a year.
And so Waymo’s designers and engineers have worked to make the service “courteous and cautious,” in the words of Dan Chu, the head of product at Waymo. That narrative is reinforced across the full Waymo One experience. It’s everywhere from the car’s interface, which cautions and reassures you at every turn, to the way it drives, which brakes early and often to avoid the whiff of an accident. To assuage people’s fears of climbing into robo cars (and to prevent a truly catastrophic media event), Waymo One vehicles still have a human “driver” inside. It’s an employee who actually just sits awkwardly at the self-steering wheel with their hands on their lap, prepared to hit a red “stop” button that’s secured in the car’s center cup holder with what looks like a beer koozie.
More than once during my visit, Waymo employees suggest how “boring” the experience is for their riders after one or two times. They’re right; by my second trip across the suburbs of Phoenix, I feel like I’m running pesky Saturday morning errands, rather than being whisked through Phoenix by minivan Optimus Prime. It’s a remarkable feat of usability design: Waymo has made the technology so mundane, it no longer feels like a novelty.
Mitigating fear through familiarity
The Waymo van I’ve hailed drives up a deserted street, then pulls up slowly–oh so slowly–to the curb. There it sits idling in the sun like my loyal pet.
And it does look like a pet, or perhaps a toddler’s toy, retrofitted as it is with soft, round corners. “You won’t see any harsh angles or aggressive lines,” Ryan Powell, head of the UX research and design team at Waymo, points out. “We really want them to feel approachable.” The Lidar scanner atop the car has a black dome, reminiscent of an evil Doctor Who Dalek. But it’s been smoothed into the roof line with custom white paneling. The spinning sensor on the rear bumper gives the van a tail, and a hint of anthropomorphism.
But the attempt to make the experience feel familiar starts even before I see the car. Booking a vehicle with the Waymo app is a cinch, and it will feel like second nature to anyone who has called an Uber or a Lyft. It’s the same map and pin interface, but with a few key updates. Because the driver is a robot that has optimized pickup points, you can’t actually place the pin wherever you like. Instead, the app suggests a pickup point near you. If you don’t like it, you can drag your thumb across the screen, and it will eventually snap to another option in the area. You then repeat the process for drop-off, since you can’t tell the driverless car, “Hey, pull up to the curb right here” at your destination.
This semi-placeable pin is the first instance I notice of how Waymo works hard to give me even a small sense of control over an automated system. Solutions like this one weren’t immediately obvious to Waymo; they were born from loads of user testing. Since April 2017, Powell’s team has been carefully studying 400 “Early Riders” invited as Waymo’s first beta testers in Phoenix. They conduct interviews, go on ride-alongs, and, of course, dig through piles of data gathered from the platform and app itself (which goes so far as to force the user to rate the ride, and specify what they liked or didn’t like, at the end of each trip). All of this feedback shaped the Waymo experience.
Pickups and drop-offs continue to be a vexing challenge for both the design and engineering teams. After moving the majority of its testing from the Bay Area to Phoenix to scale the program, Waymo realized that Arizona had sprawling parking lots, unlike Mountain View, where most pickups can be done at a curb. And when you have a load of bags outside a Target on a busy Saturday, just where do you want that car to pick you up?
“Clearly, it’s right in front of the entrance, right? But it’s not,” says Chu. “If you have six bags you want to load, you don’t want to be the obnoxious person stopping traffic. Our early riders were like, ‘Don’t send me there, send me to the side road so people aren’t honking at me!’ The way we address that is giving more control in the mobile app.”
As I walk up to my waiting van, my smartphone silently syncs up with the vehicle, verifying my identity and unlocking the door. (Ideally, the door will magically open for you. But that feature is still in the works, I’m told.)
Climbing in, I hear a chiming soundtrack reminiscent of an airline safety video, or the Tokyo subway. It’s meant to relax me, and to help me transition from the hectic outside world to this automated bubble where I can sleep, socialize, or get work done. Other than the start or the end of the ride, Waymo is conservative with the soundscape. The team has found that audio notifications increase the passenger’s likelihood to check the screen by 50%, so it’s used judiciously. They don’t want the ride to “feel like a video game,” Powell says.
Inside, the van has dedicated buttons to pull the car over if you want, or to call for “immediate assistance.” They’re positioned clearly, right above your head like OnStar. Waymo knew that the assistance button would be necessary, lest riders feel trapped inside a machine, but it had no clue the feature would prove so popular. Customers on their first rides call for assistance frequently, as it turns out, with all sorts of questions about the robot driver, like, “Does the car know I’m in a construction zone?” A few rides in, the riders stop calling, and Waymo has started looking at these 1:1 chats as an onboarding cost.
Meanwhile, people almost never hit the pull-over button, Waymo says. As it turns out, the robot does a good enough job driving that riders don’t feel they have to. But seeing that little button, right within my reach, I’m quietly comforted when I should really feel like I’m being held hostage by a robot.
Screens give a peek into the algorithmic mind
There’s only so much that experience design can do to make a rider comfortable inside a machine. One of Waymo’s biggest challenges is helping users understand what’s happening inside the computer’s mind–a driver that can make billions of calculations a second but lacks a face–so that they can trust its decisions.
“We learned pretty early on from user research that we needed a proxy for a lot of communication that happens between you as a passenger and a human driver,” says Powell. “There’s direct communication–you ask where you’re going–and indirect: you see the driver shift their hands on the steering wheel, so you know they’re going to turn.”
To give you a literal look inside these algorithms, Waymo installed two screens hanging from the headrests of the front seats. These screens have a daunting job: They have to replace all of the subtle cues and conversation you get from a person. They display all the stuff you’d expect: Your name. Your destination. Your arrival time. And a big blue “start ride” button. (Starting a ride can’t be automated, I’m told, because how can a car know that everyone is actually inside?)
Hit start, and you’re ushered to a new interface–the Waymo design team’s masterpiece. It’s a GPS-like view of everything the car sees, to give the rider a way to look inside the black box algorithm that is its computer driver.
“One of the questions we got early on was, just how much can our cars actually see?” recalls Powell, who was most recently grilled on this topic when visiting his family over Thanksgiving. The answer is, a lot, and a lot more than humans can–a full 300 yards in 360 degrees at all times. Engineers can see this view in its raw output for debugging, which is an awesome but nonsensical collection of sensor waves and objects wrapped in boxes.
“It’s a very overwhelming look of what the car is able to see. And it’s not very trust-inspiring,” says Powell. “So we spent a lot of time on the design side thinking how to curate the scene.”
The final result is a view of your car–which gleams with as much sparkle as a minivan can on the screen–then ahead and behind the vehicle, a long green line denotes the car’s path on the road. (Why green? It’s Waymo’s brand color, and yes, because green is shown to be calming.) Other cars appear as blobby rectangles. The screen also shows crosswalks and curbs. And unlike the UI-fudging we’ve seen with Uber’s phantom car–in which the Uber app will place fictional vehicles driving fictional routes on your screen–everything in Waymo’s interface is real to reinforce trust in the machine.
Occasionally the interface gives TMI to really demonstrate its instrument sensitivity in moments where a rider might be nervous. Riders often wonder if the car can spot a construction site. It can. And it will render 12 traffic cones, spaced unevenly on the street, because the car sees 12 traffic cones that some construction worker dropped unevenly in the street. “It’s a very honest representation of what the car is seeing,” says Powell.
Humans in the car’s vision get an even higher-fidelity treatment than those traffic cones. Rather than being presented as bathroom-sign icons, as you might expect, they’re drawn as pointillist clouds, constructed from the 3D dot map built by the car’s lasers. Look closely enough, and you can notice their legs and arms moving, slight visual intricacies that seem to whisper to the rider, “It’s okay, we know these are people, we know they are vulnerable, and we know they are more important than buildings and cars.”
Teaching robots to drive less like robots
Waymo’s next challenge is greater than anything a user interface alone can solve. Because if Waymo wants to make robot cars feel safe and comfortable, sooner or later, it has to teach them to drive in a way that feels more human.
Simply put: When you ride in a Waymo vehicle, it just doesn’t feel like you’re being driven by a human. At all. The vehicle has a tendency to accelerate unevenly through turns. It stops painfully early for yellow lights. Once, it jerked the wheel out of the way of a pickup truck in the next lane–with a staccato sharpness I’ve never felt a human driver execute. And often, it gently taps the brakes or gas for reasons I simply can’t discern.
On one such occasion, I glance down at the screen. Why are we speeding up and slowing down so often? It turns out my vehicle is responding to a car maybe four lengths ahead. I know this because anything actionable–anything the Waymo bot is considering in its driving decisions–is highlighted in green on the screen. That goes for pedestrians, bicyclists, even cars far removed from you in traffic. If they glow green, Waymo is taking them into special account.
So in this case, with that car so far ahead of us highlighted, I quickly infer that my Waymo is just keeping what it considers a safe distance, even if that is “too safe” by my measure. Waymo was mitigating the spectacular weirdness of being driven around by a robot through a slight touch of UI. “It does feel a little different than how a human would drive,” Powell says. “It has all that rich data, way more than we as humans do, so it’s going to navigate its environment differently than a human. So we try to get across [that] fact.”
Waymo knows this design stopgap isn’t enough, though. Up next, it needs to mitigate the uncanny feeling of a robot driver, and to design all of the micro moments of acceleration and turning to feel more human. That’s easier said than done. First, as Waymo’s team explained to me, it’s hard to identify just which moment during a 15-minute trip bothered a rider, since riders generally rate the ride only at the end. Second, Waymo has learned from its user feedback and data that we don’t actually all agree on what a normal human driver feels like.
“You have one rider that feels like it was way too conservative, and almost the exact same ride, another rider is like you were way too aggressive,” says Chu. “How do you tune it for riders when rider preferences are so different . . . Can we get to a place where we can almost predict that, and look at a ride and say, this type of user probably wouldn’t like it, this user would like it? That’s an interesting challenge to tackle, because it’s not one-size-fits-all, at all.”
So what will the future of Waymo feel like? Will we eventually have hyper-personalized rides, with algorithms that tune to our preferences from Sunday drive to Dale Earnhardt Jr.? Probably not, Chu says. Instead, they imagine finding some middle ground that’s 80% there for everyone–perhaps a car that’ll leave a bit of extra space between itself and other drivers, but still jam on the gas to make a yellow light.
The Waymo One service goes live to the public today, and as it ramps up in the coming weeks, it will allow anyone in the Phoenix area to book a robot taxi for the first time. The news should be either terrifying or terribly exciting. Instead, the transportation revolution starts not with a gasp, but a yawn.