This week, the self-driving car company Waymo placed an order for thousands of driverless cars that will hit the streets of Phoenix later this year. Ubiquitous autonomous vehicles are on their way, even as car companies, lawmakers, and ethicists struggle to answer questions about how they should behave in the real world.
For example, if a vehicle has to choose between injuring or killing either a pedestrian or its passenger, what does it do?
It’s a question that’s already being addressed by automakers, and unsurprisingly, they tend to prize passenger safety over that of pedestrians. Mercedes announced in 2016 that its vehicles will be designed to always protect passengers over bystanders. But what solutions might design offer to ensure that no one gets hurt, whether they’re riding in a car or just walking across the street?
For our new conceptual design series Provocation, we asked several design firms to address this ethical conundrum. Two of the design teams we talked to took a similar approach–they both posited that the key to ethical self-driving cars was wresting control over how algorithms make life-or-death decisions away from carmakers and putting it in the hands of people.
For the British firm MAP Project Office, the solution is a dial on the dashboard–think of it as a steering wheel, or a shifter–that lets passengers decide how much they want their car to prioritize their own safety over that of other people. Meanwhile, the Seattle-based firm Teague designed a voting platform that lets every member of a community vote on the ethics of self-driving cars, rather than letting individual drivers or automakers decide.
A New Kind Of Steering Wheel
Inspired by an ethical self-driving car simulation by the creative technologist Matthieu Cherubini that shows what happens when cars have different value systems, MAP created a dial that acts almost like a steering wheel or gear shifter for autonomous vehicles–but instead of steering the car’s wheels, it allows the driver to adjust the car’s behavior based on their values.
There are four options: “protectionist,” where the car protects the passenger at all costs; “humanist,” where the car tries to reduce the total amount of injury and save the greatest number of people; “altruistic,” where the car prioritizes pedestrians above its occupants; and “random,” where the car chooses one of the options itself and removes the decision-making power from the passenger altogether. “Random kind of equates to behaving instinctively,” says MAP’s design director Jon Marshall. “When humans are in this situation, they don’t think, they instinctively do something.”
MAP began the project by thinking about how to simply minimize the damage of a crash or prevent it from happening in the first place. But the team realized that should be the purview of the car manufacturers, and instead decided to focus on designing an interface for the “invisible” software embedded within the vehicle. “That was what most interested us,” Marshall says. “Maybe what design is most good at doing . . . is making the interaction with the invisible technology clear and simple. That was the stepping stone for us, where you have some sort of control within the car that makes it very clear.”
The dial shifts between these different algorithms, and you can adjust it so that it straddles the line between each. For instance, perhaps you want a mostly protectionist car that also leans toward wanting to minimize injuries for everyone. Perhaps you’re willing to be injured up to a certain extent–but not killed–if that means saving a pedestrian.
Marshall and his team imagine that there might be zones within a city requiring all cars to be set to a certain mode–similar to a congestion zone, where low-emission vehicles can enter for free but other vehicles must pay. In a school zone, perhaps all cars should be required to be in “altruistic” mode. On the other hand, if you’re traveling fast on the freeway, perhaps all cars should be mandated “protectionist.”
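To make the concept concrete, here is a minimal sketch of how MAP's dial might be modeled in software: the four modes, a dial position that straddles two of them, and a zone mandate that overrides the passenger's choice. All of the names and the blend mechanism are hypothetical illustrations, not anything MAP has specified.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class EthicsMode(Enum):
    """The four settings MAP describes for its dashboard dial."""
    PROTECTIONIST = "protectionist"  # protect the passenger at all costs
    HUMANIST = "humanist"            # minimize injury, save the most people
    ALTRUISTIC = "altruistic"        # prioritize pedestrians over occupants
    RANDOM = "random"                # the car decides; passenger cedes control

@dataclass
class DialSetting:
    """One dial position: a primary mode, optionally blended toward a
    neighbor--e.g. a mostly protectionist car that leans humanist."""
    primary: EthicsMode
    secondary: Optional[EthicsMode] = None
    blend: float = 0.0  # 0.0 = fully primary, 0.5 = straddling the line

def effective_mode(setting: DialSetting,
                   zone_mandate: Optional[EthicsMode]) -> DialSetting:
    """A zone mandate (say, 'altruistic' near a school) overrides the dial."""
    if zone_mandate is not None:
        return DialSetting(primary=zone_mandate)
    return setting

# A passenger dials in mostly-protectionist with a humanist lean,
# then enters a school zone that mandates altruism.
chosen = DialSetting(EthicsMode.PROTECTIONIST, EthicsMode.HUMANIST, blend=0.3)
print(effective_mode(chosen, EthicsMode.ALTRUISTIC).primary.value)  # altruistic
print(effective_mode(chosen, None).primary.value)                   # protectionist
```

The key design choice the sketch surfaces is precedence: the passenger's preference applies only where no collective rule is in force, which is exactly the tension the next concept addresses.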
The strength of MAP’s idea is that people choose how algorithms act. But its greatest flaw is that because passengers make these judgment calls, pedestrians still have no say over their fate. That’s determined by the moral whims of the passenger. Unless, of course, those modes are regulated by policy–which is exactly what the design firm Teague proposed.
Letting People–And Cities–Decide
In response to our brief, Teague created a concept called “Moral Compass.” It’s a platform where citizens of a certain area–be it a neighborhood, city, or state–vote collectively on what kind of morality they want their vehicles to have. Teague’s designers illustrated this by showing a person voting on the ethics of vehicles in the city of Seattle; then, when a person in their self-driving car enters city limits, a notice pops up on a dashboard screen, reading: “We put children first.”
In this theoretical example, Seattle’s democratically decided policies mandate that every car have a certain ethical alignment that always prioritizes children’s safety over anyone else’s. Other cities or states could vote to be entirely self-preservationist, or to always save the greatest number of people in any given situation.
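The voting mechanism could be sketched roughly as follows: each jurisdiction tallies its residents' votes, and a policy becomes a mandate only with a strict majority. The policy names and the majority rule are illustrative assumptions, not details from Teague's concept.

```python
from collections import Counter
from typing import List, Optional

def adopt_policy(votes: List[str]) -> Optional[str]:
    """Adopt the policy that wins a strict majority of a community's
    votes; with no majority, no mandate takes effect."""
    if not votes:
        return None
    policy, count = Counter(votes).most_common(1)[0]
    return policy if count * 2 > len(votes) else None

# Hypothetical ballots: 'children_first' echoes the Seattle example.
print(adopt_policy(["children_first", "children_first", "save_most"]))
# -> children_first
print(adopt_policy(["self_preservation", "save_most"]))  # split vote
# -> None
```

A dashboard notice like “We put children first” would then simply display whichever policy the car's current jurisdiction has adopted.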
Teague’s designers are interested in a world where safety isn’t solely in the hands of automakers–it’s also every community’s collective responsibility. Teague technical director Warren Schramm believes that we have to “decide that it’s not a product decision and it’s our moral standard.”
He thinks this democratic approach to self-driving cars is something that Americans will want to participate in. “In the U.S., people are passionate about this, controlling what their community standards are,” he says. “If they understand this is something they can influence, they’d want to.”
The two concepts aren’t entirely compatible. One relies primarily on the choice of the driver, while the other is focused on regulating and enforcing a collective majority’s stance toward a car’s ethical decision-making. But they could also coexist, especially since laws will vary from place to place. The dial would enable people to choose within the confines of regulation–and force passengers to take a more active role in a driverless world.