
Why the Trolley Dilemma is a terrible model for trying to make self-driving cars safer

The famous thought experiment has been applied to the development of autonomous vehicles, but it’s considered extremely flawed.


By Marcus Baram

The flowers and stuffed teddy bears of the makeshift memorial may be gone, but grief and trauma still haunt the intersection of 5th Avenue and 9th Street in Brooklyn’s Park Slope.

It was the scene of a horrific tragedy last year, in which a seizure-prone woman ran a red light and struck and killed two young kids in a crosswalk. Compounding the tragedy, the pregnant mother of one of the victims lost her baby, and the driver committed suicide months later. The intersection is cursed: Just a few years earlier, an unlicensed cab driver jumped the curb and plowed down an elderly woman and our building’s handyman, sending them both to the hospital. I saw the footage captured by a security camera at a local bodega, and the suddenness and brutality of the accident are like something out of a silent movie without the musical accompaniment.

The incident shocked the neighborhood, prompting safety improvements and the occasional presence of a traffic cop. Yet every time I drive through the intersection with my family, my heart skips a beat and I drive extra cautiously.

Those tragic deaths were just two of the more than 18,000 traffic fatalities in the first half of 2018; crashes remain the leading cause of unintentional death for Americans aged 4 to 24. Such depressing statistics are one of the main drivers behind the development of self-driving vehicle technology, which promises to sharply reduce the number of traffic fatalities. Rarely has a new technology promised to so fundamentally improve all of our lives by removing one of the biggest risks in society.

It’s almost certain that such optimism is justified, considering the tiny number of accidents involving the self-driving test cars cruising the streets of California, Arizona, and a few other states in recent years. We may have to be patient, since some of the early timelines for mass adoption of the technology have slipped by years, if not decades, but that glorious future is definitely on its way. In addition to Tesla, almost every major automaker in the world has invested billions in the technology, and progress is rapid on the remaining challenges, like 3D mapping and car sensors that work in hail, rain, and snow.

But are our minds and morals developing just as rapidly?

The way we think about self-driving vehicles still seems stuck in the past. And it’s not just our frontal lobes, which still seem to expect a Jetsons future of personalized self-driving cars in every garage rather than the far more likely early adoption of the technology by mass transit (trains, buses, vans) and freight (trucks, ships).

It’s also about the ethics of autonomous technology: how we think about ethical decisions, and how those choices get baked into the algorithms that guide self-driving vehicles, whether 18-wheelers or two-seaters.

Take the Trolley Dilemma. It’s one of the key thought experiments in ethics, debated in some form since 1905, when a version was included in a moral questionnaire handed out to students at the University of Wisconsin.

You know the dilemma: You’re standing by a trolley track and a runaway trolley is about to run over five people tied to the tracks. Next to you is a lever that controls a switch that could redirect the trolley onto another track, where a single person is tied up. What do you do? Nothing, and five people get killed. Or pull the lever, and one person gets killed.

There have been many variants of this thought experiment over the decades (including the Fat Man and the Man in the Yard), but they all tend to emphasize the difference between utilitarianism and deontological ethics: whether actions should be judged by their consequences or by their inherent morality. Basically, should the morality of your actions be assessed by their inherent goodness, or by the good that they do in real life?

A 21st-century twist

The Trolley Dilemma has also been applied to autonomous vehicles, since in the face of a potential accident, the software may be required to decide between several courses of action. In early 2017, MIT’s Media Lab created a platform, Moral Machine, inspired by the thought experiment, in which members of the public were invited to judge which of the options available to an autonomous vehicle would do the least harm in a crash scenario. And the paradigm has been cited in countless discussions of autonomous vehicles and safety issues.

That’s despite the fact that prominent ethicists and researchers consider it an extremely flawed way to think about a complicated problem. (That’s in addition to the most obvious problems with the paradigm, which are clear to any sentient being: Outside of cartoons, who is tying people to train tracks? And why wouldn’t you go untie them, instead of redirecting the oncoming train?)


In 2014, in an article for Social and Personality Psychology Compass, researchers wrote that such sacrificial dilemmas as the Trolley Dilemma are unrealistic and “unrepresentative of the moral situations people encounter in the real world.” They warned that the absurdity and artificial settings of such paradigms may “affect the way people approach the situation and decide what to do.” In other words, someone influenced by the Trolley Dilemma may make some dangerous choices when faced with a real-world scenario.

For example, the other afternoon, I drove through that intersection in Park Slope and had to negotiate one of the most stressful and fraught situations drivers face every day: idling in the middle of the intersection to make a left turn, waiting for a pause in the oncoming traffic and for pedestrians in the crosswalk to reach the other side. There are so many potential risk factors and ways that things could go terribly wrong. If you start to turn and an oncoming car doesn’t slow down, you need to quickly push through the intersection and somehow avoid that mother pushing a baby carriage.

It might seem like the equivalent of the Trolley Dilemma: Either sacrifice yourself, or run over two people in the crosswalk. But it’s actually much more complicated, because there are many more variables and possibilities in play. If a crash seems likely, there are many options: You could go in reverse, you could jump the curb and smash into the corner of the discount store, the mother could notice your distress and pull her baby carriage back onto the sidewalk (or, alternately, run fast to make it to the other side), you could try to pull your car into the narrow space between the oncoming car and the sidewalk, or, the likeliest scenario of all, you could stay put while the oncoming car screeches to a halt or swerves to avoid you. They’re all extremely risky options, but it’s not just a binary choice, especially when you throw in all kinds of curveballs, from snow and sleet to drunk drivers and unpredictable pedestrians.

Beyond the binary

And that situation illustrates why the Trolley Dilemma is such a deficient way of thinking about autonomous cars and making sure that they cause the least harm to society. “Self-driving cars have huge problems with unprotected left turns, and such situations where they need to negotiate with humans,” says Sam Anthony of Perceptive Automata, a company developing perception software for autonomous vehicles. He and Julian De Freitas, a doctoral candidate in psychology at Harvard University, coauthored an article, “Doubting Driverless Dilemmas,” that touches on how the paradigm is inadequate for discussions of safety issues with autonomous vehicles.

In the real world, you almost never get these types of “forced-choice dilemmas,” explains Anthony. “The trolley dilemma can be useful when you’re picking apart people’s intuitions, where you can isolate one or two factors, but it’s a mistake to think that you can really apply that to the real world in all its complexity.”

As illustrated in my driving anecdote, the Trolley Dilemma doesn’t apply because it requires a “perfect 50-50 chance of killing each individual in the same amount of time, with no other location to steer the vehicle, and no other possible steering maneuver but driving head-on to a death,” as Anthony and De Freitas write in their article.

Rather than focusing on the ethical concept of intentionality, the paradigm guiding the development of self-driving cars should focus on teaching them how to avoid harm in the first place, say the researchers. They’ve interviewed engineers at companies involved in developing autonomous vehicles who are focused on achieving that goal. “From the industry side of things, when we think of self-driving vehicles, it’s a question of, What can they see?” says Anthony, outlining the range of information, such as how children behave, that autonomous vehicles (AVs) need to process.

Anthony and De Freitas interviewed employees at May Mobility, nuTonomy, Perceptive Automata, and a global automotive company that preferred to remain anonymous, all of which drive AVs on real roads every day, and none of them “have teams or budgets devoted to solving trolley-like dilemmas.” Engineers at these companies are focused on giving the AVs the right information about the world to make decisions, so that “when we see someone who wants to cross the road, the AV understands that they want to cross the road,” says De Freitas.
