In An Accident, Who Will A Driverless Car Be Programmed To Kill?

When ethics are automated, life-and-death decisions may be in Google’s hands.

So far, self-driving cars haven’t caused any accidents. They’ve been side-swiped and rear-ended, they’ve been ticketed, and they’ve faced down show-off cyclists, but they have yet to hurt anyone.


But that will inevitably change. And when an autonomous car one day mows down and kills a human being, its software will be to blame. In a thought experiment that mirrors the classic Trolley Problem, Patrick Lin looks at the ethics of self-driving cars.

When an accident happens or is about to happen, your car needs to do something, and what it does will be determined by how it is programmed by its makers. Lin points out that an ethical difference lies between a human driver’s reaction and a driverless car’s decision: the latter is a premeditated programming choice to value one life over another, even if the precise owner of that life is not known at the time. Even if the car reacts exactly as a human would in the same situation, Lin says the decision could be viewed as “premeditated homicide.”

If your car is programmed to “minimize harm” by swerving into a helmet-wearing cyclist instead of the helmetless cyclist on the other side, aren’t responsible people being penalized? In that world, cyclists might skip wearing helmets to avoid becoming the preferred targets of robo-cars.
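To make the perverse incentive concrete, here is a minimal, purely hypothetical sketch of such a “minimize expected harm” rule; the injury-probability figures are invented for illustration and have nothing to do with any real vehicle’s software.

```python
# Hypothetical "minimize harm" swerve rule: given unavoidable contact,
# pick the target with the lowest estimated injury probability.
# All names and numbers below are invented for illustration.

def choose_swerve_target(options):
    """Return the (label, injury_probability) pair with the lowest risk."""
    return min(options, key=lambda option: option[1])

# A helmet lowers the estimated injury probability, so the rule
# selects the responsible, helmet-wearing rider -- exactly the
# penalty on safe behavior that Lin describes.
options = [("helmeted rider", 0.3), ("helmetless rider", 0.9)]
target, risk = choose_swerve_target(options)
print(target)  # -> helmeted rider
```

The point of the sketch is that any harm-minimizing objective implicitly encodes a policy about whom to endanger, and people can change their behavior in response to it.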

These ethical programming decisions will be made as a matter of company policy, and buyers may find themselves forced to buy into the brand whose ethics most closely align with their own. “If you had to choose between a car that would always save as many lives as possible in an accident, or one that would always save you at all costs, which would you buy?” asks Lin.

Tech blogger Jason Kottke sums it up like this: “Perhaps Apple would make a car that places the security of the owner above all else, Google would make a car that would prioritize saving the most lives, and Uber would build a car that keeps the largest Uber spenders alive.”


Even if the company leaves the decisions to us (imagine an open-source car that can be configured by its user), we still have to make decisions about who will get injured or killed in a future accident. Race and religion could figure into this if bigoted car owners download patches telling their cars to prefer veering toward people of the “wrong” color.

The answer will likely be a mix of engineering tricks, politics, and good old consumer choice, which may force us to make some uncomfortable decisions.

About the author

Previously found writing at Cult of Mac and Straight No filter.