
A groundbreaking study reveals how we want machines to treat us

A self-driving car is crashing on a busy street. Who does it kill? Who does it save? Your answer will be shaped by where you live, says an MIT study.

[Photo: The Moral Machine Team]

In 2016, researchers from the MIT Media Lab launched an experiment. To understand how people think self-driving cars should act, they built a website where anyone could work through 13 different self-driving car scenarios: Should a self-driving car that’s in the midst of a crash spare young people instead of old? Or women instead of men? Should it spare physically fit people, or overweight people? Should it not make any decisions at all, and simply take the path of inaction?


Two years later, the researchers have analyzed 39.61 million decisions made by 2.3 million participants in 233 countries and territories. In a new study published in Nature, they show that when it comes to how machines treat us, our sense of right and wrong is informed by the economic and cultural norms of where we live. They discovered three broad geographic clusters with distinct ethical ideas about how autonomous vehicles should behave: West (which includes North America and Christian European countries), East (which includes Far East countries and Islamic countries), and South (which includes much of South America and countries with French influences). These groups also have their own subclusters, like Scandinavia within the West and Latin American countries within the South. As the study’s interactive graphic shows, Brazilians tend to prefer sparing passengers over pedestrians; Iranians are much more likely to spare pedestrians; Australians are more likely than the global average to spare the physically fit.

But the study also found three areas in which people all over the world tended to agree: sparing humans over animals, sparing more people rather than fewer, and sparing young people over the elderly. Those insights could provide the foundation for an international code of machine ethics, which, the researchers write, will be a necessity when these “life-threatening dilemmas emerge.” Those conversations shouldn’t be limited to engineers and policymakers, because they will affect everyone.

In that light, the Moral Machine provides the first large-scale glimpse into how millions of people think about machine ethics.

There’s a divide between individualistic and collectivistic cultures

[Screenshot: The Moral Machine Team]
While there are general trends that most cultures align on, the researchers found a large split between countries with more individualistic cultures (mostly in the West) and countries that are more collectivistic. Western countries tended to show a greater preference for sparing children over the elderly, while Eastern countries tended to value older lives more; Cambodia, for instance, falls far below the world average in the preference to save children over the elderly. Similarly, people in Western countries also showed a stronger preference for sparing more people, regardless of the makeup of the group.

The researchers believe this divide could be the biggest challenge to developing global guidelines for how self-driving cars should act. “Because the preference for sparing the many and the preference for sparing the young are arguably the most important for policymakers to consider, this split between individualistic and collectivistic cultures may prove an important obstacle for universal machine ethics,” they write.

A country’s economic state predicts which lives are more “important”

[Screenshot: The Moral Machine Team]
Along with cultural differences, the study found that economics plays an important role as well. For instance, a country’s level of economic inequality predicts the extent to which people prefer to spare those of higher status over those of lower status. That means people from countries with high inequality, as measured by the Gini coefficient reported by the World Bank, would be more likely to spare a business executive over a homeless person.


You can see this in the Moral Machine interactive by comparing Sweden and Angola. The lower the Gini coefficient, the closer the country is to equality. The more egalitarian Sweden (with a very low Gini coefficient of 29.2) is less likely than the global average to spare a high-status person over a low-status person, while in Angola (with a high Gini coefficient of 42.7), sparing those of high status is the strongest preference of any category.
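To make that comparison concrete, here is a minimal sketch of how a Gini coefficient can be computed; the toy income figures below are invented for illustration and are not drawn from the study or from World Bank data.

```python
# Rough illustration of what the Gini coefficient measures: 0 means perfect
# equality, and higher values mean income is more concentrated at the top.
# The toy income lists are made up for this example only.

def gini(incomes):
    """Mean absolute difference between all pairs, normalized by the mean."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(a - b) for a in incomes for b in incomes)
    return total_diff / (2 * n * n * mean)

relatively_equal = [30, 35, 40, 45, 50]   # incomes clustered close together
highly_unequal = [5, 10, 15, 20, 200]     # most income held by one person

print(round(gini(relatively_equal) * 100, 1))  # 10.0 -- low, closer to equality
print(round(gini(highly_unequal) * 100, 1))    # 64.0 -- high, far from equality
```

The study correlates each country’s published Gini value with how strongly its participants favored sparing high-status characters in the scenarios.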

“Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally in the Moral Machine,” the researchers write. “This relationship may be explained by regular encounters with inequality seeping into people’s moral preferences, or perhaps because broader egalitarian norms affect both how much inequality a country is willing to tolerate at the societal level, and how much inequality participants endorse in their Moral Machine judgments.”

On a similar note, a country’s GDP per capita and the strength of its institutions correlate with a preference for sparing pedestrians who cross the street legally over those who jaywalk. People from poorer countries, on the other hand, tend to be more tolerant of jaywalkers.

Ultimately, culture should inform machine ethics

[Screenshot: The Moral Machine Team]
While there is some baseline agreement on machine ethics when it comes to age, number of people, and human life, the nuanced differences between cultural groups are more important to understand, and they’re often not as clear-cut as the individualistic-versus-collectivistic divide. For instance, countries in the Southern cluster tend to have a very strong preference for sparing women over men and for sparing physically fit people over less fit people.

The researchers believe that self-driving car makers and politicians will need to take all of these variations into account when formulating decision-making systems and building regulations. And that’s important: “Whereas the ethical preferences of the public should not necessarily be the primary arbiter of ethical policy, the people’s willingness to buy autonomous vehicles and tolerate them on the roads will depend on the palatability of the ethical rules that are adopted,” the researchers write.

Despite all of these variations between different cultures, the Moral Machine team still believes that we need to have a global, inclusive conversation about what our ethics are when it comes to machine decision-making–especially because this reality is fast approaching.


“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision,” the researchers write. “We are going to cross that bridge any time now, and it will not happen in a distant theater of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”


About the author

Katharine Schwab is an associate editor based in New York who covers technology, design, and culture. Email her at kschwab@fastcompany.com and sign up for her newsletter here: https://tinyletter.com/schwabability
