In 2016, researchers from the MIT Media Lab launched an experiment. They wanted to understand how people wanted self-driving cars to act, so they built a website called Moral Machine, where anyone could work through 13 different self-driving car scenarios: Should a self-driving car that’s in the midst of a crash spare young people instead of old? Or women instead of men? Should it spare physically fit people, or overweight people? Or should it make no decision at all and simply take the path of inaction?
Two years later, the researchers have analyzed 39.61 million decisions made by 2.3 million participants in 233 countries and territories. In a new study published in Nature, they show that when it comes to how machines treat us, our sense of right and wrong is informed by the economic and cultural norms of where we live. They identified three broad geographic clusters with distinct ethical ideas about how autonomous vehicles should behave: West (which includes North America and Christian European countries), East (which includes Far East countries and Islamic countries), and South (which includes much of South America and countries with French influences). These groups also have their own subclusters, like Scandinavia within the West and Latin American countries within the South. As the study’s interactive graphic shows, Brazilians tend to prefer sparing passengers over pedestrians; Iranians are much more likely to spare pedestrians; and Australians are more likely than the global average to spare the physically fit.
But the study also found three preferences that people all over the world tended to share: sparing humans over animals, sparing more people rather than fewer, and sparing the young over the elderly. Those insights could provide the foundations for an international code of machine ethics, which, the researchers write, will be a necessity when these “life-threatening dilemmas emerge.” Those conversations shouldn’t be limited to engineers and policymakers, because they will affect everyone.
In that light, Moral Machine provides a first, fascinating glimpse into how millions of people think about machine ethics.
There’s a divide between individualistic and collectivistic cultures
Participants from individualistic cultures, which emphasize the value of each person, showed a stronger preference for sparing the greater number of lives, while participants from collectivistic cultures, which emphasize respect for elders, were less inclined to favor sparing the young. The researchers believe this divide could be the biggest challenge to developing global guidelines for how self-driving cars should act. “Because the preference for sparing the many and the preference for sparing the young are arguably the most important for policymakers to consider, this split between individualistic and collectivistic cultures may prove an important obstacle for universal machine ethics,” they write.
A country’s economic state predicts which lives are more “important”
You can see this in the Moral Machine interactive by comparing Sweden and Angola. The lower a country’s Gini coefficient, the closer it is to income equality. The more egalitarian Sweden (with a very low Gini coefficient of 29.2) is less likely than the global average to spare a high-status person over a low-status person, while Angola (with a high Gini coefficient of 42.7) values sparing those with high status above any other category.
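For readers unfamiliar with the metric: the Gini coefficient is a standard measure of income inequality, not something specific to this study. One common formulation expresses it as the average income gap between every pair of people, scaled by the mean income:

G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} |x_i - x_j|}{2 n^{2} \bar{x}}

Here x_1, …, x_n are individual incomes, \bar{x} is the mean income, and n is the population size. G runs from 0 (everyone earns the same) to 1 (one person earns everything); the figures quoted above are simply this value expressed on a 0-to-100 scale.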
“Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally in the Moral Machine,” the researchers write. “This relationship may be explained by regular encounters with inequality seeping into people’s moral preferences, or perhaps because broader egalitarian norms affect both how much inequality a country is willing to tolerate at the societal level, and how much inequality participants endorse in their Moral Machine judgments.”
On a similar note, both a country’s GDP per capita and the strength of its institutions correlate with a preference for sparing pedestrians who are following the law over those who jaywalk. People from poorer countries, by contrast, tend to be more tolerant of pedestrians who jaywalk.
Ultimately, culture should inform machine ethics
The researchers believe that self-driving car makers and politicians will need to take all of these variations into account when designing decision-making systems and writing regulations. And that’s important: “Whereas the ethical preferences of the public should not necessarily be the primary arbiter of ethical policy, the people’s willingness to buy autonomous vehicles and tolerate them on the roads will depend on the palatability of the ethical rules that are adopted,” the researchers write.
Despite all of these cultural variations, the Moral Machine team still believes that we need a global, inclusive conversation about what our ethics should be when it comes to machine decision-making, especially because this reality is fast approaching.
“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision,” the researchers write. “We are going to cross that bridge any time now, and it will not happen in a distant theater of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”