Moral Relativism


I’ve mentioned WEIRD on this blog before. For those who haven’t heard, it stands for Western, Educated, Industrialized, Rich, and Democratic: the basic idea is that college students in North America are very unlike most people on Earth, yet psychology usually treats them as type specimens for our entire species.[1] This calls into question a lot of “universals” proposed in psychology papers.

You might think morality would be a clear exception to that. Young people are fitter, while old people have already contributed most of what they ever will to society; if one of each group is put in danger, we should save the former before the latter. Right?

We are entering an age in which machines are tasked not only to promote well-being and minimize harm, but also to distribute the well-being they create, and the harm they cannot eliminate. Distribution of well-being and harm inevitably creates tradeoffs, whose resolution falls in the moral domain. Think of an autonomous vehicle that is about to crash, and cannot find a trajectory that would save everyone. Should it swerve onto one jaywalking teenager to spare its three elderly passengers? Even in the more common instances in which harm is not inevitable, but just possible, autonomous vehicles will need to decide how to divide up the risk of harm between the different stakeholders on the road. […]

… we designed the Moral Machine, a multilingual online ‘serious game’ for collecting large-scale data on how citizens would want autonomous vehicles to solve moral dilemmas in the context of unavoidable accidents. The Moral Machine attracted worldwide attention, and allowed us to collect 39.61 million decisions from 233 countries, dependencies, or territories.

Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. “The Moral Machine Experiment.” Nature 563, no. 7729 (November 2018): 59–64. https://doi.org/10.1038/s41586-018-0637-6.

Well, the data is in. I could do an entire blog post on just their summary, but for now merely note the benevolent sexism,[2] the focus on punishment, the classism, the deontology, and the cat hatred. That left bar chart is confusing: the bar between the elderly and the young isn’t indicating that both would be spared equally often, but that children would be spared 49 percentage points more often.

Figure 2 (global preferences) from Awad et al. (2018).
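To make that concrete, here’s a toy calculation in Python; the sparing rates below are invented for illustration, only the arithmetic matters:

```python
# Toy reading of the figure's bars: each bar plots a *difference* in
# sparing probability, not a sparing rate. All numbers are invented.
decisions = 1000                        # imaginary young-vs-elderly dilemmas
spared_young = 745                      # times respondents spared the young
spared_elderly = decisions - spared_young

p_young = spared_young / decisions      # 0.745
p_elderly = spared_elderly / decisions  # 0.255

# A bar at 49 means the young were spared 49 percentage points more
# often than the elderly, not that each group was spared half the time.
gap = p_young - p_elderly               # 0.49
print(f"young spared {gap:+.0%} more often than the elderly")
```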

Sure enough, there’s a clear preference for sparing the young over the elderly. But hold on: this was an online survey, and the map of people playing the “game” shows a definite skew towards North America and Europe. This summary is “global” in that it aggregates all the data together, but not in the sense that it represents the globe’s preferences. We would do better to break the responses down by country and analyze those.
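Here’s why aggregation isn’t representation, as a toy example; every number below is invented, only the shape of the problem is real:

```python
# Invented numbers: a pooled average is dominated by whoever answered
# the most questions, not by who actually lives on the planet.
samples = {"Europe/NA": (30_000, 0.70),     # (respondents, preference rate)
           "Rest of world": (5_000, 0.40)}
population = {"Europe/NA": 1.1e9, "Rest of world": 6.5e9}

pooled = (sum(n * p for n, p in samples.values())
          / sum(n for n, _ in samples.values()))
weighted = (sum(population[k] * p for k, (_, p) in samples.items())
            / sum(population.values()))

print(f"pooled 'global' preference:     {pooled:.0%}")    # ~66%, skewed Western
print(f"population-weighted preference: {weighted:.0%}")  # ~44%
```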

First, we observe systematic differences between individualistic cultures and collectivistic cultures. Participants from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing the greater number of characters (…). Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community, show a weaker preference for sparing younger characters (…). Because the preference for sparing the many and the preference for sparing the young are arguably the most important for policymakers to consider, this split between individualistic and collectivistic cultures may prove an important obstacle for universal machine ethics. …

We observe that prosperity (as indexed by GDP per capita) and the quality of rules and institutions (as indexed by the Rule of Law) correlate with a greater preference against pedestrians who cross illegally (…). In other words, participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation. This observation limits the generalizability of the recent German ethics guidelines, for example, which state that “parties involved in the generation of mobility risks must not sacrifice non-involved parties.” …

… we observe that higher country-level economic inequality (as indexed by the country’s Gini coefficient) corresponds to how unequally characters of different social status are treated. Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally in the Moral Machine. … the differential treatment of male and female characters in the Moral Machine corresponded to the country-level gender gap in health and survival (a composite in which higher scores indicated higher ratios of female to male life expectancy and sex ratio at birth—a marker of female infanticide and anti-female sex-selective abortion). In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable in Moral Machine decision-making.
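Those country-level results are correlations of roughly this shape. A minimal sketch, with made-up placeholder values (the paper’s actual analysis is more sophisticated):

```python
# Sketch of a country-level correlation like the Gini result; the four
# data points are invented placeholders, not the study's numbers.
from statistics import correlation  # Python 3.10+

gini = [25.9, 32.8, 41.4, 48.3]        # hypothetical national Gini coefficients
status_gap = [0.21, 0.28, 0.37, 0.45]  # hypothetical preference for sparing
                                       # high-status over low-status characters

r = correlation(gini, status_gap)      # Pearson's r
print(f"r = {r:.2f}")                  # more unequal country, bigger sparing gap
```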

Just consider the consequences of all this: do we have to change the moral calculus of a self-driving car if the owner sells it to someone in another country, or if they merely drive into one? If we tweak the calculus to remove all benevolent sexism, people will feel these cars are unfairly harming women; either we pair self-driving cars with a global education campaign to eliminate sexism, or there’ll be a mass movement to bake sexism into our cars. At the same time, self-driving cars will save quite a few lives no matter what moral system they follow; should we sweep all this variation under the rug, and focus on the greater good?

Our moral code depends strongly on where we live and how well we’re living, so how could we all agree on a universal moral code, let alone follow it? Non-normative or “descriptive” moral relativism, contrary to the name, is the human norm, and imposing a universal moral code on us will cause all sorts of havoc.

Except when it comes to cats.

[HJH 2018-12-05: Huh, where did that graphic go? I’ve popped it back into place.]


[1] Henrich, Joseph, Steven J. Heine, and Ara Norenzayan. “Beyond WEIRD: Towards a Broad-Based Behavioral Science.” Behavioral and Brain Sciences 33, no. 2–3 (June 2010): 111–35. https://doi.org/10.1017/S0140525X10000725.

[2] Glick, Peter, and Susan T. Fiske. “An Ambivalent Alliance: Hostile and Benevolent Sexism as Complementary Justifications for Gender Inequality.” American Psychologist 56, no. 2 (February 1, 2001): 109–18.