Moral Flexibility: Why Ethicists Are Wrong About Why Things Are Wrong

It is hard to say that I work as a professional ethicist, as there are few jobs that are framed in just this way. To the extent that “professional ethicist” jobs are known as such, they are largely professorial positions. I’ve never held such a position, even when I was teaching in a university. However, many jobs include making ethical recommendations as an important part of the total role. Though some lobbyists would not want their jobs connected with ethics in any way (typically for fear of scrutiny), those who craft public policy proposals are actually in the business of morality and ethics. Implementation of a proposal might depend on a host of practical questions, but the motivation for a public policy proposal is very often moral or ethical in nature. So are many of the arguments for a legislator to vote on a proposal, submit a bill, or act to move a bill forward procedurally. The same is no less true when lobbying an administrative official for regulatory or enforcement action (or inaction). Understood in this way, it’s quite clear that I (and many, many others) have experience working as a professional ethicist. The full number of people working professionally on questions of ethics dwarfs the subset whose job titles explicitly include ethics. It is my membership in this larger set of ethicists that imposes a moral responsibility upon me to question and critique ethics as a profession and ethicists as a group.

But even this larger group does not sum up all people who think seriously about ethical questions. In our non-professional lives, too, we must frequently engage quite explicitly with questions of ethics. Anyone with a child in the “Why?” phase of conversational development certainly spends more than 40 hours a week on ethical questions.*1 Anyone who takes the responsibility of voting seriously must also engage in questions of ethics. It is precisely the ubiquity of ethical reasoning in human life that inspires me to write today about an important shortcoming in the field of ethics.

For those of us who are not professional philosophers or otherwise working professionally as ethicists, it is normal to deliberate on questions of ethics despite very, very few of us having been given explicit training in how to make ethical decisions. Instead, we absorb ideas about ethical decision making incidentally. When we are children, the adults responsible for us are more interested in our avoiding unethical behavior than in our learning the process by which a behavior might come to be understood as ethical or unethical. The way these adults communicate, then, is much less about process than about outcome, less about questions than about answers. For us as children, it can be very difficult to determine whether “Don’t touch the stove!” is an ethical command or a practical one, and even harder to determine why some commands have ethical dimensions and others do not.

For instance, “Don’t touch the stove!” may be a practical command, an ethical one, or both. Not touching your grandma’s new stove because she’s about to have a visitor and your dirty hands will create more work for someone other than you, well, that’s a command that grows out of an ethical consideration of the impact of your behavior on others. Not touching a stove that is, despite your ignorance, hot enough to cause injury, well, that’s a practical command made to prevent injury. Not touching a stove because you have announced a plan to burn yourself as a way to gain access to narcotics for the purpose of a suicide attempt, well, that may be both a practical command and a command that grows out of ethical consideration of the impact of your behavior on others.

The lack of clear training on how to categorize questions of ethics, and on how and why we might use some facts in ethical reasoning while ignoring or minimizing others, leads directly to the problem I want to discuss and to solve. Without clear training, it is common for individuals to develop eclectic, flexible approaches to moral decision making. Some of us with a bit of formal training might identify a primary allegiance to one school of moral thought (“I’m a consequentialist, though I value free will more highly than physical health when evaluating consequences,” or “I’m a Christian situational ethicist focused on acting out agape in everyday life”), but in practice this may describe only a small minority of our ethical judgements.

Failing to understand how we decide what is good can be a huge impediment to actually achieving good. I want to use this space not only to help make our communities better by working together to come up with strong answers to important moral questions, but also to help each of us understand which moral reasoning processes we use under which circumstances, with the goal of becoming better at communicating both our answers and our reasons for believing those answers are correct. This last part, I hope, will enable faster moral change through more directly communicable reasoning that will ultimately be more persuasive. I think this is achievable even while many of us still reason differently, even while we still practice different metaethics, so long as we understand the processes others are employing to arrive at an answer to a particular question.

This is both harder than it might seem and easier. While research suggests not only certain trends in what humans consider moral, but also inter-cultural differences in those trends, there is very little practical research exploring the consistency of individual metaethics. In my opinion, however, the research we do have makes it pretty clear that single individuals employ multiple different strategies of ethical reasoning. On the one hand, this means that it is harder to predict what reasoning will convince someone on a given issue. On the other hand, it implies that you are more likely to have practice employing similar reasoning strategies (and thus a better understanding of those strategies) than if all people who called themselves Talmudic deontologists really derived all their applied ethics from 613 specific rules.

The best research on metaethical inconsistencies involves Utilitarians who evaluate different situations using different Utilitarian goods.*2 The prototypical “good” to be increased is “happiness”. However, there are many cases where the same person who seeks to increase happiness in some circumstances, and who justifies morally prohibiting actions because they may cause decreases in happiness (Do not assault others: injuries decrease happiness), may prohibit an action that increases happiness by appeal to a different quality that must be maximized (Do not take MDMA/ecstasy: regular MDMA use will decrease your performance in school).

Nonetheless, this decision making can still be described as utilitarian. A good (school performance) is to be increased wherever possible, and the behavior about which one is attempting to make a moral judgement (taking MDMA) is still assessed according to how the behavior increases or decreases the presence/amount of that good among a population.

Unfortunately for moral philosophers and for our general, reflexive categorization into moral schools of thought, there is some experimental evidence (unfortunately mostly ignored) that individual people can use appeals to consequences when making some moral judgements, appeals to rules or duties regardless of consequence when making other moral judgements, appeals to nature or evolved tendencies when making yet more moral judgements, and appeals to virtue paradigms or virtue exemplars when making still other moral judgements.

These are typically considered entirely different forms of moral reasoning, entirely different ways of justifying decisions about what is or isn’t moral, what is or isn’t ethical. Each is frequently referred to as “a metaethical approach”, “an ethical methodology”, or simply “a metaethic(s)”.*3 Yet none is a complete description of the “approach” of any moral agent. We all appear to use different forms of moral reasoning at different times.

I will continue to write about specific moral/ethical questions on this blog and to encourage you to comment on them. As we explore particular ethical questions, I’d like to use more than one ethical methodology to examine each question. This can serve multiple purposes. For one, not all of us will employ the same metaethical approach as our default approach. In cases where different people use different reasoning to examine the same question, making those different processes explicit can aid communication and help us understand how we come to different conclusions. But another, more interesting, and to me more important reason to do this is that I am convinced by the available research that individuals use different approaches at different times to answer the same question and use different approaches at the same time to answer different questions.

On the same day the same person might in the morning appeal to evolution and evidence of a widespread incest taboo to justify moral condemnation of incest or even cousin-marriage, and in the afternoon appeal to a divine command to justify moral condemnation of the decisions not to marry and/or not to have children. The first rationale is sometimes called primate ethics (because studies of behavioral tendencies in non-human primates are assumed to provide support for the thesis that widespread human behavioral tendencies are evolved, are natural, and – regardless of whether or not we understand what the consequences might be – therefore possible to label as “good” or “bad”). The second rationale is an example of deontological ethics, the ethics of duties we must uphold or laws we must obey, again, regardless of consequences.

Now, note that consequences are not ignored in ethical systems that fall outside the category of consequentialist ethics. Typically consequences are important, but judgements are made according to rules that are believed to produce good consequences on average, even if there is evidence that a better consequence might result in one specific exceptional circumstance. Consequentialist ethics, then, appeals only and/or finally to consequences and is in principle always open to evidence that indicates a particular situation is likely to have different consequences from other, superficially similar situations. An ethical system based on duty notes that it is sometimes hard to compare consequences, and that strict rules that apply regardless of (likely) consequences prevent both moral paralysis (when a person is actively considering multiple consequences but is unsure how to rank them) and certain moral abominations that appear to be permissible, or at least insufficiently condemned, by appeals to consequences.

It is exactly these benefits of duty or rule based systems that make them an attractive fallback for people who frequently reason according to consequences. Spiking the punch at a party with MDMA may very well cause an increase in happiness, but it might also cause a negative interaction with medicines, or help push a person toward addiction, or cause some other consequence that drastically decreases happiness. A 5% chance of a very bad consequence can be difficult to compare to a 95% chance of a good though transient consequence. In these cases, it might be useful to rely on a strict rule of informed consent for use of drugs and medicines. There may even be research showing that a society benefits on average and over time when its members follow a strict rule of informed consent in these cases, but the consequentialist falling back on deontological ethics while considering spiking the punch is not likely to look up that research or consider evidence that in this case consequences are likely to deviate from the average. In this case, the nominal consequentialist is likely to simply consider spiking the punch an evil or unethical act (as I think it should be considered) without much extra consideration of the rule in question.
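For readers who like to see the arithmetic, the difficulty of weighing a small chance of catastrophe against a large chance of mild benefit can be made concrete with a toy expected-utility calculation. The probabilities and utility values below are purely illustrative assumptions of mine, not claims about actual drug risks:

```python
# Toy expected-utility comparison for the punch-spiking example.
# All numbers here are illustrative assumptions, not real risk estimates.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Spiking: 95% chance of a transient mood boost (+10 "happiness units"),
# 5% chance of a very bad outcome (-500).
spike = expected_utility([(0.95, 10), (0.05, -500)])

# Not spiking: nothing changes.
abstain = expected_utility([(1.0, 0)])

print(spike < abstain)  # True: the rare catastrophe dominates the sum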

This real-life metaethical flexibility is very different from academic ethics. Academic ethicists tend to latch on to a single approach to ethical decision making, a single metaethics. The career of an academic ethicist then typically proceeds to elaborate how the chosen metaethical approach functions or applies that metaethical approach to abstract situations (normative ethics) or specific situations (applied ethics). All three are functionally defenses of a particular metaethics, either explicitly in rarer cases, or implicitly by working to show that a favored metaethics can provide answers to ongoing ethical questions relevant to more humans than merely those with “Professor,” “Ethics,” or “Ethicist” in their job titles. In certain cases, someone will attempt to determine how people make ethical judgements when they aren’t being paid to make them and don’t have the luxury of reading multiple books on a subject before coming to a decision. Even in these cases, however, it is typical for a career to focus on the use of a single metaethical approach and not on how a single person might use multiple metaethical approaches or the rules (or tendencies) which govern when one approach is preferred to another.

This tendency to fixate on a single metaethical approach and not on human metaethical flexibility is simply not nearly as relevant to everyday life as professional ethicists would like to suppose. The downside of metaethical flexibility is that it makes multiple answers to a single question even more likely than when one’s metaethics are constrained and the questions are thus confined only to the correct application of a decision making approach. Metaethical inflexibility thus provides ethical certainty, and ethical certainty has a great deal of emotional appeal. There is an upside to metaethical flexibility, however. Metaethical flexibility allows us to understand how actual people make actual ethical judgements in the course of actual lives.

To the extent that metaethical inflexibility is institutionally encouraged among professional ethicists, the profession is encouraging ethicists to think of most people as acting in ways that most people do not act. Even if it can be said that the profession as a whole encompasses writers and researchers studying all the ways that humans ultimately conclude that something is wrong (or right), methodologically we do not understand how people come to such conclusions because we are encouraged to think very differently. This too often places ethicists in the position of believing that they know how a person makes judgements once they have labeled that person a consequentialist or deontologist or virtue ethicist or placed the subject in some other category of metaethical thinking. Yet, everyday metaethical flexibility tells us that this categorization of persons into schools of thought, communities of like-minded ethical decision makers, is ultimately a false and misleading categorization that reflects only certain decision making, at certain times, on certain subjects.

In short, too often ethicists speak as if (and may even think as if) we know why a person has concluded some act is right (or wrong), and too often we are in error on exactly that point. This cannot continue, not least because if we really do have good answers on what is right, we need to be able to understand others’ states of mind in order to convince them to join us in acting on the knowledge provided by those answers. Successful ethics must not only tell us what ought to be, but also what is now, so that we can identify the direction in which we should be traveling.

Ultimately an inflexible metaethics prefers ought to is. This blog values both ought and is, but so long as the larger community of ethicists strongly prefers one, this blog will seek to emphasize the other.


*1: Though that may not be true if you subtract time spent deliberating on the question, “At what point is it ethically permissible to kill my child?”

*2: If you’re not familiar with the language used here, Utilitarianism attempts to increase a particular good thing amongst a group of interacting human beings (or, more generally, a group of interacting beings that are each individually capable of employing moral reasoning and coming to moral conclusions). “A good” in the context of utilitarianism is therefore presupposed to be good. Something that one has concluded (after deliberation) is a morally praiseworthy action is not “a good” in Utilitarianism. The “good” is that thing to which you compare an action’s consequences to determine if the action is morally right or wrong.

*3: Yes, in certain contexts philosophers treat “metaethics” as a singular noun. I’ve learned to deal with it, though it took a decade or two.


  1. says

    But another, more interesting, and to me more important reason to do this is that I am convinced by the available research that individuals use different approaches at different times to answer the same question and use different approaches at the same time to answer different questions.

    Yes, yes, a thousand times yes. It’s also a question of at what time the decision is made, versus when it’s justified. I’ve observed people present pretty good arguments for why they did something, that were far too complicated and consequential to have been what they were thinking, at the time of the decision (or that depended on unrealistic certainty about post-event outcomes)

    I look forward to reading what you have to say.

  2. khms says

    I’m pretty certain I often use several different metaethics at the same time. You know, attempting to reach what’s sometimes called a “compromise”. Though the weighting given to the different strategies might well differ over time.

    Of course anyone devising a rule (be it a law or just an agreed-upon way to play a game) does so because they want (or pretend to want) to change someone’s behavior (not always the obvious someones) – why else make a rule in the first place? And changing people’s behavior is an ethical decision.
