You’re supposed to bridge Hume’s gap, not dive into it


Hume’s gap, Hume’s law, Hume’s guillotine, the “is-ought” problem, the naturalistic fallacy: they’re all phrases for the same observation, that a moral prescription (an “ought” statement) cannot be derived from an empirical observation (an “is” statement) by itself. The gap can be bridged, if you want other people to clearly see your reasoning and thus evaluate your claim more accurately, with an “if” statement, which delineates a specific goal or intention and provides an avenue for empirical investigation. Which, astute readers will note, I just did with that exact sentence: “You ought to bridge the is-ought divide if you want your moral reasoning to be understood clearly, because the ‘if’ will provide a logical avenue of investigation.” We could do a poll and ask which argument is more convincing: “trees produce oxygen, I need oxygen to breathe, and if I want to breathe, I ought not to cut them all down” or “trees occur spontaneously in nature, nature is good, therefore trees are good,” and thus shed some light on whether my premise is accurate.

Of course, even that formulation assumes “I want my moral reasoning to be understood clearly,” and so it carries a weakness: if I am a charlatan, my actual moral reasoning is likely related to my immediate material gain, but being a charlatan, I’d want to convince you my moral reasoning is something else, in which case my argument falls apart. The charlatan doesn’t want their moral reasoning to be clear, so they have no incentive to bridge the is-ought divide, and will instead pretend you can make it from one side to the other with a judicious application of creative thinking.

And so we jump feet first into moral skepticism, the intellectual quagmire in which I have been stuck waist-deep for a few years. My arms are outstretched, if any theorists from other moral schools care to grasp them in a bid to free me from my prison. I invite you to heave-ho and extract me from this intellectual quicksand in the comments, though I suspect my colleague Marcus will try to sabotage your efforts.

All of which was a rather long-winded introduction to one of the starker demonstrations of the is-ought divide I’ve seen in trans-antagonistic arguments: Society hates trans people, transition “cures” gender dysphoria but marks us as “trans,” therefore we should (somehow) get rid of gender dysphoria without transitioning. I’m not the first trans feminist to have this proposed to them, either. Here’s Zinnia Jones (emphasis original):

What’s being proposed here certainly does not constitute a coherent “treatment” in any sense, and is generally lacking in any details beyond the unwavering insistence that trans identities should be rejected. But what if we were to evaluate this as if it were a treatment? I’ve spent plenty of time examining the measurable results of acceptance and affirming care for trans people – so how does rejection stack up in comparison? What outcomes can be expected in trans people’s lives when they’re pushed to live as their birth sex, and their genders are rejected and denied by those around them?

And Natalie Reed:

That is, in fact, part of what drives the fascination behind this thought experiment. The idea that transition is something so hideously awful that wouldn’t we do everything we could to avoid it? That transition is and only ever should be an absolute last resort, if things are just so completely horrible and you’re so miserable that you have absolutely no other choice? That we’d take the “easy” way out, even if it meant annihilating an essential aspect of who we are, if there were any other way?

The premise is based on a myth. The same myth that would be used to enact such a genocide if this “cure” ever came along as a “better” alternative. The actual reality is that transition DOES work. That it isn’t a horrible terrible fate that we should avoid at all costs, while spending our time daydreaming of appallingly unethical alternatives.

…who both take note that the is-ought divide is not resolved in the slightest, albeit not using that vocabulary. This question, rather than taking a moment to bridge the divide, straps on a pair of parachute wings and skydives straight into it. (emphasis original)

What actually gets me angry is this:

The question is typically posed in terms of what would make things easier for us trans people. But that’s not what it’s about. It’s about what would make things easier for cis people.

I’ve found that people who pose this question tend to really dislike it when I say “no”. And it tends to be used as a springboard for them to go on little rants about how I can’t possibly be satisfied with my body, how I’ll never menstruate or be pregnant, how I’ll never look quite like a cis woman, how I’ll never ever be a “real” woman, and how there’s no way I could ever actually be happy being a transsexual woman and that OBVIOUSLY everything would be better “for me” if I’d just take the stupid fucking hypothetical trans-annihilation pill!

Reed wrote that in 2012, five years ago as of writing this post. Compare it to how that exact question played out when it was posed to me on Medium…

Content Notice for extreme ableism:

However that ‘relatively’ is pretty important, because it won’t be exactly “normal”. By this I mean the average experience, before anyone hounds me in the comments about it. Transition isn’t perfect, you don’t become 100% biological female or male, you don’t sprout ovaries or testes, your voice won’t unbreak and you won’t ungrow tits. Oh and of course, in some cases, there’s sterilisation. There’s a lot of problems with medical transition which I think are great grounds on which to try and seek something better.

When it boils down to it, really the argument is that we should accept that what we have isn’t ideal.

…and how my response…

You continue to insist that medicine is “less than ideal” without explaining how or why, you just flatly assert that this is true. But I do not think so. If I am injured, I want the full breadth of medical technology to enhance my healing. If I am ill, I want the full breadth of medicine to increase the extent of my recovery. And when the rubber meets the road, I suspect this is true of most people. If you are ill, you do not want a witch doctor to stab a voodoo doll, you want a frackin doctor who knows how your body works and knows the strategies to bring you back to health.

This is why I ask why you single out estrogen. It’s medicine like any other. I see no obvious moral component, nothing to indicate “good” or “bad,” as it is simply a matter of necessity. If I didn’t need it, I wouldn’t need it, but you have not established how this would constitute a moral “good” without falling back on the naturalistic fallacy. To me it’s not any different than if I break my leg and need a cast. A cast is not a moral good or a moral evil, it is simply a tool. I take estrogen as a tool to resolve my gender dysphoria and caffeine as a tool to help me think. The equivalency [the author made] is moral, not practical. How are they morally different such that we could apply words like “ideal” which are moralistic judgements? You haven’t established this.

…was met with a litany of abuse:

Please stop being such an idiot now.

Christ woman, what even is your problem?

Stop wasting my time.

So because cisgender people do not need transition-related healthcare and have no anxiety specifically caused by their sexed attributes, trans people ought not to require it either because…

because… (second CN for extreme ableism)

I don’t care about your morality on the subject or how uncomfortable it might make you feel to accept the fact that disabled people are physically inferior, or developmentally inferior to the typical, healthy, human being.

because “fuck you,” I guess? How’s the wind on the way to the bottom of Hume’s gap?

It goes like this: Transitioning shifts several sex characteristics in a myriad of ways. A certain set of sex characteristics is more commonly held than others. Having the more commonly held characteristics makes one “superior,” and if I want to be considered among the superior, then I ought not to shift my sex characteristics.

And so the question that blows this thinking wide open is this: Who the fuck defines “superior”? The answer was simply taken for granted, a splendid example of how cisgender supremacy is not actually a monopoly of cisgender people. Here was a trans person arguing that she herself was morally defective, who could not on any level engage logically with another trans person who doesn’t view themselves that way; in this case, because I myself am not presently convinced moral knowledge can even be had.

Natalie Reed was just as right five years ago as she is now:

That reaction, and that clear disapproval with my answer, is not about concern for me. It also makes it very clear that the people asking weren’t asking out of interest in my “unique perspective” or getting to understand me better, or understand trans people better. They were trying to make a point, and daydreaming about a world they’d prefer.

It was never a question, but always a thesis statement: The world has trans people, and it ought not to. I could just as easily say the world has transphobes, and it ought not to, so we should offer people pills to cure their transphobia, but what do I know? I’m a moral skeptic after all.

-Shiv

Comments

  1. jazzlet says

    That last paragraph just sent me into one of those thought cascade daydreams, this one where there really was a pill to get rid of phobias (all phobias), which was welcomed by people with things like fear of flying or arachnophobia, but abhorred by racists, fundamentalists, and the like, so there was a war about making them take the pill, with covert operatives sneaking the pill into phobics’ food and phobics bombing research facilities …

  2. says

    My arms are outstretched, if any theorists from other moral schools care to grasp them in a bid to free me from my prison. I invite you to heave-ho and extract me from this intellectual quicksand in the comments

    Hopefully I can help :)

    the charlatan doesn’t want their moral reasoning to be clear, so they have no incentive to bridge the is-ought divide

    A few things here.

    First, in this case, the goal of being a charlatan gives them no incentive to be truthful to the people they want to deceive. But if they want to be successful (even in the endeavor of deception), they themselves, in their own mind, will have to bridge such a gap, by knowing what will successfully achieve being a charlatan (or else they risk failing to achieve that goal). It would be what they ought to do if what they’re really after is to be a charlatan. So even on this incomplete analysis, we have moral truth and thus moral skepticism is defeated.

    But enough of that speculation about a hypothetical. The question remains: is that “if” statement true? Is being a charlatan really what they are after? No. Real charlatans in real life have other goals. In fact, they have goals that are far more important to them than “achieve being a charlatan”, because being a charlatan is just a means to an end (even the capital gains achieved by being a charlatan are just a means to an end: trying to have a satisfying life). And so there can be a conflict between “being a charlatan” (or even “getting rich”) and what they themselves would rather achieve.

    Dissuading them from the deception, by appealing to those deeper goals, can be done in at least two ways:

    1) show that, in actual fact, there is currently a better way for them to achieve what they really want, better (more likely to be successful) than resorting to such deception

    2) work to make #1 true if it isn’t currently

    (I even think #2 there is interesting. That’s the kind of community I want to be a part of. One that will work to achieve such things if they aren’t currently the case. That’s the kind of community that good people would be drawn towards, the kind of people I’d want to be with, and the kind of place where there would likely be more safety for me and the people I care about. This is interesting because a good community can reliably get you a pretty good life. And so option #1 might already be true.)

  3. colinday says

    What, if anything, are the facts of ethics?

    Also, didn’t Hume say that he had never heard of a valid argument from an “is” premise to an “ought” conclusion? Wasn’t he merely making an inductive generalization (which would be ironic, coming from Hume)?

  4. consciousness razor says

    Hume himself wasn’t what people today would call a moral skeptic or nihilist. He had plenty to be skeptical about back then (lots of shit flying around in the 1700s), but not that. Although this is just an anecdote about a famous philosopher (perhaps an interesting one if you thought otherwise), it might help to understand that this “is-ought gap” (I’d rather call it a “distinction”) doesn’t itself support such views.

    Hume was always good about expressing himself clearly, so here it is from the horse’s mouth, right at the beginning of An Enquiry Concerning the Principles of Morals:

    DISPUTES with men, pertinaciously obstinate in their principles, are, of all others, the most irksome; except, perhaps, those with persons, entirely disingenuous, who really do not believe the opinions they defend, but engage in the controversy, from affectation, from a spirit of opposition, or from a desire of showing wit and ingenuity, superior to the rest of mankind. The same blind adherence to their own arguments is to be expected in both; the same contempt of their antagonists; and the same passionate vehemence, in inforcing sophistry and falsehood. And as reasoning is not the source, whence either disputant derives his tenets; it is in vain to expect, that any logic, which speaks not to the affections, will ever engage him to embrace sounder principles.

    Those who have denied the reality of moral distinctions, may be ranked among the disingenuous disputants; nor is it conceivable, that any human creature could ever seriously believe, that all characters and actions were alike entitled to the affection and regard of everyone. The difference, which nature has placed between one man and another, is so wide, and this difference is still so much farther widened, by education, example, and habit, that, where the opposite extremes come at once under our apprehension, there is no scepticism so scrupulous, and scarce any assurance so determined, as absolutely to deny all distinction between them. Let a man’s insensibility be ever so great, he must often be touched with the images of Right and Wrong; and let his prejudices be ever so obstinate, he must observe, that others are susceptible of like impressions. The only way, therefore, of converting an antagonist of this kind, is to leave him to himself. For, finding that nobody keeps up the controversy with him, it is probable he will, at last, of himself, from mere weariness, come over to the side of common sense and reason.

    Certainly a strong statement, right out of the gate, that he has no patience and no sympathy for such shenanigans. It’s always really weird to see people referencing Hume, as if he were some kind of 18th-century postmodernist hipster or I don’t know what. He’s not the same dead British dude as G. E. Moore either — two different dead British dudes, right there.

    Hume wanted to give a naturalistic account of how moral claims are grounded in the world. He wasn’t claiming there simply are no such things, thus no need to ground them, as you should (and presumably would) approach the subject of ghosts or leprechauns or whatever. And his theory was very basically that these are rooted in our “passions.” (You can of course read more Hume at the link if you like.)

    There are facts about people and other conscious agents like us — our needs, feelings, which factors allow us to flourish and succeed, etc. — which make certain moral claims true and others false. We don’t always express those claims literally in those terms, sometimes the reasoning underlying them is completely obscure or taken for granted (or some may be wrong), but the idea is that we can make sense of them in this way as statements about the world. That’s pretty much how I think of it, although there’s plenty more useful philosophical work to be done beyond that (as could be said of practically any worthwhile issue).

    Of course, it’s not like you can validly derive any old (true) “ought” statement from any old (true) “is” statement, as many are tempted to do when reasoning fallaciously, or otherwise trying to pull a fast one on you. A pointless and unhelpful naturalistic fallacy would be to say that there are rocks, therefore rocks are good. Or, we live in the best of all possible worlds, with the best rocks and the best flesh-eating viruses, not to mention all of the best ways to burn puppies alive. Or Trump uses words, thus they’re good words — the best words! That shit is patently stupid and everybody knows it.

    You do need to be sensitive to what matters to people (also certain other animals, sentient robots, aliens, etc.), not selfishly concerned only with yourself (or dealing with some other crazy notion about what God commands, etc.) but concerned for anybody who’s likely to be affected by a particular course of action. Failing to do that, being ignorant about such things in one way or another, and so forth, will lead you off track. So you want to know those sorts of facts, understand them as well as you can, listen carefully to other people when they try to tell you about their perspectives of the facts, and make proper use of them, by which I mean using good solid reasoning and evidence.

    If you think that “morality” has to do with some entirely different set of things, then it sounds like you just don’t get what the discussion has actually been about, all over the planet for thousands of years, for both the most-educated and least-educated people. You can have a conversation about some entirely different topic if you like, but that is clearly not any kind of serious critique of moral realism.

    What’s striking is that it seems as if some people think there aren’t facts about what I’m feeling right now, for example. (Am I feeling pain right now? Yes, in fact, my back is sore, and in fact, that doesn’t trace back to any other agents doing anything immoral like harming me. And now both of us know both of those facts. So you probably have no reason to care, and I’m okay with that.) But when it’s put in such plain terms, it’s obviously a wacky idea, so people tend to shy away from biting that bullet and the whole thing turns to mush.

    It’s not really clear how these people are thinking about it. Maybe there just isn’t any one consistent way they’re thinking about it…. But there’s often a strong sense that they somehow want to split things into our shared “physical” world over here and their private “mental” world over there. It’s certainly true that I can’t easily tell what’s going on inside your head, so it is private in that sense, although that’s only a statement about what’s easy for me or what you’re able to hide from me easily. The obvious way to get that kind of distinction for real is if dualism were true. But it isn’t, so I’m always left scratching my head wondering exactly which sorts of facts a skeptic or nihilist thinks the world is (or may be) lacking, such that the world doesn’t (or may not) have the facts necessary for grounding moral claims.

    We may not know them — you or I or any particular person may not — but it’s another thing entirely to say that those aren’t a feature of the objective, mind-independent, physical world, that there just aren’t any. Maybe you just think it’s astonishing that matter moving around in space could perform such feats, and I don’t know what to tell you except that I guess this shouldn’t be the first time you’ve been astonished, since this style of argument works just as well for beliefs in vitalism, magical beings/forces, miracles and all sorts of other superstitious crap. (That is, it doesn’t work.)

    It’s usually a big puzzle figuring out what their (perhaps confused or misguided) expectations were, about what’s required for a claim to be true. Often, positivist notions are tossed into the mix, or certain ideas that we need some manner of “proof” instead of just fairly compelling evidence. Of course, none of that helps at all. Or, if the whole project of moral philosophy doesn’t resemble their idea of what scientific methods should be like (which itself is a “should” statement that I guess they think isn’t or can’t be true…?), then that’s somehow supposed to cast doubt on the idea that it has any relation with the truth. I’m not even sure how that argument’s supposed to get off the ground, since they presumably don’t even think it’s true, but even if it did somehow, it’s not hard to think of non-scientific disciplines and such that also deal in the truth.

    I honestly don’t get how people are presented with this confusing maze of unanswered questions and problems and apparently self-contradictory bullshit, then think to themselves “yeah, that sounds about right, I can work with that.” It’s really baffling to me. I know many of these people think somehow that they’re taking the thoughtful and well-supported and sophistimicated sort of view, but I do not grok how that kind of thought occurs to them.

  5. johnhodges says

    Hume was skeptical of “categorical oughts”, asking where they would come from. One response is to build your ethical system entirely from “hypothetical oughts”, of the form “if you want X, then you ought to do Y”, which are not mysterious. Hypothetical oughts are a statement about cause-and-effect relationships, a claim that Y is a necessary or efficient way to achieve X. They are also a form of advice. A consequentialist ethic is one where there is some overarching ultimate X to be achieved, and all the oughts in it are recommended means to that end. An “objective” ethic is a consequentialist ethic in which the ultimate goal is something objectively measurable; all of the recommended oughts then become testable hypotheses, where it is an objective question whether that Y does in fact lead to achieving X, or whether some other Y would be more effective.
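
    A minimal sketch of that framing, assuming an invented toy world model (the actions, goals, and outcomes below are purely illustrative): a hypothetical ought reduces to a cause-and-effect claim that can be tested and revised like any other hypothesis.

    ```python
    # Toy model: a "hypothetical ought" as a testable cause-and-effect claim.
    # The world model below is invented for illustration only.

    def leads_to(action: str, goal: str) -> bool:
        """Toy cause-and-effect model: does this action achieve this goal?"""
        world = {
            ("preserve the forests", "keep breathing"): True,
            ("cut down all the trees", "keep breathing"): False,
        }
        return world.get((action, goal), False)

    def ought(action: str, goal: str) -> bool:
        """'If you want GOAL, you ought to ACTION' holds iff ACTION leads to GOAL."""
        return leads_to(action, goal)

    print(ought("preserve the forests", "keep breathing"))    # True
    print(ought("cut down all the trees", "keep breathing"))  # False
    ```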

    I propose that, because we are social animals evolved by natural selection, who survive by cooperating in groups, we have a “natural default” set of values that can be expected to be widely popular across all societies and cultures: the Good is that which leads to health (defined as the ability to survive) and the Right is that which leads to peace. Natural selection selects for “inclusive fitness”, those who seek to promote the health of their families and/or larger social group will on average leave more kin in the next generation. The great majority of us will thus be expected to intuitively define a “good person” as a desirable neighbor, desirable from the point of view of those who seek to live in peace and raise families. Hence the “social contract” approach to ethics: If you want to maintain peaceful and cooperative relations with your neighbors, don’t kill, steal, lie, or break agreements, and more generally follow at least the Silver Rule, “Do not do unto others what you would not want them to do to you.” The Golden Rule is worth trying out often, to see if your neighbors will reciprocate.

    All of these are only the broadest of generalities; much of the detail will be culturally relative, depending on the locally accepted answers to “Who is my neighbor?” and “Who are my kin?”

  6. Siobhan says

    Sorry about some of the delayed moderation. I had interwebs troubles yesterday, so hopefully people are still checking the thread.

  7. Hj Hornbeck says

    My arms are outstretched, if any theorists from other moral schools care to grasp them in a bid to free me from my prison. I invite you to heave-ho and extract me from this intellectual quicksand in the comments, though I suspect my colleague Marcus will try to sabotage your efforts.

    I know a general method to pull an absolute from a relative, to go from “I’m X percent confident the sun will rise tomorrow” to “the sun will rise.” Mathematically, what you’re doing here is transitioning the certainty of a hypothesis from a number very close to zero or one to precisely zero or one, once it crosses a threshold. We can probe where that threshold should be by doing a cost-benefit analysis where the cost of one item is practically infinite. For most living things, there’s an obvious choice of cost: death. We can all agree that’s a huge price to pay and barring a handful of corner cases we’d do anything to avoid it.

    Well, I have bad news: the odds of you dying today aren’t zero. At best, for a person in peak health who is not pregnant, the odds of dying today are roughly one in a million; for everyone else, they’re substantially higher. Yet I doubt you altered any plans today to account for the possibility that you’d die. Despite carrying the ultimate cost, the odds were treated as if they were zero, when we know they are not.

    Conclusion: if the odds of something happening today fall below one in a million, you instead treat the odds as if they were zero. You’ve done this repeatedly for every day you’ve been living, and I challenge you to find someone who thinks that’s a bad thing.

    Don’t like my example? We can always tweak things slightly: what if we’re less than 50% certain that something will happen before the Earth is consumed by the Sun? Or the heat death of the universe arrives? It’s silly to worry about things that are highly unlikely to happen before you die, so treat them as impossible events.

    We can easily extend this to cover issues of morality. Would the given rule be a good idea 999,999,999 times out of 1,000,000,000? Treat it as good 100% of the time.
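
    A minimal sketch of that rounding rule, using the one-in-a-million cutoff from the example above (the function name and the sample inputs are otherwise invented for illustration):

    ```python
    # Round probabilities past a chosen cutoff to exactly 0 or 1, as the
    # comment proposes; in between, the uncertainty is kept as-is.

    THRESHOLD = 1e-6  # one in a million, from the daily-mortality example

    def practical_probability(p: float) -> float:
        """Treat negligible odds as 0 and near-certain odds as 1."""
        if p < THRESHOLD:
            return 0.0
        if p > 1 - THRESHOLD:
            return 1.0
        return p

    print(practical_probability(1e-7))       # 0.0: treat as impossible
    print(practical_probability(0.9999999))  # 1.0: treat as certain
    print(practical_probability(0.4))        # 0.4: genuine uncertainty survives
    ```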

  8. says

    consciousness razor@#4:
    What’s striking is that it seems as if some people think there aren’t facts about what I’m feeling right now, for example.

    It is a fact that you have opinions and feelings. It is also a fact that a lot of people have similar opinions about some things, and similar feelings about other things. Doubtless there are some skeptics that will adopt the stance of withholding judgement about anyone’s opinions and feelings but their own (on the basis that they may be lying, or something) – in my opinion that’s a rather extreme view, since I don’t think it makes sense to be highly skeptical if someone says they do not want their nose tweaked. It’s probably even a fact that a majority of people don’t like having their nose tweaked; I’m willing to go along with that.

    So you want to know those sorts of facts, understand them as well as you can, listen carefully to other people when they try to tell you about their perspectives of the facts, and make proper use of them, by which I mean using good solid reasoning and evidence.

    I agree with you. And I like the way you phrased that “perspectives of the facts” – it acknowledges that, while there are facts, we are always dealing with people’s interpretations of those facts, and opinions about those facts.

    But there’s often a strong sense that they somehow want to split things into our shared “physical” world over here and their private “mental” world over there. It’s certainly true that I can’t easily tell what’s going on inside your head, so it is private in that sense, although that’s only a statement about what’s easy for me or what you’re able to hide from me easily. The obvious way to get that kind of distinction for real is if dualism were true. But it isn’t, so I’m always left scratching my head wondering exactly which sorts of facts a skeptic or nihilist thinks the world is (or may be) lacking, such that the world doesn’t (or may not) have the facts necessary for grounding moral claims.

    I don’t speak for all nihilists, of course, so my opinions are only my own. With regard to the shared world versus the private world, I’d say that there’s the world of facts, which don’t change whether we agree about them or not, and then there’s the world of opinions, which are generally based on facts but may not be, or which may differ even when based on the same facts. There’s also a boundary of communication – we can generally be pretty confident that we understand what another person is saying, but we’ve all probably had the experience of misunderstanding someone about something or other. So it’s not a private mental world, exactly – dualism isn’t the issue – but it’s kind of close. It’s not a matter of denying other people’s experiences and opinions are real, so much as it’s that we’ve got a problem being sure that we are sharing our opinions about facts correctly, that we understand one another.

    That’s actually not an all-destroying way of looking at it (in my opinion) because it forces me to question myself as to how confident I am that I understand other people’s opinions, and sometimes to be extra careful, if it’s about something that seems important to me. As an aside, I think this problem/approach is relevant when we are, or are interacting with, people who may have mental health problems – when we are talking with someone who may be experiencing mood swings or even delusions, we do not simply accept their opinions about facts without examining them a bit more closely.

    There is definitely a hard position of nihilism, that argues that we cannot ever be sure we know someone else’s opinions about facts, and therefore we should withhold judgement about everything. I, personally, do not hold that position because (as you say) if we absolutely doubt everything, all the time, we’re stuck being rather arbitrary or solipsistic. My opinion is that there’s a soft position of nihilism which equates to acknowledging that our ability to understand one another is variable, situational, and individual, and if we want to make decisions that affect one another it’s going to take a little more work to run down our mental checklist separating facts from opinions and our understanding of other people’s opinions. I still call that “nihilism” because it is ultimately a rejection of the naive idea that we can and do understand one another about facts, including “moral facts.”

    It’s usually a big puzzle figuring out what their (perhaps confused or misguided) expectations were, about what’s required for a claim to be true.

    I do think there’s a lot of bad epistemological skepticism playing itself out as nihilism. “Oh since you can’t convince me we share knowledge, morality is bumf!” is a short form. It seems to me that we just go a step or two further and acknowledge that understanding one another about some kinds of problems can be particularly hard. In my case, that equates to an instant eyebrow-raising when I encounter someone stating something I believe to be in the realm of opinion as though it were a fact. It doesn’t mean that I immediately and outright reject what they are saying, or that I challenge their statement expecting some standard of “proof”; it’s rather that I step back from accepting their statement, provisionally, while I am trying to understand it better.

    It is my opinion that that is the “moral” thing to do, when we encounter differences of opinion about matters of fact where another person may have strong feelings about the outcome. We are less likely to over-generalize or misunderstand. Perhaps I was raised to believe that tweaking people’s noses is a form of greeting, and everyone I have encountered so far has put up with it out of a sense of obligation – until I run into someone who was raised with the opinion that it is a deadly insult. By questioning the assumption that there are universally held opinions, we tend to be more considered in our actions and our choices and are less prone to making wrong assumptions.

    By the way, I’m also of the opinion that “opinions about facts” or “opinions about moral facts” are just a way of making assumptions and a shorthand for jumping to conclusions about one another’s beliefs and opinions. It seems to me that’s one common cause of conflict: I assume you’ll want your nose tweaked, and you assume I am being disrespectful. When I see someone trumpeting that there are “virtues” that we can all subscribe to, I see someone encouraging people to jump to conclusions about what others believe – that’s why I push back. I’ll observe as well, that in a crowd like FtB where there is a diversity of opinions and a pretty diverse crowd of people, it’s especially risky to jump to conclusions about how other people will interpret facts.

    Or, if the whole project of moral philosophy doesn’t resemble their idea of what scientific methods should be like (which itself is a “should” statement that I guess they think isn’t or can’t be true…?), then that’s somehow supposed to cast doubt on the idea that it has any relation with the truth. I’m not even sure how that argument’s supposed to get off the ground, since they presumably don’t even think it’s true, but even if it did somehow, it’s not hard to think of non-scientific disciplines and such that also deal in the truth.

    I believe that what you are encountering is a poorly articulated version of Agrippa the skeptic’s “mode of dispute”: considering a given moral question, we observe that there are disputed answers, therefore we don’t appear to be dealing with a matter of fact. Then the ‘skeptic’ starts asking for “proof” (which is actually a skepticism fail) or the armchair nihilist wants to reject the entire question. As you say, rejecting the entire question doesn’t get us anywhere – though I’d say that asserting “there are moral facts” doesn’t, either, because then we have to build some kind of consensus opinion based on those (and that project has not gone very well).

    I know many of these people think somehow that they’re taking the thoughtful and well-supported and sophistimicated sort of view, but I do not grok how that kind of thought occurs to them.

    It’s taking the easy way out of a very complicated problem in interpersonal communications and relations. In my mind, it’s just as fruitless an approach as those who assert that they get their morals from a cracker jack box, or a god, or whatever.

    In my opinion, saying “I’m a moral nihilist” can either be a way of getting out of having difficult conversations about important things, by claiming there are no possible important things. But it can also be liberating – by acknowledging that there are only opinions about facts, and that everything is a matter of opinion, then we are forced to try to understand both our own opinions, and those of the people we interact with. Yeah, it gets complicated – I might either have to give up tweaking people’s noses entirely, or build a very elaborate set of rules for when I tweak what list of noses – but my observation (along the mode of dispute) is that, to get along, we have to make that effort anyway, so I find nihilism to be a way of reminding myself to think about my opinions in every circumstance and not to leap to the conclusion that my inner rules apply to everyone else.

  9. says

    By the way, the nihilists who claim that morals are untenable, therefore (mumble mumble, I do whatever I want) are ignoring the obvious fact that even if you don’t believe in someone else’s morals, they do, and they may or may not have the power to retaliate against you if you transgress on their beliefs.

    The “strawman nihilist” position does over-simplify too much.

    In my previous, I said:
    In my opinion, saying “I’m a moral nihilist” can either be a way of getting out of having difficult conversations about important things, by claiming there are no possible important things. But it can also be liberating

    That’s a badly mangled sentence. I forgot the “or…”: or it liberates us to have those conversations with great depth and honesty.

    When I encounter someone dogmatically saying “I am a nihilist, your morals are bunk” (the strawman nihilist), I think they’re probably just as full of shit as the dogmatist who asserts “there are moral facts, therefore: humanism” (the strawman Richard Carrier). I’m comfortable operating in a landscape that is subjective and complicated, because it appears to me that that’s all anyone does, anyway.

  10. says

    Hj Hornbeck@#7:
    Would the given rule be a good idea 999,999,999 times out of 1,000,000,000? Treat it as good 100% of the time.

    A problem with that approach is that it may not guide us adequately when there is a disputed opinion, because the parties in dispute may assign different probabilities to their assumptions. Unfortunately that appears to be more common than exceptional.

  11. EnlightenmentLiberal says

    Let me address the general problem of Hume’s is-ought problem.

    We need to attack a deeper problem first: How do we know anything at all? Specifically, the Münchhausen trilemma,
    https://en.wikipedia.org/wiki/M%C3%BCnchhausen_trilemma
    and equivalently the regress argument.
    https://en.wikipedia.org/wiki/Regress_argument

    It’s widely accepted that the regress argument is a real argument that deserves a proper answer for anyone who is philosophically inclined and wants a firm “grounding” of their beliefs.

    I have been led to believe, by reliable sources, that up until about 100 years ago, the foundationalism answer was just taken for granted by practically all philosophers. Foundationalism, otherwise known as axiomatic belief systems / knowledge systems, otherwise known as presuppositionalism. Today, there are still many foundationalists, but several people also seriously consider the circular reasoning response, otherwise known as coherentism.

    https://plato.stanford.edu/entries/justep-foundational/
    https://plato.stanford.edu/entries/justep-coherence/

    Personally, I subscribe to a certain kind of mix of the two. Offhand, I’ve heard Matt Dillahunty (of The Atheist Experience) describe this idea as foundherentism (a portmanteau of foundationalism and coherentism). Among other places, Matt Dillahunty defended this position in his quite tedious debate with Sye Ten Bruggencate (a well known Christian presuppositionalist).

    I have a small basic set of beliefs, which I might call my foundation, my axioms. These beliefs include certain values. These beliefs form the foundation of the rest of my beliefs. This foundation is also circularly reinforcing. However, outside of this circularly reinforced foundation, all of my beliefs are derived with proper deductive and inductive logic, and observation aka evidence.

    My foundation includes the usual suspects: a commitment to holding logically consistent beliefs, a commitment to conforming my beliefs according to the basic scientific principles and available evidence, a few basic moral propositions including something like Rawls’s Veil Of Ignorance, a generalized form of the Copernican principle to defeat solipsism. Throw on some very basic rules of inductive inference, which, through refinement, testing, and iteration, will lead to the mathematics of Bayes’ theorem.
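
    For concreteness, a minimal sketch of the Bayesian update being alluded to, with an invented prior and likelihoods (all of the numbers are purely illustrative):

    ```python
    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where
    # P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).

    def bayes_update(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
        """Return the posterior probability of a hypothesis given evidence."""
        p_evidence = p_e_if_true * prior + p_e_if_false * (1 - prior)
        return p_e_if_true * prior / p_evidence

    # A hypothesis at 50% prior, with evidence three times likelier if it's true:
    print(bayes_update(prior=0.5, p_e_if_true=0.3, p_e_if_false=0.1))  # ≈ 0.75
    ```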

    Notably for some conversations, my foundation does not include anything explicitly about gods, goblins, ghouls, ghosts, spirits, souls, or any other supernatural stuff. I am a materialist, but I arrived at my materialism as a reasoning on top of my foundational beliefs and the available evidence.

    Of course, for some certain people, they would complain that I haven’t answered the question at all. “You haven’t bridged the gap at all! You just asserted some values by fiat.” I would agree with that assessment. No one can do what they ask. They ask the impossible. The question has been carefully designed to be unanswerable. (Similarly, the “hard problem of consciousness” has also been carefully designed to be unanswerable. However, we know a lot about consciousness. See the work of Daniel Dennett.)

    One cannot use proper logical arguments to argue that one foundation is better than another foundation. However, one can use persuasion to convince someone else to use other values under the assumption that someone else shares enough of a foundation with you.

    My favorite and clearest example of this sort of reasoning is from Sam Harris. (Yes, Sam Harris is a horrible monster, but I liked some of his very earliest work, and he had a great influence on me, before he went off the deep end.) The short version is this: Imagine a hypothetical world where every conscious creature suffers as much as possible, for as long as possible. This is bad. In other words, we should act to avoid this outcome. Every reasonable person should agree to this. Of course, a contrarian might not accept this premise, because after all I’m doing nothing but assert it by naked fiat. However, this should be common ground with practically everyone. With that premise granted, one can then make the next leap, which is the obvious conclusion that morality, aka “the actions that we should take”, is about lessening the suffering of conscious creatures, and improving the well-being of conscious creatures.

    Now, someone might argue that “well-being” is underspecified, but I don’t think that’s a genuine argument. Again, I really like Sam Harris’s answer here, that “healthy” is similarly underspecified, but it’s specified enough for most purposes. His example: We might not know all of the dimensions of health, and we might all have slightly different understandings of what it means to be healthy, and maybe there’s some room for legitimate disagreement, but if someone is vomiting regularly and unable to do normal activities, this is not healthy. Similarly, there might be some disagreements about human well-being, but some things are just blatantly obvious, like murder, rape, etc.

    So, if you accept that we should take actions to avoid the worst possible world, and if you accept that we should take actions to improve the well-being of conscious creatures, then there are objectively right and wrong answers to some moral questions. If some choice makes someone worse off, and no one better off, then that is an objectively bad thing to do (to use the concept of Pareto efficiency).
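
    A minimal sketch of that Pareto standard, assuming toy utility numbers (the function name, utilities, and scenarios are invented for illustration):

    ```python
    # A change is Pareto-dominated (objectively bad, on this basic standard)
    # if it makes at least one person worse off and nobody better off.

    def pareto_worse(before: list[float], after: list[float]) -> bool:
        """True iff someone is worse off and no one is better off."""
        someone_worse = any(a < b for a, b in zip(after, before))
        nobody_better = all(a <= b for a, b in zip(after, before))
        return someone_worse and nobody_better

    print(pareto_worse([5, 5, 5], [5, 3, 5]))  # True: one person harmed, none helped
    print(pareto_worse([5, 5, 5], [5, 3, 9]))  # False: the harm is offset by a gain
    ```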

    To be clear, under this incredibly basic beginning, if someone tortures someone else, and gains enjoyment out of the torture, then I do not yet claim that this is an objectively bad thing to do, under this basic Pareto efficiency standard. To reach the conclusion that this torture is bad, I need to do a much more complicated analysis, and I need to invoke another moral value from my foundation, something like Rawls’s Veil Of Ignorance, which I think can be used to derive some sort of variant of John Stuart Mill’s “harm principle”.

    Tangent: The “harm principle” is not libertarian, and anyone who says it is has obviously not actually read Mill. I chose this name explicitly because John Stuart Mill is a personal hero of mine, especially for his work “On Liberty”, where he defends free speech and his harm principle.

    For practical considerations, i.e. we should punish people who torture and who gain pleasure from torturing someone else, I would need one more basic moral value from my foundation, specifically a basic moral theory of punishment (and specifically punishment for deterrence, quarantine, and rehabilitation, but not retribution).
    https://plato.stanford.edu/entries/punishment/

    PS:
    Science is a belief system. Science is a value system. To use science and to practice science depends on holding certain values, such as the values “I should hold logically consistent beliefs”, “I should conform my beliefs to the evidence”, “publicly available evidence is the highest authority for ending debates regarding the nature of our shared reality (as opposed to human authorities)”, “I should extrapolate from patterns in the past in order to predict the future”.

    Specifically, try to define “effective” in the sense “this approach is an effective way to predict the future”. You should quickly find that it’s recursive, or implicitly depends on assumptions like the uniformity principle, the principle of induction, or some other effectively-equivalent assumption like “the most parsimonious explanation is likely correct”.

    In other words, this whole “fact – value” dichotomy is ultimately a sham. This fact – value dichotomy can only exist by privileging certain values: the necessary values for the foundation of science. The fundamental problem that you have to solve is “how can you have any justified beliefs at all?”. Morality is just a special case. Facts, aka science, are just a special case. Again, thanks Sam Harris for pointing this out to me.

    Also, I know that Sam Harris is not the first person to make such a case, but he was the first person to make the case to me, via youtube lecture, and he made it well enough, and properly enough, and in a properly forceful manner.

    So, earlier I said that there are some objectively right and wrong answers to some moral questions. What do I mean by the phrase “objectively right” ? Something is objectively right if it’s undeniably right, where I choose to use the word “undeniably” to mean “no reasonable person could honestly deny”, and I assume that every “reasonable person” must, more or less, agree with my arguments and positions in this post. Can a contrarian deny my positions? Yes. Can someone who is actually detached from reality, “insane”, deny my positions? Yes.

    PPS:
    I implicitly invoked some sort of consequentialist analysis. I know that Marcus claims that a consequentialist analysis is actually impossible. As best as I can determine, and as honestly as I can relate, Marcus actually claims and believes that it’s impossible to ascertain the moral consequences of one’s actions, even to 51%-49% odds of confidence, in every scenario whatsoever. I think Marcus is being incredibly, incredibly silly in this position, and I put Marcus into the “unreasonable” camp for holding such a silly position.

    Also, regarding consequentialism vs Kantian ethics, etc., I don’t have much respect for the differences. For me, they’re all the same things, but with different window dressing. I’ve written too much already, and so I’ll just leave this. Concerning Richard Carrier, I’m also sorry for linking only to (reportedly) horrible people, but it’s the best essay on this topic that I know of.
    > Open Letter to Academic Philosophy: All Your Moral Theories Are the Same
    > by Richard Carrier on November 11, 2015
    http://www.richardcarrier.info/archives/8903

    Tangent: I have some differences with Richard Carrier on this topic. As far as I can tell, Richard Carrier adopts a sort of ethical egoist approach to morality, which is substantially different (IMHO) from the sort of moral approach that I take which is based on Sam Harris’s approach.

  12. says

    EnlightenmentLiberal@#11:
    The short version is this: Imagine a hypothetical world where every conscious creature suffers as much as possible, for as long as possible. This is bad. In other words, we should act to avoid this outcome.

    Minor nit: in a world where suffering is equal to existence, does “suffer” have anything that distinguishes it from existing? If not, then the creatures in that world probably won’t make a distinction, because there is none. While an outsider might see their existence as suffering, they probably would say “nice weather we’re having today, eh?”

    In other words, I am unconvinced whether we can say something is “bad” without forklifting in a system that allows comparisons – and right there’s your presuppositionalism. Things are “bad” because we know they are “not good” – well, what is “good”? Oh, it’s the absence of badness.

    Now, someone might argue that “well-being” is underspecified, but I don’t think that’s a genuine argument.

    Really? Since you’re opposing “well-being” and “suffering” as opposites, we have to know what “well-being” or “suffering” are without getting into a circular argument.

    Joking aside, that’s where the presupposition comes in. Once you assume we know what “well-being” is then you can attempt to build an entire system atop that assumption, including things like “good people want well-being for everyone” and then you have a very nifty system of ethics built on an assumption that we agree what “well-being” is.

    You don’t get to dismiss that as not being a genuine argument. The non-genuine argument is the one that sneaks “well-being” onto the table when nobody’s looking.
    (Note: see how I am defining “non-genuine” as the opposite of “genuine”, whatever that is)

    Similarly, there might be some disagreements about human well-being, but some things are just blatantly obvious, like murder, rape, etc.

    I don’t see how you can assert they are blatantly obvious, when those are practices that people engage in all the time without accepting that they are obviously unhealthy. Ask Bill Cosby, for one example. I happen to agree that they are “unhealthy” but I don’t see how you can claim it’s “obvious” when, clearly, not everyone agrees about its obviousness.

    This is not a minor quibble! If you’re going to assert that there are “obvious” bad things, then you’ve got “obvious” bad things and you may as well just outright assert that you have an objective set of “moral facts” like the virtue ethicists do, and be done with it – you may as well assert that you get your “moral facts” from god, while you’re at it.

    I need to invoke another moral value from my foundation, something like Rawls’s Veil Of Ignorance, which I think can be used to derive some sort of variant of John Stuart Mill’s “harm principle”.

    I’ve got to withhold judgement on that, since I’m not sure what argument you’d make, though I have to observe that Rawls’ Veil is problematic if the person you’re talking to happens to be a sadist whose idea of proper behavior is to maximize the pain of others while minimizing their own. Or perhaps a masochist who wants to give others pleasure through their pain.

    I’m a big fan of Rawls but I’m afraid he’s trying to get a lot of work done by the usual trick of assuming that people share a lot of values and will feel remotely similarly in similar circumstances. All that is required to refute that is one Bill Cosby, one Eric Raeder, one Lavrenti Beria – and there’s a whole lot of them. (I am also a huge fan of Mill, for what it’s worth)

    Science is a belief system. Science is a value system.

    I agree with you there. By the way, you might enjoy Cziko’s “Without Miracles” – it’s a very interesting attempt to argue that scientific epistemology works because of a sort of evolutionary “survival of ideas that aren’t refuted yet”.

    I know that Marcus claims that a consequentialist analysis is actually impossible.

    I’m not sure if I used the word “impossible” – if I did, I shouldn’t have. I usually put more waffle-words in, e.g.: “effectively unknowable”. I think my argument there is pretty solid, but of course I could be completely barking up the wrong tree.

    Let’s try:
    Marcus actually claims and believes that it’s impossible to ascertain the moral consequences of one’s actions, even to 51%-49% odds of confidence, in every scenario whatsoever.

    I didn’t put a probability range on it, unless I am mis-remembering badly. And I’m not sure I’d characterize my observation as a “claim” – it’s more like I feel I’m pointing out some serious problems with that particular idea.

    I think that people have some reasonable ability to predict the near-term future. E.g.: if I drop a bowling ball off a bridge over a busy highway, the ball is going to land on the highway. I can make some guesses about the possible outcomes of doing that. And they’re pretty bad – I am comfortable with saying “that is a really bad idea.” Where I have problems is with the longer-term outcomes and their values. What if that bowling ball was going to kill the next Hitler? I don’t know, and neither does anyone else, and I would say the long-term consequences of a decision like that are effectively unknowable.

    I’ll grant you that I can say with certainty that it’s my opinion that dropping bowling balls off bridges into busy highways is a “bad idea,” but that’s about it. I think it’s dishonest to pretend to a higher degree of certainty about anything beyond the short term (though I definitely acknowledge that our short-term decisions are the origin of our long-term consequences).

    I think Marcus is being incredibly, incredibly silly in this position, and I put Marcus into the “unreasonable” camp for holding such a silly position.

    Shall we engage in mutual accusations of silliness? Simply labelling someone’s position as “silly” shouldn’t be mistaken for a refutation. In fact, I generally find it slightly suspicious when someone starts throwing labels around without trying to offer a refutation.

    Also, regarding consequentialism vs Kantian ethics, etc., I don’t have much respect for the differences. For me, they’re all the same things, but with different window dressing.

    I tend to agree with that. Kantian ethics presuppose that we can make an accurate prediction of our actions’ future effect, just like the consequentialists’ – I think Kant made a noble effort indeed, but… eh. Now, if we could add the assumption that the mid-term future was going to nearly always, or even 51% of the time, work out the way we predicted it would, then I think we could revive consequentialism.

    As far as I can tell, Richard Carrier adopts a sort of ethical egoist approach to morality, which is substantially different (IMHO) from the sort of moral approach that I take which is based on Sam Harris’s approach.

    Carrier and Harris are both using a variation of the virtue ethicists’ gambit, which is to beg the question by sneaking one’s existing opinions into the discussion in the form of “virtues” or “obvious moral facts”. Obviously, I find that unconvincing for reasons I have already explained.

    http://www.richardcarrier.info/archives/8903

    Yeah, that’s quite a piece of work. I have trouble making out Carrier’s arguments, often, but it dates from his early days of being in love with Philippa Foot’s virtue ethics. I’ve read Foot and I noticed the part Carrier appears to have ignored, where Foot outright says that she doesn’t try to address nihilism though she pokes at a strawman of Nietzsche, which I don’t think was particularly fair. Beating up on old Fred for making unsupported assertions is too easy.

    Short form on virtue ethics: the virtue ethicist appears to be sneaking their opinions about ethics onto the table by labelling them as “virtues” and claiming that it is a moral fact that they exist. It is a fact that they have opinions, but it is not a fact that they are universally shared – so one can build a moral system based on one’s opinions, which is what I think we all do, whether we realize it or not. But, if that’s the case, we can just shorten our moral system to “I have a personal system which is a bunch of opinions” (which is what I’ve been saying all along). I just don’t kid myself into assuming that you share my opinions. By the way, you may notice that my approach is not contradicted by the world as it appears to be – that there is a wide range of opinions about morals – whereas Harris and Carrier’s assertion that there are moral facts doesn’t map onto reality very well.*

    (* #include )

  13. Siobhan says

    I don’t see how you can assert they are blatantly obvious, when those are practices that people engage in all the time without accepting that they are obviously unhealthy. Ask Bill Cosby, for one example. I happen to agree that they are “unhealthy” but I don’t see how you can claim it’s “obvious” when, clearly, not everyone agrees about its obviousness.

    This is not a minor quibble! If you’re going to assert that there are “obvious” bad things, then you’ve got “obvious” bad things and you may as well just outright assert that you have an objective set of “moral facts” like the virtue ethicists do, and be done with it – you may as well assert that you get your “moral facts” from god, while you’re at it.

    The length of the responses here likely warrants another blog post all on its own, when I have the time. But Marcus has neatly captured one of my biggest sticking points: Regardless of how we define morality, there are always other actors breaking our code who will no doubt possess a host of rationalizations for how their behaviour is moral.

    I have no means to compel Cosby to stop drugging and raping women if my only tool is moral philosophy. I can drum up an argument for “rape is bad” but Cosby can side-step that by saying “I haven’t raped anyone, so your argument doesn’t apply.” I can also drum up an argument for why I, personally, also do not want to be raped–Cosby can do the same thing. At some point it boils down to who has the biggest stick to enforce their moral code, and whether one of those codes insists on encroaching on the other.

    I don’t consider that a satisfying conclusion, as I did characterize this as a “quagmire” and a “prison.” But it is where I’m at. I certainly have an internal moral code but I don’t pretend to have resolved the self-referential problem of arguing by definition. So if you want to do something to me that I don’t want done to me, my argument isn’t “don’t do that because [moral philosophy],” it’s “don’t do that because I’ll try to hurt you.”

    Is it a cop out? Abso-fucking-lutely. It’s subjective, and totally inadequate, but then again I’ve admitted as much.

14. Marcus Ranum says

    Shiv@#13:
    it’s “don’t do that because I’ll try to hurt you.”

    I promise I will not drone on and on about this point, though I’ve been meaning to do a post “in defense of retaliation” for some time.

One of the things consequentialists fairly conspicuously tend to leave out is that often, one of the consequences of our actions is retaliation. I haven’t firmed up my arguments on the issue yet, but I think one could make a good case that retaliation is an important form of communication; sticking a knife in a would-be assailant’s kidney is an unequivocal way of saying, “no, do not want.” There is a complex problem around at what point retaliation becomes aggressive pre-emption (i.e.: punching a nazi because you believe that they would be happy doing something nasty to you, so you’re telling them “no, not like nazi” in advance of that) versus pure retaliation (i.e.: finding out which cop pepper-sprayed you and shooting them with a scope-sighted rifle). The latter example is deliberately extreme, but one could actually throw together a pretty good-sounding bullshit consequentialist argument that it was a benefit to all the other cops, helping show them how to be better cops, and thereby improving civilization.

    That’s one of the reasons that Philippa Foot’s rather sad strawmanning of Nietzsche was so annoying. She doesn’t really do a fair job of presenting the opposing argument (neither do Carrier and Harris, but I think that’s because they’re not thinking deeply about the problem) – I believe one can construct a “virtue ethics” and consequentialism based on taking absolutely appalling actions while justifying them with potentially beneficial outcomes. The ur-form of that, of course, is the stupid “ticking time bomb” scenario. Harris doesn’t consider the results of building an entire ethical system on that doctrine, does he? It’s … interesting.

    —–
When I was in high school, in our long-running D&D game, I played a paladin who called himself “Fist of God” and who invariably made the most appallingly violent and destructive decision in any given situation, yet had very plausible reasons for why doing so was to the benefit of everyone. His coat of arms read “dead men cannot sin” and it went downhill from there. I guess I’ve been trolling consequentialism for a long time!

  15. EnlightenmentLiberal says

    But Marcus has neatly captured one of my biggest sticking points: Regardless of how we define morality, there are always other actors breaking our code who will no doubt possess a host of rationalizations for how their behaviour is moral.

I tried to be clear that I was simply describing my approach, which is also the widespread approach throughout most of history, and even a common modern approach. However, I also tried to be clear that it’s just presuppositionalism (with some circular reasoning thrown in), aka foundationalism + coherentism.

    To Marcus
About your extreme retaliation example. Again, this might be due to the difference between us regarding our ability to predict the consequences of our actions. It seems quite simple and obvious to me: Suppose you try to contrive an example where your D&D paladin would kill someone over a trifling affair for the greater good. I take it that you might argue that killing this person for jaywalking somehow produces beneficial consequences.

First, let’s ignore the afterlife, which is a relatively undisputed thing in the setting and which greatly changes the calculus.

    I would need to see a proper example, but I strongly suspect the consequence that you are missing is: Even if in your contrived scenario it happens to work out, your paladin would not want someone else applying similar logic to similar scenarios, especially not against you. An extremely important element of any practical consequentialism is trying to foster a culture of rule of law, and how any particular act might degrade the rule of law. Thus, any proper consequentialist, IMAO, will sometimes do acts with clearly bad outcomes, just to support and foster rule of law, because having a culture that respects rule of law is such a huge positive outcome that it can often outweigh bad outcomes in a particular case.

    For example, I can contrive a scenario where torture is morally acceptable. Hell, I can contrive examples where, according to a naive consequentialism, it’s morally mandatory to torture. However, the key sticking point is that such scenarios are so wildly unrealistic, and therefore rare, that we as a society would be better served by creating a law that says that torture shall be unlawful in any case. We need to compare it to the alternative: creating a law that allows torture in some cases where “it’s really justified”. However, we know enough about sociology and psychology and politics to know how that sort of rule can and will be abused, a slippery slope problem, and therefore it’s just better to ban it in all cases, to avoid lots of acts of unjustified torture, at the sacrifice of the extremely rare events of justified torture.

    In other words, I consider it likely that your D&D paladin used consequentialist reasoning and logic that your paladin does not want other people to use. That’s the problem. We need to create rules, guidelines, laws, on the assumption that someone else will be judge, a relatively fair and impartial judge, but still a flawed human being, and importantly someone who is not us.

  16. EnlightenmentLiberal says

    PS:
I’m not trying to sneak torture in the backdoor, like Sam Harris does. I actually mean it when I say that there ought to be a law, and it ought to be vigorously enforced, including against the torturers of Khalid Sheikh Mohammed (whom, IIRC, Sam Harris named specifically as someone it was justifiable to torture for intel).

17. Marcus Ranum says

    EnlightenmentLiberal@#15:
    your paladin would not want someone else applying similar logic to similar scenarios, especially not against you

    Of course not!

I think we’re on the same page, really – along with Kant, who’s basically saying “don’t do unto others what you’d hate if it was done to you.” And that’s also in line with Rawls’ Veil, really – they are attempts to get people to see their actions in terms of what the consequences would be to themselves. I really like that angle and I think it’s a really good attempt (it almost makes it!) because it’s forcing the other party to look at their own opinion, and how they’d feel about it. That works, it really does, as long as their opinion is not something that’s really off the charts. But the real problem with that approach is, as I’ve observed elsewhere, that people are quite comfortable with creating a world in which what applies to others does not apply to them. I wonder if Jeffrey Dahmer’s last thoughts were “this is unfair!” …. probably. And I wonder what Bill Cosby would think if he were in Hammurabi’s court and was given drugs and tossed to the palace guards for play-time, so he could see how he liked it?

There’s this solipsistic moral opinion that we keep bumping up against, which is “it’s OK for me to do that to you, but don’t you dare do it to me!” As long as there are people who can hold that view, I don’t see how any moral codes built on parity are able to work, unless we can come up with some other way of arguing that certain opinions are categorically always wrong. (“Thou shalt not kill”, if granted divine weight, neatly cuts apart the Dahmer question.)

    An extremely important element of any practical consequentialism is trying to foster a culture of rule of law, and how any particular act might degrade the rule of law. Thus, any proper consequentialist, IMAO, will sometimes do acts with clearly bad outcomes, just to support and foster rule of law, because having a culture that respects rule of law is such a huge positive outcome that it can often outweigh bad outcomes in a particular case.

Yes, that’s my understanding, too. Then we’re left with some kind of amalgam of popular opinion deciding the norms (which I believe is what happens, anyway). There are also problems with the rule of law argument, for example: where were all the consequentialists hiding when slavery was legal and the norm for hundreds of years? If it was normative, was it “moral” at the time? This is a question I have been asking for years, under the heading of “is morality time-invariant?” It would be time-invariant if there were some way of concluding that certain behaviors were always wrong. We observe that doesn’t happen, therefore it seems we can reject any moral system that would produce absolute guidance across time. (For example: if Aristotle were so ‘virtuous’, wouldn’t his virtues of kindness and fairness have led him to loudly reject slavery as it was practiced while he was alive? Of course they didn’t – so either Aristotle’s virtues are historical garbage, or Aristotle knew he was being horribly immoral and just chose a life of vice instead of virtue. And thus, I dispense with virtue ethics…) (closes the trashcan lid)

    In other words, I consider it likely that your D&D paladin used consequentialist reasoning and logic that your paladin does not want other people to use. That’s the problem. We need to create rules, guidelines, laws, on the assumption that someone else will be judge, a relatively fair and impartial judge, but still a flawed human being, and importantly someone who is not us.

I agree completely with that. If we said “our moral system is ‘whatever Jane Goodall says’” then we’d have a consistent and fact-based moral system. It’s a bit arbitrary, but it’s OK as long as we don’t pick Mike Pence as our authority.

By the way, I would call that a legal system, not a moral system. And in general I favor not spending much time on moral systems when we can try to establish legal systems instead. I don’t argue that we can conflate legal systems with moral systems, because of the problem of time-variance: it was legal to own slaves, and then it wasn’t – was slavery always wrong, even when it was legal?

    I am not advocating the approach to morals that my D&D paladin used. He was a lot of fun to role-play but basically he was a troll with a huge sword.

    I can contrive examples where, according to a naive consequentialism, it’s morally mandatory to torture. However, the key sticking point is that such scenarios are so wildly unrealistic, and therefore rare, that we as a society would be better served by creating a law that says that torture shall be unlawful in any case.

Yes, that’s the “ticking time bomb scenario” problem. By the way, that whole problem could be resolved fairly neatly if Harris framed it as:
    “Jack Bauer learns of a ticking time bomb, and tortures the terrorist, saving all the lives in LA. He then turns himself in to the police for committing a capital crime, and is found guilty and executed. Justice is served, LA is saved, and the terrorist is thwarted.”
It’s odd to me that we never see it framed that way – could it be that the time bomb scenario’s consequentialism is deliberately imbalanced? That’s one of the other problems with consequentialism, BTW: we can’t see far enough into the future to know what all the consequences are, so we can’t make a real decision. What do I mean? If Bauer offered himself up for justice believing that he would be granted a presidential pardon, then is he really doing the right thing? I don’t even know what the right thing is, so I can’t even reason about what the correct outcome would be.

    One point that I think needs to be made: It seems to me that one can be a moral nihilist while still trying to be what is, in our own opinion, a good person. I try to do whatever I think is right at the time, and often that means doing things that are contrary to my own perception of my self-interest. When I think about people like Raeder, Cosby, Dahmer, Trump, I mentally classify them as moral nihilists, also. They just don’t realize it, but they have codes of behavior that amount to “do whatever I want.” I accept that someone may believe that they have a moral system anyway, yet behave that way, but it’s my opinion they’re just fooling themselves and they sure as hell aren’t fooling me.

18. Marcus Ranum says

    EnlightenmentLiberal@#16:
    I’m not trying to sneak torture in the backdoor, like Sam Harris does.

    I can tell you weren’t. I believe you are being honest.

    I actually mean it when I say that there ought to be a law, and it ought to be vigorously enforced, including against the torturers of Khalid Sheikh Mohammed

    I agree.

Besides, as I mentioned above, if those torturers really believed they had gotten valuable intelligence, they’d be confident that they would be exonerated in court, or found guilty and given a suspended sentence. Strangely, they don’t seem to be confident enough in that belief to step forward…

19. Marcus Ranum says

Perhaps I should name that the “Jack Bauer as Sidney Carton time bomb scenario.”

    I don’t know about you, but if I was called upon to give my life to save LA, I’d go to the guillotine with a smile. ‘Tis a far, far better thing and all that. I could probably name a dozen people off the top of my head who’d do likewise. I’m not sure whether that’s extreme consequentialism, or what, though.

  20. EnlightenmentLiberal says

    I agree with practically all of that, except your IMHO weird fascination with the point that it can be hard to predict consequences of our actions.

  21. Siobhan says

    I am not advocating the approach to morals that my D&D paladin used. He was a lot of fun to role-play but basically he was a troll with a huge sword.

    You say that as if us nihilists are not trolls with huge swords.

    /snerk

22. Marcus Ranum says

    Yuge swords. The best. I know a lot about swords. We’re going to take North Korea’s swords away. Hillary Clinton gave them swords. Consequentialism is fake philosophy!

23. Marcus Ranum says

    EnlightenmentLiberal@#20:
    except your IMHO weird fascination with the point that it can be hard to predict consequences of our actions

    It seems to be pretty straightforward to me, but let me try to explain.

It appears that consequentialists want to evaluate whether an action is moral or not by considering what benefits it may or may not incur for others. The degree to which we would say a decision is “good” or moral is the degree to which it affects others, not how we feel about it. (This is sometimes phrased as “intent isn’t magic.”) In consequentialism, we are only assessing the outcome.

But there’s a problem: the outcome is not completely under my control. The best case scenario is that I do something, thinking it will result in a better outcome for everyone, but I’m mistaken and my efforts are neutral; nothing happens. What if I’m horribly mistaken and I make things much worse, based on exactly the same consequentialist reasoning as before? Now I have decided to try to make things better because I think it’s the right thing to do – but I have actually wound up doing harm. I may not be an “evil*” person, but I’m not a good person, because my actions caused harm in spite of my moral calculus telling me I was doing the right thing.

    We can step back from that a bit and say that my calculus that led me to do the right thing was “good” in and of itself but then we’ve defeated the purpose of consequentialism itself, which is that we judge actions based on their consequences, as a way of not having to factor in the Bill Cosby who convinces himself that his intent was good, even though the effect of his actions was harmful.**

    But wait, it gets worse! Suppose your own and others’ assessments of the consequences of your actions change over time, as often happens. Let us say we are Neville Chamberlain, who feels he is making the “good” decision regarding Hitler’s ambitions. Less than a year later, even he realizes that his “good” decision (that should have led to improved consequences for everyone) was not so good – in fact, he’s now an “appeaser” and a fool. Was his decision morally right, or not?

I think that’s a problem for consequentialists, and I’d go farther and say that a lot of consequentialist reasoning seems to be post facto self-justification and excuse-making.

    The nihilist, of course, simplifies things by doing whatever they do, and post facto self-justifying if they need that to feel better, or patting themselves on the back “I meant well” and carrying on. We’re not concerned with those inconsistencies because we know it’s all BS (which doesn’t mean we can’t play with the BS if it makes us feel better).

By the way, that’s another way that time-invariance of morality comes up: our (and others’) assessment of the consequences of our actions changes over time. Let’s say you’re Norman Borlaug and you think it’s a “good” thing to stave off world hunger and economic collapse by releasing high-yield wheat, rice, and corn. 50 years later, you have set the world up for an even bigger, nastier collapse, with 8 billion lives at stake instead of just 3 billion. Your decision to attempt to do “good” has placed billions of peoples’ necks on the chopping-block.

    Again, the nihilist doesn’t have to make sense of that situation; they did what they did and it seemed like a good idea at the time, oops, I screwed up, move on! We must look forward, not back!

    And, I’m sure you noticed that I am characterizing the nihilist’s decision-assessing process as being remarkably similar to what most people appear to do most of the time. It is my opinion (I wouldn’t want to try to argue it) that most people who think they are moral are actually nihilists with poor situational awareness. Our limited ability to process cause and effect over time makes that inevitable. Put differently: there are a lot of people who offer a lot of “reasons why WWII happened” – they aren’t all right, but they are all engaging in revisionist consequentialism. The nihilist forgives Neville Chamberlain, rightly, because they know Chamberlain had no idea how his actions were going to work out, and neither did any of the other people involved. The nihilist might point out to the consequentialist that, if Hitler were really thinking clearly he’d have realized that his big idea was going to end with Germany in ruins and himself in a ditch. But if the consequentialist went back in time and told Hitler that, Hitler would probably tell them, “get lost! this is going to be great!”


    * let’s say “evil” is deliberate malice; I can’t avoid a circular definition with moral language, but for the sake of argument let’s call it the opposite of “good.”
    ** let’s imagine Cosby has convinced himself that he’s going to help the person with their career, after he’s done having sex with them.

  24. EnlightenmentLiberal says

    But there’s a problem: the outcome is not completely under my control.

    I’ll agree that intent is not magic, in the manner intended.

However, I don’t need “completely under my control”. In principle, I just need slightly better than 50:50 odds that it will cause net good in at least some scenarios of interest. By “net good”, I mean where the estimated good and harm are weighted according to the relative probabilities of them happening, and weighted according to the severities of the benefit and harm. Often these are weighted in non-linear ways, which is a non-trivial affair, with lots of room for subjective values IMHO. It’s not as simple as putting a point value to everything, multiplying by the estimated probability, and summing it up – hence “non-linear”.
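A minimal sketch of the kind of weighting I mean – every number and the harm curve below are invented purely for illustration, not a real moral calculus:

```python
# Toy "net good" estimate: probability-weighted outcomes with a
# non-linear weighting of harms. All values here are assumptions.
outcomes = [
    (0.60, +10.0),   # likely modest benefit
    (0.35, -5.0),    # possible modest harm
    (0.05, -100.0),  # rare catastrophic harm
]

def weight(value):
    # Assumed super-linear penalty for harms: severe harms count
    # disproportionately. This curve is an arbitrary illustrative choice.
    return value if value >= 0 else -(abs(value) ** 1.5)

net_good = sum(p * weight(v) for p, v in outcomes)
print(net_good)  # about -47.9: the rare catastrophe dominates
```

Note that even this toy version forces two subjective choices – the probabilities and the shape of the weighting curve – before any “calculation” can start.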

Moreover, it seems like you’re taking a giant shit on science altogether. It seems like you’re saying that we cannot make scientific predictions about our actions. At least, that seems to be the thrust of your point. Perhaps I am missing some subtlety of your point.

    Maybe you’ll try to argue “but we cannot tell about the consequences 1000s of years from now”, which is interesting, but my knee-jerk reaction is to dismiss that sort of argument as follows: For really big stuff and really big decisions, we can make a good guess that is true more likely than not, and for the small stuff, the chance of good is about equal to harm, and it washes out in the end, for consequences thousands of years from now. However, the consequences in the near-term we can predict reasonably well, and those matter too.

As for weighting the importance of good vs harm now against good vs harm in the future, etc. – that is a complicated question, and I don’t mean to pretend to have an answer for every case that every reasonable person will agree to. (In this aspect, I might disagree with Sam Harris – I don’t know, he waffles on this point AFAICT.)

Maybe it’s best to put it like this: I don’t have to have an answer to every moral question in order to claim that I have answers to some moral questions. I don’t claim to have answers to every moral question, and I don’t claim that there are, even in principle, objectively right, unique answers to every moral question. I hate to make this comparison, but it’s apt: To the standard intelligent design creationist, I often have to repeatedly inform them that I don’t have to know everything with science in order to know some things with science.

25. Marcus Ranum says

    EnlightenmentLiberal@#24:
    Moreover, it seems like you’re taking a giant shit on science altogether. It seems like you’re saying that we cannot make scientific predictions about our actions.

    No. Please split those apart. I am not taking a shit on science altogether. Science is good at predicting limited futures in the limited ways that it does. I’ve got absolutely no problem with that, at all. For example, we can use science to predict how a cup of water will cool down, or that GPS will work, or how a computer will function. In fact, I have made most of my living based on the assumption that computers mostly do what they are told to do (and that it’s usually enemy action when they don’t) – I think it’s specious to say I am taking a shit on science when I am clearly communicating with you over the internet.

We can make scientific predictions of some of our actions, sure. As I mentioned earlier, if I am standing on a bridge over a highway full of traffic, holding a bowling ball, and I let go of the bowling ball – science predicts that it will go down into the traffic. Science even predicts that it’ll bounce around a bit. Practically, I can say that there is a high probability (let’s say 80%!) that it will hit a car. After that, science’s predictions remain accurate but they become unknown to me; I cannot predict whether the ball is going to hurt or kill someone. Or maybe the ball is going to land on someone’s abusive spouse and they will both be rid of their spouse and collect a substantial insurance settlement and be very happy. I have no way of knowing what is going to happen beyond the basic predictions that the ball is going to go down and it has a good likelihood of hitting a car.
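To make that outline concrete – the first-order numbers are wild-ass guesses and the downstream entries are deliberately blank, because that’s the point:

```python
# The prediction horizon, concretely: physics-level outcomes get rough
# probabilities; human-consequence outcomes get none. All numbers are
# wild-ass guesses by construction.
first_order = {
    "ball goes down into traffic": 1.00,  # gravity: science nails this
    "ball hits a car":             0.80,  # rough guess from traffic density
}
downstream = {
    "someone is hurt or killed":   None,  # no principled basis for a number
    "nobody is harmed":            None,  # ditto
    "insurance-windfall scenario": None,  # ditto, and absurd to price
}
for outcome, p in {**first_order, **downstream}.items():
    print(f"{outcome}: {p if p is not None else 'unknowable from here'}")
```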

    I don’t believe that you are claiming that you’d be able to predict with any kind of accuracy what the outcome of dropping the ball would be, much more than I’ve outlined. I don’t believe you’d be able to predict anything remotely resembling:
I just need slightly better than 50:50 odds that it will cause net good in at least some scenarios of interest. By “net good”, I mean where the estimated good and harm are weighted according to the relative probabilities of them happening, and weighted according to the severities of the benefit and harm.
What I think you can do is say “well, I don’t like the idea of having a bowling ball dropped on my car as I drive by, so I am comfortable assuming that the other drivers on the highway wouldn’t either” and you’re backfilling from there. But that’s not really a consequentialist argument about understanding the effects of the actions – that’s assuming that your opinions about the effects of your actions hold as a general rule: that’s solipsism. (Or if you’re a nihilist, you’d shrug and say, “yeah.”)

    Often these are weighted in non-linear ways, which is a non-trivial affair, with lots of room for subjective values IMHO.

That doesn’t seem much like scientific or consequentialist reasoning – that sounds a lot like you’re accepting my point: that we can’t really predict the consequences of our actions, so we fudge the hell out of our assessments. Which is fine – it’s what we all do. But please don’t try to pretend it’s all sciency: you simply do not have enough information to go on to make an informed decision.

As I should have said: I have not, and would never, drop a bowling ball into traffic. Because it’s my opinion that it’s a bad idea: a) it wouldn’t be very fun, b) retaliation, and c) it’s probably illegal – and given that the amount of fun (zero) doesn’t outweigh the cost (potentially huge), I don’t play in traffic. Those considerations are wild-ass guesses about outcomes. I’m comfortable making them because I’m not pretending to have some methodology other than “eh, whatever. seems like a bad idea.”

    It’s not as simple as putting a point value to everything, multiplying by the estimated probability, and summing it up

I agree that consequentialists who talk about “moral calculus” are making a terrible mistake. When you encounter consequentialist arguments (especially if they are making that damned trolley car argument, or ticking bomb) they like to pretend that there is such a calculus possible. That is what I object to. Although I will say that if consequentialists were honest and said “we try to do stuff we think will work out OK based on our understanding of the situation” I wouldn’t be so hard on them. Because then they would be admitting that they don’t really have a moral system at all; they’re just going by opinion and wild-ass guesses, like people do.

    For really big stuff and really big decisions, we can make a good guess that is true more likely than not, and for the small stuff, the chance of good is about equal to harm, and it washes out in the end, for consequences thousands of years from now. However, the consequences in the near-term we can predict reasonably well, and those matter too.

    I agree with that, but that does not really answer the question of whether or not Neville Chamberlain was a good guy or a bad guy, or whether Norman Borlaug did a good thing or a bad thing. And, I’d argue that if one actually had a moral system, it would allow us to determine the answer to those questions. I observe that, of course, it does not – so then what is the moral system worth? If you can’t tell whether something complicated was moral or not, why have a moral system at all? (Come over to the dark side… Nihilists get all the chocolate chip cookies we can obtain by whatever means we feel are appropriate!)

    I don’t have to have an answer to every moral question in order to claim that I have answers to some moral questions.

    Ok, I’ll buy that.

I don’t claim that there are, even in principle, objectively right, unique answers to every moral question.

Me either. I just cut a little deeper, is all, and am a bit more accepting that my apparent decisions are mostly a matter of opinion. As I said earlier, I’m not saying that nihilists can’t live by codes of behavior that we create and accept for ourselves. I do. I think I’m a pretty good person, but that’s because I define “good person” as being “like Marcus” and I’m a lot like Marcus. There are a whole bunch of things Marcus will and won’t do, and the details are very complicated, but I’ve never given someone drugs so I could rape them, or started a world war, or kicked a puppy, or a whole bunch of things like that.

It seems to me that once we accept that we’re complicated and arbitrary, why not stop wasting our time with fig-leaves that pretend to understand the consequences of our actions – and just do the best we can with the situation as we see it at the time?

I’m not saying “moral systems are impossible”; I am saying “moral systems appear to be personal.” And, as a corollary, they are fractally detailed, so communicating them fully to someone else is probably impossible, or a great waste of time, or annoying to everyone involved.

    To the standard intelligent design creationist, I often have to repeatedly inform them that I don’t have to know everything with science in order to know some things with science.

    Sure, but what things? ;)

    That’s another whole problem with consequentialism – namely, do we share standardized weightings about the relative values of outcomes? Of course we do not. Therefore we cannot share our reasoning about moral issues.
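To illustrate with invented numbers: two people can agree on every probability and still reach opposite conclusions, purely because of how they weight harm:

```python
# Two agents, identical probability estimates, different severity
# weightings -- and opposite conclusions. All numbers invented.
outcomes = [(0.7, +8.0), (0.3, -15.0)]  # (probability, outcome value)

def net_good(weigh):
    return sum(p * weigh(v) for p, v in outcomes)

linear = lambda v: v                              # harms weighted linearly
harm_averse = lambda v: v if v >= 0 else 3.0 * v  # harms weighted 3x

print(net_good(linear))       # +1.1 -> "do it"
print(net_good(harm_averse))  # -7.9 -> "don't"
```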

  26. EnlightenmentLiberal says

After that, science’s predictions remain accurate but they become unknown to me; I cannot predict whether the ball is going to hurt or kill someone.

    I don’t believe that you are claiming that you’d be able to predict with any kind of accuracy what the outcome of dropping the ball would be, much more than I’ve outlined.

What? Yes, you can do predictions. Yes, you can. This is so amazingly and immediately obvious that I don’t even know how to respond. Of course you can make accurate enough predictions for this case. You know that there is a substantial risk of immediate significant harm from this action.

    Or maybe the ball is going to land on someone’s abusive spouse and they will both be rid of their spouse and collect a substantial insurance settlement and be very happy.

This is not a serious reply. You also know that such possible immediate “accidental” beneficial outcomes are substantially less likely to occur than the outcomes where the ball causes immediate harm to someone, i.e. kills someone who doesn’t deserve it, who will be missed by loved ones, etc.

    This sort of reasoning is no better than your cartoonish D&D Paladin, and I thought that we had already dispatched that as a silly strawman. It’s silly for the following reasons:

I am practically certain that you have a gut feeling that you don’t want someone else dropping bowling balls from bridges over freeways while you’re driving on the freeway, because you implicitly recognize that the odds of you being harmed by someone else doing such a thing greatly outweigh the odds that you will be benefited by it. I’m sorry to be so strong in my language and argument, but this is so amazingly obvious that I think that you must be trolling me. Your argument is incredible, fantastical. And I haven’t even gotten to the second point: rule of law concerns.

We also need to consider their malicious intent and/or gross negligence, and we need to punish that, in order to foster rule of law, in order to achieve better outcomes in future cases. Punishing this hypothetical bowling ball person according to rule of law will help ensure that future bowling ball incidents do not happen, but it also directly helps in many other scenarios that don’t involve bowling balls at all. It creates a precedent, an understanding in people in society, that certain actions that carry a risk of harm, done with malicious intent and/or gross negligence, will be punished. Proper sophisticated consequentialism must take into account intent, because punishing or not-punishing people according to their intent has consequences, i.e. punishing people according to intent fosters a culture that respects and follows the rule of law.

  27. Hj Hornbeck says

    Geez, I get busy and this comment thread fills up with words. :P I’ve done my best to skim and catch up, but apologies in advance if I restate something already covered.

    Ranum @10:

    A problem with that approach is that it may not guide us adequately when there is a disputed opinion, because the parties in dispute may assign different probabilities to their assumptions. Unfortunately that appears to be more common than exceptional.

I’d argue, to the contrary, that what you consider common is exceptional. You say in comment 25 that “moral systems appear to be personal,” a point I agree on, but people bear a remarkable resemblance to one another. They have an optimal temperature range for adequate functioning, a desire to live and procreate, and so on. There are differences, of course, but it is easy to fixate on “should I torture a terrorist to save millions of lives?” and forget that “should I torture a mail-person so I can read others’ mail?” is a far more common scenario. Overlap the moral codes of millions of people, and clear patterns emerge. We can then enumerate these patterns and turn them into a shared moral code.

This is a lot like Kant’s Categorical Imperative, but with one crucial difference. The Imperative fails because it attempts to build a code which works 100% of the time, which at minimum is practically impossible. If instead we set our sights on a moral code which we expect to fail less than once in the median human’s lifespan, morality becomes a lot simpler. This solves the nihilist’s or skeptic’s objections without resorting to deontological thinking.
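Back-of-the-envelope, with round numbers I’m inventing for illustration, that failure budget looks like this:

```python
# Reliability target for a moral code meant to fail less than once in a
# median lifetime. Both inputs are assumptions for illustration.
decisions_per_day = 100   # assumed count of morally-loaded choices per day
lifespan_years = 80       # assumed median lifespan

total_decisions = decisions_per_day * 365 * lifespan_years  # 2,920,000
failure_budget = 1 / total_decisions                        # ~3.4e-07

print(f"{total_decisions:,} decisions; per-decision failure rate must be "
      f"under {failure_budget:.1e}")
```

Demanding, but a finite engineering target rather than the Imperative’s demand for zero failures, ever.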

  28. Hj Hornbeck says

    Enlightenment Liberal @11:

    Of course, for some certain people, they would complain that I haven’t answered the question at all. “You haven’t bridged the gap at all! You just asserted some values by fiat.” I would agree with that assessment. No one can do what they ask. They ask the impossible. The question has been carefully designed to be unanswerable.

I’m not as convinced. In a true Bayesian framework, there is no limit to the number of hypotheses we can consider and almost no restriction on the hypotheses that can be considered (the main exception being the necessity of an algorithm for turning evidence into likelihood). So why not consider every possible hypothesis, including “Bayesian frameworks fail to model reality”? As we turn the crank, some hypotheses will drop in certainty, which in turn affects the certainty of their underlying axioms. If every hypothesis that contains axiom X decays in certainty, while every hypothesis that does not rely on X strengthens, and no hypothesis is excluded from the analysis, we build a case that axiom X should be discarded.
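A minimal sketch of that crank-turning, with made-up priors and likelihoods (the mechanism is the point, not the numbers):

```python
# Toy Bayesian update over hypotheses that do or don't rely on axiom X.
# Priors and likelihoods are invented; only the mechanism matters.
priors = {"H1 (uses X)": 0.25, "H2 (uses X)": 0.25,
          "H3 (no X)": 0.25, "H4 (no X)": 0.25}

# P(evidence | hypothesis) -- assumed values
likelihoods = {"H1 (uses X)": 0.10, "H2 (uses X)": 0.05,
               "H3 (no X)": 0.60, "H4 (no X)": 0.40}

# Bayes' rule: posterior proportional to prior * likelihood
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: v / total for h, v in unnormalized.items()}

mass_on_X = sum(p for h, p in posteriors.items() if "uses X" in h)
print(f"posterior mass on axiom X: {mass_on_X:.2f}")  # ~0.13, down from 0.50
```

Every hypothesis stays in the pool; the axiom’s credibility just gets squeezed by the evidence.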

    This appears to suffer from circularity: how can you adequately assess “Bayesian frameworks fail to model reality” within a Bayesian framework? The solution comes from the anthropic principle. Merely defining the term “reality” requires dragging in a shocking number of axioms, ranging from “I have senses” to the basics of set theory. From those axioms, you can construct a Bayesian framework and then let it loose. If you can also construct other epistemologies, you can generate a probability calculus and pit them against the Bayesian framework on every epistemology’s home turf. The circularity is thus dispensed with.

  29. EnlightenmentLiberal says

    Merely defining the term “reality” requires dragging in a shocking number of axioms, ranging from “I have senses” to the basics of set theory.

    I agree.

    The circularity is thus dispensed with.

    No. You just moved the foundation. The foundation is still there. The circularity, if any, is still there in the foundation. Whether the foundation is foundationalist or coherentist, or both, it’s still there. In the first quote of this post, you agree that the axiomatic foundation is still there.

I don’t know what your game is, but if you’re trying to evade the regress argument, aka the Münchhausen trilemma, I won’t have any of that. Any real epistemology is necessarily based on presuppositionalism or circular reasoning, or both (and yes, I dismiss “endless regress of justifications” right out of hand, and I do so on the basis of my presuppositionalism and circular reasoning).

As you argued upthread, I will be the first to agree with you that practically all people have the same foundation for their epistemology, with some slight differences from person to person.

  30. consciousness razor says

Sorry, took a while to get back to the thread, which seems like it’s dying down now… was in moderation, then forgot to check back in.

    Siobhan, #13:

    But Marcus [@#12 — CR] has neatly captured one of my biggest sticking points: Regardless of how we define morality, there are always other actors breaking our code who will no doubt possess a host of rationalizations for how their behaviour is moral.

There are always people who hold false beliefs about uncontroversially factual (i.e. non-moral) matters. That seems to imply all of science (along with tons of other stuff) is in the same boat, and obviously, you don’t make it sound like a very nice boat. Is this why we can’t have nice things?

    But seriously, if this is some kind of a sticking point for you, how exactly are you treating all of that other stuff? Do you regard it with the same kind and degree of suspicion as any conceivable moral fact? Or do you not do that, perhaps because you’re inconsistent or because some additional thing is another sticking point? What does seem especially suspicious or doubt-inducing to you, about the entire class of claims that murder and lying and bigotry (etc.) are immoral? Certainly, some very specific moral claims that certain people have made are wrong (or are not even valid, are uninterpretable, etc.), but as a class I don’t see what the problem is.

    Let me give an example. The overwhelming scientific consensus is that climate change is a real phenomenon. There is a fact about the consensus; but more importantly, there is of course a fact about our planet’s environment. (It could’ve been a fact that temperatures are going down, but it’s not.) There are people who are ignorant of the facts or who believe otherwise for a variety of reasons. There are denialists, people who will try anything despite the odds*, oil industry shills who are simply dishonest, and so forth.

*It is a question of “odds,” since all such claims are probabilistic, even ones (like “the sun will rise tomorrow”) which you may think must be so certain or undeniable that harboring any significant doubts about them would just be perverse, counterproductive, trolling, or at least some pointless wankery that you shouldn’t waste much time worrying about. Enlightenment Liberal has presented this general issue as a sort of logical puzzle, which I guess is supposed to lead you down some dark philosophical path or another, but I think we simply need sufficient justification to believe something is true, and I don’t really see what the big deal is at the most general imaginable level. (I guess I’m a “coherentist” in some sense, but honestly I don’t care a whole lot one way or another.) If things all appear to hang together in a nice coherent sort of way, if nothing seems to be conflicting with it, if no evidence is being ignored, if things which seem like they need an explanation do get one such that you can understand the world, etc., then you can confidently and responsibly hold a set of beliefs which satisfy criteria like that.

I’d say that’s about as good as anybody could hope for, when we’re talking about having “knowledge” of the “truth.” Asking for anything more than that, no matter the topic, will probably be a mistake. If you pump up your criteria just a bit too much, then nobody “knows” anything about anything, or there are simply no facts, or something to that effect, which suffice it to say doesn’t get us anywhere. And I think it’s worth stressing that you (or some person) are the author of such criteria — they’re not written in stone or revealed to you by an omniscient deity, nor could they be derived from something else that you could take for granted more easily than the facts at issue. I think an intelligible reason to pump these criteria up or down (since this is after all up to you) would be in order for us to get somewhere with them. We could make some kind of intellectual progress if we work on things like that and make use of them. And if they fail at things like that, then coming up with something better is fairly straightforward if you put your mind to it: you relax them (or strengthen them, as the case may be) until we get what we need and get rid of all of the stuff we don’t need.

    I have no means to compel Cosby to stop drugging and raping women if my only tool is moral philosophy.

    You likewise have no means to compel global warming denialists if your tool is science. What exactly is your worry about “compelling” people? Is that really all you were looking for? Don’t you want to know the truth? Don’t you want your beliefs about the truth to be justified? If other people’s beliefs aren’t justified in the end, then maybe there’s little you can do about it, but your responsibility for your own beliefs remains. Doesn’t it?

    Anyway, based on people and behaviors like that (i.e., global warming “skeptics” and so forth), I’m guessing you don’t think science is wrong, that it’s somehow incapable of being right/wrong, or that it fits into some category or another which has some number of disreputable/suspicious/unsavory qualities about it. However, as I just said, you can’t “compel” people with it, so maybe it is on at least one of your shit lists; but I’m not too worried about you personally, so much as the generic moderately-reasonable person who has a mostly-positive view of science because of its many successes.

    Indeed, when being told about the existence of such pathetic liars/bullshitters/denialists/etc., my guess is that very many supporters of science (perhaps including myself, but introspection is hard) will not take it as a mark against science at all and will instead support science even more strongly than they already did. That is, given the existence of people who “disagree” or “break the rules” or whatever it may be, that acts effectively as evidence in favor of science for some, not evidence against as you might expect. If there is any relevant difference for the moral case, what is it, and why does it seem so stark?

    I can also drum up an argument for why I, personally, also do not want to be raped–Cosby can do the same thing. At some point it boils down to who has the biggest stick to enforce their moral code, and whether one of those codes insists on encroaching on the other.

    I seriously don’t get this. For one thing, it’s hard to parse what you mean by a “code” which is “encroaching on the other.” What is actually and literally happening in such a situation? But whatever….

    A rapist harms a person and violates their rights. I’ve had to deal with that personally and am trying hard not to be so thoroughly offended that I’m just reduced to expletives. The general concern is about things like a person being harmed, not whether some abstract thing like a code is being encroached somehow by yet another abstract thing, which by some magic makes it sound like nothing is even occurring in the real world. The fact is that rape and sexual harassment are wrong. That fact does not in any sense boil down to who has the biggest stick. It boils down (very basically) to the fact that a person was harmed, no matter who has a stick or how big it may be.

    If you merely wanted to criticize bad morality, like people finding war and slavery and tyranny and dogmatism and so forth morally acceptable (when in fact those aren’t good), then join the club. You might start by complaining about people comparing the sizes of their proverbial sticks, because things like that often get “justified” in that way. And if you’re not the most gullible person on the planet, you’re not going to take such “justifications” at face value. So reasons like that do need to go out the window, and that’s a way you could make progress toward a good kind of morality which won’t support any shit like that, which is of course exactly what many people actually do. If you just want to throw up your hands and say it’s all like that and all the same, even what I’m calling the “good” stuff, then I think you’re just plain wrong. And I think that kind of talk is not at all helpful.

  31. Siobhan says

    @consciousness razor, #30

    But seriously, if this is some kind of a sticking point for you, how exactly are you treating all of that other stuff? Do you regard it with the same kind and degree of suspicion as any conceivable moral fact?

    Okay, so, this “conceivable moral fact” is where I’m getting lost.

With the example of climate change, there are certain causes and effects we can isolate, measure, and test. The dual conclusions that both 1) the climate is changing; and 2) the change is in part caused by anthropogenic carbon emissions are conclusions derived from observation and experimentation. If one were to dispute either of the two points, they either have to gather their own evidence which contradicts established observations or engage in a variety of skullduggery to avoid the problem of evidence. But, regardless of whether or not a denialist believes the climate is changing and/or that the change is accelerated/exacerbated by people, climate change continues to happen.

    To be clear, I’m not a solipsist. I think “the phenomena we call facts” exist independent of human existence, even if we may occasionally be erroneous in our observations of what those phenomena are. It can be a fact that I think something; that doesn’t mean what I think is automatically a fact.

    Where the comparison breaks down for moral facts is that there isn’t any comparable continuity. For example, I can ascertain that I personally find vast portions of Saudi Arabian law repugnant by my own personal morality. I can ascertain that you more likely than not agree. However, sincere believers of Saudi Arabian Islamic jurisprudence can also ascertain that they themselves consider their law moral. So it’s a fact that I know what my morality is, and it’s a fact that you know what your morality is, and it’s a fact that you and I likely have mostly overlapping behaviour, and it’s also a fact that Saudi Arabian clerics know what their morality is, and it can even be a fact that given enough time and communication we all know what each other’s moralities are–but we have to pluck from somewhere other than reality another assumption–the hypothetical “ought” discussed earlier at #5 by johnhodges–before we start finding facts that actually exist outside of the human mind.

So, I pluck from… somewhere, my personal experiences I suppose… that I believe equality to be good. I can then observe that Saudi Arabian law creates an explicit, inescapable, and stratified set of classes in which women have relatively fewer freedoms than men, and are hence unequal. If equality is good, then Saudi Arabia ought to reform, and because it chooses not to, Saudi Arabian jurisprudence is immoral. It’s a “fact” in the sense that we can compare various outcomes – how an explicitly immobile social system performs against a nominally mobile one. But it’s not a fact in the sense that one can answer “what makes equality factually moral?” experimentally, in the way that we know the Earth remains round regardless of flat Earther opinions. Equality might make Siobhan happy, and that can be a fact, but it is not self-evident to me that my happiness constitutes a moral good. It would also make me happy to smash my downstairs neighbour’s sub-woofer, after all.

    I think I’ve made a great case for why I personally should not be subject to Saudi Arabian jurisprudence, and maybe a decent case for why Saudi Arabians ought to be offered the opportunity to reject Saudi Arabian jurisprudence, but I haven’t effectively argued for why someone satisfied with a stratified, socially immobile society should stop subjecting themselves to Saudi Arabian jurisprudence–this being the comparable effect of a “fact” being true whether one believes it or not.

I’m just as confused as you are. None of the above makes it apparent to me at all that moral facts are eminently conceivable. If they were, we wouldn’t have wars or conflict. I don’t take for granted that there are any obvious consensuses when it comes to moral thinking. Look at how different societies across the world and across time govern themselves. There are many examples, historical and contemporary, of two groups of people with vastly different moral consensuses. The closest anyone can come to disputing those consensuses is by accepting a hypothetical ought, but that does nothing to resolve the counter-dispute: Why should one accept the offered hypothetical ought, or the various consequences that might be measured from using it?

  32. consciousness razor says

    But, regardless of whether or not a denialist believes the climate is changing and/or that the change is accelerated/exacerbated by people, climate change continues to happen.

    Here’s a parallel statement: regardless of whether or not a rapist (or anyone) believes that rape is morally wrong, it continues to be morally wrong. I simply do not see what is supposed to be problematic about a statement like that.

    Where the comparison breaks down for moral facts is that there isn’t any comparable continuity. For example, I can ascertain that I personally find vast portions of Saudi Arabian law repugnant by my own personal morality.

Why are you even talking about “your own personal morality” to begin with? What sort of thing is it which fits that description, and why is a thing like that of any concern to anyone? I think it’s helpful to think of morality as something independent of my own personal views, because I’m allowing myself the chance to admit/correct mistakes, learn new things about the world and from other people, develop new ideas that might be beneficial, and so forth. It is not something which is “mine,” which cannot be taken or separated from me, something which I could not possibly be mistaken about because it is a part of being who I am, or something like that. Moral philosophy, such as it is, has taken the collective effort of many people over all of recorded history (even before that, I’m sure), and I’m merely participating in that in whatever small way that I can. There are disagreements yet to be sorted out, people still holding incorrect views, much more work to do, job security for anybody in the field, etc.

    So it’s a fact that I know what my morality is, and it’s a fact that you know what your morality is, and it’s a fact that you and I likely have mostly overlapping behaviour, and it’s also a fact that Saudi Arabian clerics know what their morality is, and it can even be a fact that given enough time and communication we all know what each other’s moralities are–but we have to pluck from somewhere other than reality another assumption–the hypothetical “ought” discussed earlier at #5 by johnhodges–before we start finding facts that actually exist outside of the human mind.

    This suggests (or more than that, it says explicitly) that whatever is inside the human mind is “somewhere other than reality.” That doesn’t make any sense, if you also agree with me that the mind is what the brain does, that it’s a physical object doing physical stuff, which is of course a part of “reality.”

    So, I pluck from… somewhere, my personal experiences I suppose… that I believe equality to be good.

    Well, if that is how it’s plucked, the fact is that your “personal experience” is either a real thing or it isn’t. It’s got to be one of those two possibilities. Did any such experience (or multiple ones) happen or not?

    As a reminder, what is it for something to be “empirical”? Such things are founded on all of your experiences, including whatever is available to your senses. That is clearly something that you as a person have — “we” don’t have experience, collections of people don’t have experiences, but instead each of us individually has “personal” experiences. And we have those in reality. Somehow that happens. I’m sure neither of us gets how that works in all its gory detail, but it does happen.

    But it’s not a fact in the sense that one can answer “what makes equality factually moral?” experimentally,

Here’s a mistake I alluded to in my first comment above, and I ranted briefly about this on PZ’s scientism thread not long ago too, if you’re interested…. What work is “experimentally” supposed to do here? We’re talking about moral philosophy. You’re saying it isn’t an experimental science. (Not even all of science is an experimental science, but let’s leave that aside as just a potentially embarrassing thing for you to keep in the back of your head and ponder for a little while.) In reply, I’ll say to you: yes, sure enough, it’s not an experimental science. It’s also not a performance art. And it’s not taxidermy. And it’s not jurisprudence. And it’s not how George Clooney styles his hair. So what if it isn’t any of those things, or countless others?

That doesn’t imply that it’s “not a fact” in some meaningful sense of the words. In this whole arena, the activities that you do — gathering information, understanding it, coming to tentative conclusions, asking further questions, etc. — don’t involve (for the most part) doing experiments. That’s a statement which is telling me about you, your whole approach to the subject. (Or everybody’s best approach, since the actual philosophy on it, if you read any of it, is pretty much the state of the art.) That isn’t even close to a statement that says or implies that there is no fact of the matter for subjects like this. They are miles apart from one another.

    Equality might make Siobhan happy, and that can be a fact, but it is not self-evident to me that my happiness constitutes a moral good. It would also make me happy to smash my downstairs neighbour’s sub-woofer, after all.

First, I don’t request that anything is “self-evident.” You might wonder why you mentioned that you want to smash your neighbor’s speaker, apparently intended as some kind of other example…. Is that you showing a genuine understanding that it’s not about your personal, selfish desires? Well, of course it isn’t. The conflict is just between whatever you want and whatever is the right thing to do. We should not go around smashing other people’s speakers, because there are better/fairer ways of resolving disputes like that. It seems like you have reached that kind of conclusion already. You seem to understand the difference, anyway — not a surprise, since basically everybody does.

At any rate, the point is that this is not about whether there even is a right thing to do (or some very large set of good/acceptable things) or whether there isn’t any such thing in the whole wide world…. not at all, not in any “sense,” not given whatever means we as humans use to best learn/understand/use/communicate such things for ourselves. Sometimes, the best approach is to do an experiment; sometimes not.

Alternatively, some would say there logically cannot be any such thing as a fact about what is the right thing to do, because they have misconceptions (similar to the ones you already expressed above) that things of that sort just can’t be of a factual nature or pertain in some way or another to reality or the truth. (Maybe your experiences aren’t “real” after all! But even a solipsist would have to stop and ask you what is real, if not that….) If there is some contradiction in any of it, which is what they’d have to mean if they’re saying it can’t be so, I certainly do not see it. And I don’t think you can sit in an armchair and deduce things like this, especially not when so much of our evidence and experience seems to weigh heavily against it. But if you’ve got some very convoluted metaphysical picture that you want to prop up somehow, or some fancy-ass argument that you think needs support from a premise like this, then maybe this could seem like an appropriate way to proceed.

    I haven’t effectively argued for why someone satisfied with a stratified, socially immobile society should stop subjecting themselves to Saudi Arabian jurisprudence–this being the comparable effect of a “fact” being true whether one believes it or not.

    Then why not argue that? Why couldn’t someone (maybe not you) argue that? You seem to be saying that you’ve stopped the argument there… and that’s where it stops. What can I say to that? Just keep going a little farther, and you may reach a conclusion that they should stop doing that. It’s hard to convince people about such things, because they often take it personally, find their nationality or their traditions a source of pride and what not…. But the fact here is that it’s hard to convince people about such things. I already knew that, but even if I didn’t, it’s hard to see how anything like that could so radically change my entire worldview, such that I can’t responsibly say that it’s true that murder is wrong, etc., because there is no fact of the matter concerning those kinds of claims. I don’t think there’s anything wacky or irresponsible or fishy or whatever about saying that, but how could something like this change my opinion on the subject? You’ve got people disagreeing, and you don’t want to carry on the argument…. Maybe you’re not terribly interested in the argument, but if so, then that’s all it is, nothing more.

  33. EnlightenmentLiberal says

    To Siobhan
Let me try to put my spin on this, which I think is substantially similar to consciousness razor’s.

    With the example of climate change, there are certain causes and effects we can isolate, measure, and test.

    I’m stating the following questions as a form of argument: How do you know that the results of your investigations and tests have anything to do with external reality? How do you know that there is not a Cartesian demon that will change reality after you conclude your investigations in order to thwart your exploration of reality?
    https://en.wikipedia.org/wiki/Evil_demon
    https://plato.stanford.edu/entries/descartes-epistemology/
    Equivalently, how do you know that reality will not spontaneously change to something else after you conclude your investigations?
    https://plato.stanford.edu/entries/induction-problem/
    How do you even know that there is an external reality that can be examined?
    https://plato.stanford.edu/entries/other-minds/

To answer these questions, you’re basically required to presuppose certain answers. Quoting Hornbeck from above:

    Merely defining the term “reality” requires dragging in a shocking number of axioms, ranging from “I have senses” to the basics of set theory.

    You do have these presuppositions, or their equivalents, whether you realize it or not.

Now, the interesting question is: Why do you value these presuppositions, these beliefs, more than other beliefs and values, such as a hypothetical value “we should act in order to make the world into a better place for everyone”? Make no mistake: In order to answer my earlier leading questions concerning external reality and science, you need to presuppose certain values and assumptions. The question you need to ask yourself is why you value those presuppositions more than some basic moral value, such as “making everyone suffer needlessly is bad, and it should be avoided, and we should act to avoid it”.