Eric has a post about what various things he writes about have to do with assisted dying.
Well, to put it briefly, as I say in the blog’s banner, I argue for the right-to-die, and against the religious obstruction of that right, so anything which impinges on the issue, even indirectly, is of importance to me. That’s why disputing scientism seems to me to be important, because it implicitly defines away all other forms of inquiry which do not satisfy the canonical rules of scientific inquiry and decision. And that includes morality.
Jon Jermey raises an interesting question in response to Eric.
Eric, once again I think the ball is in your court: what, exactly, is the difference between a moral decision and a plain old ordinary decision? I’ve been asking this of various people for several years now, and I still haven’t got a plausible answer. Here are some of the suggestions that have been put up, and why they don’t work:
“A moral decision is one that affects other people.” — but all my decisions affect other people in some way. “A moral decision potentially has great consequences for many people.” — so does the decision to build a sewerage works or an opera house, but these are not normally regarded as moral decisions. And this definition would rule out pretty much all of my decisions straight away. “A moral decision is when you do what I want you to against your own inclinations” — comment superfluous, surely. “A moral decision is when you do what God says.” — ditto.
My personal favourite at the moment is — “A moral decision is one that makes you feel guilty, no matter what you choose.” — but I don’t think it has the rigour to stand up in debate.
So again, just as you need to define ‘science’ in order to explain what you’re objecting to about it, I think you have to tell us what a ‘moral decision’ is in order to explain how these differ from the plain old everyday decisions we can make effectively with reason and logic.
I think J.J. dismisses the first answer much too quickly. I think it’s basically right. He’s right, too, that all our decisions affect other people in some way, but many of those decisions affect other people (and/or animals and/or the environment as a whole) in ways too tiny to measure or take into consideration. If I decide to turn north instead of south while taking a walk, the ways that decision affects people or the environment are too small to detect. If on the other hand I decide not to walk to the grocery store but to drive [never mind for the moment that I don’t have a car], that makes a detectable difference, and is worth taking into consideration.
Morality is about taking externals into account – other people; animals; the ecosystem we all depend on. It rests on the awareness that the self is not all there is. It’s a corrective to pure selfishness. Many purely “selfish” decisions don’t count as morally selfish because they aren’t zero sum. If I go out for a walk to admire the sunset, nobody is worse off because I do, even though I’m the only one who enjoys that particular walk. If I build a viewing tower that blocks other people’s view of the sunset, those people are worse off. (And the decision to build a sewerage works or an opera house should of course be normally regarded as moral as well as other things. Of course it should.)
Yes? No?
Landon says
I think there’s something to the idea that a decision that affects other people is a moral decision, in addition to whatever else it is. That is, to a first approximation, it’s sufficient for a decision to acquire a moral element that it affect other people. JJ is right to point out that a decision that affects other people is not SOLELY a moral decision – I can make other-affecting decisions that are pragmatic, prudential, aesthetic, and so on – but he ignores the possibility that something can be, say, a business decision AND a moral decision at the same time. In fact, the reasoning in each case can lead to different conclusions, leaving us with a single decision to be made that is “good” from a business perspective and “bad” from a moral perspective. This is hardly an unfamiliar situation.
That said, I think it’s arguably true that we have moral duties to ourselves, so it’s not *necessary* that a moral decision be other-affecting. I don’t feel up to going into detail on that, but I thought I’d bring it up.
Maybe the best way to explain a moral decision is to first identify moral principles or, at the very least, morally relevant concepts. It is arguably true that the only really morally relevant concepts there are boil down to something like “harm” and “benefit.” Without turning a comment into a treatise on moral theory, it might be enough to be going on with to posit that moral decisions are those that deal with substantive harm or benefit to some person or persons, even one’s self and even mediated through institutions or social conventions. That is, if I am contemplating doing something that will bring me fleeting pleasure at no cost of harm to anyone, even me, it is not really a moral decision, as it lacks any engagement with matters of substantive harm or benefit. However, if I contemplate doing something that brings real benefit to a lot of people at the cost of doing great harm to one, that is a moral decision, without taking a position on how it should be decided (a matter which is beyond the simple question of what is and what isn’t a moral decision).
Marcus Ranum says
“A moral decision is one that affects other people.” — but all my decisions affect other people in some way.
I think he’s got that pretty much right. But I think we’re talking somewhat at cross-purposes. I interpret him as saying not that it’s not an important decision but rather that he can’t see why we call it “moral” instead of just another important decision. We all make lots of decisions all the time that affect other people and their lives, but we don’t even think of them as “moral” – perhaps because those decisions are made easily. For example, I decide not to drink and drive: nothing happens. I just made a very important decision that may have greatly affected my life and the lives of others sharing the road with me.
It has always seemed to me that we use the term “moral” to add extra freight charges to a fairly normal decision but that what we mostly mean is that it was a harder decision and its consequences may have been more obvious. Consider the decision to shoot someone: the consequences are more obvious than the decision to not drink and drive but they may both be literally life-and-death decisions.
(By the way, it’s the problem of awareness that is one of the main reasons I reject consequentialism. For a consequentialist moral system to work, you have to actually understand that you’re making a decision of consequence. In some other moral systems, you also have to have what amounts to perfect understanding of the future effects of your decisions, which raises the question of how far in the future we expect causality to reach. As a thought experiment for consequentialism you might consider Hitler’s grandmother deciding not to drink and ride her horse. Perhaps that was a right moral decision because riding a horse while drunk is irresponsible, but that moment of irresponsibility would have had vast unforeseeable consequences if she had broken her neck and been mourned.)
jose says
The difference between moral questions and the rest is that the criterion to answer is goodness and nobody knows what goodness is. If you want to know which course of action will win you more money, that’s an easy thing to figure out thanks to math; we know what profit is. If you ask which planet is bigger, Saturn or Mars, we can answer that too because we know what size is. But we don’t know what goodness is. Moral realists argue we can build morality like math, using reason and logic upon a set of axioms. Problem is we never agree on which axioms are good ones.
I thought morality was about what we should do. In that sense, every decision technically would have a moral aspect to it. What happens is that most ordinary decisions are not very important, so we don’t have to think about those aspects. Should I make salad or pasta today? I guess the decision may have minimal repercussions for the supermarket’s economy, but realistically it doesn’t matter.
But the effects on others are only one aspect of it. Most people think lying is wrong even if a particular lie has positive effects on everyone. Or take hacking Bill Gates’ account and stealing a thousand dollars. He won’t notice, so why is it wrong? I don’t know why, but it would seem we feel morality in the action itself rather than judging it as moral only when we take the consequences into account.
Marcus Ranum says
I thought morality was about what we should do. In that sense, every decision technically would have a moral aspect to it. What happens is that most ordinary decisions are not very important, so we don’t have to think about those aspects. Should I make salad or pasta today? I guess the decision may have minimal repercussions for the supermarket’s economy, but realistically it doesn’t matter.
Kant made a very good effort at addressing this with his idea of the Categorical Imperative. The idea is to imagine a world in which everyone felt free to do whatever it is that you’re thinking of doing, and then ask yourself if you’d want to live in that world. So, to your example, I’d be comfortable in a world where everyone ate salad, or pasta, so it’s probably not a world-altering choice. Usually, the examples used surrounding the Categorical Imperative are along the lines of “Imagine a world where people went around murdering each other at random. Like that? No? Then don’t go around murdering people at random.” Where the Categorical Imperative falls down is when someone is willing to claim they’re an exception – in which case I suppose we’d say they’re a bad person or something like that, but it doesn’t do much to stop them.
martincohen says
This makes me think of the statement in Theodore Sturgeon’s classic novel “More Than Human”:
“Morality is society’s code for individual survival; ethics is the individual’s code for society’s survival.”
Robert B. says
I think, actually, that people have moral duties to themselves as well as to others. So you can’t limit it to just those that affect other people. (And by the way, the decision to build a sewer system is totally a moral one – think of all the little kids you’re saving from cholera!)
I would say that a moral decision is a decision relating to value (however you define value, which I won’t go into because it’s basically an entire field, and I’m only lightly trained in it). In other words, morality technically is much broader than people think – it basically does encompass almost all ordinary decisions. When people think of “morality” they’re usually thinking of the subset of morality that is difficult and interesting and therefore worth arguing and teaching about.
jose says
Marcus, notice I said realistically. The categorical imperative, at least the first version, is anything but. To be honest I can’t think of a real-world, specific norm that I would like to be universal, except just “be good”, and nobody knows what that is, so it’s kind of pointless anyway. If I were to base my real-life code of conduct on that, I would end up doing nothing – but I can’t just lie there and do nothing either, because I don’t want everybody to do nothing. So all I can do is reject that idea completely.
Svlad Cjelli says
Morality is external and unimportant to me. It’s a distracting fucking hat. Just tell me what the consequences are and whether those suck or not.
Ophelia Benson says
The consequences for whom? Suck for whom? Or not for whom?
Lots of luck trying to talk about “consequences” without talking about morality.
jackasterisk says
I agree with Jose @3 that moral decisions have some aspect that relates to good or bad. Chocolate-or-vanilla decisions that have no right/wrong component aren’t moral decisions. I disagree, however, that we don’t know what’s good. We do — but we only have access to that knowledge through moral “intuition”. The Golden Rule and the Categorical Imperative are just thought-experiments to help us engage our moral intuition. They don’t answer moral questions; they just recast them as selfish questions that are easier to answer.
The problem is that, as skeptics, we all know that intuition alone is a terrible way to make decisions. It’s fine as a starting point — as a way to see that there’s some actual knowledge here — but at some point you have to add some rigor.
jackasterisk says
BTW, I haven’t seen any explanation of why “decisions that affect the wellbeing of conscious creatures” is a bad answer to “what are moral decisions.” Ophelia’s refinement of “decisions that affect others” — which is too broad — basically arrives at the same result but less precisely. Turning north or south doesn’t affect anyone’s wellbeing; walking vs driving potentially does. Seems to me that’s the very difference that she’s trying to get at.
Are there any counterexamples for Harris’ proposed definition?
Ophelia Benson says
No, it isn’t too broad, because broadness is what I was aiming for. The question was very basic so the answer needed to be very basic.
And you overlooked the crucial difference: I said “others” and Harris did not. One of the problems with his book is that he doesn’t get basic enough – he seems to forget that the central point of morality is that it’s about others, and thus about the relationship between the self and others. So in fact you have it exactly wrong: what I said was more precise than what Harris said, not less.
jackasterisk says
I’m probably using the wrong words, sorry. When I said that “a moral decision is one that affects other people” was too broad a definition, I meant that while it encompasses most or all moral decisions, it also includes many decisions that we don’t normally consider moral. What to get my wife for her birthday, for example, affects other people but has no moral component. It isn’t just about other people; it’s something more than that.
It seems like refining it to say “a moral decision is one that affects the wellbeing of other people” (conceding your point about “others”) really does get at the crux of the distinction. I’m not saying that the decision necessarily draws its moral meaning from that, just that a decision that most people consider moral will likely fit this definition, and vice versa.
Landon says
Correct me if I’m wrong, Ophelia, but I took the point of your broad statement to mean something like “any decision that affects others necessarily involves a moral assessment, even if we determine that the moral impact is negligible.” This turns aside the objection that has been raised that “there are lots of decisions that affect others that are not moral” with the answer “not morally *significant*… this time.”
Is that something like the main thrust of your point?
Ophelia Benson says
Landon – yup!
jmbreslin says
Harris’s ultimate goal for morality makes a pretty solid definition of the moral sphere. It’s not enough to say that moral decisions are those that impact others because, as many here have pointed out, everything we do can impact others in distant and trivial ways. Moral decisions impact others in a special kind of way, and well-being captures it pretty accurately. However, I would add one modification: moral decisions are those that have the potential to negatively impact the well-being of conscious creatures, not just an actual impact. Obviously a decision to be unfaithful to my wife is a moral decision whether or not she ever finds out about it.
I’ve always had difficulty with the notion of duties to oneself because I see morality as entirely other-directed. I have a hard time understanding what the foundation of self-directed duties might be. For example, Kant famously argued that we have a duty to pursue or develop our talents. But what could the foundation of such a duty be, if not that failing to pursue my talents would be bad for others? Suppose I am naturally talented at tennis but I decide to spend my time doing something else. It seems odd to describe such a decision as a moral one. Even if giving up tennis is bad for my long term well being and I regret it later in life, it still seems counter-intuitive to describe giving up tennis as morally wrong. Unfortunate, yes, but morally wrong?
Think about it this way. Suppose you were destined to spend the rest of your life alone on a deserted island, with no other conscious creatures around. Wouldn’t morality lose all meaning and significance? Wouldn’t everything that is important about morality now become irrelevant? Would it be morally wrong for you not to pursue your talent at climbing palm trees, and what purpose would there even be in describing it as such?
jack* says
I took a crack at answering this long-dead thread in my own post: http://www.jackasterisk.com/j_a_c_k_/2013/04/objective-moral-questions.html