Sam has to presume a great big “ought”


I’m rummaging in the archives, locally and globally, for more on the Sam Harris Contest to Find the Genius Who Can Persuade Harris that he didn’t actually invent the new best only way to think about morality all out of his own head by thinking and taking some notes. My rummaging has turned up an article on Harris v Pigliucci at old B&W and under that a comment by Harris’s newly appointed judge, Russell Blackford.

Well, my eyes glaze over whenever I see a complaint about “scientism”. But surely Massimo is right this time, at least on the main point. The Moral Landscape conspicuously fails to derive any “oughts” from “is’s” in the sense that philosophers mean. In order to get started, Sam has to presume a great big “ought” which relates to how we ought to maximise well-being in some sense of the latter. I suppose you could concede that, but then say his overall point stands because it’s just obviously true that we ought to maximise well-being in the requisite sense. But it’s not obvious at all. It’s a substantive, highly controversial claim. You really can’t say that failing to agree with it is analogous to adopting some sort of radical epistemological scepticism (complete with deceiving demons, brains in vats, the radical unreliability of our senses, and the like). That’s just not so. You might as well say that refusing to agree with the claim that there is a God is analogous to radical epistemological scepticism.

Exactly.

I think Harris just took it as given without realizing that he was doing so, and then when a lot of people pointed it out to him after the book was published…well I don’t know: he didn’t understand their point, or he doubled down, or whatever, but he didn’t accept that that was what he had done.

Comments

  1. says

    I believe Sam expected his axiom of maximizing well-being as the One True Value™ would be a little controversial, but may not have anticipated that the controversy would be this extreme. And neither did I.

    At the end of his debate with WLC, he says he thought it was obvious. And it is.

  2. says

    That’s a terrible “axiom” – it’s consistent with total, ruthless selfishness. Maximizing well-being is not the beginning and end of morality; it’s not even moral.

  3. says

    It’s a glaring error that I have wasted thousands of words trying to explain to people who defend Harris or who feel science has an answer to everything. On occasion I might finally get someone to understand the point and, if I’m lucky, get an “Ok, well, yeah, that’s true, but…” in response. No one ever said science can’t help make moral decisions *once* we decide the values on which to base those moral decisions.

    Shermer seems to hold views that overlap with Harris’s, and he handles Pigliucci’s criticism even more awkwardly. https://www.youtube.com/watch?v=4Qhlp-X3EHA I think there is a subset of skeptics who believe we are in a post-philosophy world and, with hubris, neglect to take any real time to study philosophy before engaging in these exercises in the Dunning-Kruger effect. The is/ought error by no means requires a degree to detect, which makes it even more embarrassing.

  4. The very model of a modern armchair general says

    Maximizing well-being is not the beginning and end of morality; it’s not even moral.

    Maximising one’s own well-being is not necessarily moral, but I thought Harris was talking about maximising the sum total of human well-being.

  5. says

    @Ophelia #3

    Oops, I meant to say collective well-being, with everyone included. Equality and social justice are critical elements of well-being for all. I made the mistake I was concerned about earlier; groups and individuals are not interchangeable.

    And I think that natural selection, capitalism, libertarianism, and such are ruthlessly selfish and without ethics.

    Does that address your concern?

  6. hjhornbeck says

    I’m in sort of a weird spot, here. I haven’t read all of Harris’ book, but I can remember skimming my copy to where he tried to pull an “ought” from an “is,” and just shaking my head. It was little more than hand-waving that would impress no-one, and I think the critics are right to hound him on that.

    But at the same time, Harris seems to be on the right track.

    A moral code is simply a recipe for how you ought to act in a given situation. If you find a wallet on the street, should you track down the owner, keep it, or track down the owner but claim the money was gone when you got to it? A scientific Utilitarian approach to answering this would require examining all possible cases and all possible outcomes to determine the maximal “fitness,” but this has two problems: examining all the possibilities is likely impossible, even in the case of the wallet, and what the heck does “fitness” mean?

    You can solve the latter by tossing out Utilitarianism and swapping in the Veil of Ignorance and a Social Contract. Our fitness metric is now determined by the characteristics we possess, such as happiness, which gets rid of the ad-hoc-iness of Utilitarianism. Harris doesn’t see this, perhaps for the same reason Shermer doesn’t in his own moral system proposal: it almost inevitably leads to collective action and socialism, a no-no for any modern libertarian or conservative.

    But that still leaves the is-ought. We observe that in eight of ten situations, it’s best to return the wallet intact. Have we shown we must always return the wallet intact? Nope, because if we look farther there may be an abundance of situations where it makes more sense to skim a bit. We can keep looking until the cows come home, and even if we have a perfect fitness metric we still haven’t jumped from “it increases fitness to do X” to “you should do X.”

    We already have a solution to this, though. A Bayesian epistemology draws a line in the sand called “effective certainty.” When the probability of a claim being false drops below that threshold, we treat it as though we are certain it is true, even though we do not have total certainty.

    In the same way, we don’t have to examine every possible scenario or result when analyzing what to do with the wallet; we simply need to evaluate a reasonable subset of possibilities with high background priors, until we reach the “effective certainty” threshold for one of them. We don’t hit certainty? Then we go with the one closest to certainty.

    Easy peasy, and it builds on things we already accept.
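
    Here is a minimal sketch, in Python, of that “effective certainty” decision rule. Everything in it is an illustrative assumption on my part (the 0.95 threshold, the option names, the probabilities), not anything from Harris:

    EFFECTIVE_CERTAINTY = 0.95  # assumed threshold; not a standard value

    def choose_action(options):
        # Return the first action whose probability of being best crosses
        # the threshold; otherwise fall back to the most probable action.
        for action, prob in options:
            if prob >= EFFECTIVE_CERTAINTY:
                return action  # effectively certain: stop evaluating
        return max(options, key=lambda pair: pair[1])[0]

    # Evaluate only a reasonable subset of outcomes, not every possibility.
    wallet_options = [
        ("return the wallet intact", 0.80),
        ("keep it", 0.05),
        ("return it but skim a bit", 0.15),
    ]
    print(choose_action(wallet_options))  # -> return the wallet intact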

  7. dmcclean says

    @5,
    Maximizing the “sum total of human well-being” is also (widely, I thought, though possibly I’ve been traveling too much in the optimization literature) known to be extremely problematic. Huge benefits could accrue to one person from abusing everyone else, and as long as the sum was greater it would increase “sum total of human well-being”. Leads in a straight line to Scrooge McDuck swimming in an ocean of coins. It can also run into problems related to population growth.

    Averaging instead of summing is subject to similar problems.
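
    To make that concrete, here is a toy sketch in Python with invented numbers (nothing below is from the optimization literature; it only illustrates the failure mode):

    # 100 people at modest, equal well-being vs. one Scrooge McDuck thriving
    # while the other 99 are miserable.
    equal_world = [10] * 100
    scrooge_world = [2000] + [1] * 99

    print(sum(equal_world), sum(scrooge_world))              # 1000 vs. 2099: the sum prefers Scrooge
    print(sum(equal_world) / 100, sum(scrooge_world) / 100)  # 10.0 vs. 20.99: so does the average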

    Some of the economists want you to accept that the Pareto condition is the right way to combine utilities for moral reasoning, but there are problems with that too.

    It’s not at all obvious how we should compare or combine or reason about the well-being of groups based on the well-being of the individuals that make up those groups. Figuring it out is, in some sense, the entire topic of morality.

    Saying “well, we’ll just define well-being in an enlightened way and then maximize that” isn’t just question-begging, it’s actually incorrect.

  8. says

    he didn’t understand their point, or he doubled down

    I suspect he didn’t understand. Which makes me suspect that he really has done no significant study of philosophy, aside from sitting near Dan Dennett on a couple of panels.

  9. says

    To anticipate a possible counter-objection: some will try to dismiss the concerns of philosophers (such as Hume, Kant, Rawls, and Mill) about the is/ought problem as obscure philosophical wanking (which I have sometimes seen likened to religion or theology), but this is not a matter of counting the number of angels on the head of a pin. This is the core issue in any discussion of morality: how do you argue that your individual views about right and wrong are not merely your opinion but are fact? Kant tries to do it with an extremely clever formulation of the golden rule.(1) Mill tries to do it with some ham-fisted handwaving about maximizing the common good(2) and Rawls makes some brilliant game-theoretic leaps to try to overcome the flaws in Kant’s categorical imperative.(3)

    We humans seem to limp along without an objective morality, basically surviving in societies in which self-interest is the norm and there’s a constant struggle for power – not justice: power. It seems to me that the search for morality is silliness; it ought to be pretty clear that all attempts to establish an agreement in principle amount to nothing more than reifying one person’s opinion as “right” – usually the person with the biggest stick. Harris fails to start with moral nihilism as his default position and to move from there probably because he doesn’t really understand what he’s doing. He’s not really very well-educated when you come right down to it.

    (1) Imagine that the world you live in is the world in which everyone acts as you do; therefore you should act well. Objection: this is obviously simply Kant’s opinion; we can see that selfishness exists and therefore Kant is simply projecting his moral sense onto others.
    (2) Objection: how do you know your idea of what the common good is is fact and not merely your opinion? I.e.: “the common good” presupposes you have a working morality, which is begging the question.
    (3) Imagine that you construct the world you will live in, with no advance knowledge of your place in it; the argument is that a rational person will create the most fair world possible. Objection: this is actually an appeal to self-interest hidden behind smoke and mirrors; if you’re willing to assume that self-interest is the basis for morality you’ve actually scored an own-goal.

  10. says

    Maximizing the “sum total of human well-being”

    Why humans?

    If you have an objective morality, you need to explain why human well-being is more important than, say, the well-being of ants, without begging the question by assuming that what counts is that we’re human.

    The simplest objection to that, however, is that “well-being” is simply substituted for good, and the whole argument is circular as it amounts to saying “goodness is maximizing the sum total of human goodness.”

  11. says

    Huge benefits could accrue to one person from abusing everyone else, and as long as the sum was greater it would increase “sum total of human well-being”. Leads in a straight line to Scrooge McDuck swimming in an ocean of coins.

    Another argument I like is this:
    One might expect that an objective system of morality would work forward and backward in time. I.e.: we would know in 1800 that slavery is immoral, as we now know today that it is. To have a moral system that works otherwise is to have situational ethics: there’s no real right and wrong, only what works at a given time and place. This presents a difficulty for the utilitarian, because it would mean that your sense of utility would have to predict unforeseen future complications arising from your actions. I.e.: suppose you pulled the child Stalin back before he accidentally stepped in front of a bus – at the time it seemed like the right thing to do, but the greater good for the greater number would later argue that you should have let him die. We cannot really make an assessment of the greater good based on future results — which, unfortunately for utilitarianism, is exactly what the utilitarians claim to be trying to do.

  12. Tyler T. says

    I’ve read some of Blackford’s commentary and I agree with bits and pieces of it. As mentioned above though, Sam is referring to the collective well-being of all conscious creatures, so he isn’t really supporting some kind of controversial egoism.

    Actually I think people are making Sam’s claims out to be a lot stronger than they have been throughout the book (so far). Basically he’s saying that there is an objective set of conditions that constitute the greatest well-being for everybody. A key distinction he makes is between “in practice” and “in principle” — we might not be able to know what that state of affairs is, but he’s just arguing that it is a conceptual reality (compare: Laplace’s demon in arguments about determinism).

    I think he’s a little naive and that different people’s values at bottom are incommensurable. It’s a case I hope to make pretty strongly in my essay submission — still working on it though!

  13. says

    Leads in a straight line to Scrooge McDuck swimming in an ocean of coins.

    Scrooge to Utilitarian: “You see, I have a better understanding of the situation than you do. It only appears now that it’s unjust for me to have all these coins, but in fact history will record that I spent it wisely late in my life and it was very beneficial to all. Don’t be blinded by your naive short-term perspective and your inability to see what really is for man’s betterment.”

  14. Tyler T. says

    @ Marcus Ranum (12)

    Thanks for the interesting posts. Forgive me if my replies are less than precise.

    The well-being of humans takes priority because well-being is a consequence of consciousness, and humans are the most developed things on Earth in that regard. The potential well-being of humans exceeds the potential well-being of everything else. The first problem with what you’ve said is that (I believe) Sam actually acknowledges the well-being of all conscious creatures (not just humans), he just puts humans first. The second problem with what you’ve said is that Sam has also acknowledged that our collective demise could potentially be moral in the case where a greater benefit to a greater cosmic species is a consequence.

    @ Marcus Ranum (13)

    Another interesting problem you’ve posed here. I’d say that moral progress through time is not at all necessary, even if it is possible. One way of ensuring that morality does progress is by acknowledging the possibility of an objective set of conditions to strive for.

    Again, there’s a distinction between “in principle” and “in practice”. Part of what Sam wants to do is get wide acceptance of the “in principle,” even if the practice is unknowable. Now how can we grant a set of perfectly moral conditions ontological status if we have no way of knowing what it is? That’s a more serious objection in my eyes.

    Also remember that Sam isn’t TRULY concerned with examining the details of that set of conditions. He mostly wants to look at things from the opposite direction so that we can better justify criticizing Muslims.

  15. davehooke says

    Our practical moral reasoning is a hodgepodge. Utilitarian considerations are employed for certain situations and are anathema in others. Evolution doesn’t require a single consistent system, and since ethics is informed by our existing moral sensibilities, we will find undesirable flaws in any “single-fit” ethical system.

  16. says

    Oops, I meant to say collective well-being, with everyone included.

    Can he or anyone else state conclusively what that means?

    Thought about a certain way (the way the people committing them were thinking), all of history’s atrocities were committed for the “collective well-being.”

  17. says

    @Jafafa Hots #16

    We need to be able to measure (at least comparatively) states of being, and I believe every ethical system depends on this ability.

    The collective assessment is even more difficult. There are minimums and maximums to contend with (e.g. at some point one less glass of water will be fatal). Too much equality disrupts important incentives (and disincentives), yet not enough leads to the situation we have currently. And worst of all, chaos theory undermines any long term certainty.

    To prevent “justified” atrocities we can use science (transparency, double-blinding, critical review, and such). Note that any science depends on a political infrastructure for implementation, which is yet another challenge to universal ethics. Our best effort to address ethics with the scientific method will sometimes lead to mistakes, but I predict much less often than under the current system.

    The current consensus of Human Rights is the closest we have come to this ideal, but I think we still have a long way to go.

  18. RHolmes says

    It would maximise well-being to carve up a few healthy people and distribute their organs to the many individuals awaiting transplantation. Therefore that would be a good thing to do. Right?

    Well, except for the long-term effects this sort of policy might have on the well-being of the population as a whole, but that just takes us back to the difficulty of measuring short-term benefits against potential long-term harms, which others have already mentioned.

  19. Bjarte Foshaug says

    As I wrote previously, I think this question-begging is the single most fatal flaw in Harris’ argument. To be fair to Harris, he does acknowledge that it’s often difficult – if not impossible – to weigh conflicting interests and values against each other in practice, but that doesn’t mean the very concept of moral truth is flawed in principle*.

    But when your whole argument is based on assuming the very thing you set out to prove, I’m afraid the case is terminal. Even his supposed “proof” involving “the worst possible misery for everyone” ultimately rests on the premise that the well-being of conscious creatures is worth something. It doesn’t get you to that value judgement in the first place. The very question of whether or not science can tell us what we “ought to value” seems to presuppose that we ought to value something, which is rather like asking “Have you stopped beating your wife?”

    Not being philosophically unsophisticated, I wouldn’t have too much of a problem with simply taking as a premise that maximizing the well-being of conscious beings (and not just for some people at other people’s expense) is, by and large, a worthwhile goal, and proceeding from that point. But then at least acknowledge that that’s what you did, and don’t present your premise as a conclusion you arrived at just from looking at the facts.
    ___________________________
    * I think it is, but not for that reason.

  20. says

    @improbablejoe

    Seems like a pretty embarrassing and glaring error on his part. And seeing what he’s written on other subjects since, I’m thinking the whole “writing for a living” thing is ALSO sort of a mistake on his part.

    lmao, agreed.

    Leads in a straight line to Scrooge McDuck swimming in an ocean of coins.

    Scrooge to Utilitarian: “You see, I have a better understanding of the situation than you do. It only appears now that it’s unjust for me to have all these coins, but in fact history will record that I spent it wisely late in my life and it was very beneficial to all. Don’t be blinded by your naive short-term perspective and your inability to see what really is for man’s betterment.”

    d’awww, you’re so cute marcus ^.^

    Ophelia scores yet another easy grand slam in the Paul Aint’s game of exposing Harris’s facile and philosophically naive grasp of reality, as if Harris can just overturn Hume with presumption. +1,000,000

    Bravo bravo, what a performance Dar Tanyon!! What a performance!! Ok ok enough bugs bunny references. Ophelia outsmarts the seemingly brilliant magician Harris yet again!

  21. says

    @Bjarte Foshaug #19

    From my reading of TML, Sam is not begging the question, he’s answering it.

    In the first chapter he says the most common objection is “But you haven’t said why the well-being of conscious creatures ought to matter to us.” He dismisses the objection as hyper-scepticism.

    To me it is as self evident as it ever gets (the collective version, which should probably include dolphin pods as well).

  22. zibble says

    I don’t know what Sam’s response is, but I think the whole is/ought idea is flawed. It’s like when theists ask how science can answer questions like “what is the purpose of the universe?” The question itself is based on a faulty assumption that betrays an agenda. Looking for “objective” oughts is similarly begging the question.

    What people are looking for when they ask science to provide an objective ought-statement is for the Universe to tell us the Right way to live. But there’s no such thing – the Universe doesn’t give a shit, and even if it did, there’s no objective reason to care that it gives a shit.

    So there’s no such thing as Oughts. However, if, for example, I love someone, then I don’t need the Universe or God’s permission to act in their best interest. I do, however, need the aid of empirical knowledge (what Sam calls “science”) to best determine which actions ARE in their best interests.

  23. says

    I believe in oughts… I just know who created them. We did. We do. And they change.

    If we didn’t, where did they come from?
    Believing that there are absolute moral truths in the universe is like believing in gods. Both are the same thing – painting a human face on a faceless universe to make ourselves feel better.

  24. says

    @Jafafa Hots #25

    I think Sam agrees with you; again, fair use from The Moral Landscape:

    Chapter 1: Moral Truth
    I hope it is clear that when I speak about “objective” moral truths, or about the “objective” causes of human well-being, I am not denying the necessarily subjective (i.e., experiential) component of the facts under discussion. I am certainly not claiming that moral truths exist independent of the experience of conscious beings—like the Platonic Form of the Good—or that certain actions are intrinsically wrong. I am simply saying that, given that there are facts—real facts—to be known about how conscious creatures can experience the worst possible misery and the greatest possible well-being, it is objectively true to say that there are right and wrong answers to moral questions, whether or not we can always answer these questions in practice.

  25. Dunc says

    At the end of his debate with WLC, he says he thought it was obvious.

    Yeah, but everybody thinks their own fundamental moral axioms are “obvious”. However, given the vast range of fundamental moral axioms you have to choose from, and the fact that nobody can agree on which are the correct ones, they clearly aren’t. If they were, then we wouldn’t have spent the entirety of recorded history (and probably most of unrecorded history too) arguing about them, would we? And you don’t get to claim victory in that argument simply by declaring yourself right and everybody else wrong – that’s pretty much the textbook definition of “begging the question”. In Harris’ case, he seems to be begging the question to such an absurd degree that he even refuses to acknowledge the existence of the question he’s begging.

  26. Axxyaan says

    @601 #22

    Such a dismissal is not an answer. It is just assuming what he claimed he could scientifically prove.

    There is a difference between (1) starting with some base values and inferring (scientifically or otherwise) other values from these, and (2) starting only from non-value premises and arriving at a value conclusion.

    As far as I understand, Sam Harris tries to claim (2) and even denies doing (1) but in the end only succeeds in delivering (1).

  27. Minnow says

    “Not being philosophically unsophisticated, I wouldn’t have too much of a problem with simply taking as a premise that maximizing the well-being of conscious beings (and not just for some people at other people’s expense) is, by and large, a worthwhile goal”

    But even then we run into problems pretty quickly, because once people are fed, watered, clothed and housed (in other words, once the basic necessities for living as a human being at all are established) ‘well-being’ becomes a very contestable term. Luxury is a necessity that begins when necessity ends, as the lady said.

  28. Axxyaan says

    @601 #26

    These facts about the condition of people are useless for what he claims he wants to prove. He makes it very clear he is not talking about science showing how we can get what we value. That means that facts about what people value are not useful, because that kind of fact just leads from what people value to how they can get it. If, as he seems to claim, he wants to scientifically show what people ought to value, he should start from facts that imply value judgements.

  29. Drew Vogel says

    I don’t understand the objection. What is this presumption Blackford refers to? What is this claim which Harris takes to be obvious which is in fact controversial?

  30. Bjarte Foshaug says

    @601 #22

    To me it is as self evident as it ever gets

    I.e. it’s part of your premises, which is fine, unless the point you are trying to make is how to arrive at those very same self-evident value-judgments to begin with just from looking at the facts.

  31. says

    @Axxyaan #28

    To my understanding, Sam is not trying to prove this premise (max well-being for all), he’s just assuming it.

    —-

    @Bjarte Foshaug #30

    Yes, just a premise (axiom), pulled out of his mind.

    That said, I think it’s a good one. And I’ve seen plenty of arguments (not proofs) that it’s the best place from which to start the science bit. In fact, many other candidates are equivalent; when I try to consider alternatives, I end up with much the same idea. And since it is so egalitarian, I’m a little wary of those who think it’s a bad idea.

    I thought the trouble would begin with how we implement this scientifically, given that we are having to deal with real people. Maybe a brain implant connected to an ethical computer system in Bluffdale to measure compliance?

  32. Brian E says

    I am simply saying that, given that there are facts—real facts—to be known about how conscious creatures can experience the worst possible misery and the greatest possible well-being, it is objectively true to say that there are right and wrong answers to moral questions

    How is it objectively true, unless he assumes some Platonic value that states ‘greatest possible well-being trumps worst possible misery’? He’s just handwaving away the is-ought right there. You have to value the greatest possible well-being over the worst possible misery (which nearly all people, pace psychos, would do, but they’d each define it differently based upon what they consider well-being and misery). Science can’t underwrite this; it cannot make it objective. To be objective it would need to be part of the fabric of the universe, a supernatural intersection with the universe, and it clearly is not (sorry, Catholics, there’s no natural law to be seen, and God can’t make something right, divine commandists). How do you justify valuing the greatest possible well-being over the worst possible misery? The question is not how you measure it, or how you feel about it.

  33. aziraphale says

    @Minnow #29

    Could it be that when everyone is fed, watered, clothed and housed (and, I would add, secure against violence) the moral landscape will look rather different?

  34. says

    @Brian E #36

    How do you justify valuing greatest possible well-being over worst possible misery?

    Is this a trick question?

    How about a scientific experiment? I’ll volunteer for the “greatest possible well-being” control group.

  35. Bjarte Foshaug says

    601 #35

    To my understanding, Sam is not trying to prove this premise (max well-being for all), he’s just assuming it.

    Actually he goes quite a bit further than that:

    First I want to be very clear about my general thesis: I am not suggesting that science can give us an evolutionary or neurobiological account of what people do in the name of “morality.” Nor am I merely saying that science can help us get what we want out of life. These would be quite banal claims to make – unless one happens to doubt the truth of evolution, the mind’s dependency on the brain, or the general utility of science. Rather I am arguing that science can, in principle, help us understand what we should do and should want – and therefore what other people should do and should want in order to live the best lives possible.
    The Moral Landscape, ch. 1. (Emphasis added)

    It is thought that science can help us get what we value, but it can never tell us what we ought to value. […] So, I’m going to argue that this is an illusion.
    […]
    It is often thought that there is no description of the way the world is that can tell us how the world ought to be. I think this is quite clearly untrue.
    http://www.ted.com/talks/sam_harris_science_can_show_what_s_right.html

    Assuming that Harris even knows what he is saying, it couldn’t be much clearer that he thinks you can indeed derive a value judgement concerning “how the world ought to be” from a “description of the way the world is”, when, in fact, the value judgement is in his premises from the outset. If that’s not begging the question, then nothing is.

  36. Dunc says

    601, the problem is when you get into the nitty-gritty of distributive justice. If you accept the axiom that maximising total aggregate well-being is your one and only goal, then it rapidly becomes clear that you can improve the aggregate total well-being by sacrificing the well-being of a small minority. For example, you could dramatically improve medicine for the whole population for ever more by experimenting on fairly small number of non-consenting human subjects. From a strictly utilitarian perspective, that’s fine – as long as a very large number of people enjoy a benefit, it can come at the cost of immense suffering for a small minority, even if the benefit enjoyed by the majority is also very small. But whether it’s moral to torture a few people to death in order to make slightly comfier couch cushions for everybody else is, in fact, highly arguable, and serious moral philosophers have been struggling with how to resolve these issues for centuries (if not millennia).

    Now, fair enough, you may disagree with Kant’s categorical imperative, or Rawls’ original position, or any of the other arguments which have been put forward to temper raw utilitarianism, but you can’t simply hand-wave them away with the assertion that they’re obviously wrong because science! Not unless you’re some sort of clueless dilettante with absolutely no grounding in moral philosophy, anyway… You need to actually engage with the arguments.

  37. Brian E says

    How about a scientific experiment? I’ll volunteer for the “greatest possible well-being” control group.

    No. I said justify, not measure.

  38. brucegorton says

    I kind of struggle with moral philosophy, but so far as I can see…

    It cannot be maximizing well-being.

    I personally am physically unfit, I do not think somebody forcing me to get fit would class as a moral act because my lifestyle choices (as far as choice is even a thing) are my own.

    Maybe one could argue that freedom is necessarily a part of well-being, but I am not too sure of the validity of that argument, particularly if one takes a deterministic view of consciousness.

  39. Brian E says

    How about a scientific experiment? I’ll volunteer for the “greatest possible well-being” control group.

    This is brilliant satire, and I didn’t realise it at first: a scientific experiment tests a prediction of a theory, the theory here being that vector x leads to the greatest possible well-being. In effect, it only means that this prediction is whatever you value, and you are prepared, because you do believe this is valuable, to face the data. In other words you are bringing a pre-existing value (an ought) to the tribunal of science (is). Well played! I thought you were defending Harris.

  40. Axxyaan says

    @601 #39

    Your scientific experiment wouldn’t give usable results for this. Your experiment would just show what people in reality do value. Sam Harris claims that science not only can determine what people do value, but also what they should value. Your experiment would fail to provide justification for what is valued; it would just inform us about what is valued.

    Take two persons, Rick and Sally: the first values being rich over knowledge, the second values knowledge over being rich. Can you think (even purely theoretically) of an experiment that would show the rest of the world which of those two we should see as an example?

  41. Brian E says

    It cannot be maximizing well-being.

    I personally am physically unfit, I do not think somebody forcing me to get fit would class as a moral act because my lifestyle choices (as far as choice is even a thing) are my own.

    How do you justify being fit as well-being? Perhaps we’d all be happier, perhaps not, but why is that good?

  42. Axxyaan says

    @brucegorton #48

    This isn’t what Sam Harris claims to be talking about. What you are doing here is not controversial: you start off with something you value, science points out that keeping fit is one way to actually get what you value, and that is how you justify keeping fit. Keeping fit is a derived value from some other values you start off with.

    Sam Harris claims science can start from just facts and arrive at a value without needing support from other values.
    But when asked to give support for this claim, it turns out that all such attempts imply other values they depend on.

  43. says

    well being

    The term “well being” contains the assertion that the state of “well being” is good. That’s what the word “well” means, when you stick it in there.

    That’s exactly what Christian presuppositionalists do when they use the bible’s claims about god to prove that god exists and therefore the bible is right. I don’t recall Harris falling for the specific example of “well being”, but utilitarianism in general, which is what Harris is poorly rehashing, does. Basically, “We try to achieve the greater good. How do we know it’s good? Because it says so – that’s what ‘good’ means!”

  44. says

    If you accept the axiom that maximising total aggregate well-being is your one and only goal, then it rapidly becomes clear that you can improve the aggregate total well-being by sacrificing the well-being of a small minority

    I’m not so sure about that.

    Let’s hypothesize that leading a pain-free life grants one 1,000 wellons (a unit of well-being) so, with 7 billion people on earth there are 7 trillion wellons if everyone is pain free. But what if living a lifetime of delicious debauchery and being king of the planet would grant me 8 trillion wellons? There ya go. Or you do it per capita and then, actually, there’s higher utility in reducing the population to one very very happy couple (they have the whole planet to themselves! no reality TV! no traffic! no smog!) because that way the planetary population is much happier. I could assert with fair confidence that if there was only one couple on earth, you’d have less trouble establishing an objective moral framework that all of humanity shared, too.*

    I enjoy looking for weird edge-cases when it comes to morality because that’s where the cracks show, first. Nastier variations of those edge-cases arise when you ask every utilitarian you meet why they haven’t chosen to die by stopping eating and donating all their organs to help increase the total well-being of the world. Oddly, it turns out they really are only utilitarians when the wind is blowing in their direction.

    (*I am not advocating killing anyone. Of course the rest of the planetary population, being good utilitarians, would recognize the sense behind this argument and choose to help make a better world by not having children. Thus humanity would painlessly go extinct except for our two happy winners!)
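
    For anyone who wants to check the wellon arithmetic, a quick Python sketch (every figure is the comment’s own hypothetical, and the couple’s per-person score is my made-up addition):

    population = 7_000_000_000
    pain_free_total = population * 1_000       # 7 trillion wellons in total

    king_world_total = 8_000_000_000_000       # one debauched planetary king
    print(king_world_total > pain_free_total)  # True: the sum prefers the king

    # Per capita, two very happy survivors beat seven billion pain-free people.
    print((2 * 5_000) / 2 > pain_free_total / population)  # True: 5000.0 > 1000.0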

  45. brucegorton says

    @Axxyaan #49

    It is not what he claims to be talking about, but I think it is an example of a natural consequence of what he is talking about. When you take a concept like well-being as an objective means of measuring whether something is moral or not, you do kind of end up with situations like my point on fitness.

  46. hoary puccoon says

    It seems important to me that we’re simply highly evolved apes, with preferences kludged together over millions of years. We’re biologically programmed to need a certain amount of food, certain vitamins, and so on. We’re social animals who want others around. But we also have instincts that tell us we want higher status rather than lower status with those others. We experience love and empathy. We experience anger, get aggressive. And we often think that anger and aggression are justified. We love human children (mostly, most of the time.) We love animals as pets. We love animals as dinner. We all cobble together behavior we think is “right.” But it’s done with a brain that has parts we got from apes, from simpler primates, from reptiles, from fish….

    Where in that stew could there be some objective morality?

  47. robertwilson says

    “How is it objectively true, unless he assumes some Platonic value that states ‘greatest possible well-being trumps worst possible misery’?” I honestly don’t get this. Do people really fall for this? Questioning the idea above seems like a deepity to me.

    I think Harris’ health analogy is extremely useful – the fact that we don’t have a complete model for health and that individuals vary a lot does not prevent us from moving towards greater health. While it is possible to make a model of overall well-being that sacrifices individuals for some measure, I think it’s sufficient to say that in doing so you’re not moving towards well-being as Sam Harris defines it or even as many humans already define it.

    Objections along the lines of “someone forcing me to be fit is not something I would like” are just not objections to the idea, they’re arguments towards refining how we define well-being. Arguments that we must have. That well-being is difficult to define doesn’t make it impossible to work towards.

    Lately I’ve been finding myself disagreeing with a lot of atheists on philosophical issues and trying to understand why they bother playing some of these philosophical games. (The other immediate example is people who lament WLC “winning” debates because they fall for his tricks).

    It seems to me the desire for an ought-is solution acceptable to most philosophers, and many other solutions to philosophical problems, are just deepities that take for granted the idea of some external measure. That’s a holdover from theism to me (even if it has long been separated from it). Why do we as atheists play this game? Morality is surely not an easy issue, but denying that it has something to do with well-being seems more like a word game than an attempt to understand morality.

  48. deepak shetty says

    @601
    One of the challenging parts of morals is when “well-being” conflicts with other attributes that we generally associate with morality (e.g. truth)
    a. if some old person who is about to die asks me about heaven what should I say – tell the truth about what I believe or tell a few white lies so that (s)he is happy? Ditto for any medical issues.
    b. Can you kill 1 person to save 10 people? (Say, a person without relatives who won’t be missed: harvest his organs and use them to save 10 people who need them.) It’s hard to see how overall well-being won’t increase.
    c. Hypothetically, what if the overall well-being of people increases if everyone follows the exact same variant of religion? Would you now try to convert everyone to a single religion? Would becoming a liar for Jesus be considered moral?
    and this doesn’t even touch upon the numerous examples of conflicting well-being that people have provided.

    Not only does Harris assume an ought (which I have no problem with – but really he should have said that right at the start) – it’s definitely not true that well-being is the only attribute to consider (even assuming the vagueness inherent in those terms). Only if you are willing to restrict yourself to a rather naive “acid thrown on someone’s face is not well-being” or “suicide bombing people is not well-being” can you argue as Harris is doing.

  49. says

    robertwilson – no, this is not about deepities. It’s the opposite – it’s about not oversimplifying the complicated. It shouldn’t be all that amazing to point out that morality is complicated and meta-morality is even more so. Harris tried to make it super simple, and that’s just silly.

  50. Axxyaan says

    @robertwilson #56

    If you think that the desire for an ought-is solution acceptable to most philosophers is a deepity, it is Sam Harris who is displaying this desire. It is Sam Harris who is trying to argue he has such a solution.

    Nobody is disputing that science can inform us about moral issues. The only thing in dispute is the claim from Harris that it is possible to infer a value from only facts.

  51. robertwilson says

    I don’t think he tried to make it simple, in fact I think the health analogy points out that it’s far from simple. I do think he tried to remove unnecessary baggage from it (like the idea that we somehow have to justify why well-being would be preferable to misery) and argue that while not simple it is describable and discernible just as ideas about so many other things we can observe are. I’ll keep reading to see whether anything convinces me otherwise but for now I’m firmly in the camp of 1. Well-being is a useful but not perfect measure for morality, just as it is for health; 2. A lot of the questions people ask about justifying this or that approach are like asking why water is wet.

  52. robertwilson says

    @Axxyaan Understood, then it’s likely I focused on the wrong thing and that’s close enough to de-railing that I should avoid pressing that issue.

    On understanding values from facts… how else would you? Yes values are subjective in many senses but they’re concepts that exist in minds and depend on our interactions with other minds. Unless you believe in some externalism or duality, it’s all deterministic and indeed in principle knowable from the base facts. None of that leads to “it’s easy to figure out” but it can free us of some baggage.

  53. says

    Can I ask if you’ve read any other books on the subject, Robert? I don’t mean that as a gotcha or a claim of expertise; I’ve read only a few myself; it’s just that I think even a little familiarity with the literature makes it (at least more) obvious that Harris is trying to take a shortcut that shouldn’t be taken.

  54. yahweh says

    Here’s another take on it. Philosophers in ancient Greece came up with many ideas which were debated, for whatever reasons, for over a thousand years afterwards. Zeno’s paradox of Achilles and the tortoise is one of these.

    Later, in the seventeenth and eighteenth centuries, mathematicians developed the ideas of infinite series, convergence, limits and differentials which form the foundations of a great deal of practical maths.

    It is tempting to say that maths explained the paradox but philosophers might be killed in the stampede to point out how ignorant such an opinion was.

    Either way, if not explanations, maths and science have at least provided such close analogues to philosophical and theological argument that they have made the latter redundant (even if, like zombies, they never lie down and die).

    Zeno’s paradox is just for amusement now. Serious people just get on and crunch the numbers. Likewise, should and ought can be quietly forgotten while we get on with trying to be empirical about how things (like democracy, liberty, equality and Tequila) affect people’s well-being – difficult though that task may be.

  55. robertwilson says

    I have not and while I do want to at times, I also have to admit I treat some of these ideas about morality a bit like I treat theology – they are interesting intellectual or abstract discussions that (as far as I can tell) don’t reflect reality. Morals are in some way biological because like minds and language they evolved. They’re awfully complicated, but starting with a blank slate and trying to develop a model of morality is like starting from a blank slate and trying to develop a model of the universe: it does not work as well as a descriptive approach.

    In short, I disagree that it’s a shortcut that shouldn’t be taken; I think it’s one that’s (no pun intended) justified. I don’t see how questions like “why should we want to flourish” are much different from deepities unless the people discussing them are employing different definitions of the issues.

  56. aziraphale says

    Here’s a question or two:

    People have asked “Why should we prefer well-being to misery? What about a person who prefers misery?”

    Well, suppose such a person is thwarted in his desire to achieve misery. Won’t he be unhappy for that reason? Doesn’t his well-being consist in getting what he wants, i.e. misery?

    Or, as I suspect, is the whole idea of such a person incoherent?

  57. says

    @Bjarte Foshaug #42

    I took another look, but I still think Sam is just assuming the all-max-well premise (as an ultimate value), and does the science from there.

    —-

    @Dunc #43

    All-max-well does not entail torturing a few, or “raw utilitarianism.” See my #19: “The collective assessment is even more difficult.”

    I’m no fan of Kant, Rawls’ VoI is cool, but I prefer David Deutsch and Giulio Tononi.

    —-

    @Axxyaan #48

    Give Rick some money and Sally some books, and watch their neurons light up?

    Sam’s point is that we need to measure brains, instead of only justifying philosophically. Maybe he’s just pushing neuro$cience for personal gain?

  58. Axxyaan says

    @robertwilson

    #61. But these kinds of facts are useless for what Sam Harris wants to do, because they just inform us about what people actually value. They don’t say anything about what we should value. Suppose one person values being a famous actor, another values being a famous sportsman, and yet another values being a famous scientist. We can study these people, and we may (in the future) see how these values manifest themselves in the brains, but I don’t see how such study can help us in choosing which of these values we should encourage over the others without depending on other values.

    You can study people who love Mozart, you can study people who love the Beatles, and you can study people who love the Sex Pistols, but the study of those people won’t help you in any way to find out what kind of music you like best or which kind of music is to be preferred.

    #64 People are only bringing these kinds of questions up because Sam Harris is trying very hard to give the impression that science can actually answer them. If you don’t find those questions interesting, go complain to Sam Harris. Most of those who criticize him wouldn’t be bringing them up if it weren’t for his claim that science can solve them, or that they are objective.

    And that morals evolved is not helpful here, because we don’t treat all those evolved values the same. Some sort of gullibility evolved too. That doesn’t imply we should promote gullibility. So from all the drives we evolved with, how do we choose which we want to encourage and which we want to fight?

  59. Shatterface says

    So basically the objection to maximising well being was dramatised in Ursula Le Guin’s short story ‘The Ones Who Walk Away from Omelas’ and that Doctor Who story with the space whale?

  60. Axxyaan says

    @601 #68 Studying brains will not help here. Suppose we (very simplistically) find out the left brain half lights up when we are peaceful and the right brain half lights up when we are violent. How does this help us in deciding whether we should encourage peaceful action or violent action?

  61. robertwilson says

    @65 Ophelia Benson, not only fair enough, more than fair. The desire is there, even if countered by the attitude in my posts, but the more out of my depth I get as I engage the more I realize I should give it some time.

    @Axxyaan #68, to your example of various people enjoying different things Sam Harris has already replied: there may be multiple optimal peaks. A morality that decides what we should do based on empirical evidence still has room for areas where there are multiple right answers; it doesn’t have to say “everyone has to do this” in order to be a successful objective system. An empirical model of morality might conclude that all of your examples contribute to well-being and that there is no better or worse example among them, that for each individual person they are a right answer.

    @601 #67, simply to add to your first point: I think it’s more accurate to say not that Sam assumes all-max-well is the best, but that we can make objective statements that certain peaks (including a hypothetical all-max-well) are better than the valleys. That there are extremely complicated questions in the middle and blurred lines does not prevent us from moving in the direction of those peaks.

  62. says

    I love these comments. (Except perhaps Evil Anchoress’s. Could you stop with the exaggerated or pseudo-flattery?)

    I think it’s genuine flattery, just from a very different perspective. Imagine watching a crocodile (you) take on a piranha (Harris) from the top of the Eiffel Tower (me). It’s not that it’s pseudo or exaggerated, Ophelia; it’s very sincere, just comical due to the fact that I am observing with a telescope a billion miles away as you devour Harris philosophically.

    Also, you’re bugs bunny, Harris is the magician (imo):

    http://www.youtube.com/watch?v=kg9IVhaSxPE

    That’s the original analogy. Just because I’m way up in the ‘back of the bleachers’ doesn’t mean it’s not genuine, lol. I most certainly think your brain is superior to Harris’s in every conventional way, and if you feel insulted by that I’ll just reiterate it. It’s comical because Harris has spent years thinking he’s way more amazing than he is, when he’s merely mediocre. You can be insulted by the flattery if you want, but you, like many, are misreading me badly. What can I say except that you and Massimo are leagues ahead, superior minds? I must reiterate it! It’s hilariously true, like the first commenter said: Harris never should have written books for a living.

  63. says

    @Ophelia

    I read you; thanks for the clarification, and I am happy to give you compliments. You are far more forgiving concerning men/males than I am; I will follow your example and try to see them more as human beings. I apologize for my moral failings and will try to be a better person rather than having a blind spot.

    Thanks again for helping me in this endeavor.

  64. Bjarte Foshaug says

    @601 #67

    I took another look, but I still think Sam is just assuming the all-max-well premise (as an ultimate value), and does the science from there.

    This is starting to remind me of the kind of answers we get from biblical harmonizers who try to explain away the “apparent” contradictions in scripture. If, for instance, the possessed man from the tombs in the gospel of Mark has become two men in the gospel of Matthew, a biblical harmonizer could either say that Matthew and Mark are actually describing two almost identical but separate episodes, or that Mark for some obscure reason decided to only mention one of the two men that Matthew was talking about. All I can say is that God would have to be an idiot to express himself so clumsily if that’s what he was trying to convey.

    Why am I saying this? Because I think the same would be true of Sam Harris if all he was trying to say was that if you start from value premises, science can help you arrive at value conclusions.

    Let’s look at the last quote from my previous post again (I think the other quotes imply the same, but they might need some more unpacking):

    It is often thought that there is no description of the way the world is that can tell us how the world ought to be. I think this is quite clearly untrue.

    First of all, to the best of my knowledge, it is not “often thought” that “there is no description of the way the world is that, combined with a set of values, can tell us how the world ought to be”. It is, however, very often thought that “there is no description of the way the world is that, all by itself, can tell us how the world ought to be”, thus the first sentence is only true under the latter interpretation. Indeed, Harris’ dismissive remarks about Hume’s is-ought distinction only make sense given such an interpretation.

    If it’s “quite clearly untrue” that “there is no description of the way the world is that can tell us how the world ought to be”, then it must be true that there is such a description, agree? If not, feel free to explain.

  65. Bjarte Foshaug says

    @601 #67
    Oh, and let’s return to the first quote, while we’re at it.

    First I want to be very clear about my general thesis: I am not suggesting that science can give us an evolutionary or neurobiological account of what people do in the name of “morality.” Nor am I merely saying that science can help us get what we want out of life. […] Rather I am arguing that science can, in principle, help us understand what we should do and should want – and therefore what other people should do and should want in order to live the best lives possible.

    In the context of everything else that Harris has said about the inadequacy of Hume’s is-ought distinction, the ability of science to tell us “what we ought to value”, descriptions of the way the world is telling us “how the world ought to be” – despite what is “often thought” – etc., if all he is trying to say is that science can help us derive value statements from a combination of facts and other value statements, then this has to be the most spectacular failure to “be very clear” about one’s “general thesis” ever written.

  66. Axxyaan says

    @robertwilson #71.

    This is beside the point. The question remains that neither you nor Sam Harris gives an inkling of how we are going to decide such questions without involving other values. We have three people with three different values. Give me a — as far as I am concerned purely hypothetical — method of how we might be able to decide how high the moral peak of each of them is, or just how we can compare the heights of those moral peaks with each other, without depending on other values.

  67. says

    @Axxyaan #70

    We need to measure everyone; ethics are about interactions, not isolated individuals. Please see my comment http://freethoughtblogs.com/butterfliesandwheels/2013/09/guest-post-why-the-isought-problem-matters/#comment-622874

    —-

    @robertwilson #71

    Good point; the peaks are various well-performing ethical systems (collective, not an individual’s score), but probably none would reach the idealized all-max-well.

    —-

    @Bjarte Foshaug #77

    Thanks for the bible tip, but I’m a regular listener of Irreligiosophy, and they have been covering M&M lately.

    From my #28 pull quote, this smells like a stated premise to me:

    given that there are facts—real facts—to be known about how conscious creatures can experience the worst possible misery and the greatest possible well-being, …

    And from wiki/The_Moral_Landscape:

    Harris addresses the Value problem by maintaining that some presupposition of values is necessary for any science, and that his science of morality is simply no different. He thus yields Blackford’s point that “that initial presupposition does not come from science,”[51] but Harris does not see this as a problem.

    I was just trying to clear up a misunderstanding, but I don’t think this issue is a big deal anyway. I’d love to see a perfect proof that good is better than bad, but I’m not holding my breath.

  68. Bjarte Foshaug says

    @601 #80

    From my #28 pull quote, this smells like a stated premise to me

    Not the same premise. The complete argument would go something like this:

    I am simply saying that, given that there are facts—real facts—to be known about how conscious creatures can experience the worst possible misery and the greatest possible well-being, and given that the latter really is better than the former, it is objectively true to say that there are right and wrong answers to moral questions

    It’s the second, unstated premise that makes his argument circular since he is using a value judgement to argue for the objective truth of value judgments.

  69. Axxyaan says

    @601 #80

    You just evaded my question. Just assume we did the measurements and came up with my hypothetical situation.

    And the presupposition of values for any science is a dodge. Those values are the values the scientists needs to follow in order to do good science. That doesn’t imply that values are presupposed in the domain that is being examined.

  70. Dave Ricks says

    @601 #80, you wrote, “We need to measure everyone,” but you wrote that comment under a pseudonym. I see a conflict: For a greater good, would you give up your present state of privacy, and your right to privacy? If not, why not?

  71. says

    For the benefit of everyone who may have subscribed to one thread’s comments and not the other, I will repeat this in all: I have explained my agreements and disagreements with both Ophelia and Sam in my analysis of this contest and its aims in What Exactly Is Objective Moral Truth? I don’t think the contest is all that bad an idea. And I am certain Harris’s core thesis is correct. (It’s just that I’m almost as certain he’s not the best man to defend it.) I explain both there.

  72. says

    @Dave Ricks #83

    No kidding, this is literally the thought police nightmare.

    And to be clear, by everyone I really mean enough (for a statistically significant sample), and particularly everyone involved in the interaction at issue.

    For example, in the case of a daylight robbery, measuring the perpetrator would be interesting. But more importantly, the victims, their friends and family, and then the bystanders, and finally the evening news watchers.

    I can imagine that with this data, one could reasonably conclude that theft by threat is unethical.

    Regarding privacy, it’s a difficult balancing act. But I have already grieved the death of the 4th (Hi NSA).

  73. Laurence says

    Read the review by Orr that #35 posted. It levels some pretty devastating critiques at Harris’ main arguments.

  74. Jonny Vincent says

    But we also have instincts that tell us we want higher status rather than lower status with those others. We experience love and empathy. We experience anger, get aggressive. And we often think that anger and aggression are justified. We love human children (mostly, most of the time.) We love animals as pets. We love animals as dinner. We all cobble together behavior we think is “right.” But it’s done with a brain that has…

    …been the victim of over 5000 years of misogyny (insanity in recursion). We are conditioned to value status. We’re conditioned to feel love and empathy (and there is no evidence that either is natural and a great deal of evidence that selfless love, in particular, is provably self-defeating and potentially responsible for all violence / conflict). We experience anger (outrage / fury / frustration) only when emotionally degraded and incapable of communicating on a higher plane. Children are raised to be carnivores. We all cobble together behaviours our mothers felt to be “right” but many of our mothers believed women should be treated Right for no reason (prostitution, in reality) and almost all of our mothers believed children should be raised Right with violence and lies (capture-bonding leading to Stockholm Syndrome, in reality).

    Is it Right that girls are slut-shamed for being human girls? Is it Right that toddlers are shamed for being human toddlers? Our species is the only species on the planet that is inhumane, for goodness’ sake.

    I believe most or all of the fine minds above are failing to perceive the reality Harris can see; if the species wasn’t infected with the ‘need’ for combative violence, lies and imposition (M.A.D.), Humanity would exist as a deity species beyond all present comprehension. Till truth and right from violence be freed, our species will not emerge from the Dark Ages of leaching where only blood-thirsty sociopaths can compete. You can argue all you like that maximising well-being isn’t an obvious ought but on our present course (violence & lies), we are game-play extinct within 100-200 years.
