Sam Harris v. Sean Carroll


The discussion is interesting. Sam Harris recently and infamously proposed that, contra Hume, you can derive an ‘ought’ from an ‘is’, and that science can therefore provide reasonable guidance towards a moral life. Sean Carroll disagrees at length.

I’m afraid that so far I’m in the Carroll camp. I think Harris is following a provocative and potentially useful track, but I’m not convinced. I think he’s right in some of the examples he gives: science can trivially tell you that psychopaths and violent criminals and the pathologies produced by failed states in political and economic collapse are not good models on which to base a successful human society (although I also think that the desire for a successful society is not a scientific premise…it’s a kind of Darwinian criterion, because unsuccessful societies don’t survive). However, I don’t think Harris’s criterion — that we can use science to justify maximizing the well-being of individuals — is valid. We can’t. We can certainly use science to say how we can maximize well-being, once we define well-being…although even that might be a bit more slippery than he portrays it. Harris is smuggling in an unscientific prior in his category of well-being.

One good example Harris uses is the oppression of women and raging misogyny of the Taliban. Can we use science to determine whether that is a good strategy for human success? I think we can, but not in the way Harris is trying to do so: we could ask empirically, after the fact, whether the Taliban was successful in expanding, maintaining its population, and responding to its environment in a productive way. We cannot, though, say a priori that it is wrong because abusing and denigrating half the population is unconscionable and vile, because that is not a scientific foundation for the conclusion. It’s an emotional one; it’s also a rational one, given the premise that we should treat all people equitably…but that premise can’t claim scientific justification. That’s what Harris has to show!

That is different from saying it is an unjustified premise, though — I agree with Harris entirely that the oppression of women is an evil, a wrong, a violation of a social contract that all members of a society should share. I just don’t see a scientific reason for that — I see reasons of biological predisposition (we are empathic, social animals), of culture (this is a conclusion of Enlightenment history), and of personal values, but not science. Science is an amoral judge: science could find that a slave culture of ant-like servility was a species optimum, or that a strong behavioral sexual dimorphism, where men and women had radically different statuses in society, was an excellent working solution. We bring in emotional and personal beliefs when we say that we’d rather not live in those kinds of cultures, and want to work towards building a just society.

And that’s OK. I think that deciding that my sisters and female friends and women all around the world ought to have just as good a chance to thrive as I do is justified given a desire to improve the well-being and happiness of all people. I am not endorsing moral relativism at all — we should work towards liberating everyone, and the Taliban are contemptible scum — I’m just not going to pretend that that goal is built on an entirely objective, scientific framework.

Carroll brings up another set of problems. Harris is building his arguments around a notion that we ought to maximize well-being; Carroll points out that “well-being” is an awfully fuzzy concept that means different things to different people, and that it isn’t clear that “well-being” is necessarily the goal of morality. Harris does have an answer to those arguments, sort of.

Those who assumed that any emphasis on human “wellbeing” would lead us to enslave half of humanity, or harvest the organs of the bottom ten percent, or nuke the developing world, or nurture our children on a continuous drip of heroin are, it seems to me, not really thinking about these issues seriously. It seems rather obvious that fairness, justice, compassion, and a general awareness of terrestrial reality have rather a lot to do with our creating a thriving global civilization–and, therefore, with the greater wellbeing of humanity. And, as I emphasized in my talk, there may be many different ways for individuals and communities to thrive–many peaks on the moral landscape–so if there is real diversity in how people can be deeply fulfilled in life, this diversity can be accounted for and honored in the context of science. As I said in my talk, the concept of “wellbeing,” like the concept of “health,” is truly open for revision and discovery. Just how happy is it possible for us to be, personally and collectively? What are the conditions–ranging from changes in the genome to changes in economic systems–that will produce such happiness? We simply do not know.

The phrase beginning “It seems rather obvious…” is an unfortunate give-away. Don’t tell me it’s obvious, tell me how you can derive your conclusion from the simple facts of the world. He also slips in a new goal: “creating a thriving global civilization.” I like that goal; I think that is an entirely reasonable objective for a member of a species to strive for, to see that their species achieves a stable, long-term strategy for survival. However, the idea that it should be achieved by promoting fairness, justice, compassion, etc., is not a scientific requirement. As Harris notes, there could be many different peaks in the moral landscape — what are the objective reasons for picking those properties as the best elements of a strategy? He doesn’t say.

I’m fine with setting up a set of desirable social goals — fairness, justice, compassion, and equality are just a start — and declaring that these will be the hallmark of our ideal society, and then using reason and science to work towards those objectives. I just don’t see a scientific reason for the premises, wonderful as they are and as strongly as they speak to me. I also don’t feel a need to label a desire as “scientific”.

Comments

  1. Givesgoodemail says

    Rational metaphysics (“is”s) can only lead to moral actions (“ought”s) when there is a rational epistemology (facts are facts, opinions and emotions are not) to sort out the facts, and a rational ethical code to link facts to proper values, before proper moral action can be properly conceived and enacted.

    So, I’d be in the Carroll camp as well.

  2. Icarus says

    PZ, you’re absolutely right – you simply can’t form any kind of moral argument without first having made at least some value judgments about one kind of outcome being ‘better’ than another. So, moral judgments are necessarily mind-dependent and not objective, and science can’t possibly have anything to say about moral issues. That’s not something to be afraid of, or regretted – quite the reverse. Our capacity for empathy and value judgments is one of the key things which make us human.

    In point of fact, only atheists can really claim to be capable of genuine morality, since the religious regard human behaviour as being all about following orders regardless of the consequences (thou shalt not…) and self-interest (gaining a place in Heaven and avoiding Hell). There is no place for morality at all, in that kind of worldview – it’s entirely amoral.

  3. Ben Goren says

    I’m pretty firmly convinced that, just as our motor skills are a very efficient method for performing all sorts of mathematical jiujitsu in order to be able to…well, do jiujitsu, our morality is a similar evolutionarily-encoded means of doing game theory math.

    Take something like armed robbery. We instinctively know it’s wrong. Those who claim that only some invisible sky daddy can impose morality on humanity also like to claim that, without said invisible sky daddy, we atheists should be running around, shotguns in hand, snatching purses left and right from little old ladies.

    And, while it’s true that there might be a limited form of short-term gain to be found from armed robbery, it’s even more obvious that it’s a losing long-term strategy, no matter how you look at it. The resources you’d have to expend in order to prevent somebody from forcibly taking your possessions or from harming you in your attempts to take theirs far outweigh the resources you gain from the theft. In a peaceful, cooperative society, those resources can be put to much better use for everybody.

    In that example, I think there’s plenty open to scientific inquiry. What does somebody hope to gain from armed robbery? Is that the most efficient means of attaining those goals? If not, what is a better strategy? Statistically speaking, how common are those goals? Do individuals who pursue those goals have any sort of evolutionary advantage?

    I certainly wouldn’t claim that science has all the answers when it comes to morality; indeed, it doesn’t have very many answers. But that’s because no scientists are seriously considering the topic, not because science has nothing to say on the matter. Sam should be commended for his efforts to rectify this.

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  4. mattheath says

    I had a thought about this when it was being discussed elsewhere. I assume it’s not a terribly original thought. I’d quite like it if anyone could tell me how it’s been dealt with elsewhere.

    It is that “is”s about the word “ought” might be able to produce “ought”s.

    I mean, deciding if an ought statement is true seems to be the same as deciding if “ought” (and the other words in the sentence) is being used correctly, and the way to do that is presumably to know “is”s about the usage of the word.

    But that’s kind of cheating, isn’t it? I expect Humeans have dealt with self-referential examples like that.

  5. PsyberDave says

    I read the Harris post on HuffPo a few weeks ago. Harris makes a lot of noise about how he is right and others are wrong, but never presents any actual evidence in support of his point.

    I seriously doubt he can, but I’ll give him the opportunity to try.

    Morals are definitions. How can you prove a definition is true? It is true because it declares itself to be true, not because it is empirically and objectively discovered to be true.

    Even if you choose well-being as a measure, once defined, it is easy to demonstrate well-being decreases or increases depending upon the environment, not just the behavior itself. In one culture a person’s behavior may be rewarded (increased well-being), and in another the exact same behavior may be punished (decreased well-being). It isn’t the behavior that is moral. Morality is assessed by the observer and it is subjective, relative to the definition used by the observer.

  6. Eamon Knight says

    What you say is very similar to what I kept muttering to myself while listening to Harris’ talk, that he’s missing a very basic point: you have to assume some underlying, ultimate set of goals or values that are not scientific in origin. Science can only tell you how to work towards them in the real world.

    Or as I like to formulate it: it is true that I want to be alive and happy — but there is no syllogism or empirical observation you can make that says I ought to want that. It just happens that I do, and a great many (arguably, all) of my “oughts” follow from that fact. Nor is it even a universal given of human psychology: depression victims may in fact not want to be alive, and may have given up on the prospect of happiness, perceiving it as a cruel cheat.

  7. Mark Tiedemann says

    What you can use science for is to demonstrate that there are no material bases that support the oppression of women, that a claim of “separate creation” is unsupportable, and that therefore one’s entire basis for oppression is itself irrational and pathological. In exactly the same way, all pseudo-scientific bases for racism have been blown away by sound biological study showing no differences between ethnic groups that justify a claim of superior-inferior status. The debate can then be moved into the arena of ego vs reason without being side-tracked by claims of some specious biological determinism.

  8. Reginald Selkirk says

    Harris relies on his rhetorical skills to paper over the logical weakness of his argument.

  9. Ray Moscow says

    I think we have to decide what we value, and then use reason and science to help us get there.

    I don’t see how science on its own can tell us what our values are.

    For example, one value (that Harris mentioned elsewhere) is to minimise the suffering of conscious entities. I agree that this can and probably should be a value we strive to achieve. But neither reason nor science alone can give us this value, nor any other, in the first place.

  10. PsyberDave says

    One other point: “ought” may seem like an absolute, but it is actually a relative term, as are “should”, “must”, and “have to”. You don’t “have to” do anything; not absolutely. You “have to” or “ought to” (etc.) IF you want some particular effect. So the “ought” is relative to what someone wants. You “ought to” eat less if you want to lose weight. But you “ought to” eat MORE if you want to gain weight. The “ought” is not absolute. It is relative and depends upon what is wanted.

  11. Dobb's Head says

    I watched Harris’ talk a few weeks ago, and I think you missed the point. What Harris seemed to be getting at is that we all have notions of what society ‘ought’ to be. We develop these notions through our socialization. There is no reason why these notions of ‘ought’ are off limits to scientific analysis.

    That analysis begins by observing the social landscape, linking policy and precepts with outcomes. That quantification of not only conditions, but also of outcomes, has a strong chance of implying a set of successful modes of organization. Then we can compare our notions of desirable modes with likely successful modes and make better decisions.

    Saying that Harris meant that we already know what structures we want is putting words in his mouth. Instead he was saying that we should quantify the social structures and their outcomes that do exist to define the ‘moral landscape’. That moral landscape will imply a set of directions our society ‘should’ move in, from which we can choose.

  12. jonathon.j.smith says

    Let’s be honest, at the heart of all scientific fields of knowledge lie unjustifiable assumptions about the world. But so what! This observation is so trivial as to be banal, and every time someone makes the argument that things are ultimately “unknowable” because, “After all, you don’t really know that all this isn’t just a dream,” I want to smack them. Likewise, when someone tries to tell me, “Well, the goal of morality might not be well-being, maybe it should be pain and suffering!” I stop the conversation because they are fogging over the topic. What I take away from Harris’s argument is, we should just agree that Morality talks about well-being (just like we agree that authors write books, and musicians play music, it’s a definitional thing), and then exclude people who want to talk about a different morality from the discussion table, because they are playing devil’s advocate.

  13. Hatherly says

    There have been several sophisticated assaults on the so-called is/ought gap that were published well before Harris gave his lecture. For example, see David Brink and Peter Railton on moral realism. Harris is a media star due to his literary success, but don’t let that fool you into thinking that he is representative of all that philosophy has to offer on this subject. Harris may not be quite right in the end, but that is not the end of the matter.

  14. abb3w says

    Oh, this again.

    I’ll throw out the thought that to get from an IS to an OUGHT, you need to identify what IS meant by “ought”.

    It appears to be an ordering relationship for a semi-lattice of choices. However, for any set of choices, one may enumerate all the possible ordering relationships (yielding a higher cardinality than the original set).

    The question is, which of these is what is meant by any particular use of the word “ought”.

    Alas, I don’t have time today to detail my thoughts.

  15. rooter says

    “Don’t tell me it’s obvious, tell me how you can derive your conclusion from the simple facts of the world. ”

    can we all agree that after studying a species of plants under various conditions, and recording its “well being”, we can make some assumptions (and then test them) about which conditions are ideal?

    can we all agree that this basic process would work for animals?

    can we all agree that people are also animals?

    the end.

  16. mattheath says

    Rooter@#15: Not entirely the end since even you had to put scare quotes around “well being”.

  17. slayersaves89 says

    As far as the meta-ethical debate goes I think Harris is absolutely in the wrong. As far as I can tell his method of deriving an ought from a list of facts is simply to note that we seem to have settled on a definition of “bad” which consists of moving away from the “worst possible misery for everyone”. If this is not bad then the word bad has no meaning. Furthermore, if one were to ask “why ought we avoid bad things and seek good things” he would say that the word ought implies this by its very definition. Ultimately he has simply chosen definitions which make him correct, but has still not defeated Hume because Hume was not using those definitions.

    If anyone is interested in the meta-ethics Russell Blackford put it much better than me.

    I think that there is an important point to be made when you leave the meta-ethics aside though.
    There are people out there who claim to have a certain set of values, but whose actions do not in any way reflect those values. We can use science (as PZ said) to establish whether or not certain values we have (like pleasure or contemplation) are, in fact, being instantiated.
    This is the value I see in Sam Harris’ point of view (even though it is hardly original, he popularizes it well). He is also a vast improvement on someone who, while they understand the issues with the metaethics, will then simply give up on using science to maximize certain “goods” which we have arbitrarily chosen just because it is ultimately arbitrary. I think that is a much greater, and more destructive, fallacy than the one Harris commits. He is so close to having an excellent point of view. I just wish he would give up on the damn meta-ethics.

  18. slayersaves89 says

    Rooter:
    But why “ought” we maximize that wellbeing in either plants or animals? Or rather, what is your response to someone who wants to use science to sow death and destruction to all living things simply because that is an urge they have? How is the urge that you and I have towards maximizing “wellbeing” any more legitimate, in an objective, factual sense, than someone else’s urge to minimize it?

  19. jennyxyzzy says

    @PZ

    The phrase beginning “It seems rather obvious…” is an unfortunate give-away. Don’t tell me it’s obvious, tell me how you can derive your conclusion from the simple facts of the world.

    I don’t think that this is a fair criticism of Harris’ position. He doesn’t claim that science has all the answers now, but rather that one day science can have all the answers. That section starting with “it seems obvious” isn’t a claim for truth, but rather Harris trying to give an explanation for why the simplistic criticisms of a science-governed world won’t necessarily come to pass.

  20. rooter says

    mattheath – ‘well being’ is certainly a complex idea, but not outside the grasp of science. Sam isn’t saying he’s pinned down what ‘well being’ is, and he definitely isn’t saying he’s pinned down how to get there. he’s only saying that both these things fall within the realm of the falsifiable.

    slayersaves89 – i think ‘improving the lot of humanity’ is kind of the starting point we’re all at.

  21. BoboHilario says

    I think what several of you are missing from Sam’s talks on this is that he considers well-being to be a theoretically measurable state of living brains.

    If we had sufficient technology, we could measure the full state of a brain and quantitatively determine that creature’s well-being. Taking this further, we can map the outcomes of actions to their collective effect on well-being to create Sam’s Moral Landscape. I don’t think it’s wrong to call this a map of “Morality”; the only question is whether we say that people “ought” to act morally by this definition.

  22. Ben Goren says

    slayersaves89,

    If I may answer the question: you should do it out of enlightened self-interest.

    No, seriously.

    You should happily pay taxes to support public education because you yourself will gain more personal benefit by living in a well-educated society than a poorly-educated one.

    You should work to preserve the environment in general and endangered species in particular because your own health is directly dependent on the health of the ecosystem. You can’t eat sunlight or dirt or air, but you can eat the plants that do, and you can eat the animals that eat the plants. And those plants and animals depend on lots more than just air and dirt; they depend on the system as a whole. Therefore you do, as well.

    As I pointed out in my example from post #3 above, you should not commit armed robbery because it will ultimately harm your own self-interests, not enhance them.

    And that is at the heart of the matter, the point that I think everybody (probably including Mr. Harris) is missing.

    Morality is and should be an entirely selfish, individual matter.

    We shouldn’t be trying to discover a moral system that is best for the greatest number of people, or any of the other variants Mr. Harris recommends. Instead, we should be trying to discover a moral system that is best for each and every one of us individually.

    BUT THAT SYSTEM MOST EMPHATICALLY MUST TAKE THE LONG VIEW.

    What is best for me in the next hour or so might be to go out, find somebody who’ll give me a massive dose of heroin, and shove it in my veins. From what I understand, that’ll make for an unbelievably amazing time.

    It’s also obvious that the negative effects of such a decision may well cost me my life, and probably would royally fuck up the rest of my life, regardless. So, me being the selfish git that I am, I don’t (often) trade long-term wellbeing for short-term pleasures.

    Just as I don’t grab every pretty girl I see and drag her behind the nearest shrubbery and rape her. Never mind that I can’t think of a more potent, faster-acting turn-off than rape; even if I were into such vile, disgusting depravity, it would be like the heroin high: do it, and, if the girl doesn’t rip your nuts off and shove them down your throat, her friends and family will. And, in a society in which rapists run rampant, you’ll have to be constantly fighting off those who want to rape you.

    It’s very much like the Prisoner’s Dilemma. There are winning strategies and losing strategies. The most effective strategies mean you win as much as possible; they also mean that, if everybody uses that same winning strategy, everybody gets super extra bonus points. Your enlightened self-interest oh-by-the-way also happens to be what’s best for everybody else, too.
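
    To make that game-theoretic point concrete, here is a minimal sketch (my own illustration, not from the comment above; the payoff values and strategies are the standard textbook assumptions) of an iterated Prisoner’s Dilemma. It shows that a conditionally cooperative strategy holds its own against a defector, and that a population of mutual cooperators scores far better than a population of mutual defectors.

    ```python
    # Iterated Prisoner's Dilemma sketch. Payoffs are the conventional ones:
    # mutual cooperation 3/3, mutual defection 1/1, sucker 0 vs. defector 5.
    PAYOFF = {
        ('C', 'C'): (3, 3),
        ('C', 'D'): (0, 5),
        ('D', 'C'): (5, 0),
        ('D', 'D'): (1, 1),
    }

    def tit_for_tat(history):
        """Cooperate on the first move, then copy the opponent's last move."""
        return 'C' if not history else history[-1][1]

    def always_defect(history):
        """The 'shotgun in hand' strategy: defect every round."""
        return 'D'

    def play(strategy_a, strategy_b, rounds=200):
        """Return the total scores of two strategies over an iterated game."""
        history_a, history_b = [], []   # each entry is (my_move, their_move)
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strategy_a(history_a), strategy_b(history_b)
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            history_a.append((a, b))
            history_b.append((b, a))
        return score_a, score_b

    print("TFT vs TFT:      ", play(tit_for_tat, tit_for_tat))      # (600, 600)
    print("TFT vs Defect:   ", play(tit_for_tat, always_defect))    # (199, 204)
    print("Defect vs Defect:", play(always_defect, always_defect))  # (200, 200)
    ```

    Defection squeezes out a small edge against a lone cooperator (204 vs. 199), but a world where everybody defects (200 each) is far worse than one where everybody cooperates (600 each), which is the long-view, enlightened-self-interest argument in a nutshell.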

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  23. Fortknox says

    Let’s be honest, at the heart of all scientific fields of knowledge lie unjustifiable assumptions about the world. But so what!

    This observation is so trivial as to be banal, and every time someone makes the argument that things are ultimately “unknowable” because, “After all, you don’t really know that all this isn’t just a dream,” I want to smack them. Likewise, when someone tries to tell me, “Well, the goal of morality might not be well-being, maybe it should be pain and suffering!” I stop the conversation because they are fogging over the topic.

    What I take away from Harris’s argument is, we should just agree that Morality talks about well-being (just like we agree that authors write books, and musicians play music, it’s a definitional thing), and then exclude people who want to talk about a different morality from the discussion table, because they are playing devil’s advocate.

    Topic won.
    The completely irrational and groundless objections to Sam Harris simply amaze me.

    You lost the plot.

  24. sebastian.sylvan says

    I *think* what he’s getting at is that “ought” is only subjective because we can’t get into the minds of people and find out what they think the “ought” should be.

    The position is “an emotional one”, but the point is that once you can scan a brain and truly understand it, then an emotional position is just another scientific fact. We can know for a fact what individual X’s emotional position is. And then we can know what the normative range of emotional positions is for a given population. Including on a “meta” level where we can know for a fact what their views are on how we should establish moralities.

    So I think that’s the point. A subjective emotional position is only subjective because it can’t be objectively measured and aggregated over the population. Once it can, it’s as scientific as any other fact, and could be used as a basis for determining what “human morals” are, in an aggregate optimization-problem kind of way.

  25. Azkyroth says

    Science can tell us how best to obtain any particular state of affairs; it’s the decision to value one state of affairs over another that’s orthogonal to science.

  26. Tulse says

    In addition to all the problems others have noted, there are the notorious issues with utilitarianism (such as maximizing utility for all by minimizing it for some) that Harris papers over by importing notions of “justice” and “fairness” into “well-being”. Heck, he might as well say that “well-being” consists of “being a full citizen in a secular industrialized capitalist Western democracy with a strong social safety net”, since that seems to be what he really means.

    And, more importantly, this project will not solve the problem of religious fundamentalism, since once you’ve got heaven and hell, you have literally maximal positive and negative utility, and thus can justify any action or societal arrangement. For example, in Harris’ TED talk he mentions the case of a father killing a gay son to keep him from going to hell. Harris seems to think this is an example that supports his position, when in reality, if the premises are granted, the father’s behaviour is actually completely rational and maximizes his son’s “well-being”. The problem is the non-rational axioms, and those can’t be attacked directly through Harris’ approach.

  27. Quine says

    I love you, PZ, but I am with Sam on this one. First off, others have dragged Hume and the is/ought into the discussion; Sam never claimed to have overturned Hume. Part of this is because the “ought” in Hume is the provably optimum ought, which, again, Sam does not claim. It is trivial to show that you can get an “ought” from an “is” if it does not have to be correct (you could roll dice). Sam is asking us to use the knowledge we have obtained from the scientific method to engineer a better moral system than what so much of the world has inherited from bronze age scripture.

    I also want to stress that this is not science, it is engineering. Science did not tell us to get rid of smallpox, we decided that was a “good” thing to do, and used the knowledge about what smallpox was, gathered through the scientific method, as a basis to engineer a method to get rid of it. We are currently engineering methods to reduce malaria. Can we engineer methods to improve the lives of the people of the world by making changes to the moral codes handed down from the past? Almost certainly. Can we prove an optimum? No, not even in principle. Well, if we can use scientific knowledge of the world to do better, shall we let the (provably unobtainable) perfect be the enemy of the good (or at least better)?

  28. sasqwatch says

    I think Harris is correct in principle but not necessarily in practice (yet). Which seems to be his point. So color me a fence-sitter.

    After all, there are objective measures one can use for well-being (surveys, as in recent research on how “happy” people are in general, or other objective proxies, like longevity, access to prenatal care, etc.). And one can correlate these, and potentially prove causation, with things like social safety nets, median income, education, and egalitarian social structure.

    So perhaps I’m not getting what the hassle is. I’m not terribly good at arguing tedious philosophical minutiae, anyway. No time to argue fine points… gotta run. Won’t get into mind-bending tedious discussions about moral relativism or any crap like that either, as that has a demonstrably negative effect on my general well-being. ;-) Anyway, will fill myself in on the opposing side when I have time. Thank you all.

  29. jennyxyzzy says

    Count me among the fence sitters. For me, Harris has taken all of the different ‘oughts’ that we have in society – “don’t steal”, “don’t take drugs”, “work hard” – and replaced them all with one very basic “ought” – that we should seek to maximise global human well-being.

    I can’t help thinking of this as an axiom – simple enough that most of us can agree that it is reasonable, even if we can’t prove it. But now we have removed all of those different “oughts” that can conflict with one another, or can produce non-desirable outcomes, or that exist not for the well-being of the people, but rather for the perpetuation of power structures.

    Anyway, Harris’ vision seems eminently more desirable to me than what we actually have today. So, from that point of view, I’m a supporter.

  30. peterwok says

    can we all agree that after studying a species of plants under various conditions, and recording its “well being”, we can make some assumptions (and then test them) about which conditions are ideal?

    Sure thing, let’s do that.

    Experiment 1
    I observe that when I remove weeds from the experimental plot, the experimental plants grow faster and stronger due to reduced competition for nutrients.

    Conclusion: to maximise the wellbeing of a given human population, one should weed out the undesirable ones. Happily, these have been naturally colour-coded for convenience.

    Don’t like that conclusion? Then you need to add in the premise that we ought to maximise the well-being of all humans, not just the ones with compatible melanin levels. That is not a scientific premise.

  31. James Sweet says

    I happen to agree with Harris’ defense of “well-being” as “obviously” (yes, I’m going to use that word despite PZ’s objections to it) not leading to enslaving the bottom 10% or whatever. When (non-sociopathic) humans see the weak being enslaved and victimized, it creates discontent and unhappiness. Those are part of the totality of “well-being”.

    So while enslaving the bottom 10% might improve the well-being of purely rational robots, it would not improve the well-being of humans.

    Where Harris goes awry, of course, is in this assumption that improving well-being is itself scientific. But I think he’s done a rather admirable job of constructing a non-arbitrary morality on top of that rather trivial (and easy-to-agree-to-if-you-understand-it) arbitrary assumption.

  32. Tulse says

    Sam is asking us to use the knowledge we have obtained from the scientific method to engineer a better moral system than what so much of the world has inherited from bronze age scripture.

    That bronze age scripture says that acting in certain ways will guarantee maximal individual utility forever. Harris’ current approach has no way to attack that position.

    there are objective measures one can use for well-being […] And one can correlate these and potentially prove causation with things like social safety nets, median income, education, egalitarian social structure.

    Yep, just as I said, “well-being” is secretly defined as “being a full citizen in a secular industrialized capitalist Western democracy with a strong social safety net”. But of course, the real question is what is the actual calculus for the measure one is trying to maximize? Depending on how you measure things, having a few extremely happy billionaires might offset the abject misery of many poor people. One might maximize the well-being of most people by randomly picking individuals to be slaves, or to be cut up for organ donation, or to be eaten by lions on television. You need to import further restrictions on what is meant by “well-being”, or add further criteria to capture “cross-society” aspects of well-being, to avoid these kinds of well-known pitfalls of utilitarianism. And once you begin to do that, things look far less scientific and far more like “well-being is whatever I say it is”.

  33. Tulse says

    you need to add in the premise that we ought to maximise the well-being of all humans, not just the ones with compatible melanin levels.

    Please — the ones with different melanin aren’t really human.

  34. rooter says

    peterwok – you’ve successfully defused racist arguments that no one made. grats.

  35. tristan.cragnolini says

    I’m with Carroll on this one too. As Tulse says in #32, Harris fails to prove that his definition of well-being has sound scientific ground, probably because he can’t.
    Then I also have to disagree with #31: I found Harris’ defense of well-being as an objective measure of morality for conscious beings compelling.
    Unfortunately, that does not mean everyone agrees on what well-being is about.
    It’s interesting to note that in his article, Harris seems to dodge that point by only considering individuals who claim that well-being is not what they are interested in.

  36. Etruscan says

    I agree with PZ, but I’m somewhat confused: why are axioms acceptable in other fields of science, but not here?

  37. tristan.cragnolini says

    Etruscan, it’s not that axioms are not acceptable here. Harris seems to claim that the only axiom he needs is that morality is concerned about maximizing our well-being.
    In his post, PZ contends that you need something more, a definition of well-being.
    And that’s why Harris’ argument is useless against religious loons, because they believe that denying women equal rights is a good thing for everyone, including those women.

  38. peterwok says

    @Rooter: You miss the point – I’m not advocating racism, I’m pointing out in the most brutally obvious way I can that “well-being” is a subjectively defined term.

    @Etruscan: Axioms are fine, but the whole point is that scientific frameworks derive from axioms and not vice versa. When you start talking about morality, at some point there have to be axioms at the root of it, and these also fall outside the purview of science.

    That’s not to say that all morals have to be axiomatic. You can certainly start with quite a simple axiom such as “1) Strive for the greatest happiness for the greatest number of humans” and see what you can derive from it. For instance you could observe that in societies with an unequal distribution of wealth, net happiness is a lot lower than in more equitable societies. That gives you the valid scientific conclusion that you should redistribute (at least some) wealth from the rich to the poor.

    Mind you, even that simple axiom contains two large cans of worms, namely the definitions of “happiness” and “humans”.

    You also have the problem that unless you define your metrics very carefully, then maximising “greatest happiness for the greatest number” can lead to counterintuitive results. Does one hundred really happy people score more highly than two hundred less happy people? Time to start weeding again – by random lot if you don’t want to judge on phenotypic factors. Conversely, if you set up your metric the other way round such that lots of moderately happy people is “better” than a smaller number of very happy people, your ethical system will push you towards overpopulation.
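
    To put rough numbers on that last worry — purely as an illustration, with made-up happiness scores rather than anything from the comment above — here is a minimal sketch of how the choice of aggregation rule alone flips which population counts as “better”:

    ```python
    # Two hypothetical populations, happiness on an arbitrary 0-10 scale.
    small_and_happy = [9.0] * 100   # one hundred really happy people
    large_and_meh   = [5.0] * 200   # two hundred less happy people

    def total_happiness(pop):
        """'Greatest total happiness' metric."""
        return sum(pop)

    def average_happiness(pop):
        """'Greatest average happiness' metric."""
        return sum(pop) / len(pop)

    # Totals favour the larger, less happy group (1000 vs. 900): the
    # push-towards-overpopulation worry. Averages favour the smaller, happier
    # group (9.0 vs. 5.0): the start-weeding worry. Same facts, opposite verdicts.
    print(total_happiness(small_and_happy), total_happiness(large_and_meh))      # 900.0 1000.0
    print(average_happiness(small_and_happy), average_happiness(large_and_meh))  # 9.0 5.0
    ```

    Nothing in the data tells you which of the two metrics is the right one; choosing between them is exactly the extra, non-scientific premise at issue.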

  39. TWood says

    I can’t speak for Sam’s motivations, but I suspect that he is advancing this line of thought in some part to spread the counter-meme against religion as the sole source of morality. It’s a recurring theme among the Four Horsemen.

    One way to give well-being a measurable value is to consider the stability of a social system over time. If the lower 10% are enslaved, there is some probability that they will eventually revolt and undo all the well-being that was created up to that date. If Bill Gates bought all the food and stuffed it in his mansion, the revolt would be swift and well-being would find its range of acceptability.

    I don’t think Sam said that science can find exact measures or precise locations to draw a line of distinction. I heard him say that there is a range of acceptable well-being that exists along a spectrum.

  40. Tulse says

    One way to give well-being a measurable value is to consider the stability of a social system over time. If the lower 10% are enslaved, there is some probability that they will eventually revolt and undo all the well-being that was created up to that date.

    I very much doubt that anyone would find such a pragmatic argument against slavery all that compelling (although they may find it a good reason to perform lobotomies on their slaves).

  41. Dobb's Head says

    Hold on a second, I’ve noticed a general shift from ‘well-being’ to happiness. This isn’t what was ever meant. We know how to quantify meaningful traits of well-being: infant mortality, lifespan, access to clean water, education level, number of social contacts, etc. These aren’t abstract concepts like good or happiness, but rather concrete observables.

    Isn’t the goal of the social sciences to link these observables to policy? Doesn’t making that link at all imply a set of ‘good’ choices? Why can’t we quantify and compare these ‘good’ choices? This stuff isn’t magic.

  42. inflection says

    Something I’ve wondered for a long time on the ought-vs-is argument. I like Hume, really. And I get where the claim comes from that one cannot derive an ought from an is.

    But…

    …what else is there to derive your oughts from?

  43. Paul W., OM says

    I’m not sure I understand exactly what Harris is saying, but I think I’m leaning his direction.

    PsyberDave:

    Morals are definitions. How can you prove a definition is true? It is true because it declares itself to be true, not because it is empirically and objectively discovered to be true.

    I disagree. I think you’re missing what Harris is getting at about a science of morality. It’s not just a matter of definitions—it’s a matter of how an actual phenomenon actually works.

    Here’s an analogy.

    What is life? What definition of life did we start from when biology was founded?

    We didn’t have one, because life is an actual natural phenomenon to be studied, whose basic nature was to be scientifically discovered.

    What we discovered is that life is a matter of machines maintaining themselves and creating other similar machines. It isn’t about having a special animating life force, as most people thought when we started out.

    Until we discover what actual life is and how it actually works, we don’t have a “definition” of life. (We may have various “operational definitions,” but an operational definition isn’t a real definition—it’s just a kind of reference point to clarify conversation.)

    I think Harris is saying something similar about morality, viewed as a “natural kind” (i.e., a kind of thing in the world, whose nature is to be discovered, not pre-defined).

    It turns out, empirically, that morality is about distributed control mechanisms for intelligent actors that must cooperate, in certain kinds of game theoretic situations.

    It also turns out, empirically, that morality is not really about following rules laid down by a divine moral authority. That was, scientifically speaking, a misconception about the nature of morality. It just isn’t actually that sort of thing.

    I think Harris is right that certain things count as morality and other things don’t, scientifically. Morality is a certain kind of phenomenon and not others—some things don’t count, because they don’t work in the right way.

    Up to a point, we can talk about that scientifically, and say, e.g., that some things are and other things aren’t moral issues—they don’t involve the kind of game-theoretic situation and/or distributed control mechanisms that morality evolved for.

    So, for example, morality isn’t just a matter of arbitrary preferences. Some preferences are moral preferences, that play a certain role in our psychology, given the architecture we’ve evolved for distributed control. Other preferences are mere self-interested or aesthetic preferences, with different motivational mechanisms coming into play. (I believe that to be a truth to be confirmed scientifically.)

    Morality is a complicated phenomenon, though, and as with most natural phenomena, we may actually discover several valid senses of the word.

    In particular, I think that there are at least three scientifically interesting senses of the word moral:

    1. Involving game-theoretic situations where a distributed conflict-suppression and cooperation-promoting mechanism would be advantageous, especially for avoiding commons problems.

    2. Involving the actual distributed control mechanisms evolved for that purpose, but maybe not actually promoting those evolutionary ends. (E.g., something may be “moral” in the sense that we use moral motivations and reasoning, but not in fact promote the general well-being.)

    3. Being rationally tenable, in light of 1 & 2. (E.g., an action may be moral in the sense that the actor believes it to be moral, using moral drives and reasoning mechanisms, but be mistaken; moral reasoning is mostly reasoning, and people often make mistakes, e.g., by being mistaken about whether there’s a God and how that does or doesn’t matter morally.)

    I think that there are two basic kinds of moral drive evolved into us:

    1. a capacity for obedience to moral authority, i.e., people we trust to know what’s right and wrong better than we do, and

    2. a capacity for caring about the general well-being, not just our own.

    I think there’s natural variation in how much people are motivated by each of these things. (And people change over time in this respect, one way or the other.) To some people with more or less authoritarian personalities, #1 is more important than #2. To others, #2 is far more important than #1.

    Often, obedience to moral authority reduces to (and is justified by) a concern for the general well-being—people may trust moral authorities that they perceive as promoting the general well-being, and distrust those they perceive as doing something else. (E.g., being selfish or deluded.)

    It may still be true that some people are morally motivated mostly to be obedient to moral authority, and much less to promote the general well-being. If those people find that their moral “authorities” are bogus—e.g., cease to believe in a righteous God guiding their righteous moral leaders—they may be at sea, morally.

    I think that the large majority of people are not that way. If convinced that there’s no divine moral authority, they’d be able to reason morally as the rest of us do, and recognize the primacy of promoting the general well-being. (They might also want to be obedient to an irreducibly good moral authority, but accept that there just isn’t one.)

    I think Harris is getting at several important points, less clearly than I’d like:

    1. Morality is certain kinds of things and not others, scientifically speaking. It has structure and function that can be described scientifically, whether you personally are motivated by it or not. (An amoral anthropologist from Mars could figure out what moral “oughts” are, and often which things would and wouldn’t count as moral oughts, as well as many “oughts” that are plainly mistaken in light of scientific fact.)

    2. Morality is generally supposed to be “true,” or at least not false—e.g., people generally recognize that whether a moral authority’s dictates could be morally binding depends on whether that moral authority actually exists and says those things. (That’s why it’s so important to fundamentalist homophobes that God is real and the Bible is true.)

    3. People’s moral motivations may differ somewhat, but often the differences don’t matter in terms of rational tenability, because facts undermine some of their motivations but not others.

    In particular, variation in motivation by obedience to authority vs. the common good is generally reduced if you can show that there’s no satisfactory authority; you can focus on promoting the general well-being.

    (For example, maybe some authoritarian fundies are innately and irreducibly obedient to their purported jealous, vengeful god who hates fags, but sane people can agree they’re wrong to believe in that, so their homophobic preferences would have to be justified in terms of promoting the general well-being, without question-begging assumptions like Gay Sex is Bad Because God Says So. And of course, we should try to talk the fundies out of believing in their nonexistent moral authority.)

    I think Harris is right that there’s something natural about that, which is scientifically explicable. Morality is a particular kind of thing, evolved for particular purposes, that isn’t just arbitrary and subjective. It has a natural domain (conflict vs. cooperation, selfishness vs. altruism) and a natural valence (pro-cooperation and altruism). And it has natural methods—we’re evolved to reason about morality, and refine our moral judgments, using mainly regular old reasoning.

    We can therefore talk about moral premises being wrong—if you think it’s irreducibly moral to be irreducibly selfish, for example, you don’t get it. That isn’t morality. Or if you think it’s moral to wantonly inflict suffering for no good reason, you don’t get it—that’s immorality, not morality.

    And if you think the highest moral good is obedience to God, irrespective of the general well being, that may be naturally “moral” in a certain sense—maybe that’s a mode of moral functioning we’re evolved to be capable of, too—but you’re still morally mistaken, and therefore morally wrong, because you have a faulty premise.

  44. slayersaves89 says

    Ben Goren and Rooter:

    I’m right there with you from a practical standpoint. This is not what is being debated however.

    When someone is said to be “wrong” scientifically what is meant is that there is some factual aspect of the universe about which they are mistaken. Just saying that I ought to be moral out of self interest also carries the hidden (or not so hidden) assumption that I should even be interested in myself. Someone who says that he wants to commit suicide after blowing up the entire earth is not necessarily mistaken about any facts. Of course someone with those beliefs is probably likely to be delusional, but it is not a prerequisite for believing in something like that. It is possible in principle. A person like that could, in principle, be incredibly rational in the way he goes about this kind of thing. Indeed, they would have to be; it would take quite an intellect to blow up the entire world.
    To use an example from Russell Blackford.
    A psychopath who is torturing me does not necessarily believe I am not in pain. He simply does not care that I am in pain.

  45. Thanny says

    I’ll wait until I read his book on the topic before judging, but I think all of you are missing a fundamental point.

    Namely, that assumptions are required for *every* intellectual endeavor we indulge in. People are often fond of claiming that only in mathematics can something be absolutely proven, but it’s an empty claim – every proof ultimately relies on axioms, things taken as true but unproven.

    Science itself requires a fair set of assumptions, such as the rather prominent one that we aren’t simply imagining reality – there really are other people, and when they say they found the same result I did, it’s not me just talking to myself in what I imagine is my mind.

    Does Harris begin by assuming something a bit too far down the field? That remains to be seen.

    Many of you act as if values come from the ether. They do not. They come from human brains, and as such, they are in principle discoverable by science. It could easily be that Harris’ only real assumption is that human brains are enough alike for a core set of values acceptable to all non-aberrant people to be discerned.

    Most everyone seems to see that deriving morality from values is clearly open to a scientific approach, so it really comes down to whether or not universally acceptable values are to be found. What little I know of research into this area suggests an answer in the affirmative.

  46. Tulse says

    We know how to quantify meaningful traits of well-being: infant mortality, lifespan, access to clean water, education level, number of social contacts, etc. These aren’t abstract concepts like good or happiness, but rather concrete observables.

    Also observable are things like “supportive of revolutionary socialism”, “avoidant of associating with those who promote unrest”, “time spent in church”, “number of premarital sex partners”, etc. etc. etc. Many would argue that those things are also concrete observable traits of well-being. Given that, what is your objective argument against that position?

    Also, how do you do your calculus to add all those “concrete observables” into one measure? What is the objectively-determined formula? Is it more important to minimize infant mortality or maximize social contacts?

    And presuming you can come up with such a formula from objective, scientific principles, what happens if it turns out that your “well-being” measure can be maximized for a society as a whole if some particular people are treated terribly (such as made slaves or used as unwilling organ donors)? Do you now have to justify not just traits of individual well-being but also traits of societies (such as “justice” or “egalitarianism”)? How do those enter into the equation? How are those justified?
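
    Just to make the aggregation problem in those questions concrete — the societies, observables, and weights below are entirely made up for illustration — here is a minimal sketch of how two equally “objective”-looking weightings of the same concrete observables rank the same two societies in opposite orders:

    ```python
    # Hypothetical societies, each scored 0-1 on two "concrete observables".
    societies = {
        "A": {"infant_survival": 0.99, "social_contacts": 0.40},
        "B": {"infant_survival": 0.90, "social_contacts": 0.90},
    }

    def wellbeing_index(scores, weights):
        """Weighted sum of observables; the weights are the contested part."""
        return sum(weights[k] * scores[k] for k in weights)

    mortality_first = {"infant_survival": 0.9, "social_contacts": 0.1}
    equal_weights   = {"infant_survival": 0.5, "social_contacts": 0.5}

    for name, scores in societies.items():
        print(name,
              round(wellbeing_index(scores, mortality_first), 3),  # A: 0.931, B: 0.900 -> A ranks higher
              round(wellbeing_index(scores, equal_weights), 3))    # A: 0.695, B: 0.900 -> B ranks higher
    ```

    The observations are fixed; only the weighting changes, and the ranking flips. Picking the weights is precisely the step that no measurement settles.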

  47. rooter says

    slayersaves89 – right. all this requires an assumption (danger danger) that the goal is to lessen human suffering, and increase our well being. I believe that is a starting point we’re all at.

    I think sam assumes that is everyone’s goal and moves from there. i don’t think this is a point we’re debating at all.

  48. Paul W., OM says

    A reasonably rational psychopath or sociopath often has a reasonably good grasp of right and wrong, but does not care.

    That doesn’t mean that they don’t know torturing or ruthlessly exploiting people is wrong when they do it. They just don’t mind doing things that are wrong.

    The existence of sociopaths doesn’t undermine what Harris is saying, near as I can tell. Some have broken reasoning, and don’t know right from wrong. Others do know, but have broken motivations, and don’t care.

    That doesn’t mean that they’re not broken moral beings, doing things that are wrong, in scientifically explicable terms that correspond well to folk psychology and folk morality.

    Morality has a natural structure and function, and they are broken units that don’t function properly. A sociopath or psychopath who enjoys being a broken unit isn’t a counterexample to that. There’s no necessary contradiction between being factually correct and morally wrong.

    On the other hand, most people are not sociopaths, and most “deep” moral disagreements do in fact hinge on errors of fact, e.g., about the existence of a God and what that god morally prefers. (And whether it actually makes sense to consider that morally binding anyway, a la Euthyphro.)

  49. Mr T says

    Tulse:

    Also, how do you do your calculus to add all those “concrete observables” into one measure?

    How do you do that kind of calculus whenever you make what you consider to be a morally good decision? I’m not asking how you should do it — just for a description of how it is done.

    I assume you also base your morality on observations of the current state of the world (as necessarily limited as those observations are), and predictions of possible changes to that state after your decision(s).

  50. Paul W., OM says

    For anyone who is under the impression that Sam Harris does not claim to have overturned Hume.

    Here you go.

    Note the subtitle: A response to David Hume (or the Hume of popular imagination).

    I think the parenthesized phrase may be important.

    At least, Harris is not claiming to be able to make a rational sociopath into a properly moral agent by sheer force of rational argument, or anything like that. He acknowledges that Hume is right about that, IIRC.

  51. ostiencf says

    Great post PZ. Though there is something I want to bring up and briefly comment on. I think a major issue here is the distinction between morals and values. When people decry “moral relativism” they assume that morals and values are one and the same. Moral relativism only means that one recognizes that morals vary widely from culture to culture, and that what is important is how we evaluate those moral claims. To put it another way, we cannot conceive of a scientific morality, like Harris wants, but we can make various value judgments which are less strict than morality, but which can draw a line in the sand as it were. Thus we can constantly revisit those values and re-evaluate them, perhaps drawing a new line in the sand.

    So one can be amoral but still create ethical models of action. Those ethics and values can be just as strong, such as a commitment to feminist liberation, economic freedom, anti-racism/sexism/LGBTIQ-phobia/nationalism etc. There is just no delusion of moral certainty. So Sam is partially correct, but he conflates morals and values, trying to justify his morals with scientific rhetoric (a misuse of science which has already been commented on). He chooses the former, in its certainty and universalism (even though he claims it is not universal, he is just distinguishing between various universalisms we can choose), over the latter, which is a constantly reflexive making of values and ethics.

    Again, great post.

  52. irenedelse says

    One problem here, as in most discussions about philosophy, is that everything you mean to say hangs on what definitions you have in mind. And what examples (and counter-examples) you use.

    Someone above posted:

    One way to give well-being a measurable value is to consider the stability of a social system over time.

    But how to define “stability over time”? How long a time? What degree of stability? The Chinese empire was remarkably stable, in a way: its culture and institutions survived just about everything, from civil wars to foreign invasions. One can even argue that today’s China is a continuation of the Empire system under other names. But that society wasn’t any more just or equitable than other, less durable ones.

    In fact, if we look at the question from a systemic point of view, the use of “durability” as a sign that a social system is “doing well” is problematic. Yes, a high degree of homeostasis in a system can be a sign of success… for the system! Not necessarily for the elements of the system: here, the individual human beings, who may be in the situation of people trapped in an abusive relationship; they suffer, and make others around them suffer, a lot of the time, because they’ve always known abuse and can’t even imagine living differently. This system is “successfully” stable, too, and may even expand its range if neighboring systems are weaker. But it’s not one destined to maximize the well-being of the people inside.

  53. Ben Goren says

    slayersaves89 wrote:

    When someone is said to be “wrong” scientifically what is meant is that there is some factual aspect of the universe about which they are mistaken. Just saying that I ought to be moral out of self interest also carries the hidden (or not so hidden) assumption that I should even be interested in myself.

    Actually, it does require an assumption, but that’s not it.

    It assumes that you have a particular goal or set of goals in mind. In intelligent life, that is invariably a multi-layered hierarchical set of goals.

    Except in rare circumstances, the most basic goal is nearly always self-preservation. Once one ceases to exist, one can no longer affect the outcome of events (though, of course, the consequences of events while one is alive may continue to persist).

    Self-preservation encompasses a vast array of what morality is concerned with, for self-preservation nearly always involves access, in one form or another, to limited shared resources. In human society, nearly all of that can be approximated by wealth and social standing.

    The other nearly universal goals are propagating one’s genes, avoiding pain, and enhancing pleasure, all three of which are evolutionary proxies for self-preservation. And, since those are all almost entirely dependent on self-preservation, they’re not necessary for a first approximation.

    So, the only edge cases we need consider are those where self-preservation is not primary. Those fall into easily recognizable categories, such as the suicidal, the soldier who falls on a grenade to save his mates, and so on. Each of those cases can be scientifically examined to determine the best course of action.

    For example, the Muslim suicide bomber blows himself up along with the innocent victims around him in an effort to guarantee eternal access to 72 raisins. We know this act is immoral because there aren’t 72 raisins awaiting him; he is harming himself and others for no gain whatsoever. His goal is not accomplished.

    Sociopaths? They don’t care about others, but they almost always care about themselves. Their inability to do the instinctual math that lets them make moral decisions puts their own interest in self-preservation in severe jeopardy.

    I hope this is enough to point you in the right direction….

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  54. Jack Flynn says

    I greatly sympathize with Sam Harris’s thesis that science can in fact be a guide to making moral choices. I also appreciate the skepticism of Sean Carroll and PZ; it reminds me of the skepticism that surrounds Dawkins’ idea of memes and sociobiology in general. However, I think science is our best shot at coming to a proper understanding of morality. GO SAM HARRIS GO! :-)

  55. Pierce R. Butler says

    C’mon, don’t y’all realize that Our Glorious Leader Comrade Sam is directly challenging the base canard that atheists lack morality and a vision for the world?

    All True Atheists™ ♥ hardcore scientism! Don’t hold back – there are stereotypes to fulfill!

  56. Paul W., OM says

    ostiencf,

    It sounds like you’re falsely separating morals, ethics, and values. They’re all part of the same moral system.

    Moral rules are generally justified by underlying values, and most values are derived from more basic values and (otherwise) value-free facts.

    (Often people don’t consciously know what their most basic values are, and how different values relate. That’s what philosophical puzzles and thought experiments are meant to clarify.)

    See my earlier post for an example. When push comes to shove, most people value the general welfare more highly than obedience to moral authority. (Moral authority is frequently justified in terms of its promoting the general well-being.)

    More specific rules are generally justified in terms of those very basic things—authority and/or general welfare.

  57. rudi tapper says

    PZ – I think you’re missing the point. You’re not wrong, but your argument seems analogous to saying – “We can use science to tell us WHAT foods are healthy for us to eat, but science can’t tell us WHY we should live healthy lifestyles”.

    In other words – it’s a somewhat spurious and irrelevant objection. Sure, science can’t tell us why we should be moral. But we can clearly use scientific evidence and rational thinking to maximise our mutual wellbeing, and we already do this in the field of medicine, at least in principle.

    In my opinion – and I don’t think Sam has put it like this himself – Sam is merely proposing an extension of the philosophy motivating medical science, so that it encompasses general societal wellbeing rather than simply curing diseases.

    To ask WHY we should do this seems to me rather like asking why should we be nice to each other rather than murdering each other. To which the answer is – clearly – why not?

  58. Etruscan says

    I think the problem with Sam’s argument is he’s just poorly advocating it. He could come and say what others are trying to say for him (e.g. #58) but is instead muddling his point by trying to propose a substitute for existing religious dogma.

    The objective shouldn’t be to substitute science for dogma. It should be to drop the dogma, recognize the limitations of our objective ability to reason, pick some reasonable societal goals, and work from there.

  59. petria says

    I find this discussion fascinating but slightly disturbing. Surely the atheist community should be cheering to have a rational response to the old ‘Well, science can’t give you morals’ statement that the xtians so often respond with. We know that, strictly speaking, it is true, but it is also true that using religious doctrine is actually worse. I think that we can and should respond with something like, ‘the scientific process can begin to determine the most healthy moral outcomes’.

    I was so pleased to hear that Richard Dawkins has changed his response to one more in alignment with Sam Harris’ thinking. I think this is so important despite the obvious plethora of wrinkles to iron out. It’s a fantastic starting point and I hope that it helps the secular community to empower more people in its bid to prove that religious-based morals are abhorrent and do a lot of harm.

    The issues that Sam has highlighted desperately need secular intervention not high philosophical debate about the ‘is/ought’ problem. It worries me that so many people have backed away from Sam’s point because of pedantry. (is this harsh?)

    I believe that the abstinence only sex education programmes in the US have led to basically no change in the rates of teen pregnancy or teenage sex. The church will happily deny the results and suggest that everyone pray more. The scientific approach would be to analyse the resulting statistics and determine why the programme was flawed. Such an analysis would have to stray into moral territory and so it should.

    The harm that bad moral guidance causes can be quantified by outcomes such as depression, addictions and often death. I don’t think quibbling about moral relativism helps us solve these problems. Let’s get on with it!

  60. viggen says

    I agree with Harris entirely that the oppression of women is an evil, a wrong, a violation of a social contract that all members of a society should share. I just don’t see a scientific reason for that — I see reasons of biological predisposition (we are empathic, social animals)

    But there is a scientific reason. Culture is fundamentally a genetic algorithm by which humans, who evolve slowly, can respond as a unit to environmental pressures without evolving biologically. Genetic algorithms operate by culling diversity to fit a niche, just like natural selection. Since diversity here is in potential sources of ideas that can be communicated between individuals and thereby implemented, a culture that suppresses half its population’s ability to contribute to adaptation has effectively halved its base diversity.

    This may not seem like much, but keep in mind that one person in a population of a billion was responsible for Special Relativity… maybe one pregnant woman working at a stove with her shoes off who is alternately ignored and beaten by her oppressive husband has the grain of an idea that we haven’t yet thought of that could change us all.
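
    In code, a back-of-the-envelope version of this point, with entirely made-up numbers (the breakthrough rate and the population size below are assumptions, not data):

        # If rare ideas arise independently in individuals, silencing half the
        # population halves the expected number of breakthroughs per generation
        # and sharply cuts the chance of getting even one.

        def expected_breakthroughs(contributors, p_breakthrough):
            # The expected count is linear in the number of people allowed to contribute.
            return contributors * p_breakthrough

        def chance_of_at_least_one(contributors, p_breakthrough):
            # P(at least one) = 1 - P(none), assuming independence.
            return 1.0 - (1.0 - p_breakthrough) ** contributors

        p = 1e-7                   # a once-in-ten-million idea (invented rate)
        everyone = 2_000_000       # whole population may contribute
        half = everyone // 2       # half the population silenced

        print(expected_breakthroughs(everyone, p), expected_breakthroughs(half, p))
        # 0.2 vs 0.1
        print(chance_of_at_least_one(everyone, p), chance_of_at_least_one(half, p))
        # roughly 0.18 vs 0.10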

  61. slayersaves89 says

    rooter:
    I think I addressed that in my first post. In practice the fact that Sam Harris has failed to establish his meta-ethical position is not a problem because we happen to mostly be at that starting point. Sam Harris is claiming to have established a meta-ethical position which he simply has not. His set of values are useful for those who agree (such as me and you) and his prescription to use science to implement those values in the real world is a very good one.

    However his claim to have bridged the gap between an objective “is” and an “ought” is, as far as I can see, incorrect.

    Ben Goren:
    You don’t seem to be grasping what I am suggesting.
    Merely pointing out that many people do, in fact, have these sets of values, does nothing to establish the objective truth of those values because values are not the types of things which are true or false. You and I happen to value wellbeing (presumably). What if we met someone who instead of wanting to maximize wellbeing, wanted to maximize the color yellow. He felt that all human resources should be devoted to making everything in sight reflect that most special of wavelengths, 580 nanometers. He went about doing that in the most scientifically valid and effective way possible. What exactly is this guy wrong about? You could say that he is wrong because he does not care about himself, however what does not caring about himself make him mistaken about? Believing that “72 raisins” will meet you in heaven is a claim about what will happen when you die. It is true or false. This is simply not so when it comes to what we ought to do, or the things we ought to value. It is not a claim, it is a point of view.

    Just because there does not happen to be anyone who wants to paint everything yellow does not mean you could make a rational argument against someone who did. If you think I am simply nitpicking, that’s fine, but that is what meta-ethics is. Peter Singer’s version of practical ethics is more my style as well. But when we are dealing with meta-ethical questions it is simply a different ballgame. In fact Singer has a chapter on this very issue in his book “Practical Ethics”. Give it a read if you have the time.

  62. ostiencf says

    Paul,

    I disagree, the distinction of morals, values and ethics is important for philosophy. Morality generally makes a Truth (big T) claim (that is a transcendent immutable Truth). However, a focus on values, would mean to gather various truth (small t) claims and from those derive a relative value system and modes of ethical action that is far more mailable then a system based upon a Truth.

    Your claim that most people will value general welfare is also a claim to Truth. To borrow an image from Nietzsche: should the bird of prey see itself as evil in accordance with the morals of the lambs it slaughters? The claim to “value” general welfare is a moral claim, transcendent and universal.

    I’m not saying that morals cannot have values (they must have them, and they are a place of disagreement, but I would refrain from saying that they may stem from rational errors, as that implies an a-historical rationality and Truth) or that morals cannot lead to ethical models (they often do), but my focus on values and ethics tries to escape from Truth claims and thus from morals.

    Of course there are branches of philosophy dedicated to this question and it would do little justice to them to go on at length here. It seems we are on different areas of this philosophical debate (me on the continental and post-modern area and you perhaps with a more analytic philosophy bent) and may just have to agree to disagree, I don’t think an internet debate is going to convert a continental or a post-modern philosopher to an analytic one, or vice versa, and should that even be the goal?

  63. Ben Goren says

    viggen, it doesn’t even have to be that the oppressed virtually-enslaved housewife won’t be contributing her Grand Unified Theory. By keeping her barefoot and pregnant, she’s not being anywhere near as productive a member of society in general as she otherwise would be.

    She could, for example, be running a small business out of her home. If she’s one of the better seamstresses in the village, she could be doing that for everybody, bringing in a nice tidy sum and freeing all the other women from that chore.

    And with the money she makes from sewing everybody’s clothes, not just her own family’s, she could send her daughter to school where she could learn not how to sew but how to make sewing machines.

    And the money that the daughter earns running a sewing machine factory should be plenty to send the granddaughter to college to become an engineer and learn how to make better sewing machines — along with planes, trains, automobiles, and Mars rovers.

    At the same time, all the other mothers are hopefully doing something similar. Maybe the one gets her husband to build a bigger oven so she can bake bread for the whole village, so her daughter can become a chef and her granddaughter a biochemist. You get the picture.

    Instead…well, instead, everybody is forever mired in muck and misery, when they could all be living like royalty.

    Those Taliban men oppressing women? Their desires are to be rich beyond imagination, to live like royalty. They could have their dreams fulfilled, if only they’d stop being stupid comic-book cavemen bonking women on the head with a club and dragging them away screaming. And, oh by the way, they’d be behaving morally (and that’s no coincidence).

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  64. arjbooks says

    Carroll wins hands-down… it’s hard enough to define “science” in any consistently meaningful way; but it’s impossible to define “morality” in an objective, empirically-applicable fashion… but yeah, folks will have fun trying.

  65. Ben Goren says

    slayersaves89, it’s perfectly fine for somebody to be infatuated with the color yellow. And such an infatuation should indeed drive his moral actions.

    In order to have much success in painting the world yellow, he’ll have to survive a long time in order to do so. And that means not doing anything humanity considers morally objectionable, such as murdering, raping, and pillaging. That part of his moral code, of necessity, will have to be shared with all others who are interested in survival.

    Further, he will undoubtedly need to persuade many people to come to his aid in his yellow cause. That will require money and social standing, both of which are only achievable through moral action. He could be dishonest in his attempts, as too many politicians are, but we see again and again how such dishonesty backfires. In moral societies, it backfires relatively quickly and spectacularly. In corrupt societies, it’s the society as a whole that disintegrates — and it’d be kinda hard to organize a mass collective yellow-izing effort in anarchy.

    Does this help?

    Only if your goals are a perverse form of self-destruction, whether singular or collective, will you come up with a moral code radically different from the one we consider instinctual. And, if such are your goals, you will rightly be opposed by everybody else and condemned as immoral. Even if you do somehow succeed, it’ll be a pyrrhic victory and ultimately little more than a side-show in the grand scheme of things.

    Cheers,

    b&

    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  66. D says

    Ahh, ethics; only slightly less arbitrary than art…

    Richard Carrier gave a talk a while back where he also said you can get an “ought” from an “is,” and he made a very similar-sounding misstep: given the world (the “is” of your situation) and a goal, science can help you figure out what you ought to do to accomplish that goal (the “ought” of hypothetical imperatives). The problem with Carrier’s line of reasoning (as well as Harris’) is that whole “and a goal” bit.

    The business of ethics is to figure out what we ought to take as our goals in the first place. Given any goal – to save humanity from blowing themselves up, or to cause humanity to blow themselves up, or to do anything whatever – we can use science to determine how that goal can be accomplished. But there’s always more than one way to skin a cat, and so determining which of several available courses of action to pursue will also be a matter of (slightly more practical) ethics.

    Here’s a simple example. Suppose I want to go to the park – I just do, it’s a brute fact, I want to go to the park. Now, the shortest route is to cut through the parking lot of the church next to my house, but I don’t want the preacher talking to me about how my house parties aren’t loud enough (true story!). I could take the scenic route and walk the long way around the block, where there are pretty trees; or I could take the shorter but not as pretty route, where there are a couple run-down houses. Given my goal and the situation, which is the best way for me to get to the park?

    Well, if “best” means “fastest,” then I ought to risk a conversation with the preacher and sprint through the church lot. If “best” means “most enjoyable,” then I ought to take a leisurely stroll along the scenic route. If “best” means “quick but not potentially aggravating,” then I ought to walk briskly by the run-down houses. I can, in principle, use science to determine which route as a strategy will make me happiest in my career of park visits, empirically determining which is the best route for someone of my particular psychology. But – and this is the important part – the matter of whether I should be going to the park in the first place is a matter of ethics*. The practical “ought” of how to pursue certain goals is different in kind from the ethical “ought” of which goals ought to be pursued in the first place.

    In short, science can tell us how ends may be achieved given certain constraints, but it cannot tell us whether those ends are worth achieving in the first place.

    * – Unless going to the park is a mere step in service to some higher goal, in which case just mutatis mutandis the analogy for whatever the highest goal is – these “highest goals” are the province of ethics, and cannot be scientifically validated; they’re arbitrarily chosen.
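
    A toy sketch of the hypothetical-imperative structure described above (routes and scores are invented for illustration): once a criterion for “best” is supplied, picking the route is a mechanical, empirical matter; the criterion itself has to come from outside the calculation.

        # Each route gets some made-up measurements.
        routes = {
            "church lot":   {"minutes": 5,  "pleasantness": 2, "aggravation_risk": 0.8},
            "scenic route": {"minutes": 20, "pleasantness": 9, "aggravation_risk": 0.0},
            "run-down row": {"minutes": 10, "pleasantness": 4, "aggravation_risk": 0.1},
        }

        def best_route(criterion):
            # Return the route that scores highest under the supplied criterion.
            return max(routes, key=lambda name: criterion(routes[name]))

        # Three different meanings of "best" yield three different practical "oughts".
        print(best_route(lambda r: -r["minutes"]))                                # church lot
        print(best_route(lambda r: r["pleasantness"]))                            # scenic route
        print(best_route(lambda r: -r["minutes"] - 50 * r["aggravation_risk"]))   # run-down row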

  67. Ben Goren says

    arjbooks, I would define morality as an optimal strategy for achieving one’s goals. Is that objective and empirical enough for you?

    Before you leap to conclusions, read some of my other posts in this thread and (hopefully) realize that people have conflicting goals of differing importance. Of necessity, certain goals (such as survival) are common and of much more importance than other goals. The optimal strategy for such common goals tends to be uniform and closely mirrors that which we instinctually and emotionally feel. Therefore, almost everybody will share a core set of morals while each will have relatively insignificant (or else sub-optimal) variations on that basic theme.

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  68. Ben Goren says

    D wrote:

    Unless going to the park is a mere step in service to some higher goal, in which case just mutatis mutandis the analogy for whatever the highest goal is – these “highest goals” are the province of ethics, and cannot be scientifically validated; they’re arbitrarily chosen.

    It’s actually this parenthetical caveat that points to the identification of an objective universal morality.

    In order to go to the park, you must be alive, and you must be a free member of society. In order to attain those goals, you must be a well-behaved moral member of society, and society as a whole must be moral and healthy.

    That is, to go to the park, you must not go on a murderous rampage; you must pay your taxes; you must eat well and exercise; and so on.

    Pick any other goal of any sort of consequence, and you’ll discover that it is almost guaranteed to be equally dependent on your being a productive — and therefore moral — member of society.

    If I may, I think the fundamental problem everybody is missing is that morality is, of necessity, a bottom-up process, just as Evolution itself is. Virtually everybody seems to think that morality is something applied top-down, like the proverbial skyhook, that there are absolute external morals that tell you what to do. That’s religiously-inspired upside-down thinking.

    Instead, you have goals that dictate your morality, and those goals are dependent on more basic, more primitive goals that ultimately trace back to survival. There are only so many ways to survive — you gotta eat, and so on. Therefore, there are only so many fundamental ways to be moral.

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  69. Pikemann Urge says

    As far as I can see, you are right, PZ. You understand moral philosophy very well, judging from what little I know.

  70. D says

    @ Ben Goren (#69, 54, and others):
    You make a great case for a humanist ethic. I like it, I really do. And I’m also a great big fan of your “ethics as bottom-up” idea (I’ve written a great deal in a similar vein, myself). So, OK, we can set aside the folk notion of a conventionalist ethics (i.e., a specific ethics upon which all must agree and to which all must adhere), and start talking about it at the individual level, as you have been doing (but which, I hope you realize, will talk right past the implicit assumptions of a great many).

    I’ll propose an ethics that is perfectly compatible with, yet directly opposed to, yours. You take the “is” from “we are here” and “how we got here” and “what has worked” and “what’s likely to work.” I shall start from the other end, taking my “is” from “we are temporary” and “how we cease to be” and “how things fail” and “accepting the inevitable.” Since I’m going to die and there’s nothing I can do to prevent that, I take it as most important to accept that fact with a minimum of fuss. Since my death will cause grief to those who I care about, I shall take care to avoid it as long as reasonable. But since I cannot prolong my life indefinitely by abstaining from destructive activities – and what’s more, even pleasurable activities go smooth after a while – I shall engage in those pleasurable destructive habits which will permit me a young death while at the same time living a full life. I’ll have fun, even though sometimes fun costs ya (as Bill Maher says).

    You see, my intellectual capacity to enjoy things increases, to a point; my emotional capacity to enjoy things decreases, after a point. With that in mind, I shall learn as much as I can about pleasure, pursue as much as I can while able, and gracefully step out of life when I no longer feel like living it. Perhaps I shall wander off into the woods to be eaten by a bear; it sure beats waiting to die in a hospital. And so I keep the party going as long as I feel like it, while also contributing to society to help as many others as I can to have the opportunity to do the same. My zest for life is fueled entirely by my acceptance of death, and when I go to my grave I go with open eyes and a happy heart.

    Yours is an ethics of life, mine is an ethics of death. You focus on making it so people can live as they please, I focus on making it so people can die happy with how they lived. We differ only in emphasis and perspective, which side of the coin we think of first and foremost; yet in just about any situation, I bet we’d agree on the right-ness or wrong-ness of this or that. Individual freedoms, the importance of informed consent, the necessity of acknowledging verifiable facts when making decisions and forming beliefs, I bet we’d agree on all sorts of practical ethical matters!

    But now I’ll turn things on their head and show how a similar shift in emphasis & perspective, for someone in your own camp, can turn out a cynical (and, I would say, downright evil) mindset. I’ll start by simply acknowledging all your points about the evolution of altruism, the importance of sustainability, and all of that. Now just game the system in order to achieve the maximum short term benefits for yourself with the longest-range sustainable policy necessary to get away with it. I mean, sure, we hear about corrupt politicians all the time – but we only hear about the ones that get caught. So the rule is, just don’t get caught; even better, don’t do things that are worth catching.

    The trick is to actually do good, but to do better for “you and yours.” There is no shortage of opportunities to do good for others, just conscientiously choose those which also directly benefit you, your close relatives, and your offspring. This will, of course, require that you involve yourself in everybody else’s interests, all the better to influence them. A high-profile career in business or politics is ideal for this. Also, have as many offspring as possible (perhaps by donating gametes, and having an “in” to facility records so that you can track your offspring throughout their lives without anyone else knowing that you’re related), and give them small opportunities – reward those who do well by providing increasingly larger opportunities, and make sure you always benefit somehow. Teach the best of these what’s “really going on,” and set up a dynasty of sorts: a network of invisible nepotism which will generationally take over the world, while maintaining the illusion of social mobility. Only the one at the top will ever know the whole dishonest story, and being vetted for one’s capacity to perpetuate the whole setup is one of the conditions for being informed in the first place, so it’s self-correcting.

    This way, you get genetic variation, you’re selecting for success in the real world, genuine competition still contributes to the churn of natural selection, and you’re doing something sustainable in the long term because it really does work for everybody – it just works best for you. You might not succeed, true, but that would be the best way to play the evolution game: to set things up so that everyone benefits absolutely, but your own lineage benefits relatively more. Accumulate small advantages over time and in the long term – that’s how it works in nature, that’s how you should do it. It involves being conspiratorial, manipulative, cynical, oppressive, opportunistic, and staying just a couple steps ahead of everyone else. Not excessively so in any of these cases, since you shouldn’t let your greed outstrip your grasp, but just enough to get ahead while making it look like you’re just getting along.

    Of course, it can be hard to see how this is “bad,” since you’re actually doing a great deal of good for everyone in absolute terms. But you’re seizing opportunities for your own lineage and invisibly denying them to others down the line, while perpetuating an illusion of liberty and equality. Everyone who’s trying to do something like this on instinct is still playing the natural selection game, but what I’m saying is that a fully informed and rational person could have an ethic of playing it better than anybody else in the history of ever.

    What’s more, to obtain the best results for everybody, this is exactly what everyone “ought” to be trying to do, since whoever isn’t frantically scrambling to do this will get left behind in the dust. To my mind, this reduces life to a rat race par excellence, with a consolation prize of bread and circuses for those unlucky enough to be born outside your genetic lineage, and no room for people like me who prefer to take things at a leisurely pace and enjoy the scenery. It’s optimization to the point of barbarism.

    But that’s just how your morality could go wrong; a moral platypus, as I call it. For those practically-minded ethicists such as you and I, this only demonstrates that working from “is” to “ought” can output unpalatable imperatives, depending on your emphasis and perspective. What most people mean by “ethics” is “how we ought to live, full stop,” or “what values we ought to have, full stop,” independent of the “mundane” concerns which are the whole thing to us. We’re almost literally speaking a different language from them. Our problem is no less real, however, and it amounts to which emphasis and perspective we ought to adopt in our interpretations of the objective facts. And when you answer that question, then comes the question of why we should do it that way. And so on, ad infinitum – plenty of things work which you wouldn’t call good, so working can’t be your criterion, and “what you call good” is arbitrary (because while you may base your chosen values on facts, which facts you choose to base them on at root is still an arbitrary choice, and which facts you have available to you is a matter of chance). You can’t get an irreducible and universal “ought” from the objective “is” all by itself, because first you need a value to get an “ought”; and you can’t make a case for one value over any other without more values still.

  71. Tulse says

    This may not seem like much, but keep in mind that one person in a population of a billion was responsible for Special Relativity… maybe one pregnant woman working at a stove with her shoes off who is alternately ignored and beaten by her oppressive husband has the grain of an idea that we haven’t yet thought of that could change us all.

    I’ve seen a similar argument used in another moral context involving pregnant women, namely abortion. I presume you would feel the same way about the potential of all embryos?

    I would define morality as an optimal strategy for achieving one’s goals.

    Surely you don’t think that all goals are actually morally relevant, do you? Is the optimal strategy for baking brownies literally a moral issue?

    In order to go to the park, you must be alive, and you must be a free member of society. In order to attain those goals, you must be a well-behaved moral member of society, and society as a whole must be moral and healthy.

    Or you could be Kim Jong-il, or Pol Pot, or Stalin. If one is powerful enough, the qualities you outline would not be necessary. Does that mean that absolute dictators are beyond morality?

    I’m rather bemused by all the apparent attempts to reconstruct morality as biological necessity. Didn’t we learn our lessons with sociobiology? And isn’t this just the Euthyphro dilemma in a different guise? If it turned out that evolutionary biology irrefutably demonstrated that eating toddlers increased overall species survival, who here would actually say “Well, I guess I was wrong about morality!” and start chomping on rugrats?

  72. Conversational Atheist says

    @peterwok “@Rooter: You miss the point – I’m not advocating racism, I’m pointing out in the most brutally obvious way I can that “well-being” is a subjectively defined term.”

    Harris is arguing about the well-being or suffering of conscious beings — so your example of enslaving fellow conscious beings becomes much harder to make. Even arguing that a certain race was subhuman doesn’t make your case, because their conscious suffering (Harris was careful not to limit it to human suffering) is what is relevant.

  73. John Morales says

    D @71, regarding the pragmatic side of your comment, it seems to me you’re putting forth enlightened self-interest as ‘evil’ purely because it’s self-serving; your example seems like far too much work for my liking, when you speak of “A high-profile career in business or politics is ideal for this”.

    I dispute this on the basis of cost-benefit. :)

    I prefer guiding principles to rule-sets.

  74. D says

    Is the optimal strategy for baking brownies literally a moral issue?

    – Tulse, #72

    Yes. Since well-baked brownies taste better – i.e. produce more pleasure – than poorly-baked ones, it most definitely is a moral issue. Some moral issues are more pressing than others. I think my comfort is more important than the survival of a few mosquitoes and the potential offspring they could have by feeding on me instead of getting swatted. I would also rush out the door to save a person’s life if it meant ruining a pan of otherwise perfect brownies, but I would save the brownies if it meant my friend would be left to tend to her stubbed toe all by herself. No offense.

  75. D says

    @ John Morales (#75): Shit, that got way longer than I wanted it to, now that I look at it. OK, three-sentence limit.

    I don’t object to the “enlightened self-interest” part, I object to the deliberate attempt to keep others “in their place.”

    A huge cost to you, personally, to secure a self-perpetuating benefit to your descendants is totally worth it if you take the long-term view, and isn’t that what planning for the future is all about?

    Preferring guiding principles to rule-sets is arbitrary; that which maximizes [whatever it is on which you base your morality] might be a self-effacing system, e.g. we might need to think and behave like deontologists in order to maximize utility.

  76. Timaahy says

    “Can we develop ‘ought’ from ‘is’?”

    I think the question only makes sense if you accept the concept of free will. Without free will, our actions, and those of everyone and everything in the universe, are determined by a very complex matrix of interacting factors over which we essentially have no control. So, if there is no free will, whether we “ought” to do anything at all is irrelevant. We will do what we will do.

    For the record, I think free will is a crock of shit. I also very much admire Sam Harris, and I am interested to see how he has incorporated free will into his “ought from is” framework (I haven’t yet read his articles on it, but I am assuming he has considered free will in there somewhere).

  77. Ben Goren says

    D summarized himself:

    A huge cost to you, personally, to secure a self-perpetuating benefit to your descendants is totally worth it if you take the long-term view, and isn’t that what planning for the future is all about?

    The question is whether you are attempting to maximize your effectiveness in absolute terms or relative to others.

    And, once again, in that light, it should be obvious that maximizing benefit to yourself relative to others is ultimately self-defeating, for you are, exactly as the armed robber, wasting resources on harming / not helping others and protecting yourself from the others who are conspiring against you and yours. Even if you’re being gentle about it, you still gain more by abstaining from destructive conspiracies.

    Timaahy wrote:

    I think the question only makes sense if you accept the concept of free will.

    “Free will” is a meaningless term.

    If the universe is deterministic, there is no such thing as free will, because each action is the direct result of some other action, and so on ad infinitum. This applies even if the events are chaotic; in this context, that simply means it’s too complex to compute.

    If the universe is random, there is no such thing as free will, because outcomes are determined by random happenstance. For whatever range of possibilities exist, one or another outcome might occur, but with nothing guiding which one actually happens. (Curiously enough, random events are much easier to compute in the aggregate than deterministic ones.)

    If the universe is a mix of both, there still is no such thing as free will, for some things happen for random reasons whilst others are the result of an inevitable chain of events.

    One might propose that free will can be the product of an external spirit influencing the actions of the physical universe, but such a proposal is most naïve. Is this spirit realm deterministic, random, or a mix? Whichever, it is clear that spirits, even if they were to exist, have no more free will than we do.

    What matters to me is that I believe I have some degree of choice in my actions…though, to be sure, it seems I have less choice with each passing day. After all, once I learn something, what real choice do I have about whether to apply that new knowledge?

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  78. Kel, OM says

    I think the question only makes sense if you accept the concept of free will.

    Perhaps, though then it’s a matter of defining what free will is. If one is talking about making choices representing one’s own desires, then free will does exist. We don’t have the power to violate the laws of physics, but we do have a way of determining whether punching someone in the face is a desirable thing to do. The fact that we can think and reason about particular actions, that we can project into the future outcomes and base our decisions on courses we deem desirable, I think, is free will.

    What we don’t have is contra-causal free will, but one doesn’t have to be a greedy reductionist to see there’s a difference between making a choice to punch someone in the face with and without a gun to one’s head.

  79. John Morales says

    D @77,

    A huge cost to you, personally, to secure a self-perpetuating benefit to your descendants is totally worth it if you take the long-term view, and isn’t that what planning for the future is all about?

    Nope. Your descendants aren’t you.

    What you describe is a form of altruism – i.e. planning for others’ future, not one’s own.

  80. Kel, OM says

    As to OUGHT from IS, we already do it. OUGHT must imply IS, but IS does not imply OUGHT. Otherwise the OUGHT will always be arbitrary – the difference between doing something because it stems from a desire and doing something merely because it can be done.

  81. vinniehew says

    Harris’s argument is based in part on the meanings of the words ‘ought’, ‘good’, ‘bad’, etc.

    Is it coherent to ask whether we ought to behave in ways that maximize our misery?

    Is it coherent to ask whether it is bad to maximize wellbeing?

    Harris’s argument can be summarized as follows:

    All fields of reasoning are based on axioms. For example, the field of scientific reasoning is based on the axiom that we can make predictions from past experiences. Only an extreme skeptic would question this axiom. Similarly, the field of moral reasoning is based on the axiom that we should pursue the wellbeing and avoid the suffering of conscious creatures. Again, only an extreme skeptic would question this axiom. After all, would it be sensible to argue that moral reasoning should consider the rights and obligations of rocks and other inanimate substances? Would it be sensible to argue that moral reasoning should start from the axiom that misery is to be valued and happiness is irrelevant? Think about it…

    If you are too uncharitable to grant Harris’s axioms of moral reasoning, then you must also throw out the axioms underlying mathematics, science, and logic. As Harris himself has said (I’m paraphrasing): “Who decides that logical coherence matters in science? We do. If that answer is not good enough for you, too bad. By the same token, who decides that conscious experience matters in morality? We do.”

  82. Timaahy says

    @Ben Goren

    I think free will is an idiotic religious concept, developed with bronze age knowledge of psychology, and used purely to ascribe blame to “sinners”. For example, John didn’t rob the bank because of billions of unpredictable prior events which were beyond his control or knowledge, he robbed the bank because he chose to be a bad person, and therefore he’s going to hell. So yes, the term is meaningless, in that there is no evidence for it whatsoever, but the consequences of disproving it beyond doubt (which we haven’t done yet) are huge in the battle to rid the world of organised religion.

    I have been arguing about these issues with an old school mate who is now a Catholic priest, and he maintains that we have a soul, and the soul is where we derive (a) free will, and (b) reason. Since humans are the only animals with souls, no other animals have free will or reason (he says!). So yes, the Christian view of things seems to be that there is an invisible entity pulling our strings behind the scenes. As such, it is a perfectly testable scientific proposition, and if science could conclusively show that there is no such thing as free will (and I believe it will, one day), Christians (or Crosstitutes, as I have started calling them) would have some serious explaining to do. If there is no free will then no one can be blamed for what they do, and there is no one to send to hell. And that would piss them off no end. No doubt they will come up with some airy-fairy rebuttal that makes sense to no one but themselves, but it would at least cause them some short-term discomfort.

    (As an aside… the Catholic “reasoning” above is what leads them to conclude that humans did not evolve, but were created. That is, since we have souls, and other primates don’t, and the soul isn’t a physical characteristic, we cannot have evolved. Seriously… this is what passes for reason in the Catholic Church).

    Like me, you seem not to believe in free will either. So I’m curious… how can you say that “[you have] some degree of choice in [your] actions”?

  83. Timaahy says

    @ Kel, OM

    “…it’s a matter of defining what free will is.”

    I think free will can only be defined as pursuing a course of action that is independent from our current physical state. That is, either our actions are the result of laws of physics operating on the atoms we are made of, or something else. Free will is the “something else”. As a result, I can’t for the life of me see how the concept of free will can be compatible with an atheistic world view.

    “We don’t have the power to violate the laws of physics…”

    No, we don’t, which is why ‘free will’ is an idiotic concept.

    “…but we do have a way of determining whether punching someone in the face is a desirable thing to do.”

    But if we don’t have the power to override the laws of physics, isn’t the “way of determining whether punching someone in the face is a desirable thing to do” also, in the end, the result of the laws of physics? And wouldn’t the same apply to our ability to “think and reason about particular actions…and base our decisions on courses we deem desirable”?

    Also, I didn’t quite follow your statement that “As to OUGHT from IS, we already do it. OUGHT must imply IS, but IS does not imply OUGHT.”

    You are saying, on the one hand, that we already move from ‘is’ to ‘ought’ (“as to OUGHT from IS, we already do it”), but then that we can’t necessarily move from ‘is’ to ‘ought’ (“IS does not imply OUGHT”)…?

  84. D says

    @ Ben Goren (#79): See, this whole “maximizing” thing is where you’re going to lose me, because I’m a satisficer. The difference can be found in toothpaste: a maximizer will consider the ethics of each toothpaste-producing company, their effects upon the world, and so on and so forth, trying to figure out which course of action will most expediently meet their ethical requirements, given the constraints of the world around. The problem is that this requires a full set of information. I, on the other hand, am a satisficer: rather than maximize, I choose my toothpaste based on what will do what I want toothpaste to do, for the cheapest price I can afford, to the best of my ability to tell (which is itself subject to constant scrutiny and doubt). The difference is that a maximizing strategy can always be distracted, more or less, with more information; a satisficer can take action on less-than-complete information, but sacrifices principles for pragmatics. In terms of ethics, I think about what it is that I want to accomplish in life, and then I do that to the nearest approximation that I can reasonably estimate. Maximizing, it turns out, never “works” in the long term because in order to do it reliably, you’ll have to pursue an epistemological vendetta which will more or less leave you paralyzed by doubt. Or, in the short version: you must accept the fact that you can’t be perfect, and you will have to live with your mistakes (conversely, there is no possible way to be perfect, and there is no reliable strategy to avoid all possible mistakes). Maximizing strategies require that you avoid mistakes, which is impossible.

    It turns out that everything, ultimately, is destructive – because nothing lasts forever. IOW, entropy swallows all things. You simply cannot maximize, in the long term. The only strategy that will A) work in the long term, and B) dictate results which can be achieved in the short term, is a satisficing strategy, which requires compromise. But any principled system of ethics requires that you do not compromise, and that is the long and the short of the problem of living well.
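
    For what it’s worth, the maximizer/satisficer distinction can be put in a few lines of code (the toothpaste scores and the “good enough” threshold are invented): the maximizer has to score every option before acting, while the satisficer stops at the first option that clears its threshold.

        def maximize(options, score):
            # Examine everything, then take the highest-scoring option.
            return max(options, key=score)

        def satisfice(options, score, good_enough):
            # Take the first option that clears the threshold; settle for the
            # best seen so far if nothing does.
            best_so_far = None
            for option in options:
                if best_so_far is None or score(option) > score(best_so_far):
                    best_so_far = option
                if score(option) >= good_enough:
                    return option
            return best_so_far

        toothpastes = [("BrandA", 6), ("BrandB", 8), ("BrandC", 9), ("BrandD", 7)]
        score = lambda t: t[1]

        print(maximize(toothpastes, score))       # ('BrandC', 9), after checking all four
        print(satisfice(toothpastes, score, 7))   # ('BrandB', 8), stops at the second one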

    I am also a determinist, for the record. I believe that the world is on causal rails too subtle for us to predict with reliable accuracy to the degree that we would like. It’s a sad fact of the world, on this view, that our desires cannot all be satisfied – yet, in a world where there are people with conflicting desires, this cannot but be the case. Huh.

    @ John Morales (#81): True, my descendants are not me. Yes, I’m describing altruism. My entire point is that, in the long term, there is no such thing as me – and thus the only strategies which make sense are in terms of the benefit of others (and I can’t judge with certainty what will maximize that benefit). There is only “my legacy,” the effects which I have upon the world. So planning for “my benefit,” in the “long term,” is ultimately self-defeating because there shall always be the “after me.” In other words, I shall always have available a question of “then what,” which shall by necessity outstrip my first-hand experiences. And it’s always possible that I might be wrong. What I’m saying, to both you and Ben Goren, is that it is not possible to be certain that one is acting rightly – it is always possible, on any point whatsoever (aside from “I am having experiences at this moment”), that I am bass-ackwards wrong. So even if there were an objective and universal morality, we could never be certain that we’ve discovered it, or even that there is one in the first place. You must either walk in doubt, or in self-deception. Maximizing strategies assume that “that which is being maximized,” ethically speaking, is worth maximizing. You could be wrong on that, The End.

    What underlies this whole thing is that reality is ambiguous. If you can’t be certain about what is “truly the case,” then you can’t be certain about what will accomplish whatever it is that you want to do. You could always stand to double-check, which leaves open the possibility that you must doubt. In short, the question, “What if you’re wrong,” always applies. So you can’t get an “ought” from an “is,” because both are always in question at the most fundamental levels.

    Of course, I only think that because I’m a capital-S Skeptic (though I act on pragmatics, because I don’t need to be capital-C Certain that I have capital-T Truth to live with my capital-D Decisions). You might be credulous about something, and whatever it is that you simply refuse to question may, on its face and by its nature, serve as an irreducible starting point for any branch of philosophy. But then you’ve got something you’re refusing to question, and so you lose the Skeptical game.

  85. Kel, OM says

    I think free will can only be defined as pursuing a course of action that is independent from our current physical state.

    I personally don’t define it that way. Defining it that way does two things: firstly, it categorically denies that free will can exist; secondly, it misrepresents what people mean when they refer to free will.

    But if we don’t have the power to override the laws of physics, isn’t the “way of determining whether punching someone in the face is a desirable thing to do” also, in the end, the result of the laws of physics?

    Of course, but that doesn’t mean it exists independently of our desires. One can desire, one can reason; in effect one has choice for that very reason. At the fundamental level causality doesn’t change, but that doesn’t mean we can’t see a difference between making a choice as a causal agent and being forced into a choice we don’t want to take. Take away the concept of free will and we have no means to distinguish between a confession signed at gunpoint and one signed freely.

    Like I said, it depends on how you define free will. I think free will is the ability to pursue one’s own desires and if those desires are ultimately causal then that’s not a problem.

    You are saying, on the one hand, that we already move from ‘is’ to ‘ought’ (“as to OUGHT from IS, we already do it”), but then that we can’t necessarily move from ‘is’ to ‘ought’ (“IS does not imply OUGHT”)…?

    Yep, exactly. Take two innate capacities most humans have: protecting the life of an offspring, and distrust of outgroups. Both of these are what IS, yet one of them is morally desirable while the other is morally reprehensible. That we do move from IS to OUGHT doesn’t mean we should in all circumstances. If one, for example, IS genetically inclined to rape, it doesn’t mean they OUGHT to do it.

  86. Timaahy says

    @Kel, OM

    You may not define it that way, but as far as I can tell that’s how the philosophers and the theologians define it. But happy to be proven wrong, as always! :-)

    Re: your first objection… if you define it that way, it only categorically denies that free will exists if you don’t believe in the supernatural. Obviously lots of religious people define it that way (including the Catholic Church).

    Re: your second objection… are you simply saying that it can’t be defined that way because people commonly use the term in a different sense?

    Thanks for clarifying your IS and OUGHT statement… I’m now on board. :-)

  87. Kippers says

    This is how I would defend Harris’s contention that science can inform us of the many best ways to live in order to maximise human wellbeing.

    1) Identify forms of well being at the level of the brain.

    2) Examine the brains of a cross-section of people living in different types of societies such as Islamic Theocracies, Liberal Democracies, Communist Dictatorships etc…

    Science should be able to achieve this in principle, and if it can, then we have an objective scientific approach for identifying peaks of human wellbeing and troughs of human misery.
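
    If step 1 could ever be done, step 2 would reduce to ordinary descriptive statistics. A minimal sketch, with entirely invented per-person scores standing in for whatever brain-level measure step 1 would provide:

        from statistics import mean

        # Hypothetical well-being scores, grouped by society type (invented data).
        samples = {
            "liberal democracy":      [0.72, 0.65, 0.80, 0.58],
            "Islamic theocracy":      [0.41, 0.38, 0.55, 0.30],
            "communist dictatorship": [0.45, 0.50, 0.33, 0.47],
        }

        for society, scores in sorted(samples.items(), key=lambda kv: -mean(kv[1])):
            print(f"{society}: mean well-being {mean(scores):.2f} (n={len(scores)})")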

    I don’t know if this accurately represents Harris’s view but this is my understanding of what he means by claiming that science can provide guidance towards a moral life.

  88. Kel, OM says

    Re: your first objection… if you define it that way, it only categorically denies that free will exists if you don’t believe in the supernatural.

    True. Though I’m not sure how the supernatural is going to help exactly. Calling it supernatural sheds no light on how it could work; it just places the claim beyond examination or denial. The phrase begging the question comes to mind, sort of like saying that God isn’t complex because God isn’t made of material.

    Re: your second objection… are you simply saying that it can’t be defined that way because people commonly use the term in a different sense?

    I’m saying that the term isn’t entirely useless, and that one shouldn’t let the naive interpretation of free will destroy the concept completely. It would be like throwing away the concept of the mind because mind-body dualism isn’t real. The easiest demonstration of what I’m alluding to is to look at how people react when they feel their will is being impeded or taken away. That the will is a representation of physical processes is the least of the worries around free will. Being able to differentiate between compulsion and choices undertaken voluntarily, between signing a confession voluntarily and signing it with a gun to one’s head, between an autonomic reflex and a conscious movement – this is what I think most people would recognise as acting with free will.

    Then again I could be way off.

  89. Timaahy says

    @Kel, OM

    Re: your other point… yes, we desire and reason. But if everything is just atoms and physics (and I think we both think it is) I can’t see how it’s “choice” at all. It’s similar to the theist’s concept of omniscience. If god knows what we’re going to do in advance, do we really have choice?

    If you deny that free will exists, you can of course still distinguish between the two confessions. In one case the confession was largely caused by the atoms of the one holding the gun, and in the other case it was caused largely by the atoms of the person confessing.

    Finally, if pursuing your desires is ultimately causal, then how is it “free” will? Free from what?

  90. Mr T says

    I’m saying that the term isn’t entirely useless, that one shouldn’t let the naive interpretation of free will destroy the concept completely.

    Also, as Kel made clear above, one can be entirely consistent in claiming that contra-causal free will does not exist, but that free will itself does. That is, our will need not be independent of causation in order for it to be “free” in any meaningful sense.

  91. Mr T says

    Finally, if pursuing your desires is ultimately causal, then how is it “free” will? Free from what?

    You are free to do whatever is physically possible for you to do, whenever you make a choice. ;)
    Your body is composed of parts that necessarily obey physical laws (whether those laws are deterministic or indeterministic is irrelevant). You are composed of those parts. The fact is, you as a conscious agent have more abilities than your constituent parts. Dennett made the point well with this analogy (here in this video — watch out for loud audio and poor quality): we are alive, but our parts aren’t alive. Do we need to propose that some supernatural life force makes this possible, to make a meaningful distinction between living and nonliving? Of course not. No dualism is required.

  92. Timaahy says

    Kel,

    No, like most theistic concepts, calling it supernatural sheds no light on how it works. Calling the “first cause” god sheds no light on how that concept works either. It’s a complete and utter cop out on the part of theists, in that it just shifts the ultimate reason for our actions one step further back, to a place where evidence and reason can’t touch it. If our “soul” is the entity calling the shots, what is driving it to make the decisions it does? Anyway… probably getting a little off topic now… sorry.

    It seems that you are merely distinguishing between types of causes. The difference between compulsion and acting voluntarily is simply the difference between (mostly) external and (mostly) internal causes, and the difference between automatic reflex and conscious movement is the difference between (mostly) body and (mostly) mind causes. Ultimately we are compelled to do what we do one way or another… whether by an external agent that is mostly influencing our action, or our own DNA and neural pathways.

  93. Timaahy says

    @Mr T

    I commented earlier in the day that my head hurts… and it hurts even more now! :-)

    I must be missing something. Can you explain more fully how saying that contra-causal free will does not exist does not automatically eliminate the possibility of free will? If we can’t override causality, isn’t every event then subject to causality? And if so, from whence comes free will? Or maybe, like Kel, we have different concepts of what free will actually means…?

    I fully agree with your second paragraph in #96… but the first paragraph I’m not so sure of. :-) What exactly is choice, in a universe whose current state has been determined entirely by physical forces?

  94. Mr T says

    If we can’t override causality, isn’t every event then subject to causality? And if so, from whence comes free will?

    We are a lot more complicated than our intuitions about causality might lead us to suspect. We’re not simply billiard balls being hit by cues from every direction. If such an analogy is supposed to give us any insight about people, then imagine instead that we’re billions of billiard balls constantly being hit by billions of cues: balls which are conscious of their condition, which like some hits and dislike others, and which wish to satisfy their own intentions accordingly. Those balls (that is, we) are capable of doing all of that.

    Free will comes from our evolved abilities to choose one action over another, to predict the outcomes of our actions, to get things we like and avoid things we dislike. Some human actions are entirely determined, like your heartbeat — you have no way to choose if or when that occurs. And we are limited by physical laws, so in a very real sense we’re not as free as we would be if we had contra-causal free will. Nevertheless, you do make conscious choices. We have the ability to act according to our current desires, our past experiences, our predictions of the future, etc.

  95. Paul W., OM says

    ostiencf:

    I disagree, the distinction of morals, values and ethics is important for philosophy.

    The distinction you’re making may be interesting and important, but I don’t think it’s a generally accepted distinction between “morals,” “values,” and “ethics.”

    I’ve heard similar distinctions in the atheist community, but that’s not how the moral philosophers I know talk, and at least at the moment, I don’t think it’s a particularly good set of terminological distinctions.

    In particular, IMHO, we should not concede the word “moral” to the religious types who believe in Divine Command theory or something functionally similar. It’s too important a word for a real-world phenomenon to let them define it in a bogus non-naturalistic way.

    Morality generally makes a Truth (big T) claim (that is a transcendent immutable Truth).

    Depending on exactly what you mean by that, I would either disagree with that conception of morality, or agree with it and defend it.

    I agree that morality is not based on the kind of fundamental immutable Truth that the religious types often talk about. (E.g., about God’s authoritative commands, or Karma’s deep ineffable essence or whatever.)

    On the other hand, I would also say that morality is “transcendent” in the rather pedestrian technical sense of being beyond mere arbitrary preferences. There’s something going on beyond that. It’s not magic, though—it’s basically applied game theory emerging from evolutionary psychology.
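    (For concreteness, here is a minimal, purely illustrative sketch of the kind of game theory being gestured at: an iterated prisoner's dilemma in Python. The payoff numbers and strategies are just the conventional textbook choices, not anything argued for above; the point is only that reciprocal cooperation outscores mutual exploitation over repeated play.)

    # Illustrative only: an iterated prisoner's dilemma with conventional payoffs.
    # C = cooperate, D = defect; PAYOFF[(my move, their move)] is my score.
    PAYOFF = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(history):
        """Cooperate first, then copy the opponent's previous move."""
        return "C" if not history else history[-1][1]

    def always_defect(history):
        return "D"

    def play(strategy_a, strategy_b, rounds=200):
        history_a, history_b = [], []  # each entry: (my move, their move)
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_a)
            move_b = strategy_b(history_b)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            history_a.append((move_a, move_b))
            history_b.append((move_b, move_a))
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual reciprocation
    print(play(always_defect, always_defect))  # (200, 200): mutual exploitation
    print(play(tit_for_tat, always_defect))    # (199, 204): defection gains little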

    However, a focus on values, would mean to gather various truth (small t) claims and from those derive a relative value system and modes of ethical action that is far more mailable then a system based upon a Truth.

    (I assume you meant “malleable” there.)

    I’m not sure what you mean by Truth with a capital T. I do think that up to a point, there are scientific truths about what can or can’t reasonably be called morality, and what can or can’t be considered a reasonable morality.

    That’s not cosmic, though—it’s just scientific, like thinking that words like “life” and “fish” and “wave” have certain reasonable meanings but not others, and that what counts as a reasonable meaning depends on actual facts about actual phenomena.

    I’m saying that the normal way many words work—both folk terms and scientific terms—is to refer to a poorly-understood thing in the world without actually having a definition for it, until such time as the actual phenomenon can be clearly understood, and more precise distinctions can be made.

    (That is not classic “analytic” philosophy, in the original sense of thinking that important philosophical problems can generally be dissolved by analysis of concepts. In fact, it’s rather the opposite—it’s saying that the full meanings of terms are not just in the concepts, but depend on the actual nature of the actual phenomena that they refer to. So what counts as “life” or a “fish” or a “wave” isn’t a merely definitional matter—it’s a matter of finding out what life actually is, and what fish actually are, and similarities and differences between different kinds of waves, and revising our crude, provisional descriptions to be something closer to precise definitions in light of the scientific truth of the matter.)

    Your claim that most people will value general welfare is also a claim to Truth.

    Again, I don’t quite know what you mean by capitalizing Truth. I don’t think I’m talking about some cosmic kind of Truth beyond the usual pedestrian sense of truth in which we provisionally assume that scientific “truths” are true.

    In a sense, I’m mostly picking terminological nits and making a “semantic” argument. I’m saying that words like “moral” are perfectly good words with real-world referents, and if we want to understand what counts as morality or immorality, we should study the actual phenomena scientifically, and figure out what’s correct or incorrect about our preconceptions about those phenomena. We should refine our concepts of morality in the same realistic, scientific way that we refined our concepts of things like life, fish, and waves.

    That doesn’t mean that the answers will necessarily be simple or univocal. Often a folk term turns out to have multiple reasonable senses with different referents. (But generally only a few—there’s a vast number of unreasonable definitions, in light of the facts. So, for example, we can still use the term “fish” in the somewhat archaic sense that pretty much any animal that lives in water can be called a fish—e.g., shellfish, cuttlefish—but it’s still interesting and important that those things are not fish in the same deep sense as perch and bass and tuna.)

    I think this “semantic” argument is an important one, because there’s no phenomenon of more real-world importance than morality. We should get straight on what is or isn’t morality, and not let the religious loons dominate moral discourse and have all the forceful words. (And abuse them.)

    When the fundies and dupes say that they’re defending morality, we shouldn’t say that they’re right but we don’t care, because we do ethics but not morals. We should say that they’re fucking wrong, because they are. They’re pushing bogus morality, and we can do better. We are more moral than they are, because we realize that morality is about things like hurting other people, not about which orifice you put your penis in and whether God gave you permission to do that.

    To borrow an image from Nietzsche: should the bird of prey see itself as evil in accordance with the morals of the lambs it slaughters?

    In a sense, yes. A member of an individualistic species that doesn’t have the general kind of morality we do should recognize that morality is a natural kind of phenomenon that it doesn’t participate in. It should recognize that it is amoral in pretty much the same sense that it recognizes that it’s a ruthless predator, because both are true. Like a human sociopath, it may be perfectly happy being that way, and Hume was right that we can’t argue it into being a moral being if it’s not fundamentally constituted as a moral being. You can’t get from is to ought in that sense.

    But that doesn’t mean that the words “moral” and “ought” (in the moral sense) are just statements of personal preference, either. They’re not just relative to an arbitrarily chosen set of goals.

    Morality wasn’t invented by human beings. It was invented by evolution, and experienced and discovered by human beings. It’s a particular kind of engineered system—“a good move in design space” for an intelligent social species, and presumably some alien species have something fairly similar.

    The claim to “value” general welfare is a moral claim, transcendent and universal.

    Did you actually read what I wrote? It’s not clear to me that you understood it.

    Here’s an analogy. Take the concept of money. Is the nature of money transcendent and universal?

    There’s a profound sense in which I think it isn’t. There’s no Platonic Ideal of money that particular forms of money participate in. Money was not invented by God and gifted to us, written into the microstructure of reality. In a sense, it’s just a human invention.

    But in another sense, and an important one, money and a money economy are a strange attractor—it’s a good move in design space that can be easily discovered, such that situations with certain properties often tend to fall into particular very specific patterns. It would be surprising if there aren’t other species out in the cosmos who’ve discovered the concept (and utility) of money—social species only very roughly like our own.

    (You don’t have to be superintelligent to think of money, and you can evolve full-blown money from simpler exchange systems through a set of relatively small steps—barter, IOU’s, etc.—so it’d be surprising if some other intelligent social species didn’t discover and adopt something very much like human money, with a whole constellation of likely details including currency, banking, loaning money at interest, counterfeiting, embezzling, financial regulations, inflation, etc.)

    In that sense, I think that the nature of money is “transcendent,” in the pedestrian sense that it transcends merely and specifically human experience—it’s like a mathematical theory discoverable by a fairly wide variety of intelligent social species, not just us. It’s not a peculiar and wildly contingent artifact of our detailed history, or our detailed psychological structure, but a strange attractor that we fell into and discovered because it’s a Good Trick that’s not hard to come up with.

    Now consider predator/prey relationships, and parasitism. Predation and parasitism have evolved many times on earth alone, and it’d be surprising if most even minimally complex evolved ecosystems didn’t have them.

    Similarly, intelligent social species have roughly similar phenomena at a social level, with exploitation and cooperation being crucially important. It’s not really surprising, at least in hindsight, that we’ve evolved a mix of selfish and altruistic drives, and mechanisms of distributed social control that somewhat downplay the former and enhance the latter, and provide standards of behavior so that people can be rewarded for pro-social activities and punished for anti-social ones.

    The deep and basic facts about “our” kind of morality are therefore “transcendent” in the pedestrian sense, IMHO—they’re not just facts about us, specifically, but a strange attractor that other only roughly similar species likely fall into.

    That’s very different from an arbitrary goal, or prioritized set of goals. The goal of being pro-social is not merely coincidental to the nature of morality—it’s essential to its being morality, in much the same way that certain facts about money are essential to it even being money.

    The rules of money are not arbitrary preferences. If you violate them, your purported money stops functioning as money, and therefore stops being money. (E.g., if you allow people to print up as much as they want, or to take it from others without obeying rules of exchange.)

    Similarly, if you change the basic deep rules of morality—e.g., dropping the requirement that it be pro-social in some important sense—it stops being morality, and becomes something else, because it doesn’t function as morality.

  96. Ben Goren says

    D wrote:

    Maximizing, it turns out, never “works” in the long term because in order to do it reliably, you’ll have to pursue an epistemological vendetta which will more or less leave you paralyzed by doubt.…What I’m saying, to both you and Ben Goren, is that it is not possible to be certain that one is acting rightly – it is always possible, on any point whatsoever (aside from “I am having experiences at this moment”), that I am bass-ackwards wrong.

    Of course. And so what?

    One can, of course, only act on the best knowledge one has at any given point in time. And acting on bad or incomplete information is all but guaranteed to lead to suboptimal choices.

    But what other option is there?

    Act on the best information you have. If it turns out to have been wrong, be sure to take that new information into account when making decisions.

    One of the great powers of the human mind is to build surprisingly accurate models of reality based on vanishingly small sets of facts. One need not know that it is a poor strategy to forcibly rob each and every individual person, for example; it is enough to know that it is a poor strategy to forcibly rob any and all people.

    It is exactly such a general strategy we are discussing here.

    And, I suppose, now would be a good time for me to identify the one I’m pretty confident is close to optimal:

    I. Do not do unto others as they do not wish to be done unto.

    (The First Rule may be broken only to the minimum degree necessary to otherwise preserve it.)

    II. And as ye would that men should do to you, do ye also to them likewise.

    III. An it harm none, do what thou will.

    The rules must be applied in that order. For example, following the second rule is not permissible in circumstances which require violating the first rule (except as provided for by the Exception).

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  97. Paul W., OM says

    Timaahy:

    Can you explain more fully how saying that contra-causal free will does not exist does not automatically eliminate the possibility of free will? If we can’t override causality, isn’t every event then subject to causality? And if so, from whence comes free will? Or maybe, like Kel, we have different concepts of what free will actually means…?

    I’ll give it a shot.

    Like lots of terms, “free will” doesn’t really have a definition. What we really have is an approximate description, and some examples, and the reality of the actual phenomena is what’s important.

    In normal non-theological use, doing something of your own “free will” means roughly that it’s your choice, flowing from your knowledge, beliefs, and decision-making predispositions. It wasn’t forced on you by some coercive outside agent, and you weren’t simply tricked into doing it by somebody who misled you into misunderstanding your actions and their consequences.

    That’s the central sense of the term “free will,” according to “compatibilists” like Dennett, who think that “free will,” reasonably conceived, is compatible with determinism. That’s roughly the sense in which we talk about free will in law, or in most mundane, day-to-day interactions, and for very good reasons.

    That’s very different from the theological concept of “free will” which requires that your decision making not be a deterministic consequence of previous things beyond your control.

    Arguably, that concept of free will is artificial and bogus. The main reason it’s theologically important is that it’s supposed to get God off the hook for the Problem of Evil. (It doesn’t, really, but it’s good for muddying the waters.)

    The standard move in apologetics is to equate free will in the mundane sense of choosing (without coercion or being tricked or deluded) with the bizarre metaphysics of somehow making a choice without that choice being either deterministic or random.

    Dennett et al. argue that the latter sense is just unreasonable and should be discarded, and the only reasonable and useful interpretation of “free will” is the former.

    Dennett makes the same basic kind of argument I’ve been making about morality. Words don’t really have strict definitions, initially, just rough descriptions that pick out real phenomena in the world, and the definitions we eventually arrive at should generally be correct descriptions of the actual phenomena.

    That’s how words do usually work—we revise definitions in light of actual facts, because most terms are mostly referential, not really definitional.

    Often the facts turn out to be complicated, so we end up with multiple senses of even seemingly simple, common words. (Like “mother,” for example. There are biological mothers, mothers who raise people whether or not they’re biological mothers, and a few other more obscure senses.)

    Dennett’s argument is that when you have several senses of a word, and some of them don’t have any actual referents—nothing in the real world actually fits the “definition”—you typically just keep the definitions that work, and discard the nonsensical definitions that don’t. We should therefore use the term “free will” to describe the kind of free choosing we actually do, even if it’s not “free” in the theologically crucial sense, which is just wrong and obsolete.

    That’s similar to the kind of argument Harris and I are making about “morality.” There’s the real kind of morality that makes sense, and there are bogus obfuscations preserved by religion for religious reasons, based on falsehoods, which obscure the real moral issues in the real world.

    I have mixed feelings about the term “free will.” I think that both senses are entrenched in our language, and it’s all too easy to conflate them. It’s not clear to me that there’s a right thing to do about it, except to strive to be clear about what basic sense of “freedom” you’re talking about, and what sense you think is bullshit. (Because it conflicts with being willed, if you understand what willing actually is.)

    I have stronger feelings about defending the word “moral,” because I think that most people do have a gut sense of what “moral” means that is approximately right—they do generally get that morality is supposed to be for the general good, and that it’s about being at least minimally altruistic—and since that’s crucial to what I think the real phenomenon is about, we should preserve the term and adjust our concepts to fit the actual phenomena, rather than discarding the term as hopelessly misconceived.

  98. Paul W., OM says

    Timaahy:

    So, if there is no free will, whether we “ought” to do anything at all is irrelevent. We will do what we will do.

    I disagree, and the reason is related to why Dennett and other “compatibilist” philosophers preserve the term free will even when it’s deterministic.

    I consider myself to be a mostly deterministic machine, and I don’t consider the ways that I’m not deterministic to be terribly philosophically interesting for most purposes.

    (I’m statistically close to being a deterministic machine, and the random “noise” is comparatively minor for most philosophical purposes. E.g., if I do something “wrong” because a close call went one way rather than the other due to minor random variations in neural firing rates, it’s generally only because I was too close to deciding to do something wrong anyway, and I shouldn’t have even been close to doing that.)

    Viewing myself as a deterministic machine, I can still make sense of my feelings of guilt and shame. I’m “programmed” to want to be certain kinds of things and not others, and to want to be perceived as certain kinds of things and not others.

    If I do something I feel bad about, such as doing something selfish and ruthless, I’m not happy with the kind of thing I am. It may not be “my fault” in an ultimate sense—maybe it’s all God’s fault, or Just How Things Are in the final analysis—but that doesn’t change the fact that I don’t like what I did, or what about me made me do it. Being a moral being, I am at least somewhat motivated to change it, whether it’s ultimately my fault or not. I’m the proximate cause of the bad thing, and that’s what I can do something about, and I’m motivated to fix things and change myself, e.g., to resolve not to do that in the future.

    The fact that all of that is pretty much deterministic doesn’t really matter, except that it has to be pretty much deterministic, or it wouldn’t work at all—if my decisions were just random consequences of random events in my head, I couldn’t do anything about them. I can only resolve to change my behavior because doing so has a reasonable chance of actually changing my future behavior, i.e., it’s at least somewhat deterministic. I couldn’t diagnose myself as a faulty unit and try to fix the fault if my behavioral predispositions were not mostly deterministic consequences of my internal state.

    That’s what most people don’t get about free will—or rather, what they have inconsistent and unrealistic intuitions about. Willing things requires a substantial degree of determinism. If willing was not deterministic, it’d just be random, and if you’re at the mercy of random stuff happening in your head, you’re not really willing things at all—you’re a slave to random neural activity.

    Free will in the weak sense—of choosing without coercion or delusion or whatever—makes a certain useful sense. We all know the difference between choosing something because you have a gun to your head, or because you’re laboring under an illusion or delusion, and doing it because you’re a selfish asshole. (Irrespective of why you’re a selfish asshole, in ultimate terms.)

    Free will in the ultimate sense of not being a deterministic consequence of prior events makes no sense at all. It might be “free” in some useless sense, but it isn’t even will.

  99. D says

    @ Ben (#101): Aww, Hell. I got way the fuck off-topic. For what it’s worth, I like your three rules and they make sense to me; I think the world would be a much better place if people thought about them before they acted. OK, back to “ought” from “is” with the three-sentence limit back on.

    In order to get from “is” to “ought,” you must have a value (or system of values).

    Values cannot be dictated – there might be some that make more or less sense, given biological facts (e.g. different for Homo sapiens than for Turritopsis nutricula), but which facts to take into account and how to prioritize them presupposes yet more preexisting values.

    So while you “can” technically reason from is to ought, you can’t get your chain of reason started without presupposing the very values you’re attempting to derive (i.e. an ethics of life will say that life is good, an ethics of happiness will say that happiness is good, etc.) because you need to decide which facts are more important than others, and this prevents your reasoning from being objective or universal.

  100. Ben Goren says

    D wrote:

    [Y]ou need to decide which facts are more important than others, and this prevents your reasoning from being objective or universal.

    I don’t think I ever claimed that what I’m proposing is absolute. Quite the opposite, in fact.

    What I am claiming is that there are certain goals that, of necessity, must be fundamental to nearly all other goals. Survival is almost always a prerequisite of any other goal, for example, and survival in modern society requires being a productive member of the society. Being a productive member of society means conforming to certain moral norms — such as not going on murderous rampages.

    You can probably construct some theoretical moral code that’s radically different and still makes sense, but only for hyperintelligent shades of the color blue. But so what? You can also posit that the shortest route between two points on the surface of a planet is something other than the great circle route, if the planet is cubical rather than spherical. So what?
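    (As an aside, the spherical case is easy to make concrete: a minimal sketch, assuming the standard haversine formula and a mean Earth radius of 6371 km. The two coordinates below are arbitrary example values, not anything from this thread.)

    # Illustrative only: great-circle (shortest-path) distance on a sphere.
    from math import radians, sin, cos, asin, sqrt

    def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
        """Shortest surface distance between two points on a sphere, in km."""
        p1, p2 = radians(lat1), radians(lat2)
        dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
        return 2 * radius_km * asin(sqrt(a))

    # Roughly New York to London:
    print(great_circle_km(40.71, -74.01, 51.51, -0.13))  # about 5570 km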

    Is it a problem that neither Newtonian mechanics nor Relativity can be derived from first principles? If not, why “should” this be any different?

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  101. PsyberDave says

    How does one empirically observe a moral? How can you use science to discover that “it is immoral for a person to punch another person in the nose”?

    Yes it is possible to observe the act. You can observe the fist, the nose, the reactions and consequences (pain, sadness, anger, retaliation, etc). You can even count the number of people who regard the act as moral or immoral.

    But how do you see the morality itself? If you observe the consequences and don’t like them, you still aren’t observing morality, you are just judging the consequences using your own set of morals or values. It is no different than observing the initial act and judging it based upon your morals or values. That’s not objective observation. That’s subjective opinion expression, not empirical discovery.

    I don’t think morals exist in the act or in the world, independent of our notions.

  102. PsyberDave says

    Paul W. OM,

    Thanks for your thoughtful reply to my earlier post (#5). I apologize as I just don’t have the time to reply to your post in the substantive way I think it deserves.

  103. Ben Goren says

    PsyberDave,

    How does one empirically observe the mathematical function that describes the motion of a ball and the athlete who catches it? How do you see the math itself?

    Ergo, maths don’t exist or act in the world…?

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  104. Knockgoats says

    arjbooks, I would define morality as an optimal strategy for achieving one’s goals. Is that objective and empirical enough for you? – Ben Goren

    OK, suppose my goal is to obtain complete power over all living beings and then subject them to the maximum agony indefinitely. So if I pursue the optimal strategy for achieving this, I’m being moral?

    *Backs away from Ben Goren slowly, muttering soothing nothings*

  105. KOPD says

    OK, suppose my goal is to obtain complete power over all living beings and then subject them to the maximum agony indefinitely.

    Have you been reading my journal?

  106. Ben Goren says

    Knockgoats,

    In the most rarefied disconnected-from-reality abstract, yes.

    But, in the real world, there’s no way you’d ever be able to make any progress towards such a goal without not only staying alive but recruiting minions to assist you. Doing either will require behavior we all would instinctively agree is moral. From a practical perspective, you’d reach insurmountable paradoxes long before ever getting close to implementing your plan — not the least of which is that there’s no evolutionary pathway that leads to a human being with both the desire and the capability to even come close to pulling off something so grandiose.

    On a smaller scale, we see Torquemada, Hitler, Dahmer, and the like all trying such things. Though they all “enjoyed” a certain measure of short-term success, they all failed most spectacularly in any meaningful long-term sense.

    Cheers,

    b&


    EAC Memographer
    BAAWA Knight of Blasphemy
    “All but God can prove this sentence true.”

  107. cody.cameron says

    Felt compelled to share this:

    I watched the movie Kinsey recently (excellent! I highly recommend it), and there is a scene in which a potential source of funding asks Kinsey not to “dwell on sexual oddities and perversions.” Kinsey gets a signal from his wife indicating he needs to go along with that idea, and duly answers “no, of course not.”

    But I think the correct response is that only after the study would anyone even be able to say what “normal” sexuality really was, because until a very large number of opinions (and feelings, and behavior, etc.) had been collected and understood, it was really a grand mystery. Of course, after the study we could talk about how common certain behavior is, what the distribution is like, the spread, and so on.

    I think that, following such a study with respect to our morals, we could then make scientific statements about exactly what people’s preferences and behaviors are like. (I imagine there are many such studies; yourmorals.org, for instance, is one with very interesting results.)

    As with any phenomenon that falls along a spectrum, we would have no objective method of defining how much of the distribution should be included in the conclusion, but it would give us a clear idea of where the middle was. With these results in mind, we could then make specific scientific statements about what was and was not in our collective best interest.
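    (A toy sketch of the “where is the middle?” point, with made-up responses on a 1-to-7 attitude scale rather than anything from an actual survey: descriptive statistics locate the bulk of opinion, but the cutoff for what counts as “normal” is still a choice we impose.)

    # Illustrative only: summarizing a hypothetical distribution of survey answers.
    import statistics

    responses = [2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 7]  # made-up data

    median = statistics.median(responses)
    q1, _, q3 = statistics.quantiles(responses, n=4)  # quartile cut points

    print(f"median = {median}, middle 50% spans {q1} to {q3}")
    # How much of the spread to call "normal" is not something the data decides.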

  108. Kel, OM says

    Ultimately we are compelled to do what we do one way or another… whether by an external agent that is mostly influencing our action, or our own DNA and neural pathways.

    This sounds like will is a passive enterprise; it’s almost as if you’re advocating a Cartesian approach to cause and effect.

    Can’t really get into it now (might tonight if you have any more questions, though Paul seems to have it more than covered), but to quickly add something. In a general sense your will is neural activity.

  109. Paul W., OM says

    I just realized I said that money and morality are “strange attractors” in comment 100, when I only meant that they are attractors. Sorry about the strangeness.

    (If you don’t know the difference, don’t worry about it.)

  110. D says

    Ben,
    Re #106: I didn’t say it was absolute. But you said earlier, “It’s actually this parenthetical caveat that points to the identification of an objective universal morality.” You then went on to talk about being a productive member of a healthy society – was I wrong in thinking that the two were supposed to be connected? If so, then OK.

    At any rate, your analogy about plotting a route on a planet needs to be tweaked a little. Sure, if you wanted to go from A to B on a planet, the shape of the planet must be taken into account. But that’s not analogous to ethics – what is analogous to ethics is, “Should you go from A to B?” You can’t get that from the shape of the planet. Similarly, while the goal, “I want to live,” will dictate all sorts of hypothetical imperatives given the world around us, the ethical question is, “Should I live?”

    Mechanics was devised because we wanted to figure out how this world works. But the world does not impose goals, values, or desires upon us – it simply imposes constraints. It does not tell us which goals to pursue, which values to hold, or which desires to feel. It can tell us which goals, when pursued, tend to be achieved; which values, when held, can be consistently applied; which desires, when acted on, may be reliably satisfied; but “what works well” is different from “what is morally right.”

    Re #109: Umm, Ben, I hate to break it to you, but math is a system that we made up. Counting is artificial, just like language. We can use math to describe a great many phenomena, which is truly awesome, but there is no such “thing” as a number. You can’t find “two” out in the world – you can find two of something, but you can’t find a number itself, because the number itself is just an idea that we made up.

    Words are made up, though they have the meanings we give them. Communication is possible when we agree sufficiently on the meanings of those words. The same goes for math. So maths exist in one sense, and don’t in another – the maths do not “act in the world,” they are the language we use to describe what happens in the world. You have your metaphysics backwards if you think the motion of a ball in the air is dictated by a parabola – the ball’s path comes first, and the numbers are how we talk about it with precision and predict future cases.

    Ethics, too, is made up. We use words like “good” and “bad” to describe things which simply are, and the meanings we project on to them are merely projected. The fact remains that these things do have such meanings to us, but they only have meaning to us. Meaning is not objective, universal, or capable of having truth value – the meaning something has to someone, whether ethical or emotional or logical, is subjective and arbitrary. These meanings may be agreed upon, but the agreement is merely that, and does not say anything further about the external world.

    So, once we have our terms defined – what “one” means, or what “good” means – then we can do science to see how the world fits into those categories. But what those mean is ultimately arbitrary. We can “just agree” on what one means, and go from there. Some people disagree, and fine, they’re talking a different language. Similarly, we can “just agree” on what good means, and go from there – but in ethics, that is exactly what is at issue, and since the word “good” has no intrinsic meaning independent of the arbitrary meanings we project onto it, the matter cannot be settled empirically. And what you pick as “the good” will determine all of your “oughts” for you, given the state of the world – but the impossibility of getting a non-arbitrary idea of the good stops you dead in the water. As I write at the end of The Moral Platypus,

    We can decide (individually, of course) to go with an ethics of happiness, and there’s nothing “wrong” with that. We can even get all scientific about our happiness, measuring hedons and right-ons and hate-ons and whatnot, but we have to keep in mind that it’s happiness on which we are so enthusiastically doing science, not ethics. By choosing an ethics of happiness, we are doing ethics arbitrarily, and then we have the option of doing happiness scientifically.

    You’ve got some great ideas for doing human flourishing scientifically, but you’re still doing ethics arbitrarily. With human flourishing as your standard of value, humans ought to flourish, and you give a few workable rules for doing so – having arbitrarily picked your value, you acknowledge the “is” and then derive your “ought.” But you didn’t start with “is,” you started with your value. There’s nothing that says human flourishing needs to be our standard of value, or that humans ought to flourish – you can’t get there from “is,” you have to do that on your own. You can, and you do it elegantly – but you’re not going from “is” to “ought,” you’re going from “value” to “ought” based on what is. That’s different from starting only with “is” and going to “ought.”

  111. Timaahy says

    @Mr T (#99)

    Agree completely with your first paragraph.

    But again… if every event is caused by some combination of past events, however numerous or unfathomable, “choice” is ultimately an illusion. We may not be able to predict how we will act, but once we have acted, we know by definition that our action is the result of prior events over which we had no control. As Kel said earlier, we cannot change the laws of physics. The atoms comprising our bodies are still following physical laws at a local level (some of which may involve randomness), even if the combined effect of these trillions of local events is beyond our comprehension.

    We don’t really make conscious choices, we act in accordance with the laws of physics in a way that is (for the most part) unpredictable. If atoms aren’t making choices, we aren’t either.

    I guess it comes down to predictability. The level of predictability that you can ascribe to the potential actions of another sentient being determines the level of that being’s “free will” (in the sense that you, Paul W and Kel mean it). Kel’s confession example thus becomes: there is a higher probability that someone will sign a confession if they’re at gunpoint than if they’re not… therefore they have less “free will” in the former case, even though both situations and the confessor’s ultimate action are entirely the result of past events…?

  112. Paul W., OM says

    Timaahy:

    if every event is caused by some combination of past events, however numerous or unfathomable, “choice” is ultimately an illusion. We may not be able to predict how we will act, but once we have acted, we know by definition that our action is the result of prior events over which we had no control.

    No. You are making a very basic common mistake in thinking that not being deterministic is essential to something being a choice.

    That’s simply not true, and is closer to the opposite of the truth. For something to be a choice, it must be at least approximately deterministic.

    You also seem to be making a very basic common mistake in thinking that words like “free” and “choice” have strict definitions with necessary and sufficient conditions. Outside of mathematics, most words do not have definitions. They have rough descriptions that pick out something real in the world, and our definitions should be adjusted to fit the reality of those things.

    Consider “choices.” We have clear examples of choices—choosing a flavor of ice cream, choosing what TV show to watch, choosing whether to walk or drive or ride the bus to work, etc.

    However we choose to “define” the term “choice,” those kinds of examples have to count. Our preconceptions about what choosing “really is” do not have to be preserved, because “choice” is not something we start with an axiomatic definition of. It is just a name for a real phenomenon we have observed, however poorly we understand that phenomenon. Whatever’s going on with those real examples of choosing is what is important.

    And it turns out that our preconceptions about choosing being nondeterministic are just profoundly wrong. All of the examples of choosing that we started from turn out to actually be deterministic, or mostly deterministic. And that’s okay. They’re still choices, because they are exactly the kind of thing that the term “choice” was coined to describe.

    You really need to accept that words don’t generally have definitions.

    That was the point of my examples like “life” and “fish” and “wave.” It’s the usual thing in how natural language works, and in science. Words are names for things we observe, which we give to those things before we really understand them. Definitions are provisional, and subject to revision in light of how things really turn out to work.

    When vitalism died—that is, when we realized there’s no special animating “life force” and living things are just machines made out of nonliving matter and nothing else—we didn’t say that “it turns out there are no living things after all,” or that “life is just an illusion created by complicated evolved machinery.” That would be silly. The term “life” was coined to describe living things we’d actually observed, whatever they turn out to really be and however they turn out to really work.

    The association between the term and the real phenomena is primary, and “definitions” are secondary. “Life” is just whatever those things we consider alive turn out to really be doing. Any definitions that don’t fit the reality, such as those that presuppose a life force, or a divine spark, are simply mistaken.

    That is how words generally work, and they have to work that way. We usually name things before we understand them well enough to give lists of necessary and sufficient conditions for being that kind of thing, so our examples are what’s really important, and we have to be very flexible with our “definitions.”

    Consider water. People have had words for water for tens of thousands of years, but until the last couple of centuries, we didn’t have an actual definition. We didn’t know what made water water, but that didn’t keep us from talking about water. We had plenty of examples of water, and we assumed that all of the important examples had something in common, whatever that stuff is.

    Until we understood atoms and elements, and molecules and compounds, we couldn’t say precisely what water really was. It turns out to be a compound, which is an aggregate of molecules, which are made of particular combinations of atoms—2 hydrogen atoms connected to one oxygen atom.

    Most previous attempts at a precise definition of water were wrong in various ways. It was often thought to be a continuous substance, not made of particles, or to be an element which couldn’t be broken into other kinds of things. All those definitions were seriously wrong.

    When we discovered that liquids were collections of particles, and that particles of water were made of particles of other things, we didn’t decide that “there’s no such thing as water.” That would be silly. We knew there was water—lots of it—and that that was why we came up with a word for it. If the definitions were wrong, too bad for the definitions, because we at least knew that (1) there was such a thing as water, and that (2) it was called “water.” “Water” is a name for a thing in the world, which happens to be made of hydrogen and oxygen atoms put together in a particular way.

    The same goes for “choice.” We know that choices exist—we have plenty of examples. And we know that whatever those real choices really are, that’s what real choices really are. That’s just the name we gave those things, before we knew that choices were mostly deterministic, just as “water” is the name we gave water, before we understood that it was a collection of particles made of two other kinds of particles.

  113. Kel, OM says

    We don’t really make conscious choices, we act in accordance with the laws of physics in a way that is (for the most part) unpredictable. If atoms aren’t making choices, we aren’t either.

    Again, I’ll stress that this is where the word isn’t useless, except in the narrow scope in which you define it. We can’t violate causality, but that doesn’t mean we don’t make choices. Whether to get the chicken or the fish on a flight, it’s a choice we as agents make. Whether to save our money or spend it, again, it’s a choice we as agents make.

    What I think you’re forgetting is that while we are an expression of the laws of physics, our brains are choice-making machines. An expression of the laws of physics is a machine that has a sense of self and the ability to project into the future, experience pain, understand that others experience pain, make causal links between actions and consequences, recognise patterns, solve problems, etc. Yes, underlying it all are biochemical reactions, which in turn are underlain by physical processes, but we aren’t some passive observer being taken for a ride. We are part of the process itself, an expression thereof. Beyond our control, we are choice-making entities with the capacity to express our will through our actions. That it’s causally determined doesn’t really matter so much.

  114. ostiencf says

    Paul,

    Sorry I am responding so late to your post; I just remembered to check again. I just want to start by saying that I love intellectual debate, but I want it known that this is not personal. I simply do not agree, and it seems we have very different outlooks and philosophies which clearly suit each of us. So really I’m just going to respond to some of your post with clarifications of my position and a bit more elaboration. I’m not out to convince you, nor should you be out to convince me, but it is always invigorating to consider other positions.

    The distinction you’re making may be interesting and important, but I don’t think it’s a generally accepted distinction between “morals,” “values,” and “ethics.”

    Actually those distinctions are all over the philosophy I study. Perhaps it is not generally accepted or known outside academia (or my corner of it), but that’s what I’m familiar with. I’d get puzzled looks from my professors if I did not distinguish those terms or used them interchangeably.

    Morality wasn’t invented by human beings. It was invented by evolution, and experienced and discovered by human beings. It’s a particular kind of engineered system—“a good move in design space” for an intelligent social species, and presumably some alien species have something fairly similar.

    On the other hand, I would also say that morality is “transcendent” in the rather pedestrian technical sense of being beyond mere arbitrary preferences. There’s something going on beyond that. It’s not magic, though—it’s basically applied game theory emerging from evolutionary psychology.

    This was my point in capitalizing Truth. I did not necessarily mean a Platonic idea of Truth or a metaphysical Truth, but simply a non-subjective Truth. This is the type of truth you are describing: a Truth that can be discovered, not a constructed truth. This is just a basic point of departure between us. I’m not using subjective in place of arbitrary (for it is not at all arbitrary what each person takes as their subjective frame), but it is certainly not objective rationality on which humans operate; it is subjective value and moral judgments we become accustomed to (perhaps informed by scientific discourse, to be sure) for whatever reasons. Those reasons can be fascinating; constructing a genealogy of morals, of science, of rationality is what I like to think on, looking at the history of events that brought us here and what that may mean for society.

    The type of psychological game theory you speak of is popular in some psychological circles, but also vehemently opposed by others. I know PhDs in psychology who would oppose such a cognitive behavioral approach, and there are many large organizations around them. I’m not saying this to hide behind numbers, as there are just as many who would approve of your approach, but to illustrate that what you contend is not self-evident or agreed upon in the slightest. It is most likely a combination of evolutionary brain models, social conditioning, and the individual psyche molded by the events and conditions which influence development. I am not an expert in psychology, but I have a friend with whom I discuss it at length, who is in the process of completing his PhD and actually wrote his dissertation on psychological ethics. So take that for what it is worth.

    it’s a matter of finding out what life actually is, and what fish actually are, and similarities and differences between different kinds of waves, and revising our crude, provisional descriptions to be something closer to precise definitions in light of the scientific truth of the matter.

    I find it more interesting to look at what the discourse we have about life is, or what fish are, rather than searching for an objective reality of fish, or only doing so to interpret human subjective truth claims in relation to that idea of objective reality. That’s what I personally find more interesting. I’m not decrying the creation of knowledge (quite the contrary); I’m just more interested in what power relations that knowledge makes. For example, the knowledge of the city and surveillance creates certain power relations, just as science creates power relations with the knowledge it produces. Power here is not negative but productive, and I am more interested in this investigation, the effects of the discourse rather than the discourse itself, whether that be about science, morality, history or whatever you like. But these are just my interests; I am not putting myself above the scientist, but I do not claim to be one of them.

    When the fundies and dupes say that they’re defending morality, we shouldn’t say that they’re right but we don’t care, because we do ethics but not morals. We should say that they’re fucking wrong, because they are. They’re pushing bogus morality, and we can do better. We are more moral than they are, because we realize that morality is about things like hurting other people, not about which orifice you put your penis in and whether God gave you permission to do that.

    I’d agree on that point, but only so far as to say that we should overcome and create our own values and, if you like the word so much, morals. But, unlike with morals, I don’t claim I have found the right answer, just a different one based on different values we can discuss and modify freely.

    In a sense, yes. A member of an individualistic species that doesn’t have the general kind of morality we do should recognize that morality is a natural kind of phenomenon that it doesn’t participate in. It should recognize that it is amoral in pretty much the same sense that it recognizes that it’s a ruthless predator, because both are true.

    The point was an allegory for moral action. In context, Nietzsche was speaking of Christian slave-morality. More broadly, however, the point is that moral action is an illusion, that decrying the strong for their dominance over the weak is missing the point, that good and evil are subjective terms depending on whether one is dominated or dominating. This may not appeal to you, but I find the subjective power play of truth and morality interesting.

    Did you actually read what I wrote? It’s not clear to me that you understood it.

    I did read it, and I understood it, but that does not mean I agree with it or find it compelling. Your discussion about money is interesting, but what I find interesting is the consensus around which money operates and the material factors that surround it. The material forces around us, namely the scarcity of resources, create a situation in which the concept of money can function. Money may be a more basic condition than some moral questions (though money and scarcity do indeed impact moral formations). But this is just one example of how material conditions determine the discourse of morality, and those conditions often change and thus modify the truth of morality. The views on life and death, even without a religious component, impact our attitudes to murder and violence. The knowledge surrounding biology and psychology affects views of madness. You may object and say that we are always progressing our knowledge and thus will constantly refine our views in relation to the objective reality we uncover. I am less convinced of human rationality, and even if we find the underlying cause for some madness we will still contextualize and value it, perhaps in vastly different ways. So while you may be concerned with finding that reality, I’m concerned with how others will react to that and what rationality they will use to justify that reaction, whether it be other “facts” (whether one deems them to be true or false) or a logic based on what they subjectively value.

    The deep and basic facts about “our” kind of morality are therefore “transcendent” in the pedestrian sense

    Perhaps here we can find a place of tacit agreement. I just would not claim these basic facts exist without a social context, and that context informs what we view as transcendent; but social context can change, and with it the interpretation of those facts that guide our social existence. I’m not interested in what may or may not be real but in how society reacts to the prospect of the real. We are simply on different projects.

    Again, this is an interesting and lively debate, but I’m unconvinced of your position, as I am sure you remain of mine.

  115. https://me.yahoo.com/a/hGKpRV9y3eMGpiWvuZqCmseagwlT#cfa09 says

    PZ,

    For once I think you are totally missing the big picture here.

    ‘Science’ is the only rational well we have from which to draw our moral rules. The fact that we don’t have the answers now, and don’t yet know how science will get us those answers, is no reason to doubt the power of science to provide the best (and only) answers on this subject, just as it does on every other topic in the physical world.

    I’m really disappointed in your position on this.

    This is my first post fwiw. I read you daily; you saved me from Mormonism. Ty

  116. Elentar says

    On free will:

    We act according to reasons and causes. Causes are things like physical constraints, chemical imbalances, extreme duress, and so on. Reasons are the orderly progression of our own character. Causes are neurochemistry and circumstance, analogous to hardware errors, power outages, etc, in computers. Reasons are analogous to software and data. Furthermore, since we are self-programming entities, we have only ourselves to blame for our reasons. If a computer malfunctions, and I find no hardware problems, I fix or discard the software. If a person does something bad, and we find no brain dysfunction, he takes the blame, and we lock him away.

    That, I think, is all there is to be said about free will. Free will requires the orderly process of the brain, the primacy of reasons over causes. If the brain is impaired, we forgive the offense and attempt to correct the problem. If the brain is not impaired, we blame the offense on character, and lock the perpetrator away. Physical determinism has nothing to do with it, because a brain that does not operate in an orderly, deterministic way would make freedom impossible. A brain that was determined by random events would be wildly dysfunctional, so any resort to quantum weirdness is no help at all–the person would not be free in any sense. Nor would quantum weirdness under the direction of God be of any use–then you would be a puppet under God’s direction. To be free, you must act for your own reasons, and because of the recursive nature of human consciousness, you are the one who chooses those reasons.

  117. Mbswish321 says

    I’m 18 years old and I’ve just started to read posts like this. I do not have enough knowledge about this topic to have an informed opinion, so I wanted help. Could someone please tell me whether god can be proven or not proven unequivocally? It seems to me that, with everyone’s knowledge and statistics backing up what they are saying, no one can say with 100% certainty that there is a god or isn’t a god. It can’t be proven one way or the other for certain? Please help.

  118. Mbswish321 says

    So if it can’t be proven 100% there is a god or isn’t a god. Then you can’t say any person is wrong for saying there is a god or saying there isn’t, because no one knows for certain. I most definitely do not know for certain. I am certain that I’d rather believe in a god and believe that if I pass judgement I’ll be rewarded with eternal bliss and happiness in heaven, over choosing to believe there is no god and that when I die I am just dead. Is that wrong of me to WANT to believe in something I can’t prove, or disprove?

  119. Kel, OM says

    Then you can’t say any person is wrong for saying there is a god or saying there isn’t, because no one knows for certain.

    So basically you’re saying since one cannot be certain they aren’t living in the matrix you can’t call anyone wrong for saying we are? Or even better, just because you can’t be certain that you have a hand you can’t call anyone wrong who says you don’t?

    Operating in a world of absolute certainty means you’re almost certainly going to be wrong. Reality isn’t like that, even at the most fundamental level of physical interactions there is a degree of uncertainty inherent in the interaction and the observation thereof. One cannot be certain that the sun will rise tomorrow, just as one cannot be certain that they aren’t in the matrix. It doesn’t mean that anyone is justified in saying the sun won’t rise tomorrow or that reality is a computer simulation for robots to harvest our bodies as fuel. Yet it’s reasonable to assume that the sun will rise tomorrow (rotation of the earth, motion of the earth around the sun) just as it would be reasonable to reject the matrix conjecture on account of the bare assertion without evidence (if you can’t disprove that you won’t be digested by an interdimensional alien, it doesn’t mean that you should consider it a real option).

    If the only reason you’ve got for believing in the Christian deity is that one can’t disprove it, then what makes belief in the Christian doctrine any more or less preferable than any other religious belief? Why not Islam? Why not Buddhism? Why not the ancient Greek or ancient Egyptian beliefs? Why not invent your own religion? None can be disproved with absolute certainty, and I’m sure you could come up with something more desirable than the Christian story. Your argument is like being given a choice of any number between 1 and infinity and picking 473 because that’s what other people pick.

  120. Mbswish321 says

    Since no one is answering my question, I’ll answer it myself: no human can 100% prove, scientifically or otherwise, that there is a god or isn’t a god. So basically everyone on here makes good points about whether there is a god or there isn’t, but no one on here or anywhere (no human) can say 100% that there is or isn’t a god. The people who say there is good reason to believe god is not real could be right, and the people who say there is a god could be right, and then you have people like me who honestly don’t know what to believe, and since no one can prove to me 100% that there is no god, I’ll choose to believe there is a god. I don’t understand why some people would want to believe there is no god, even if there is more proof or more scientific evidence that there is no god than that there is one. Even if it was somehow proved that there is only a 1% chance god exists, wouldn’t that 1% chance to believe be better than the alternative of believing there is no god and no afterlife and no eternal bliss and happiness? I guess I’ve just fallen into the trap of assuming everyone would want eternal bliss and happiness. I don’t know that everyone would, but I would, and that’s 100%.

  121. Usagichan says

    Mbswish321 #120

    God can of course be proven unequivocally – He simply needs to appear and act outside the laws of nature (in the way most religious texts say He used to do all the time). Given physical evidence, God can be shown to exist. He hasn’t as yet, and to be honest I’m not holding my breath, but in the absence of clear divine manifestations, theologians of all stripes attempt to ‘prove’ their deities’ existence – I have not yet been convinced by any of their attempts, but they do keep trying.

    God cannot be unequivocally disproven – a fact theists joyfully trumpet all the time. However, before you get too hopeful, He simply joins an infinite list of things that can be conceived (imagined) but not disproven, from orbiting space teapots (see Russell’s teapot) and invisible, intangible unicorns to leprechauns and any one of a huge cast of deities historically or currently worshipped. Just because something cannot be disproven does not mean it exists (or even that it is likely to exist).

    There are two things you might want to ask yourself about the whole God question. Firstly, do you want to put your trust in a book written to explain the Universe to a bunch of Bronze Age goat herders, or would you rather trust the body of knowledge acquired through centuries of investigation? Secondly, what reason, other than the accident of birth, would you have to follow any one of the many flavours of religious thought over the others – and if there is no other reason, why follow any of them?

    I hope this has helped rather than confused you :)

  122. Mr T says

    mbswish321:

    So if it can’t be proven 100% there is a god or isn’t a god.
    Then you can’t say any person is wrong for saying there is a god or saying there isn’t, because no one knows for certain.

    You can say with absolute 100% certainty that either there is exactly one god, or there is more than one, or there are none. Thus, some groups (believers of various stripes, or nonbelievers) are indeed wrong. The best we can do is rely on what we know and what we can know. What we know is that there is no evidence that any kind of god exists. The God of the Bible is so full of contradictions that the being portrayed therein cannot possibly exist.

    If a god has never produced any evidence of his/her/its existence, then it is either incapable of doing so or does not want to do so. In either case, there is still no reason to believe in such a thing. If a god can’t even hint at its own existence, then what reason would you have to assume it is capable of torturing or rewarding you for eternity, much less that it has a motivation for doing so?

    Is that wrong of me to WANT to believe in something I can’t prove, or disprove?

    It’s wrong to think that wanting to believe is the same as actually believing. If there is a god, then I’m sure he/she/it could tell the difference. If a god exists, then he/she/it is already laughing at how ridiculous Pascal’s Wager is. There’s no point in being laughed at by a cosmic tyrant simply because you think you can fool yourself (or fool god) into believing something that isn’t true.

  123. Mbswish321 says

    You are right that saying I believe in god just because it can’t be disproven isn’t a good reason to believe in god. Saying the sun won’t rise would be dumb because it most likely will, but no one knows for sure. You made good points and I appreciate your comments. I really don’t know what to believe about religion. What do you believe?

  124. Usagichan says

    Oh dear, Mbswish, I took too long composing my post.

    Even if it were somehow proved that there is only a 1% chance god exists, wouldn’t that 1% chance to believe be better than the alternative of believing there is no god

    But what if you choose the wrong God? By believing in YHWH over, say, Shiva – or worse, Baal – you might be automatically damning yourself. Indeed, without some method of proving which is the true God, your odds of getting it right are minuscule – surely the risk of damnation from picking the wrong God to believe in outweighs the bet on picking the right one.

    I think you’ll need to come up with a better justification for belief than Pascal’s Wager before you start to convince people here. I guess I fell into the trap of assuming that you might want to live your only life in an intellectually free and unfettered way, and not waste it in the thrall of a bunch of Bronze Age fairy tales (see, that formulation works both ways ;))

  125. Kel, OM says

    I guess I’ve just fallen into the trap of assuming everyone would want eternal bliss and happiness.

    No, you’ve fallen into the trap of believing that eternal bliss and happiness come from ignorance, from living an unexamined life and having no curiosity about the world at all. By setting the impossible standard of absolute certainty on all matters non-mathematical, all you’re doing is digging a hole in which you can remain ignorant. Meanwhile you’re operating in the world of the empirical, where there isn’t absolute certainty but there is progress.

    See the computer in front of you? It has a mathematical capability comparable to that of the entire human species. Underneath it lies physics. These physical “laws” have been derived through the scientific method, an a posteriori process that is inherently uncertain. And that’s fine, because uncertainty is not a weakness – it’s a strength!

    Being on a computer in a house that is no doubt electrified makes you a hypocrite: while you decry knowledge that isn’t absolute, the fact of the matter is that all those modern conveniences are based on the scientific process. Not being able to say with absolute certainty that a computer works through electrons passing through semiconducting material arranged so as to create logic gates does not mean that saying computers run on magic blue smoke is anything but wrong. We can’t be absolutely certain that computers don’t run on magic blue smoke – there’s just no reason to assume they do, and so much to suggest that they don’t.

  126. Mbswish321 says

    I thank all of you for your help! I appreciate your comments.
    I was wondering about something, since from what I’m reading there more than likely isn’t a god in the biblical sense: What happens when I die? What do atheists believe? Do I just do the best I can while I’m on earth, and then once I’m dead hopefully I’m remembered? I guess it’s sad that heaven isn’t real. And I’m not disguising what I believe; I’m not pretending I believe in something to get good treatment from god. I don’t know what I believe, is my point. Maybe I never will.

  127. Mr T says

    I really don’t know what to believe about religion. What do you believe?

    I am an agnostic atheist, although I was raised in a large Catholic family (I wouldn’t recommend it, but I still love them).

    I believe this conversation should probably move to the endless thread, since this is an old post and we’re going off-topic.

    Very briefly, I think “gods” and the “supernatural” have always been unintelligible concepts in people’s minds. Until something about such a concept can be rendered useful and definite, it is meaningless and does not exist. I could say, “I believe qpdskxcvnai exists”, but I’ve said nothing coherent whatsoever until I’ve explained what “qpdskxcvnai” means and how one would determine whether what I’ve said is true. Thus, there is no chance whatsoever that what I’ve said is true. Until there is something to back it up, it’s only a bunch of meaningless words with no relationship to the real world. Try doing this with “god” or “soul”. The words cannot possibly refer to anything real until someone has evidence indicating what they are, what they do, or at least some idea of how their existence could in principle be verified.

  128. Mbswish321 says

    I never meant to offend anyone, and I never thought I’d be attacked like I have been. I wasn’t using any theory or trying to say I know enough about this to be justified in any way. I’m sorry my messages were misleading and angered people. Please don’t attack me, I was just trying to get help. I was simply looking for help, not to be attacked. I don’t blame you for judging me, and I don’t hold any ill will. I understand what you all are saying and I’m sorry if I offended anyone. I won’t post anymore. I’m sorry.

  129. Mr T says

    What happens when I die? What do atheists believe?

    I personally believe that when you die, the rest of the universe will continue to exist for quite a while. You may find that comforting, or not at all. It doesn’t matter.

    What happened to you before you were born? You didn’t experience that either, and that’s a time-reversal of what happens when you die: you’re not experiencing things. It is nothing to worry about. Your life while you still have it is what should concern you.

  130. Mr T says

    Please don’t attack me, I was just trying to get help. I was simply looking for help, not to be attacked.

    I’m sorry, but I don’t know why you think you’re being attacked. We do tend to have no respect for religious beliefs, which should be expected. I’ve felt no need to attack you personally, especially since you seem to be polite and sincere.

    Please feel free to comment, but I would recommend that we eventually move this to the endless thread, since nothing is off-topic there.

  131. Usagichan says

    Mbswish –

    No intent to attack you personally – however you did use a (rather poor) argument for belief in god – the replies were refutations of the argument, not personal attacks on you.

    I doubt any of the posters were actually offended, but people here are used to dealing with posters of a rather dogmatic stripe, and tend to respond robustly. I would advise you not to be quiet: keep asking questions, argue, and learn to defend your positions, but don’t be afraid to change your mind.

    People here can seem too forthright sometimes, but most have pretty thick skins. Avoid dogma, ask questions of the posters here, ask them of yourself… substance is what matters – sorry if my earlier response sounded like an attack – and remember, if you put forward an argument, be prepared to defend it (or question it)!

    Good luck with the thinking :)

  132. Kel, OM says

    What do you believe?

    When it comes to religion? I believe that religion is a human construct, something that has its roots in tribal culture, where supernatural thinking is not considered a separate worldview but just the way things are; that with the dawn of civilisation, religion served as a means of codifying behaviour of a particular type, as well as of explaining the unknown. That particular destructive events were tied to immorality – something that still persists in some circles today (e.g. blaming Hurricane Katrina on the sins of New Orleans) – is a sure sign of the relationship between religion as an explanatory process and religion as an ethical one.

    These days we know that earthquakes aren’t caused by gay men having sex, but by the movement of tectonic plates. Hurricanes aren’t caused by promiscuity, but by pressure differentials in the atmosphere. Bushfires aren’t God’s wrath for abortion, but are usually started by human intervention or lightning. 9/11 was the work of terrorists, and the global financial crisis came down to business practices.

    So what do I believe when it comes to religion? That religion is a social construct, a human endeavour that attempted to understand the relationship between humanity and nature. As an explanation for the world, science has superseded it; now we can understand the world through empirical measurement. As for its existential and ethical teachings, those too have been superseded by philosophers and other thinkers. Religion once had a domain; that domain has crumbled. And pushing the supernatural as part of the package? That just makes it absurd.