A short argument against immortality


This weekend, I got into an argument with Eneasz Brodski, Eliezer Yudkowsky, and David Brin about immortality. We each took a few minutes to state our position, and I prepared my remarks ahead of time, so here they are, more or less.

First, let me say that I’m all in favor of research on aging, and I think science has great potential to prolong healthy lives. But I think immortality, or even a close approximation to it, is both impossible and undesirable.

Why is it impossible? I’ll cite the laws of thermodynamics. Entropy rules. There is no escaping it. When we’re looking for ways to prolong life indefinitely, I don’t think there’s enough appreciation of the inevitability of information loss in any system in dynamic equilibrium, which is what life is — a chemical process in dynamic equilibrium. What that means is that our existence isn’t static at all, but involves constant turnover, growth, and renewal.

We already have a potent defense against death put in place by evolution: it’s called more death. That sounds contradictory, I know, but that’s the way it works. Every cell replication has a probability of corruption and error, and our defense against that is to constrain every subpopulation of cells and tissues with a limited lifespan, with replacement by a slow-growing population of stem cells (which also accumulates error, but at a slower rate). We permit virtual immortality of a lineage by periodic total resets: reproduction is a process that expands a small population of stem cells into a larger population of gametes that are then winnowed by selection to remove nonviable errors…but all of the progeny carry “errors” or differences from the parent.

In all the writing from transhumanists about immortality that I’ve read, none has adequately addressed this simple physical problem of the nature of life, and all have taken a naively static view of our existence.

The undesirability of immortality derives from the proponents’ emphasis on what is good for the individual over what is good for the population. There’s a kind of selfish appeal to perpetuating oneself forever, but from the perspective of a population, such individuals have an analog: they are cancers. That’s exactly what a cancer is: a unit of the organism, a liver cell or skin cell, that has successfully shed the governors on its mortality and achieved immortality…and grows selfishly, at the expense of the organism.

Of course, the conversation spun on from there, and much more was said on all sides.

The transhumanists certainly had an ambitious vision for the future — they talked rather blithely about living for billions of years or more, but just their idea of individuals living for 10,000 years seemed naive and unsupportable to me. I don’t think it’s even meaningful to talk about “me”, an organic being living in a limited anthropoid form, getting translated into a “me” existing in silico with a swarm of AIs sharing my new ecosystem. That’s a transition so great that my current identity is irrelevant, so why seek to perpetuate the self?

Comments

  1. Holms says

    This seems to go hand in hand with the ‘technological singularity’ crap. In every explanation I have ever heard, the grand vision is achieved only by hand-waving vast difficulties and assumptions with ‘future technology will solve that’.

    Rubbish.

  2. says

    I’ve seen many good arguments that lifespans measured in centuries would be a disaster: a person who has only 50 years to accumulate money and power is bad enough, but what if they have 500 years? Or 5,000 years?

    And there is no way this could ever be made available to the general population: what would happen to food, drinkable water and living space if all 8 billion people could live such lifespans, and still be able to breed, with their children being able to live that long as well? So it remains in the hands of a very elite few, who are now fighting to get it all — literally — using people whose lives mean as little as a mayfly’s.

    It would not be a pretty picture at all.

  3. sawells says

    Isn’t there a further evolutionary argument to be made – that the longer you extend your lifespan, the further you move away from the environment to which you’re adapted? As time passes you’d have to put increasing efforts into either adapting yourself to changes in the environment, with every adaptation making you more different to the “you” you were trying to preserve, or insulating yourself from the environment, which again makes mere persistence pointless because you won’t get to actually experience the new world you wanted to live to see.

  4. octopod says

    But that’s crap, because if you thought of it in those terms you’d never want to let yourself change at all because changing would be equivalent to dying. Obviously that’s not true — so perhaps it’s more useful in this case to think of the carbon-to-silicon transition as equivalent to a partial metamorphosis. It would be immortality in that you’d have a more or less continuous sense of self, that’s all.

  5. Lofty says

    Living for much longer than I am likely now would turn me into an even grumpier old man. Get off my lawn, you mere thousand year olds.

  6. says

    How about we concentrate on creating a global society in which every human can live freely and comfortably, without fear of deprivation or exploitation, and every baby born can reasonably expect to see its 80th or 90th birthday? Then we can worry about pushing that envelope outwards.

  7. dianne says

    I presume most or all of the people commenting here about the evils of immortality are not actually suicidal. Therefore, you must see some meaning or pleasure in life or existence. So what is “enough”? People used to rarely live past 40. Is 40 too old? Should we go back to a more “natural” state where most people die before age 40 in order to avoid being selfish (wanting, on average, about an additional 40 years of life) or accumulating too much power? If it’s ok to live to 80, what is different, in principle, about living to 100? Or 500? Or 1000?

  8. says

    I saw an article years ago that claimed that even if we stopped aging it would only delay death, not end it. It argued that sooner or later you’d end up in an accident or other misadventure that would kill you, no matter how efficient emergency services and medicine might be. I’m not sure what estimate they made, but I think it was only a couple of thousand years at the most (a rough sketch of that arithmetic appears below).

    Another thing I wonder about such medical immortality is how memory would work after several hundred years. In fiction immortals are always portrayed as having accurate memories of their lives. But I wonder if you lived long enough you wouldn’t lose a lot of your early memories simply because the brain didn’t evolve to contain more than a few decades worth. (It would certainly make a good plot element in a story about immortals.)
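
    A minimal sketch, for what it’s worth, of the arithmetic behind that kind of estimate, assuming a constant, age-independent annual risk of fatal accident (the rate used here is purely illustrative, not a figure from the article mentioned above):

    ```python
    # Sketch: lifespan limited only by accidents, assuming a constant
    # annual probability p of dying in an accident or misadventure.
    # Under this geometric model the expected lifespan is about 1/p,
    # and the chance of surviving t years is (1 - p) ** t.

    p = 1 / 2000  # illustrative rate: one fatal accident per 2,000 person-years

    print(f"Expected lifespan: {1 / p:.0f} years")
    print(f"Chance of reaching 1,000 years: {(1 - p) ** 1000:.0%}")
    print(f"Chance of reaching 10,000 years: {(1 - p) ** 10000:.1%}")
    ```

    With that illustrative rate the expected lifespan works out to about 2,000 years, and the odds of reaching 10,000 years fall below one percent, which is roughly the shape of the claim.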

  9. Anders Kehlet says

    That’s a transition so great that my current identity is irrelevant, so why seek to perpetuate the self?

    To experience it.
    When I do psilocybin I am still ‘me’, even though the transformation is incomprehensible.
    That aside, my mind is very different from what it was ten years ago and it will be more different still ten years from now.
    If my mind ceased to evolve I might as well die.

  10. dianne says

    How about we concentrate on creating a global society in which every human can live freely and comfortably, without fear of deprivation or exploitation, and every baby born can reasonably expect to see its 80th or 90th birthday? Then we can worry about pushing that envelope outwards.

    Why not both? What more urgent thing do we have to do with our resources than to make sure that as many people as possible can live as long and happily as possible, both in terms of implementing available technology and finding new technology to further improve life and health? The only other urgent matter I see is the health of the planet and avoiding an eco-disaster.

  11. Bob Dowling says

    The thermodynamics argument doesn’t hold water. Entropy increases in a closed system. But we have plenty of energy gradients to reverse entropy in a small component (ourselves) of the larger system (the universe).

    The arguments about the morality and desirability of immortality are far more convincing.

  12. says

    octopod:

    It would be immortality in that you’d have a more or less continuous sense of self, that’s all.

    But it wouldn’t be continuous. The silicon me might have an illusion of continuity, but the biological me, the identity I inhabit, would no longer exist. That does no good for the biological me. This isn’t immortality for my current sense of self. It’s an egotistical indulgence, the assumption that my personality and knowledge are worth preserving.

  13. dianne says

    immortality is impossible.

    Given that the universe is probably finite, hard to argue. But what does that have to do with whether or not it’s a good idea to try to extend people’s life expectancy to 200 years, which is almost certainly not outside of the life expectancy of the universe?

  14. alanuk says

    I did not get very far into the post:

    “…I’ll cite the laws of thermodynamics. Entropy rules. There is no escaping it. When we’re looking for ways to prolong life indefinitely, I don’t think there’s enough appreciation of the inevitability of information loss in any system in dynamic equilibrium, which is what life is…”

    What! PZ writing like a creationist! I know you biologists are starting to get into quantum mechanics but this came as a bit of a shock.

  15. octopod says

    Surely it’s an egotistical indulgence to think that your biological self is worth preserving, too?

    I should avoid this argument; I’ve had it before and it’s very difficult to get it to go anywhere. It seems that you have made up your mind to accept the axiom that the illusion of self is distinct from some kind of “real” self, which I reject, so we’ve gotten down to the level of axioms, where logic and argument are pretty much powerless.

  16. says

    The predictions for electronic immortality tend to be no different than those for “traditional” life after death. They assume consciousness, personality, etc. have some sort of separate existence from the brain, and that they can be transferred intact from the brain. It’s the eternal soul of religions with science trappings added.

  17. David Wilford says

    I think any research into immortality has to include finding out why Keith Richards is still alive. There has to be *something* there that’s preventing death in his case, big time.

  18. Marshall says

    I don’t think technical arguments against “immortality” (and I use that word loosely) are valid, because they rely on our technology, which will soon be immensely outdated. In 2,000 years I don’t think any person on this planet can even remotely predict what society will be like, or what humans will be like, if we are still around.

    I say “loosely” because the Entropy issue is one that appears inevitable. But we’re talking 10^50 years, not a couple billion years. If we do hit a Big Freeze, it won’t matter what we want or what we should do, because it’s going to happen anyway. I don’t see why the deterioration of usable energy is even an issue, because if it happens, it happens, there’s nothing we can do about it, and we shouldn’t even bother “planning” for it, because every single plan, or non-plan, has the exact same outcome.

    With respect to preserving the “self” by a metamorphosis into in silico bodies–this is a very interesting discussion that I think deserves more than a single blog post. I am unable to define what my “self” really is, and Steve Novella had a few blog posts that spawned some very interesting discussions with respect to making perfect copies of oneself and whether those copies are still the same person, have the same consciousness, etc., and whether or not some sort of temporal and physical continuity is necessary to retain a sense of self. If we slowly replaced our biological cells with electrical ones, one by one, the transition would probably be slow enough that you’d still consider yourself you; what then?

    I see no moral problems with living for very long times. It will surely turn current societal norms upside down, but if someone gave me the opportunity to live for thousands of years at least, with the option to end my life whenever I wanted–I would take it in a heartbeat.

  19. unbound says

    I’ve seen many good arguments that lifespans measured in centuries would be a disaster: a person who has only 50 years to accumulate money and power is bad enough, but what if they have 500 years? Or 5,000 years?

    Gregory has one of the most important issues right here. It’s bad enough that we have the super-rich families that can keep a clamp on power for several generations (before one of the children finally messes up); you really don’t want these people living forever. Most of the super-rich don’t get there by being smarter…they get there by being extremely aggressive assholes with very little (usually zero) regard for the masses (other than using them as stepping stones to their own wealth).

  20. says

    Personally, I’ve been hearing all my life about the Serious Philosophical Issues posed by life extension, and my attitude has always been that I’m willing to grapple with those issues for as many centuries as it takes.

    – Patrick Nielsen Hayden

  21. machintelligence says

    Surely it’s an egotistical indulgence to think that your biological self is worth preserving, too?

    Of course it is, but the urge was hard wired into us by natural selection, so I think we can be forgiven. I have no problem (obviously) with the thought of transferring my personality or self or whatever into a silicon based life form. I want to go on a grand tour of the universe and moist robots are not well adapted for that.

  22. mikeconley says

    But I wonder if you lived long enough you wouldn’t lose a lot of your early memories simply because the brain didn’t evolve to contain more than a few decades worth. (It would certainly make a good plot element in a story about immortals.)

    Already done: See Jerome Bixby’s The Man from Earth.

  23. jamessweet says

    First things out of the way: Although I agree with the bulk of PZ’s post, I think the last paragraph is far too dismissive. Certainly it is not at all clear that a digital copy of one’s self would be the “same person” in certain key meaningful ways, but it’s not at all clear that it wouldn’t be either. The persistence of identity is a very difficult problem, not to be swept away so lightly. octopod @4 provides a pretty good reductio ad absurdum of PZ’s dismissiveness in the end.

    I have a feeling that if such a technology did come to pass, people would apply the Duck Test: When my friend “went silicon”, did the result still look, act, and quack like my friend? If yes, well then it must be my friend! People would get used to the idea really fast, I think.

    Okay, that out of the way — like I say, I agree with the bulk of PZ’s post, except that I think the first part more or less obviates the second part, at least for the foreseeable future. There are some serious, serious ethical conundrums posed by deep life extension. But extending life more than a couple decades is just not in the cards anytime soon, so I’m not sure it matters.

  24. dianne says

    @21: The argument that immortality or even modest life extension is a bad idea because the Koch brothers and Donald Trump will get it first is one of the more compelling ones in my opinion. But I’m not willing to commit suicide to kill Trump, so I can’t really say I’m convinced by the argument that we should all commit passive suicide to get them either.

  25. badgersdaughter says

    I thought about greatly enhanced lifespans when it flooded around here and they talked about the hundred-year flood levels. A hundred-year event is statistically expected to happen about once in a given hundred-year period. Naturally a lifespan of several centuries would be expected to include more than one. A thousand-year lifespan would be expected to have at least one “thousand-year” event in it. These events could be catastrophic. Significant psychological breakthroughs would probably have to be made to help someone cope with the possibility of living through that many disasters, that long in which to become a victim of crime, and so forth. I imagine it’s possible. But how would you treat someone who’s dealt with schizophrenia for the past 400 years? Who has been involuntarily imprisoned for the past 200? Who can’t seem to get over the loss of their family in a war 300 years ago? Or even counseling someone whose moral and social outlook has not changed since they were a young person hundreds of years ago? I’m not saying immortality is a bad thing. I’m just saying that it could have the effect of causing or perpetuating a lot of suffering in ways barely guessed at today.
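
    A minimal sketch of the statistics behind that point, assuming each year carries an independent 1-in-N chance of an “N-year” event (a simplification; real floods and disasters are neither independent nor evenly spread):

    ```python
    # Sketch: how many "N-year" events a long life would be expected to
    # include, treating each year as an independent 1-in-N chance.

    def expected_events(lifespan_years: int, return_period: int) -> float:
        """Expected number of return_period-year events over a lifespan."""
        return lifespan_years / return_period

    def prob_at_least_one(lifespan_years: int, return_period: int) -> float:
        """Probability of experiencing at least one such event."""
        return 1 - (1 - 1 / return_period) ** lifespan_years

    for lifespan in (80, 500, 1000):
        print(f"{lifespan}-year life: "
              f"~{expected_events(lifespan, 100):.1f} hundred-year events, "
              f"P(at least one thousand-year event) = "
              f"{prob_at_least_one(lifespan, 1000):.0%}")
    ```

    On those assumptions, a thousand-year life would be expected to include about ten hundred-year events and has roughly a 63% chance of at least one thousand-year event.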

  26. Beatrice, an amateur cynic looking for a happy thought says

    Dianne,

    Research needs money. If given a choice between feeding a village or seven and investing in immortality, what do you think rich bastards will choose?

    All you’ll get is a 5,000-year-old Donald Trump, and illness and hunger plaguing whatever part of the world will be fucked up by World War _ at that point.

  27. says

    Whatever means of life-prolongation we humans invent will, inevitably, be available only to the richest humans — for a very long time after its invention. And that gross inequality of wealth — on top of all the other inequalities we’re already familiar with — will spawn such a backlash of resentment that the 1% will find themselves and their kids hiding in mile-deep bunkers for their entire lives, or abandoning Earth for self-sustaining (and heavily-armed) permanent settlements in space. For a movie version of this scenario, think “Elysium” with a touch of “Fierce Creatures.”

  28. badgersdaughter says

    Do I want to live a thousand years? Do I worry too much about how many wars, pandemics, global warming events, famines, depressions, and social stupidity trends I would worry about in that time? Do I have GAD? What are the long-term effects of taking Xanax for 900 years? LOL…

  29. left0ver1under says

    The tune of “Would You Like To Swing On A Star?” just started running through my head (re: Turritopsis nutricula).

    Forget the issue of whether immortality is biologically possible, how about the fact that being immortal would get boring? Or the social isolation of growing up in a certain era and then watching that era and its people eventually disappear? It would be like being forced to live in a foreign country with no home to go back to (e.g. Edward Snowden).

    Part of the fun of living (for me, anyway) is making it last as long as possible, enjoying it and learning while I still can.

  30. says

    I have a feeling that if such a technology did come to pass, people would apply the Duck Test: When my friend “went silicon”, did the result still look, act, and quack like my friend?

    The answer will almost surely be “no.” It is sheer folly to think that a human mind will function the same outside of a human body as it did inside one — if it’s even able to “function” at all. Our thought patterns and personalities are almost completely, if not completely, determined by our neural hard-wiring, and said hard-wiring is closely connected to how our bodies work. Even if you could “copy-paste” an entire human mind from ordinary flesh to some other hardware/wetware/liveware/whatever, I really don’t think there’s any way to predict how that ridiculously complex set of interrelated thought-patterns would change or adapt (or fail to adapt) to its new vessel.

  31. Randomfactor says

    One reason not to do both is, as the first part of my argument states, immortality is impossible.

    When a distinguished, but elderly, scientist says that something is impossible, he’s very probably wrong. –(one of Clarke’s three laws).

    Of course, what “elderly” means in this context is debatable. It’s been suggested that the cutoff point in physics is 30 or so.

  32. says

    Dianne @11: Your last sentence is a prerequisite for achieving my first. If we can’t accomplish that with a life expectancy of ~80, it will be a disaster with significantly longer lifespans (unless we’re going the in silico route, and our robot bodies are far less resource-intensive to manufacture and sustain).

    Also @8: Historical life expectancies were largely a function of infection, privation (which contributed to the previous cause), accident and violence. We know in principle how to fix most of those, and to a great extent have implemented those solutions, at least in the West. Mortality in the ninth decade has a number of causes which seemingly can be summed up as “the body just wears out” — even given adequate lifelong nutrition and good preventative care. Fixing the causes, or working around the effects, of that decay is likely to be a harder problem. Not that we shouldn’t try — being now in my sixth decade, and with a few subsystems already no longer working quite at “factory spec”, I’m all in favour of, at least, staving off senescence, if not death itself.

  33. Beatrice, an amateur cynic looking for a happy thought says

    Also, overpopulation.
    Unless we would have already colonized other planets by that point.

  34. kemist, Dark Lord of the Sith says

    But I wonder if you lived long enough you wouldn’t lose a lot of your early memories simply because the brain didn’t evolve to contain more than a few decades worth. (It would certainly make a good plot element in a story about immortals.)

    Or maybe it’d start to get slow and buggy like a hard disk that is too full. Unless you can build extensions to it.

    One thing I think would happen if people lived a few thousand years is that most of them would commit suicide out of sheer boredom. I don’t think most humans can handle that much time without becoming utterly disinterested in it all at some point. I know I couldn’t.

  35. dianne says

    If given a choice between feeding a village or seven and investing in immortality, what do you think rich bastards will choose?

    The current evidence suggests that the answer is neither. The current set of rich bastards are cutting funding like mad for medical research, including research into diseases that wealthy people get. That’s why I don’t like this argument: we’re fighting over a tiny crumb while ignoring the people sitting around with the rest of the cake.

  36. Beatrice, an amateur cynic looking for a happy thought says

    That’s why I don’t like this argument: we’re fighting over a tiny crumb while ignoring the people sitting around with the rest of the cake.

    I don’t know what the rest of the cake in this analogy is.

    You say that the rich don’t fund research even if it could improve their own lives? But that just limits our funds for research even more.

  37. ChasCPeterson says

    Four hundred quatloos on the bearded thrall.

    what life is — a chemical process in dynamic equilibrium. What that means is that our existence isn’t static at all, but involves constant turnover, growth, and renewal.

    I figure your argument is probably right, but I mean to quibble about word choice. Life is about as far from “equilibrium” (in a chemical or physical sense) as you can get; in fact equilibrium with the environment = death.
    That’s why we physiologists refer instead to a ‘dynamic steady state‘, where the steady state is maintained far from equilibrium by constant energy input and use.

  38. dianne says

    Also, overpopulation.

    Overpopulation is a major problem right now–in countries with low life expectancies and high birth rates. People worry about underpopulation and lack of replacement-level birth rates in many industrialized countries. Education, lower infant mortality (so people are willing to take the “risk” of only having a few children rather than having 10-20 and hoping one survives), and access to birth control will reduce overpopulation far more than limiting life expectancy.

  39. dianne says

    I don’t know what the rest of the cake in this analogy is.

    The money the 0.0001% has that they are not putting back into the economy in any form (taxes, investments, charitable donations, anything at all.)

    You say that the rich don’t fund research even if it could improve their own lives?

    Nope. I don’t understand it, but they don’t. The NIH is losing funding, including sections dedicated to research into heart disease, cancer, etc. (i.e. things that old, rich people get). Rich people simply don’t seem to be able to think long term. We need higher taxes. (Caution: rant is US-centric, but probably true in other places as well.)

  40. says

    You say that the rich don’t fund research even if it could improve their own lives?

    The rich don’t want GOVERNMENT to fund such research, because they would not be able to control who benefits from it. They would, however, be perfectly happy to fund such research through private entities, as long as such entities answer only to them, and have no obligation to share the fruits of their labors with anyone other than the owners of their assets.

  41. David Marjanović says

    I’m with comment 32. I am my body.

    (…Coming to think of it, I’ve been thinking so for long enough that it probably helped in my deconversion.)

    People used to rarely live past 40.

    Misunderstanding from average life expectancy at birth. When most babies die and the rest go on to live till 60 or 70, you get averages like 35 or 40.

    I saw an article years ago that claimed that even if we stopped aging it would only delay death, not end it. It argued that sooner or later you’d end up in an accident or other misadventure that would kill you, no matter how efficient emergency services and medicine might be. I’m not sure what the estimate they made was, but I think it was only a couple of thousand years at the most.

    That seems to be how turtles actually work. As you can imagine, the observations haven’t been concluded yet, but currently it looks like they simply don’t age. They live slowly enough that they can repair all of the normal damage. They get more fertile with age, not less…

    What! PZ writing like a creationist! I know you biologists are starting to get into quantum mechanics but this came as a bit of a shock.

    Do explain.

  42. consciousness razor says

    I don’t think it’s even meaningful to talk about “me”, an organic being living in a limited anthropoid form, getting translated into a “me” existing in silico with a swarm of AIs sharing my new ecosystem.

    *puts down bong*
    Would you use a Star Trek transporter, one that destroys the original and makes a clone somewhere else? More to the point: whether or not you’d choose to do it, would “you” be the clone, or does it only make sense to talk about “you” being the original? I mean, this is assuming the clone is physically identical in every way (except location obviously); but I figure which assumptions you make about identity or spatio-temporal continuity in the transporter case might be related to some of your intuitions about the brain-uploading case.

    Or it could have to do with what you think about AI in general. Is full-blown consciousness (of the type an organic being has) only possible in something that’s an “organic being”? That is, are you sort of like John Searle, with his Chinese Room argument, thinking that the materials of the brain make all the difference, not their functional relationships?

    If you’re not going down either of those roads, I don’t think you could be answering the same question when you claim it isn’t meaningful. You could talk about your identity as an organism (all the details about the exact micro-state of your body as it changes over time); but you could also talk about your subjective self-experience. The latter is closer to what the question is supposed to be about: could there be an experience of the same subjective-“you” your brain cooks up for the organism-you (same memories, same personality, etc.), even if it was a different kind of brain which is doing the cooking?

    I seriously doubt any kind of brain-uploading is ever likely to happen, but I have a lot easier time believing we could make AI some time in the very distant future. It’s pretty clear that there’s a whole lot going on under the hood in your brain that makes the subjective-“you” happen, but you don’t experience any of it. So as long as something is making the same subjective-“you,” changing the substrate from neurons to whatever else wouldn’t make a difference, as far as self-identity (in this sense) is concerned. It would think it was you, because it feels and remembers like you, except as your experiences begin to diverge. Or if it’s just plain-old AI (not a copy or an upload), it would be able to think it was “itself,” whatever that is. The point is that it would have the kind of identity you and I have, which is the kind of identity a rock or neuron or proton doesn’t have.

  43. David Marjanović says

    One thing I think would happen if people lived a few thousand years is that most of them would commit suicide out of sheer boredom. I don’t think most humans can handle that much time without becoming utterly disinterested in it all at some point. I know I couldn’t.

    I’m a scientist. I can’t imagine becoming bored. :-)

  44. dogfightwithdogma says

    @20

    I am unable to define what my “self” really is….

    I don’t know what “self” is, but physician and neuroscientist Robert A. Burton’s book A Skeptic’s Guide to the Mind: What Neuroscience Can and Cannot Tell Us About Ourselves provides a discussion on this subject that may be helpful. Here is one excerpt from Chapter 1 on the subject.

    To come up with anything remotely resembling a tentative answer to what the mind might be or do, we first need some working understanding of the placeholder for the mind – the self. A mind … is an integral aspect of a self, a part of what makes us an individual as opposed to an object. It’s the center of our being, the main control panel for our thoughts and actions. The self’s central function – creating thoughts and actions – is commonly what we mean when we talk of a mind.

    Burton writes that the “evidence is overwhelming: the most basic aspects of a sense of self – its physical dimensions and where we experience our center of awareness – are constructed from sensory perceptions.”

  45. dianne says

    Misunderstanding from average life expectancy at birth. When most babies die and the rest go on to live till 60 or 70, you get averages like 35 or 40.

    True, but not complete. There have always been people who have lived extremely long lives. But they have been rarer in the past. The life expectancy at 18 years is also higher now than it was in the Medieval era or ancient Rome.

  46. says

    The abnegation of reasoning that is necessary to believe that in silico consciousness is even possible makes Christianity seem like empirical wisdom. But it is fashionable, and as with all religions, once it is fashionable in a society, even questioning its absolute truth is not allowed. We must all believe in the Information Processing Theory of Mind, or be punished for our heresy. Since we’ve only ever learned to model neural activity with electronic computers, we are taught to assume that our brains work like (very badly built) electronic computers, and no alternatives will be tolerated.

    The real problem is that atheists never do achieve independence from social belief structures as they are taught to assume that they have already done, simply because they have rejected religion. They just change which belief structures they use to maintain their tribal truths. As soon as you get away from the strictest quantitative metrics and mathematical formulas, you’re dealing with opinions and interpretations, not science. As PZ’s frequent combat with evolutionary psychologists makes clear, opinions are held just as dearly by most atheists as by anyone else.

    But back on subject, it seems like a cowardly retreat to discuss personal immortality (being an indefinite, not an infinite, lifespan) in the context of entropy. It looks for all the world like you’re hand-waving to avoid dealing with a difficult philosophical subject, probably because that is exactly what you’re doing. There isn’t anything impossible about immortality, unless you specifically define immortality so that you make it impossible to achieve.

    That said, there isn’t anything particularly interesting about talking about it, since it is so hypothetical to begin with. And it all comes down, not to how long the lifespan is, but to how much variation of lifespan there is between different people, and how that is determined. A hundred, a thousand, a billion years; the assumption that there is some magical length wherein people will then finally be “more wise” in some stupendous way is just another fantasy like St. Peter and the Pearly Gates; ubiquitous but unsupported by facts or even a rigorous theory.

  47. says

    Would you use a Star Trek transporter, one that destroys the original and makes a clone somewhere else?

    No, because I have no way of knowing — or even being reasonably certain — that the “me” that steps into the transporter would continue to accumulate thoughts, perceptions and memories at the destination-point. From my own point of view, would you really be transporting me somewhere else, or would you just be killing me and replacing me with an identical clone somewhere else?

    And no, whether or not you believe in a “soul” or “spirit” of any sort doesn’t change the question — there’s no guarantee that your “soul” would automatically go to wherever your new body materializes.

  48. Anthony K says

    I, personally, can’t wait to see what mind-blowingly hilarious shit Singularians are tweeting at DefCon 3013.

  49. says

    We must all believe in the Information Processing Theory of Mind, or be punished for our heresy.

    The very existence of this thread, and countless other articles and conversations about the same subject, proves you wrong.

    The real problem is that atheists never do achieve independence from social belief structures as they are taught to assume that they have already done, simply because they have rejected religion. They just change which belief structures they use to maintain their tribal truths.

    Wait, lemme guess…you’re a Christian trying to inject bits of his evangelist/propagandist script into an atheist conversation.

    There isn’t anything impossible about immortality, unless you specifically define immortality so that you make it impossible to achieve.

    So give us an alternative definition that’s both meaningful and possible.

    That said, there isn’t anything particularly interesting about talking about it, since it is so hypothetical to begin with…

    Then why are you coming here to talk about it?

  50. Donnie says

    I am reluctant to agree that “immortality is impossible”. We humans are pretty fucking ingenious and we ‘typically’ assume our current understanding is ‘the end all, be all’ of everything. We also thought that the ‘speed of sound’ was impossible to break – but we did.

    At one point, we thought going to the moon was impossible – but we did.

    We think that breaking the ‘speed of light’ is impossible…and for now, yes, it is. With future sources of energy we may still not be able to break the speed of light, for that is probably a ‘Law’, but we could increase space travel to the point of approaching the ‘speed of light’.

    Thus, maybe the impossibility of immortality is a Law, but we could approach near-immortality, which would be a disaster for humans if nothing else changes. Longevity would become ‘Longevity, Inc.’, trademarked, where only the rich would be able to purchase it.

    Without evolution correspondingly altering our birthing patterns, and without changes in our consumption patterns – solar for energy, some type of biomass for our food source – we would lay waste to the Earth.

    /Sorry for the Monday morning ramblings……

  51. dianne says

    We already have a potent defense against death put in place by evolution: it’s called more death. That sounds contradictory, I know, but that’s the way it works. Every cell replication has a probability of corruption and error, and our defense against that is to constrain every subpopulation of cells and tissues with a limited lifespan, with replacement by a slow-growing population of stem cells (which also accumulates error, but at a slower rate).

    Off the main topic, but just wanted to point out about this point…This is exactly how a lot of natural cancer defense mechanisms work: They simply stop rapidly reproducing cells from continuing to live and reproduce. This is also essentially the way that the oldest and most nonspecific of chemotherapies work. In short, nature, in its great wisdom, has given us a sort of “natural” cisplatinum for fighting cancer. Now can we stop with the “natural=good” thing already?

  52. Anthony K says

    Would you use a Star Trek transporter, one that destroys the original and makes a clone somewhere else?

    Does being destroyed hurt just a little bit; the kind of pain that lets you feel you’re alive?

  53. alyosha says

    Perhaps I lack the imagination, but I find it difficult enough to subscribe to the view that what we deem to be the ‘self’ has any basis in reality, let alone that this nebulous collection of memories, values, emotions, faults and talents can be aspirated from our corporeal shells and uploaded into something more permanent without radically altering our consciousnesses (whatever that means when the transfer is complete). I enjoy the what-ifs posed by enthusiastic transhumanists, and it makes for good science fiction, but my thought is that the first generations of Homo novus will have about as much of the humanity we know now as those cryogenically frozen heads still awaiting reanimation. In a sense, even if we can survive our deaths in this way, we still essentially die. I’d have to imagine that any synthetic body would still have to simulate neuro-biological chemistry and plasticity. Again, perhaps I lack imagination. Do people think the passions that define us will follow us into our new forms? How does the desire to survive and reproduce express itself as an artificial intelligence not bound by the restraints of emotion?
    In any event, immortality kind of devalues life. The thread began along the lines of who would have access to such an upgrade. Last words to Chaplin’s Adenoid Hynkel:
    ‘The hate of men will pass, and dictators die, and the power they took from people will return to the people and so long as men die, liberty will never perish.’

  54. dogfightwithdogma says

    @31

    …how about the fact that being immortal would get boring?

    What makes this a fact? I know it is quite common for many people to assume this to be true. I’ve had discussions with friends locally about immortality and I frequently hear them say this and declare it to be a fact. But I’ve never understood what about this statement makes it a fact. For myself, I don’t think I would be bored if I were immortal. I suspect I would experience a whole lot of human emotions with greater intensity, such as frustration, etc. But I don’t think boredom would be one of them.

  55. consciousness razor says

    No, because I have no way of knowing — or even being reasonably certain — that the “me” that steps into the transporter would continue to accumulate thoughts, perceptions and memories at the destination-point.

    Uh… I don’t know what this means.

    From my own point of view, would you really be transporting me somewhere else, or would you just be killing me and replacing me with an identical clone somewhere else?

    The latter. It’s not a “transporter” in the literal sense of moving your physical body from A to B to C, etc. It’s more like a fax machine that destroys the paper with the information, transmits the information and prints it onto a new piece of paper.

    So that’s why I said the point isn’t really whether or not you would choose to do that. The original you would cease to exist. Some new thing would start to think it was you, and that clone would have continuity in terms of his/her/its experiences and in terms of yours up until your death. So I’d need to have a very good reason to do it, but it’s not because I’m doubting whether or not it would happen the way I think it would. It’s because I’d have to die; but if that would mean my clone could go save lots of people’s lives, or some convoluted scenario like that, it could be an ethical thing to do.

  56. Robert B. says

    Technically, perfect immortality is impossible due to thermodynamics. In practical terms, thermodynamics will theoretically allow lives so long that it’s hard to imagine the difference. But I usually say “radical life-extension” instead of “immortality” for precision’s sake.

    As far as thermodynamics goes, the continuous existence of life has been a billions-of-years-long local entropy inversion. The physics doesn’t care that no individual organism has lasted that long – there has been a direct chain of cell division going all the way back to the first cells, and complex intelligence has been continuously present for millions of years. That means that there can’t be any hard-and-fast physical laws against maintaining an organism for at least a few million years, a lifespan so long that I don’t think we can imagine it from our current perspective. The only thing standing in our way is the (presumably immense, since still unsolved) technical challenges. Humans are very good at solving technical challenges.

    That said, I really don’t think computer-uploading is going to be the answer. The human mind is very much situated in its biology. To get a human mind to run on a computer and be faithful to the original, you’d have to emulate all that biology, which seems unnecessarily difficult. It’s like what they say about building bipedal war machines (i.e., mecha, “giant robots”) – theoretically you could do it, but in practice by the time you can, you’ve had the tech to do something more effective for a long time. I’d bet the real solution is something biological/medical – artificial stem cells, maybe, copied from a sample taken from youth and known to be error free, and implanted in the body to replace stem cells that have accumulated errors over time. Or, more likely, a combination of several medical technologies that a non-doctor will not be able to guess in advance.

    The sociological challenges are real, but we’ve been more or less figuring out how to deal with the politics/sociology/ethics of our current, less radical life extension tech so far. (Not to understate that challenge, either, given how big a political issue healthcare is in some countries.) So far in human history, the development of tech has been slow enough that we can solve the sociological problems it raises as we go, though with temporary disruptions that I hope we can learn to reduce. In other words, if we do tech like we’ve always done tech, we’re not going to go from 70 years to immortal in one invention, we’re going to extend life spans relatively gradually over time. If that pattern continues, we can presumably continue to adapt to our own tech more-or-less fast enough. And to change that gradual development curve, we would need a radical advance in intelligence. If that intelligence is ethical, it will be smart enough to rebuild society to fit the new tech. If that intelligence is not ethical, we’re all screwed anyway.

    Also, keep your eye on the ethical ball, here. Whatever your argument against the ethics of radical life-extension is, its implied last line is “therefore, everyone must die.” Whatever case you make needs to be good enough to support that conclusion. Good luck.

  57. dianne says

    ‘The hate of men will pass, and dictators die, and the power they took from people will return to the people and so long as men die, liberty will never perish.’

    But the dictators reappear again in new guises. Maybe if people lived longer, we’d remember the last time a plausible, charismatic person came by offering to solve all our problems and remember how well it worked out then. Maybe, for example, Japan is now flirting with nationalism because the generation that can say from personal experience, “Been there, done that. Bad plan,” is now dying and the younger people don’t know what they’re voting for.

  58. Anthony K says

    For myself, I don’t think I would be bored if I were immortal.

    I’ve known very few people who think they’d be bored with 90 years. I’ve known a number of bored 90-year-olds.

    Even noting the lack of accuracy in declarations by the young about the old would get tedious, eventually.

    Even so, I think boredom would be less of an issue than surviving the various neuroses, depression, and trauma accumulated over a lifetime. Healthy people fear memory loss. The rest of us somewhat wistfully covet it.

  59. Anthony K says

    Maybe, for example, Japan is now flirting with nationalism because the generation that can say from personal experience, “Been there, done that. Bad plan,” is now dying and the younger people don’t know what they’re voting for.

    I doubt that. Ever met an anti-immigrant WWII vet?

    Next generation or not, people don’t learn shit.

  60. says

    On the matter of Star Trek transporters:

    “no way of knowing — or even being reasonably certain — that the “me” that steps into the transporter would continue to accumulate thoughts, perceptions and memories at the destination-point”

    You have no way of knowing (let’s set aside the ambiguity of ‘reasonably certain’ for now) that the “you” that woke up this morning was actually the same “you” that went to sleep last night. Just because you remembered being that person doesn’t mean you are, any more than remembering what happened before you got in the transporter does.

    Or, rather, it does, and you have nothing more to fear from a transporter than you do from a good night’s sleep, existentially speaking.

  61. dianne says

    You have no way of knowing (let’s set aside the ambiguity of ‘reasonably certain’ for now) that the “you” that woke up this morning was actually the same “you” that went to sleep last night.

    Or the “you” now is the “you” that wrote this sentence.

  62. says

    RagingBee@#53:

    “The very existence of this thread, and countless other articles and conversations about the same subject, proves you wrong.”

    This thread doesn’t question the Information Processing Theory of Mind at all. I’m not just talking about the feasibility of in silico consciousness; not everyone who assumes the Information Processing Theory of Mind believes you can transfer consciousness into electronic hardware. But just about everyone assumes the Information Processing Theory of Mind is valid, regardless.

    “you’re a Christian trying to inject bits of his evangelist/propagandist script into an atheist conversation.”

    No, actually, I’m an atheist, who has perfectly predicted how you would react to my points.

    So give us an alternative definition that’s both meaningful and possible.

    I already did. Try reading comments more carefully before you respond to them. A meaningful definition of immortality would be an indefinite lifespan rather than an eternal one.

    Then why are you coming here to talk about it?

    I’m not talking about immortality. I’m talking about your response to other people talking about immortality. And that is positively fascinating, let me tell you.

  63. alyosha says

    Robert B.,
    Good point on ethical considerations vis-à-vis the necessity of human turnover. The quote was more of a flourish than anything. But if we automatically consider immortality to be some evil that must be avoided simply because it is an affront to the natural order, our decision must be based more on emotional reaction than reason. The binary valuation is usually life:good, death:bad but I take no moral position on deathlessness. I’d merely find it dehumanising. But you’re right, synthetic immortality is worthless to our still mortal side if it isn’t shackled to the ethics we’ve evolved thus far.

  64. sigurd jorsalfar says

    In a world of immortals, there would always be immortals who, in spite of their ‘immortality’, die as a result of mishaps or criminal acts.

    It’s interesting to ponder the possible differences between present-day mortal coping strategies toward these types of deaths, and the strategies surviving ‘immortals’ might employ to cope with such tragedies. Would death become even more of a tragedy to them, because it was almost entirely avoidable? Would this make it seem even more ‘senseless’ to them? Or would it be seen as even less of a tragedy because the deceased had already lived so long?

  65. consciousness razor says

    You have no way of knowing (let’s set aside the ambiguity of ‘reasonably certain’ for now) that the “you” that woke up this morning was actually the same “you” that went to sleep last night. Just because you remembered being that person doesn’t mean you are, any more than remembering what happened before you got in the transporter does.

    If so then, what does remembering something mean? For all you know, if this is supposed to be convincing, it could happen any time even while you thought you were wide awake just a second ago, or a second before that. Or maybe I’m a brain in a vat dreaming it’s a butterfly, who’s being tricked by an evil demon who thinks it’s a p-zombie in the Matrix who likes to eat Boltzmann brains. We could keep going like this for a long time. Or maybe there’s no reason to be this absurd about it.

    All I get from it is that we’re never exactly the same from moment to moment anyway.

  66. consciousness razor says

    This thread doesn’t question the Information Processing Theory of Mind at all.

    What is the question?

    I’m not just talking about the feasibility of in silico consciousness; not everyone who assumes the Information Processing Theory of Mind believes you can transfer consciousness into electronic hardware. But just about everyone assumes the Information Processing Theory of Mind is valid, regardless.

    Well, functionalism isn’t just a claim about brains-with-neurons on the one hand and electronic computers on the other. We could just as well talk about the mechanical computers from way-back-when. It’s just that electronic computers are more practical in a lot of ways. But it could be made of anything whatsoever, since we’re only concerned with getting the same result. The thing is, a conscious entity made of pulleys and gears and billiard balls (for example) would take a lot more space and time and energy than anyone wants to spend.

  67. says

    Regarding the memory thing….

    There is a growing belief among memory researchers that the brain relies on “archetypes.” You actually have only one or two physical memories of the taste of bacon: all of the apparent memories of bacon link back to them. REM sleep is when the brain recompiles, tossing out actual memories from short-term storage and integrating the day’s experiences into long-term storage with heavy object reuse (pardon the computerese.)

    According to this model, children learn faster because they have fewer archetypes: they are building a “library” and links into them are pretty straightforward. As we get older, though, storing and linking novel information becomes more difficult and memory begins to ossify. Someone who pursues life-long learning can stave this off, but not completely. To use another computer example, the problem does not appear to be one of storage so much as the storage becoming fragmented. The ability to link begins to suffer, and memories begin to get lost in the shuffle.

    Without a major redesign of how the brain stores memories, very long lifespans will probably bring us to a point where novel experiences cannot be integrated at all. We see this sort of slow down in people who are 90 and 100; I cannot imagine what it would be like for someone who is 200, much less 500 or 1000.

  68. Shplane, Spess Alium says

    (let’s set aside the ambiguity of ‘reasonably certain’ for now)

    Yes, because just ignoring the massive difference in degree between “The particles that comprise me went into low-power-consumption mode for a while” and “The particles that comprise me were broken apart and a bunch of other particles were put in the same shape” is entirely logical. The first one is just as likely to result in the ceasing of an individual’s consciousness as the other. Totally no different at all.

    I seriously don’t understand why anyone thinks the “It’s a perfect clone of you! Obviously that means that the first you exploding is ok!” argument isn’t ridiculous. There is no reason to believe that any given human being does not entirely arise from the chemical reactions within their bodies. If those chemical reactions are ceased, some other chemical reactions that look the same popping up somewhere else isn’t going to suddenly cause that consciousness to transfer over. No, a perfect copy of me is not me. It’s a perfect copy of me. Not all philosophy is masturbatory, but that sure as hell is.

    That isn’t to say that a gradual replacement scheme of some sort couldn’t potentially result in a continuation of the same consciousness in a more resilient body, but just copying someone’s brain into their smart phone isn’t going to result in them being immortal, it’s going to result in there being a copy of them on their smart phone. Very little is likely to produce real immortality, even if I would very much like it to and think it’s kind of an asshole argument to say “Well, you’d be displacing future people! Who are more important to you! So you have to die!” because I am not pro life, but whatever.

  69. jetboy says

    I know I’m going to die, I just don’t want to. Who does? I’d like to watch the stars change, see new comets, watch the continents come together and pull apart again. I’d like to see new forms of life change, adapt, and evolve. I’d like to measure the advance of ice sheets and the number of times Mt. St. Helens will reform and erupt before the hot spot moves on, and I’d like to see where it goes. I’d like to observe and measure the progress of this world and the universe it exists in. If I have the opportunity to extend my life, I’ll do it. If it’s an extra hundred years or an extra billion years, I’ll do it. The point is, of course, moot. If I could, I would. I have spent my whole life up to this point in a state of great amazement and wonder about the world – I can’t imagine that sense going away. I’m already alone and isolated because of it, so I don’t see much concern in that regard, either.

  70. machintelligence says

    maxdevlin @ 49 & 66

    They just change which belief structures they use to maintain their tribal truths. As soon as you get away from the strictest quantitative metrics and mathematical formulas, you’re dealing with opinions and interpretations, not science.

    Atheist or Christian, postmodernist crap doesn’t score many points here.

  71. says

    @65, dianne:

    Or the “you” now is the “you” that wrote this sentence.

    Yes, or that there is a “you” at all, but let’s not get too far ahead of the group.

  72. says

    I think if we extended lifespans to 1000 years or beyond, we’d discover that people’s characters and personalities would change unrecognizably on the timescale of a century or shorter. It’s pretty much already true that people in old age are not the same people they were when they were young adults.

    David Brin was proposing some fairly exotic scenarios, in which immortality would simply be a difficult problem rather than an impossible one. Like the scenario where only very wealthy people can extend their lifetime, and only by giving up all their wealth. All I could think was, why would people in power ever want to create such a societal structure?

  73. says

    74:

    Neither does calling anything you don’t want to try to argue against “postmodernist” and giving up, I hope. Your ball.

  74. says

    timgueguen:

    But I wonder if you lived long enough you wouldn’t lose a lot of your early memories simply because the brain didn’t evolve to contain more than a few decades worth. (It would certainly make a good plot element in a story about immortals.)

    Glasshouse by Charles Stross.

  75. badgersdaughter says

    When I think of “me in a thousand years”, the picture in my mind is me-as-I-am-now, but displaced forward in time one thousand years. When I say something like “I would like to see what’s changed in the world in a thousand years”, who knows whether I will or not? Four hundred years from now, the 445-year-old “me” might well want something I can’t even begin to guess now.

  76. Beatrice, an amateur cynic looking for a happy thought says

    I know I’m going to die, I just don’t want to. Who does?

    Um.

    Let’s not make hasty generalizations.

    And it’s not even necessarily about being suicidal; I think many more people would find the idea of never dying horrifying.

  77. Beatrice, an amateur cynic looking for a happy thought says

    Just imagine living for 500 years, and having to work for at least 450 (I’m being extremely generous with retirement time here, obviously).

  78. says

    Atheist or Christian, postmodernist crap doesn’t score many points here

    “Here,” machintelligence? You realize one of the bloggers here is a postmodernist, no?

  79. WharGarbl says

    @Beatrice, an amateur cynic looking for a happy thought
    #80

    And it’s not even necessarily about being suicidal; I think many more people would find the idea of never dying horrifying.

    Not being able to die is horrifying.
    Having a choice on when to die, on the other hand…

    @PZ

    That’s exactly what a cancer is: a unit of the organism, a liver cell or skin cell, that has successfully shed the governors on its mortality and achieved immortality…and grows selfishly, at the expense of the organism.

    Are you talking about reproductive immortals as a whole or individuals?

    A slight tangent. One other reason that immortality sucks is that, if you still have a sex drive, sooner or later the only sex you can have is either masturbation or bestiality.

  80. alyosha says

    Beatrice @ 80,
    A big part of what made atheism so comfortable, to me at least, was knowing that one day I would die. A Paradise or Hell is more terrifying than death itself and a hypothetically unending life, even on my own terms, is undesirable. I’ve met people who express a fear of death, even pathological death phobias, but afterlife aversion is less common, it seems to me.

  81. consciousness razor says

    No, a perfect copy of me is not me. It’s a perfect copy of me. Not all philosophy is masturbatory, but that sure as hell is.

    My impression is that folks like Yudkowsky are thinking of “uploading” or “transporting” as a non-destructive process. Any time I’ve explained this to people just like you did here, they’ve had no problem agreeing with it. But it’s not at all clear to me how they think a non-destructive version would work. I’m not sure anyone does have a clear idea about it.

    Very little is likely to produce real immortality, even if I would very much like it to and think it’s kind of an asshole argument to say “Well, you’d be displacing future people! Who are more important to you! So you have to die!” because I am not pro life, but whatever.

    I don’t see the connection with being “pro-life.” It’s not about “potential” people being “more important” than I am. There are a lot of things which I think are good and worth doing, because I think people will continue to live for some reasonable length of time after I’m dead. So if in some way I can make the world or society a better place for someone else (wherever and whenever), that’s what I should do instead of making it worse for them.

  82. says

    (let’s set aside the ambiguity of ‘reasonably certain’ for now)

    Yes, because just ignoring the massive difference in degree between “The particles that comprise me went into low-power-consumption mode for a while” and “The particles that comprise me were broken apart and a bunch of other particles were put in the same shape” is entirely logical.

    I dispute your notion that what you said has anything to do with what I said. You can’t even begin to guess how such particles might actually work, and you’re entirely inventing this “low power consumption mode”, so your reference to logic is inappropriate.

    I seriously don’t understand why anyone thinks the “It’s a perfect clone of you! Obviously that means that the first you exploding is ok!” argument isn’t ridiculous.

    Is that what you got from my comment? Let me set you straight: it’s the “it’s a perfect clone” part that I find ridiculous, so I’m only arguing the philosophical, not the personal, “ok-ness” of the exploding part. I consider it ridiculous that heavy objects don’t fall faster than light objects, but it is true anyway.

    If those chemical reactions are ceased, some other chemical reactions that look the same popping up somewhere else isn’t going to suddenly cause that consciousness to transfer over. No, a perfect copy of me is not me. It’s a perfect copy of me. Not all philosophy is masturbatory, but that sure as hell is.

    The problem is that you are implicitly imbuing the first set of chemical reactions with a magic they don’t actually possess to begin with. From the last ten seconds to this one, a vastly different set of chemical reactions have occurred, but your consciousness was maintained. Their physical continuity may or may not cause your consciousness to “transfer over”, but whether or not that is the case, we still don’t know precisely how or why it happens. And it doesn’t always happen. It is possible for you to go insane at any moment, and all of our theories stop making sense, which doesn’t make them very good theories IMHO.

    The question isn’t really whether a perfect copy of you is you. The question is whether a perfect copy of you is possible; whether your consciousness would ‘transfer’ like spooky action at a distance or not is just the presumed result based on previous assumptions. Philosophically, the real question is whether you are nothing but a perfect copy of you, without any special magic maintaining your continuity at all, just your own (and our) inherent assumption that you must be you and not just a copy, or even a figment of the imagination.

    Not all philosophy is beyond you. But this may be.

    That isn’t to say that a gradual replacement scheme of some sort couldn’t potentially result in a continuation of the same consciousness…

    “Potentially”? Which is it? Will it or won’t it? Pick a side so I can call it ridiculous and insinuate you are masturbating…

  83. Anthony K says

    Just imagine living for 500 years, and having to work for at least 450 (I’m being extremely generous with retirement time here, obviously).

    We can all get promoted to upper management, and spend our days in meetings alternating between spouting vacuous buzzwords and complaining about how hectic our busy schedules are!

  84. WharGarbl says

    @consciousness razor
    #86

    So if in some way I can make the world or society a better place for someone else (wherever and whenever), that’s what I should do instead of making it worse for them.

    How can you be sure that you dying would make the world a better place?

    @alyosha
    #85

    a hypothetically unending life, even on my own terms, is undesirable.

    That argument sounds disturbingly like the arguments anti-choice and anti-gay people use.
    “The possibility of gay marriage, even on my own terms, is undesirable.”
    “The possibility of an abortion, even on my own terms, is undesirable.”

  85. archi says

    Entropy rules. There is no escaping it.

    Errors in DNA are accumulating with time. But in the not-too-distant future everybody will have their DNA sequence written down for less than $10 on the day of their birth. So there’s a blueprint, or maybe we could use frozen stem cells and multiply them. Then we could grow spare parts like liver, heart, kidney, bladder and so on in vitro, to replace the old ones.

    Later we could develop new techniques to replace old cells with new ones containing the original (or augmented), uncorrupted DNA, and repeat the process every couple of decades.

  86. Beatrice, an amateur cynic looking for a happy thought says

    Anthony

    We can all get promoted to upper management, and spend our days in meetings alternating between spouting vacuous buzzwords and complaining about how hectic our busy schedules are!

    What is this… I thought atheists didn’t believe in hell!

  87. says

    jetboy:

    I know I’m going to die, I just don’t want to. Who does?

    Well…

    Right now, at this moment, I don’t want to die. I’m enjoying myself. That said, I’ve found myself, more than once, in such pain with an illness that death hasn’t seemed to be a bad option at all. I’d say when it comes to wanting to die, everything depends on the situation.

  88. says

    @82: Yes. I’d like to live as long as there were interesting things to do, to learn, and to experience — which could well be a very long time. But if I had to spend some large fraction of that time doing something boring to stay alive and afford the interesting things, I’m not sure that would work out so well in the long run.

  89. Beatrice, an amateur cynic looking for a happy thought says

    That argument sounds disturbingly like the arguments anti-choice and anti-gay people use.

    No it doesn’t. How do you even manage to go there?

  90. consciousness razor says

    How can you be sure that you dying would make the world a better place?

    That’s not what I meant. I mean that certain things I care about (environmentalism, for example) wouldn’t make any sense, if I knew, say, the whole planet was going to explode ten minutes from now and there was nothing anyone could do about it. If I knew people weren’t going to be around to benefit or suffer because of my actions, there wouldn’t be any effect to take into consideration. So it’s not that I think future people matter more than me or something silly like that. It’s just that I care about other people, not just myself.

  91. WharGarbl says

    @Beatrice
    #95

    No it doesn’t. How do you even manage to go there?

    Unless I misread it, her argument sounded like:
    “The thought of being immortal is undesirable, so the possibility shouldn’t exist.”
    The key part was the “even on my own terms” phrase, which sounded like even if she had the choice to decline, it would still be undesirable.

  92. alyosha says

    ‘On my own terms,’ means I would like to live on indefinitely regardless of what my existence costs to others. Poorly worded, mayhaps. Assisted living, even on my own terms, a modification of the current conversation, would likewise be undesirable. I know I have to relinquish my life at some point, but the costs of immortality are totally different to marrying whom I desire and making the decision to abort a fetus in that the latter two don’t drastically affect the here-&-now. As it stands, on mortality I’m glad I have no say on whether I get to live forever or not haha, so I guess you could argue that I’m an apathetic pro-immortalist.

  93. says

    WharGarbl@#81:

    […]“Your Brain Thinks Your Future Self Is a Different Person”

    Your brain is correct. The fact that the future you bears similarities to the current one is largely coincidence and social convention.

  94. machintelligence says

    maxdevlin @ 87

    The question isn’t really whether a perfect copy of you is you. The question is whether a perfect copy of you is possible; whether your consciousness would ‘transfer’ like spooky action at a distance or not is just the presumed result based on previous assumptions.

    OK. So you are a dualist. Most rational people have abandoned that idea, but suit yourself.
    BTW Postmodernist argument, once identified, can be safely ignored.

  95. says

    100:

    OK. So you are a dualist. Most rational people have abandoned that idea, but suit yourself.

    You are incorrect. In fact, you are more of a dualist than I am, but you are both unable and unwilling to recognize it.

    BTW Postmodernist argument, once identified, can be safely ignored.

    I’ll be waiting until that happens.

  96. Anthony K says

    I know I’m going to die, I just don’t want to. Who does? I’d like to watch the stars change, see new comets, watch the continents come together and pull apart again. I’d like to see new forms of life change, adapt, and evolve. I’d like to measure the advance of ice sheets and the number of times Mt. St. Helens will reform and erupt before the hot spot moves on, and I’d like to see where it goes. I’d like to observe and measure the progress of this world and the universe it exists in. If I have the opportunity to extend my life, I’ll do it. If it’s an extra hundred years or an extra billion years, I’ll do it. The point is, of course, moot. If I could, I would. I have spent my whole life up to this point in a state of great amazement and wonder about the world – I can’t imagine that sense going away. I’m already alone and isolated because of it, so I don’t see much concern in that regard, either.

    One of the things about being amazed by the world is that we’ve come up with ways to condense and disseminate information about it. Geologists and astronomers talk about deep time, and it’s hard to grok because none of us have, or have to, experience it. How long have you been alive? How much time do you spend watching stars just to see them move? Staring at glaciers to actually watch them retreat? Counting the silt particles eroding off a mountaintop? Because we’re not developed to actually see that stuff. We do a lot of analysis via time-lapse. While everything else is moving imperceptibly slowly, we have to fill up our time doing other things.

    I get this, I really do, but actual human immortality is a lot of Spider-Man reboots while you’re waiting for a mountain to do something on a timescale you can see.

    What is this… I thought atheists didn’t believe in hell!

    This one does, only he spells ‘demon’ like this: E-X-E-C-U-T-I-V-E-[space]-D-I-R-E-C-T-O-R.

    Going forward…

  97. daniellavine says

    Shplane@72:

    If those chemical reactions are ceased, some other chemical reactions that look the same popping up somewhere else isn’t going to suddenly cause that consciousness to transfer over.

    Well, no, I don’t think anyone expects consciousness to “transfer” over. But whether or not one needs consciousness to “transfer” over in the first place very much depends on what “consciousness” is in the first place.

    It seems to me that when one assumes there is something that must “transfer” over one is implicitly assuming substance dualism — that consciousness is a sort of “stuff” distinct from physical matter that is continuous in time. Personally, I don’t think consciousness is continuous in time. My evidence: I’ve fallen asleep a few times in my personal history.

    You can try thought experiments like “what if I couldn’t remember any of my life up until today”, but that’s actually not going to do the trick because your ability to speak a language (just one important example) was learned and if your memory was literally obliterated you wouldn’t be able to think in terms of language (which I tend to think would prevent you from doing what we call “thinking” at all).

    Or you can read some case studies about amnesia, especially anterograde amnesia (or watch Memento — “How can I heal if I can’t feel time?”). Or you can think about the reasons why “Slaughterhouse Five” is paradoxical — for example, the scene in which Billy Pilgrim finds himself in a public speaking engagement and to his own surprise ends up being eloquent — because he experienced the speaking engagement before experiencing the public speaking lessons he had taken chronologically (but not experientially) earlier. It makes no sense under the assumption that one must actually experience something to learn it.

    One thought experiment I think is pretty good for motivating this sort of thinking about what consciousness actually is instead of what it feels like: imagine you are put under anesthesia and while you are under your body is copied atom-for-atom and the copy is placed in a completely identical room. When you recover from the anesthesia is there any way to tell whether you are the copy or the original? If you can figure out a way to do so and clearly explain it then I will grant you that it’s obvious that the Star Trek transporter scenario is undesirable.

    Otherwise I’m going to stick to my current thinking on this: you are your memories of who you are and if there is a copy of those memories then that copy is also you. A corollary is that it is completely possible and non-contradictory for the same person to exist in two different bodies at the same time — though not for very long since their experiences would soon diverge and become two similar but somewhat different people (albeit sharing the same name and childhood memories, etc.).

  98. consciousness razor says

    maxdevlin, let’s back up…

    You have no way of knowing (let’s set aside the ambiguity of ‘reasonably certain’ for now) that the “you” that woke up this morning was actually the same “you” that went to sleep last night.

    Or the “you” now is the “you” that wrote this sentence.

    Yes, or that there is a “you” at all, but let’s not get too far ahead of the group.

    Could you tell me what this is supposed to mean?

    We know that we exist, right? Do we not know if we have conscious experiences?

  99. Anthony K says

    Could you tell me what this is supposed to mean?

    CR, I believe Max Devlin is referring to the philosophical problem (or related ones) referred to as Theseus’ Paradox.

  100. says

    BTW Postmodernist argument, once identified, can be safely ignored.

    Another person who read the back cover of a book by Alan Sokal and thinks he learned something important.

  101. consciousness razor says

    One thought experiment I think is pretty good for motivating this sort of thinking about what consciousness actually is instead of what it feels like: imagine you are put under anesthesia and while you are under your body is copied atom-for-atom and the copy is placed in a completely identical room. When you recover from the anesthesia is there any way to tell whether you are the copy or the original? If you can figure out a way to do so and clearly explain it then I will grant you that it’s obvious that the Star Trek transporter scenario is undesirable.

    Whatever is outside the exit of each room would have to be different eventually, because they’re in different locations. If I’m not allowed to know arbitrary details about how I got into the room… well, I just don’t see why that’s a fair requirement.

    Otherwise I’m going to stick to my current thinking on this: you are your memories of who you are and if there is a copy of those memories then that copy is also you. A corollary is that it is completely possible and non-contradictory for the same person to exist in two different bodies at the same time — though not for very long since their experiences would soon diverge and become two similar but somewhat different people (albeit sharing the same name and childhood memories, etc.).

    My memories and other experiences, including my sense of having a self or identity, are something my brain makes. They are representations or models made by the brain, of the world and some of the brain’s own processes. I’m not an experience of an organism, or a representation of an organism, or a model of an organism. I’m just the organism itself. So when you point to this clone, which is a different organism, even if for some brief instant it’s sharing self-representations with me (assuming I’m not destroyed), you’re not pointing to me. You’re pointing to someone else, who has the same representations of themselves as I do.

  102. leftwingfox says

    John Rogers (who proposed the Crazification principle, and is responsible for both Leverage and The Core) had some interesting thoughts here:

    http://kfmonkey.blogspot.ca/2013/04/arcanum-immortality-is-so-so-creepy.html

    To call out a specific example: no matter who you voted for, wasn’t it a little goddam tiring in the 2000 election to still be refighting the 32-year old Vietnam War records of the two candidates for the US presidency?

    Now imagine it was the Civil War.

  103. ChasCPeterson says

    You realize one of the bloggers here is a postmodernist, no?

    I didn’t or hadn’t realized that.
    I’m not even sure what it means.
    This ignorance = bliss, so far.

  104. Anthony K says

    This ignorance = bliss, so far.

    Now imagine living forever without that bliss. Or Dennis Farina.

  105. intelligentdesigner says

    Personally I would like to live a lot longer than 100 just to satisfy my curiosity. The downside is outliving my children. But I wonder if PZ has been reading my blog:

    Probably one of the most annoying laws of science is the fact that entropy tends to increase. I am reminded of this whenever my wireless mouse stops working. When that happens it means that the batteries powering my mouse have reached maximum entropy. Well maybe that’s a bad example of how annoying entropy can be because if it wasn’t for entropy the mouse wouldn’t work at all.

    But when I look into the mirror that’s when entropy really annoys me. That’s when I notice that I don’t have as much hair as I used to and that it is turning gray. Basically I notice that I am growing old. Human aging and its associated diseases and conditions can be traced to a gradual increase in cell division errors in tissues throughout the body. This process begins slowly and increases gradually with advancing age. We can do things to slow the increase in cell division errors (or speed it up) but we can’t stop it. If not by accident, we all eventually die due to the increasing entropy of our own DNA.

  106. daniellavine says

    consciousness razor@108:

    Whatever is outside the exit of each room would have to be different eventually, because they’re in different locations. If I’m not allowed to know arbitrary details about how I got into the room… well, I just don’t see why that’s a fair requirement.

    Sure, whatever is outside the exits is different. Doesn’t matter. Nor does any concept of “fairness” as far as I can see. I’ll rethink that if you can explain to me how “fairness” factors into the thought experiment.

    The question is, when you wake up on the bed, can you tell whether you’re the original or the copy? If you can’t tell, how can you assume there’s anything special about being the original in the Star Trek transporter thought experiment?

    The purpose of the thought experiment is to get over the intuition of continuity of consciousness because intuition is unreliable.

    My memories and other experiences, including my sense of having a self or identity, is something my brain makes. They are representations or models made by the brain, of the world and some of the brain’s own processes. I’m not an experience of an organism, or a representation of an organism, or a model of an organism. I’m just the organism itself.

    I disagree. To convince me of this you’d have to make an actual argument instead of just asserting it. I do think the “self” is a representation produced by the brain, not the organism itself. Everything I experience — including the experience of being an organism — seems to be a product of the brain that I infer is inside “my” body.

  107. says

    This ignorance = bliss, so far.

    Now imagine living forever without that bliss. Or Dennis Farina.

    Or black rhinos. Or Yangtze river dolphins. Or whitebark pines. Or Joshua trees. Or American pikas.

    Sonoran pronghorns, Mojave fringe-toed lizards, Tui chubs, coho salmon.

    Add to the list as increasing human lifespan allows us to blithely commandeer more of the biosphere for our purposes.

    If there’s someone who’s fine with living forever themselves as all those other species wink out of existence, I don’t think I want to know them.

  108. says

    Chris:

    If there’s someone who’s fine with living forever themselves as all those other species wink out of existence, I don’t think I want to know them.

    A world of wall to wall people is my idea of hell. It’s too crowded as it stands.

  109. Anthony K says

    It would be fun to be able to answer Ken Ham’s “Were you there?” with “Yes, and I can tell you, with all the certainty that I can have about anything, that trilobites were total assholes.”

  110. Scr... Archivist says

    Let’s try a modest first step in this direction.

    How would the statistics for average global life expectancy at birth change if we reduced infant mortality rates to less than three per 1,000 live births in every country in the world? This would be an improvement for many countries, including the United States of America.

    Then we sustain that rate, and bring it down even more. How long will we expect newborns to live at that point?

    Then what would be the next statistic to conquer? Maybe rates of survival to the age of five? Reduction in maternal deaths to fewer than 10 per 100,000 live births, in every country in the world? Universal inoculation against contagious diseases for which vaccines are already available? Modern sanitation infrastructure worldwide? Protection of fresh water security? Better buildings in earthquake zones? Universal access to effective contraception? Sex parity in all levels of education and public-policymaking? A global minimum wage? A basic income guarantee?

    If you want to run the longevity numbers higher, there is a lot that is ready to be done, and more-immediate projects than jumping straight to immortality. And as more people are able to participate in the global noosphere (rather than remain in a loop of basic survival), you might get some fresh new ideas for tackling the ever-more-ambitious endeavors.
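
    A minimal back-of-the-envelope sketch of the first question above, in Python. The numbers are assumptions for illustration only (a rough current global infant mortality rate and a flat mean lifespan for everyone who survives infancy), not real demographic data:

    def life_expectancy_at_birth(infant_deaths_per_1000, years_if_infant_death=0.5, years_otherwise=75.0):
        # Crude two-group model: infants who die are credited ~0.5 years of life;
        # everyone else is credited the assumed mean lifespan.
        p = infant_deaths_per_1000 / 1000.0
        return p * years_if_infant_death + (1.0 - p) * years_otherwise

    current = life_expectancy_at_birth(35)   # assumed rough global rate, ~35 per 1,000
    improved = life_expectancy_at_birth(3)   # the sub-3-per-1,000 target suggested above
    print(f"{current:.1f} -> {improved:.1f} years at birth (gain of about {improved - current:.1f})")

    Even with these generous assumptions the gain at birth is only a couple of years on average, which is exactly why this is a modest first step rather than a shortcut to immortality.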

  111. unclefrogy says

    all these ideas of technological immortality or silicon transition/transfer really drive me nuts. They take so much for granted and base so much on ideas from European thought: the separate soul that can live on after the body, the Christian heaven, the Greek afterlife, the Ka from Egypt, all mythological.
    If you want to “live forever”, or talk about living forever, I think the first thing to do would be to determine what living is or means, how it works, and how that differs from non-living.
    Along with that, and of as much importance, would be what the mind is, how it functions, and what properties give rise to a mind.
    How can we even approach this subject without a sound understanding of what life is and what a mind (ego?) is? Otherwise it is just a conversation about immortal life and heaven in which we have just eliminated god but kept everything else.

    uncle frogy

  112. Anthony K says

    And as more people are able to participate in the global noosphere (rather than remain in a loop of basic survival), you might get some fresh new ideas for tackling the ever-more-ambitious endeavors.

    If that were the case, John Galt would have cured everyone by now.

  113. consciousness razor says

    I’ll rethink that if you can explain to me how “fairness” factors into the thought experiment

    For the same reason your “completely identical room” factors into it. If I wake up in a different room than I started in, that’s a reason to believe I’m the clone instead of the original. Maybe I should ask how big this completely identical room is, whether I can perform tests on microscopic details of the room(s) before and after, or whether I can only rely on my macroscopic perceptions that it seems “completely identical.”

    The question is, when you wake up on the bed, can you tell whether you’re the original or the copy? If you can’t tell, how can you assume there’s anything special about being the original in the Star Trek transporter thought experiment?

    I don’t think there’s anything special here, just that I’m dead if the original gets destroyed. Like I said before, in some very contrived situations, I could imagine doing that knowing I was going to die and that some other person with my experiences was going to come to life somewhere else. But it’s still some other person living after that point, not me. I’ll just face oblivion, just as I would if I died any other way. (And presumably so will the clone, eventually, unless physics gets to be tossed out the window.)

    I disagree. To convince me of this you’d have to make an actual argument instead of just asserting it.

    I don’t understand what you’re disagreeing with. I don’t think I’m the sort of thing which fits the description of a “representation.” I think I’m better described as an “organism” or something like that, which does the representing. So if we agree on that, I don’t see why equivalence in terms of a representation should lead me to believe there’s just “one” of them, since we’re counting the wrong thing. They may believe they’re one and the same person, but that doesn’t make it so. Like I’m doing right now, we can still meaningfully talk about there being two of them (and use “them” and “they” and so on).

  114. microraptor says

    One thing I see being ignored in the Star Trek Transporter argument: it’s a mechanism that’s supposed to be instantaneously creating an exact duplicate of you from the atomic or subatomic level every time you use it.

    I don’t even want to think about the level of genetic and cellular damage a device like that is likely to cause, thanks to all the mistakes it’s liable to make.

  115. kemist, Dark Lord of the Sith says

    I get this, I really do, but actual human immortality is a lot of Spider-Man reboots while you’re waiting for a mountain to do something on a timescale you can see.

    Exactly.

    Spider-Man reruns, eating, pooping, sleeping.

    Even momentous, once-in-a-few-century events would get boring at some point. Civilisations crumbling, history repeating itself ad nauseam.

    But it may be that only people who have had depression, and have actually felt boredom so intense as to want to die, feel that way.

    Or black rhinos. Or Yangtze river dolphins. Or whitebark pines. Or Joshua trees. Or American pikas.

    Sonoran pronghorns, Mojave fringe-toed lizards, Tui chubs, coho salmon.

    Add to the list as increasing human lifespan allows us to blithely commandeer more of the biosphere for our purposes.If there’s someone who’s fine with living forever themselves as all those other species wink out of existence, I don’t think I want to know them.

    And that would probably motivate one particularly depressive and suicidal immortal to bring the entirety of humanity with him/her, using the momentous amount of knowledge, money and power he or she accumulated over the centuries.

    Mmmmm… That would make for an interesting if a bit depressing book.

  116. stevem says

    re OP:

    David Brin? The author Brin? I recently read one of his latest books, “Existence”, where people were beginning to get the technology to become digital avatars to be flung around the galaxy (universe) to spread knowledge (or something, i.e. unclear). So, these ‘avatars’ are effectively immortal, themselves, but even accepting that premise, I still couldn’t see how such a capability would benefit ME directly. Assume it is possible: a perfect digital representation of me can be created. A Turing-like test would not be able to distinguish between me and my “avatar”. But even so, I’m still here, that thing is just a perfect copy, it is not ME, Myself. If I die, I’m still dead, even if my copy can go on forever. It’s the same problem I always had with Star Trek, the “transporter” is just a “xerox”; it creates a perfect copy over there and destroys the original here. I am a “monist”, but I’m stuck with the thought that there is an “I” in me that can’t just “jump” into the digital copy of me if my biological functions cease.

    re entropy:

    2nd law of thermo only applies if “immortality” means “forever”, infinite time beyond the existence of the universe. IF, instead, “immortality” means “as long as the universe exists” then thermo doesn’t really apply.

  117. daniellavine says

    consciousness razor@122

    For the same reason your “completely identical room” factors into it. If I wake up in a different room than I started in, that’s a reason to believe I’m the clone instead of the original. Maybe I should ask how big this completely identical room is, whether I can perform tests on microscopic details of the room(s) before and after, or whether I can only rely on my macroscopic perceptions that it seems “completely identical.”

    OK, then, let’s assume you can figure out which of the two rooms you are in. Now how can you be certain that you and the copy/original weren’t switched while both bodies were under anesthesia? Is there any test that can be performed to confirm that?

    It seems to me like you’re just trying to wiggle out of the terms of the thought experiment and thus completely missing the point of it.

    I don’t think there’s anything special here, just that I’m dead if the original gets destroyed….I’ll just face oblivion, just as I would if I died any other way. (And presumably so will the clone, eventually, unless physics gets to be tossed out the window.)

    But what does it mean for “you” to be “dead”? Who are “you” anyway? What do you mean by “oblivion”? Is it any different from the experience of being under anesthesia?

    I don’t understand what you’re disagreeing with. I don’t think I’m the sort of thing which fits the description of a “representation.”

    I’m disagreeing with what I already explained I was disagreeing with… this:

    I think I’m better described as an “organism” or something like that, which does the representing. So if we agree on that,

    We don’t agree on that. I disagree with you.

    I don’t see why equivalence in terms of a representation should lead me to believe there’s just “one” of them, since we’re counting the wrong thing. They may believe they’re one and the same person, but that doesn’t make it so. Like I’m doing right now, we can still meaningfully talk about there being two of them (and use “them” and “they” and so on).

    If their beliefs don’t “make it so” then what does? What does it mean to be a person? You seem to be making a lot of implicit assumptions here.

    If we are representations — and I think we are since, as I already explained, everything we ever have experienced or ever will experience is a representation — then I’m “counting” the right thing and you are not. We simply disagree on this point. How do we resolve that? (Do we bother?)

  118. Anthony K says

    One thing I see being ignored in the Star Trek Transporter argument: it’s a mechanism that’s supposed to be instantaneously creating an exact duplicate of you from the atomic or subatomic level every time you use it.

    I don’t even want to think about the level of genetic and cellular damage a device like that is likely to cause, thanks to all the mistakes it’s liable to make.

    Even more importantly, who’s gonna build and program these transporters? Because if we leave technology to tech people, I can tell you what’s going to happen: half the time you’ll be stuck in the transporter buffer while some fucking ‘loading’ symbol floats above your head because OS MMMX doesn’t play well with KLin(g)ux and the other half the time you’ll find yourself rematerialising with an empty latinum account and three dicks on your head that some L33T hacker put there for the lulz.

  119. consciousness razor says

    You seem to be making a lot of implicit assumptions here.

    One of my starting points is that the “external” physical world exists, although not necessarily exactly as science describes it now. So everything is made of that kind of physical stuff. Unless you’re really claiming we (or just you, never mind me) are made of ideas or some such thing, I don’t see how you get to the claim that the experience of being an organism is somehow more fundamental metaphysically than being an organism. I get that this could be hard to figure out for the people involved, so it’s tricky epistemologically, but so far I have just been taking it as a given that we’re made of physical stuff. And I think it would help if your assumptions were spelled out a little more.

    If we are representations — and I think we are since, as I already explained, everything we ever have experienced or ever will experience is a representation — then I’m “counting” the right thing and you are not. We simply disagree on this point. How do we resolve that? (Do we bother?)

    If you change which thing is doing the representing, and if that can be changed because there’s more than just you and your mind creating everything else, then that other thing is having the experience rather than the first thing. An experience of the “external” world, which actually exists, whether or not you do and go around representing it in your mind. Something you see was present in front of you, then you re-present that thing, elsewhere, as a kind of map of the territory that’s out there. If you’re not “really” representing it because there is no “it,” or the “it” comes out of the fact of your representation, I just can’t make any sense of that at all. It’s like you’re saying there’s no territory or something, just a map.

    And if that can’t be changed, then I don’t know who you’re talking to or what you’re trying to explain to them.

  120. Scr... Archivist says

    Anthony K @121,

    If that were the case, John Galt would have cured everyone by now.

    Fictional characters can’t do any of that. And Randian Mary-Sues actually get in the way.

  121. consciousness razor says

    Even more importantly, who’s gonna build and program these transporters? Because if we leave technology to tech people, I can tell you what’s going to happen: half the time you’ll be stuck in the transporter buffer while some fucking ‘loading’ symbol floats above your head because OS MMMX doesn’t play well with KLin(g)ux and the other half the time you’ll find yourself rematerialising with an empty latinum account and three dicks on your head that some L33T hacker put there for the lulz.

    This is one of the things I worry about, if we ever get close to making AI. I don’t trust people to seriously know what they’re doing before they start cobbling together half-assed conscious entities that can think and feel and suffer in ways we can’t even imagine, without anyone knowing what to do about it or even giving a fuck because they started out wanting to treat them like slaves in the first place.

  122. Holms says

    Exactly.

    Spider-Man reruns, eating, pooping, sleeping.

    Even momentous, once-in-a-few-century events would get boring at some point. Civilisations crumbling, history repeating itself ad nauseam.

    But it may be that only people who have had depression, and have actually felt boredom so intense as to want to die, feel that way.

    Not that it really matters, seeing as how the underlying premise is still pure bunk, but I’m not sure why the thought of immortality is so scary to you and many others. The premise of biological immortality only means ‘does not die of age related causes’ and can thus be discarded when you inevitably tire of life.

    If you are going further and positing the ‘memory upload’ version… well, now we’re just getting even more removed from reality. My tolerance for the likes of Kurzweil and other ‘futurists’ is extremely low and I see no point in such a tedious indulgence.

  123. Holms says

    ARGH
    ALWAYS PREVIEW
    ALWAYS.

    The blockquote was supposed to capture the text from “Exactly” to “feel that way”. My text begins with “Not that it really matters…”.

  124. Rob Grigjanis says

    daniellavine @103:

    One thought experiment I think is pretty good for motivating this sort of thinking about what consciousness actually is instead of what it feels like: imagine you are put under anesthesia and while you are under your body is copied atom-for-atom and the copy is placed in a completely identical room.

    A fly in the gedanken ointment:

    In 1993 an international group of six scientists, including IBM Fellow Charles H. Bennett, confirmed the intuitions of the majority of science fiction writers by showing that perfect teleportation is indeed possible in principle, but only if the original is destroyed.

  125. Anthony K says

    ARGH
    ALWAYS PREVIEW
    ALWAYS.

    That’s exactly what they’ll say when they teleport you into a brick wall or upload you into last month’s iBorg.

  126. Azkyroth Drinked the Grammar Too :) says

    But I wonder if you lived long enough you wouldn’t lose a lot of your early memories simply because the brain didn’t evolve to contain more than a few decades worth.

    Considering how little most people my age or older remember of being a teenager, I’d say it’s worse than that.

  127. David Marjanović says

    some type of Biomass for our food source

    :-D There’s very little else we can eat! Salt?

    In any event, immortality kind of devaluates life.

    Please explain.

    But the dictators reappear again in new guises. Maybe if people lived longer, we’d remember the last time a plausible, charismatic person came by offering to solve all our problems and remember how well it worked out then. Maybe, for example, Japan is now flirting with nationalism because the generation that can say from personal experience, “Been there, done that. Bad plan,” is now dying and the younger people don’t know what they’re voting for.

    This is not happening over here – the education system is good enough.

    Just imagine living for 500 years, and having to work for at least 450 (I’m being extremely generous with retirement time here, obviously).

    Oh, that depends on the job. :-)

    “Here,” machintelligence? You realize one of the bloggers here is a postmodernist, no?

    Postmodernism is another one of those technical terms that need to be defined. By some definitions, probably everybody in this thread is a postmodernist, by others almost nobody in the world is. I remember an interesting discussion on Jadehawk’s blog that I’m too tired to look for.

    Your brain is correct. The fact that the future you bears similarities to the current one is largely coincidence and social convention.

    Eh, that depends. I’m deep enough in the autism spectrum that… you know how other people do things like acquire tastes? I don’t. I learn a lot, yes; some of that has caused noticeable changes, yes (I’ve deconverted, for starters); but many, many features have been constant as far as I can remember.

    Here is something true
    http://www.smbc-comics.com/?id=2722

    Nah. I much prefer doing all of that at once.

    (Especially the “how” and “why” part. *sigh*)

    Or black rhinos. Or Yangtze river dolphins. Or whitebark pines. Or Joshua trees. Or American pikas.

    On the other hand, I didn’t jump out the fifth-floor window when I learned about the thylacines. Those are a really tragic loss, and I still hope enough DNA is left to clone some.

    Because if we leave technology to tech people, I can tell you what’s going to happen: half the time you’ll be stuck in the transporter buffer while some fucking ‘loading’ symbol floats above your head because OS MMMX doesn’t play well with KLin(g)ux and the other half the time you’ll find yourself rematerialising with an empty latinum account and three dicks on your head that some L33T hacker put there for the lulz.

    *bakes lavender cookies to make Internet out of them*

  128. mvemjsun says

    A problem I see with immortality is boredom. You will eventually do everything you want to do more times than you want to do them and still have infinite time to go. I would think that if heaven existed, everyone there, after a few thousand years at most, would be begging god to let them die. Theists do not like it when I point this out.

  129. David Marjanović says

    A fly in the gedanken ointment:

    People, quantum teleportation is only possible for things that are small enough to be in a superposition of states. The more particles are involved, the more difficult that becomes. It’s been done for C70, which is fucking amazing, but…

    That’s exactly what they’ll say when they teleport you into a brick wall or upload you into last month’s iBorg.

    So true, so true.

    Oh. Wait. Teleporting you into air isn’t that different from teleporting you into a brick wall, isn’t it? That’s always annoyed me about teleportation in Star Trek and the like.

  130. David Marjanović says

    isn’t that different from […] isn’t it

    Preview would have collapsed the wavefunction.

    That would’ve been sad.

  131. daniellavine says

    consciousness razor@129:

    One of my starting points is that the “external” physical world exists, although not necessarily exactly as science describes it now… I get that this could be hard to figure out for the people involved, so it’s tricky epistemologically, but so far I have just been taking it as a given that we’re made of physical stuff. And I think it would help if your assumptions were spelled out a little more.

    That’s fair.

    Although I absolutely agree that the “external” physical world exists I also think Descartes was absolutely right that we have no direct access to it — that all of our experiences are mediated through our sense organs (and ultimately our brains). Thus, the existence of this “external” physical world is an inference on my part. However, it’s such a powerful unifying assumption that I think it’s absolutely perverse to assume otherwise.

    However, that does not entail that “we” or anything else is “made of” physical stuff.

    For example, I think we can agree that numbers are not “made of” physical stuff. Now, we can have an argument about whether or not numbers “exist” per se; I think this mostly comes down to a semantic argument about what we mean by the word “exist”. However, “number” is certainly a facet of my experience of the world whether or not I want to argue that numbers “exist”.

    Where it gets interesting is when you ask whether tables are made of physical stuff (just as one example). This might also seem perverse but I’d argue that in an important sense they are not. Tables are obviously not made of “table stuff” — they can be made of anything but what makes them tables is their functional role. (In this, I’m following Dennett’s idea of functionalism; in his philosophy of mind class he motivated it with the example of using his office door as a desk. Just because it’s “actually” a door doesn’t mean it can’t functionally be a desk.) That is, the part of a table that is a “table” and not, say, a pile of wood or aluminum is a sort of pattern rather than a physical entity. We can see this easily by chopping a wooden pile into kindling. All the physical “stuff” is still there but it is no longer a table.

    Similarly, I don’t think a “human being” — or any particular human being — simply is the physical stuff they are made of. This is motivated partially by the Ship of Theseus paradox mentioned by Anthony K upthread — the actual physical material we’re “made of” gets recycled over time. What makes us “human beings” is not that we are made of physical “stuff” but the pattern that the stuff is assembled into.

    Furthermore, I think that, for example, amputees are every bit as human as anyone else so I conclude that it’s not the pattern of the body that makes us human. Rather, I think it’s the mind that makes us human, and I think the mind is another sort of pattern (a pattern of information being processed in a certain way, or a certain kind of entropic process — a little hard to put into words but hopefully you follow me sufficiently well).

    At that point it comes down to what the “mind” actually is. While one can short-circuit some of the ontological weirdness by making definitional statements like “you are your brain” I think such definitions result in paradoxes and contradictions that can be avoided by getting a little more subtle. A “mind” can be an informational pattern the same way, say, “the linux kernel” is an informational pattern. So I think I am my mind and if you duplicate my mind by implementing the same pattern on different hardware it is no less “me”.

    Hopefully that made at least a little bit of sense. This stuff gets pretty tricky to put into English at a certain point.
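
    For what it’s worth, here is a tiny, purely illustrative Python sketch of that “same pattern on different hardware” point (the class names and numbers are invented for the example): two structurally different substrates realize the same informational pattern, and nothing in their observable behaviour distinguishes them.

    class ListBrain:
        def __init__(self, memories):
            self.memories = list(memories)             # state held in a list
        def experience(self, event):
            self.memories.append(event)
            return sum(self.memories)                  # observable response

    class DictBrain:
        def __init__(self, memories):
            self.memories = dict(enumerate(memories))  # same state, different structure
        def experience(self, event):
            self.memories[len(self.memories)] = event
            return sum(self.memories.values())

    original = ListBrain([1, 2, 3])
    duplicate = DictBrain([1, 2, 3])   # the "same pattern on different hardware"
    assert [original.experience(e) for e in (4, 5, 6)] == [duplicate.experience(e) for e in (4, 5, 6)]

    The assert passes: at the level of the pattern, there is no observable fact of the matter about which one is “the original”.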

    If you change which thing is doing the representing, and if that can be changed because there’s more than just you and your mind creating everything else, then that other thing is having the experience rather than the first thing.

    This seems to me a purely semantic assertion. Sure, we can say that the “physical thing” is doing the experiencing but I don’t find that to be a useful way of looking at the problem.

    For example, we can say that “the computer is logging my keystrokes” rather than “the keylogger program running on Microsoft Windows is logging my keystrokes” and we wouldn’t be wrong, but I think it’s more useful, precise, and accurate to say the latter.

    It’s like you’re saying there’s no territory or something, just a map…. And if that can’t be changed, then I don’t know who you’re talking to or what you’re trying to explain to them.

    Well, as I said before I do think there’s a territory but we have no direct access to it. As far as “who I’m talking to” — I’m typing words into a combox in response to refreshing a page and seeing words appear on my computer screen. The “facts” of your existence and sentience are merely inferences I’m making on the basis of the fact that there seems to be someone reading what I’m writing, making a pretty good go of understanding it, and coherently and intelligently responding to it.

    Likewise, my impression of being a physical being is mediated through representations of a physical body mediated through sense organs (let me include proprioception as a sense organ for the sake of argument). That is, my brain models the physical body and my phenomenological experience is restricted to the model and does not directly apprehend (sorry, there are no good verbs for this) the physical body itself. As one example, radio waves certainly interact with my body (our bodies make decent radio antennae) but I don’t directly experience this interaction because it’s not part of the model.

  132. says

    half the time you’ll be stuck in the transporter buffer while some fucking ‘loading’ symbol floats above your head because OS MMMX doesn’t play well with KLin(g)ux and the other half the time you’ll find yourself rematerialising with an empty latinum account and three dicks on your head that some L33T hacker put there for the lulz.

    Dear Yahoo Answers. I think I may have bricked my husband while trying to hack his snoring circuits mid-transport. He fell asleep just now and he’s showing the Blue Eyes Of Death. How do I restore him from a backup? Preferably one from before that argument we had last week.

  133. daniellavine says

    Rob Grigjanis@134:

    Interesting, but I can rework the thought experiment to accommodate that.

    For example, I can do a Schroedinger’s cat sort of thing where there’s a 50/50 chance that you are copied (and the original is destroyed) or nothing happens at all. How can you determine which assuming the apparatus doesn’t record the decision?

    That’s the thing with gedanken — you can always come up with some pragmatic objection like you and consciousness razor have but usually you end up missing the point of the gedanken by doing so.

    What I’m trying to do with the thought experiment is to motivate a long and complicated ontological argument without going through the trouble of writing a 10-page analytic philosophy paper. If you work with me and try to understand what I’m getting at you can get a lot of intuition out of it (at least I think so) — or you can force me through the painful process of spelling out the details in philosopherese as I started to do in my last response to consciousness razor.

  134. daniellavine says

    We can see this easily by chopping a wooden pile into kindling.

    “Pile” should be “table”.

  135. kemist, Dark Lord of the Sith says

    Not that it really matters, seeing as how the underlying premise is still pure bunk, but I’m not sure why the thought of immortality is so scary to you and many others. The premise of biological immortality only means ‘does not die of age related causes’ and can thus be discarded when you inevitably tire of life.

    I never said it’s “scary” as such (for the person involved), just that most humans, psychologically, wouldn’t last until the end of their first millennium. Boredom can become unbearable faster than most people think.

    On the other hand, I didn’t jump out the fifth-floor window when I learned about the thylacines. Those are a really tragic loss, and I still hope enough DNA is left to clone some.

    Yeah, but imagine seeing species wink out like that, one after the other, and the entire world rapidly filled with only humans and their pet or food species. I remember reading a book where that was happening, and people had made some kind of betting game out of it for the lulz (I think it was Oryx and Crake by Margaret Atwood). I remember thinking that I wouldn’t want to keep living in a world where not only this happened but people actually got off on it to pass time.

    I would think that if heaven existed, everyone there, after a few thousand years at most would be begging god to let them die.

    You’re very generous.

    In xian paradise, I’d beg for death within the first fifteen minutes, just after realizing the kind of thing you do there, and the kind of people you’ll have to spend eternity doing that boring stuff with.

  136. consciousness razor says

    People, quantum teleportation is only possible for things that are small enough to be in a superposition of states.

    For one thing, there’s no rule that only “small” stuff can be in a superposition. It was my impression that lots of “big” things are, but it’s not likely to show “big” effects.

    But it’s not about quantum teleportation anyway. That’s a different thing. This just has to do with the assumption that we could get enough of the micro details right, with no identifiable difference even at the quantum scale, so it’s not just about thinking they “roughly” look “about” the same from a big scale but we never bothered to check if they were. Then you take that information and use it with your techno-magic to put some other matter into the same state. The information, once you have it, could move around however you want. You could write it down on some paper and send it in the mail, if you could find enough paper and enough mail-carriers.

    Preview would have collapsed the wavefunction.

    Or there’s no collapse. That’s one way to interpret it. But something would have happened. :)

  137. guyver1 says

    PZ, can you explain the difference between what you’re arguing and this:
    http://en.wikipedia.org/wiki/Turritopsis_nutricula

    I’m not an expert and have no idea how ‘accurate’ the wiki article is.
    For example, the article says it’s basically biologically immortal, but does that really mean what it appears to mean to a lay person reading it? Does Turritopsis_nutricula retain any of its ‘consciousness/memories’ (if it has any?) from its previous ‘life’? Would there be a way to test this?

  138. Rob Grigjanis says

    daniellavine @143: Oh, I agree with your point. The subject would have no idea whether they are original or copy. But you’d better arrange for everyone else to be unaware as well, or somebody looking at you askance, or counting your fingers, or weeping uncontrollably, might give the game away.

  139. consciousness razor says

    Well, as I said before I do think there’s a territory but we have no direct access to it.

    Agreed, but that seems awfully close to the same thing I would say. If there’s a territory, it’s not the stuff we have “direct access” to, which are the self-representations you were saying we are. So we’re not those, when it comes down to it; we’re some other thing which they’re about. And that’s why I’m saying we’re biological organisms.

  140. daniellavine says

    Rob Grigjanis@148:

    Yeah, part of the fun of this experiment is imagining the downstream consequences like the copy and the original both driving home to the same house and then having to accommodate the fact that there are now two of them while the sinister, shadowy scientists responsible for the experiment maintain a cold, clinical silence about which copy has the better “claim” if you will.

    Thanks for the link, BTW, interesting stuff.

  141. anchor says

    If there be a multiverse of every possible quantum configuration, we are all already (and ‘permanently’) immortal. None of our other myriad (however slightly or vastly) different versions in adjacent bubble domains know about one another, and everybody in each can die in peace.

    Whew. What a fucking relief.

    One life’s world-line to manage at a time is quite enough.

    PZ is spot-on about the cancer-on-the-population-at-large analogy. Cancer is a kind of selfishness that spells disaster.

  142. kemist, Dark Lord of the Sith says

    Does Turritopsis_nutricula retain any of its ‘conscience/memories’ (if it has any?) from its previous ‘life’? Would there be a way to test this?

    It’s a jellyfish kind of animal. It does not have a brain (so no memories or consciousness).

    And it does not actually die. If the same sort of thing happened to a human, it would go back to being a child at some time within its adult life. So child -> adult -> child, indefinitely.

    That would be a weird kind of immortality.

  143. shoeguy says

    We have immortal entities. They are called corporations and the Supreme Court says they are people.

  144. daniellavine says

    CR@149:

    Well, like I said I think it makes sense to say “we are biological organisms” but I think it makes even more sense to say “we are mental representations of sentient beings” or something like that. This goes back to the “computer is logging my keystrokes” vs. “application running on a VM running on a computer is logging my keystrokes” example.

    Unfortunately, I don’t think natural language is really set up in such a way as to allow us to easily comprehend the ontological and epistemological realities of being sentient beings, so I try to be open-minded about what we might be rather than trying to establish with certitude what we are, using pre-existing concepts which I think may be simply insufficient for the task at hand.

  145. guyver1 says

    @152 – kemist, Dark Lord of the Sith
    Are you stating that as fact? (Citation?) I merely ask for the sake of clarity.

    Again, no expert here on sea life, but even something as ‘simple’ as a jellyfish must have some sort of ‘processing centre’/CPU/brain to control all its functions? Whether or not this reaches the point where it experiences ‘emotions’ and has memories etc., it must still process its motor functions, buoyancy, feeding mechanisms etc.

  146. arbor says

    “Millions long for immortality who don’t know what to do with a rainy Sunday afternoon.” – Susan Ertz

    I don’t know of anyone I’d keep around past 137.

    What unbearable egos. Not one of these cretins has demonstrated any reason to tolerate them now, let alone for eternity.

    I have at most a decade or so left, and I will welcome death after having enjoyed my life.

    The only thing that provides life with energy and drive is the knowledge that it is short.

  147. microraptor says

    Again, no expert here on sea life, but even something as ‘simple’ as a jellyfish must have some sort of ‘processing centre’/CPU/brain to control all its functions? Whether or not this reaches the point where it experiences ‘emotions’ and has memories etc., it must still process its motor functions, buoyancy, feeding mechanisms etc.

    Nope. Cnidaria actually prove that it isn’t necessary to have a central nervous system in order to be a mobile organism.

  148. Rob Grigjanis says

    CR @146:

    This just has to do with the assumption that we could get enough of the micro details right, with no identifiable difference even at the quantum scale

    Big assumption. Optical wavelengths might be OK for reading skin cells, but to penetrate to, say, liver cells, you’d need either gamma rays (I’m guessing), or vodka. Neither very beneficial.

  149. Lofty says

    I imagine a technological approach to extending life would be a device that replaces stem cells with perfect ones as required and keeps a check on errors in cell division. One side effect might be that you could change the programme for the stem cell lines over time. You could order a new body shape every 20 years or so to keep things interesting.

  150. PatrickG says

    Late to the thread, but whatever.

    @ RagingBee, 29:

    Whatever means of life-prolongation we humans invent will, inevitably, be available only to the richest humans — for a very long time after its invention. And that gross inequality of wealth — on top of all the other inequalities we’re already familiar with — will spawn such a backlash of resentment that the 1% will find themselves and their kids hiding in mile-deep bunkers for their entire lives, or abandoning Earth for self-sustaining (and heavily-armed) permanent settlements in space.

    What a succinct summation of Kim Stanley Robinson’s Mars trilogy (Red Mars, Green Mars, Blue Mars). :)

    One of the major drivers of his story was the concept of auto-repair of genetic coding, leading to human-wave assault tactics on Earth by “babies raging for their chance at immortality”, as different levels of access to treatment made class almost a form of pseudo-speciation.

    His outcomes were a bit more optimistic and progress-heavy than I tend to enjoy, but I think he hit the nail on the head when he posited (as many commenters have observed here) that any kind of technological medical breakthrough of that nature would almost immediately result in the breakdown of society. If longer life is available in a clinic up north, those who have no access also have nothing to lose, with everything to gain.

    Robinson also had a nice snark about economists in such a world. By his (not novel) argument, paradigm shifts in thought occur not when people change their minds, but when the old guard dies. If the old guard never dies, and has tenure…!

    Which makes me ask: if great expansion of human, er, tenure occurs, will PZ revise his stance on that particular subject? :)

  151. machintelligence says

    We have immortal entities. They are called corporations and the Supreme Court says they are people.

    I’ll believe they are people when Texas manages to execute one.

  152. says

    Razor@104:

    We know that we exist, right? Do we not know if we have conscious experiences?

    Yes, we know that we exist and that we have conscious experiences. We just don’t know what “knowing” is.

    Anthony@105:

    I believe Max Devlin is referring to[…] Theseus’ Paradox.

    Not there, no. Although it was referenced in a different comment, not by name. I’m actually referring to something more similar to the philosophical concept of “P-zombies”. You behave as if there were a “you” (according to the linguistic meaning of the word) and that is indistinguishable from there “being” a “you”. Identity is not an emergent property of neurochemistry, it is a social construction of language.

    Chris@107:

    Another person who read the back cover of a book by Alan Sokal and thinks he learned something important.

    Please, Chris, I don’t need your help. But that was funny.

  153. Anthony K says

    Not there, no. Although it was referenced in a different comment, not by name. I’m actually referring to something more similar to the philosophical concept of “P-zombies”. You behave as if there were a “you” (according to the linguistic meaning of the word) and that is indistinguishable from there “being” a “you”. Identity is not an emergent property of neurochemistry, it is a social construction of language.

    Oh, I see. I misunderstood.

    Are there human counterexamples? People raised without language, or languages without that particular construction?

  154. Anthony K says

    I ask because I would find such things interesting. I’m not playing skepticball.

  155. says

    I tend to think that the argument against calling someone immortal a cancer is correct. They are more like a brain cell. If they retain the capacity to adapt, they will adapt into the current network, but they may need to change if they do. If they can’t adapt, they are almost certainly going to get clipped out of the network, and it’s going to become fairly irrelevant (especially presuming they need to still eat, or otherwise gain sustenance), whether or not they are otherwise “immortal”. And, since we adapt and change already, and our relevance becomes a problem even now, as we lose adaptability, it’s also fairly irrelevant if your current awareness persists, since that is likely to be true anyway, even if every single thing you know now is, by then, slowly, step by step, replaced.

    At worst… I can imagine you looking back at something you did a thousand years ago, and being in denial that you were actually the one that did it. lol But, short of everyone reaching such a state, and choosing to… it’s not that huge of a problem. If everyone did… it gets way more complicated, depending on your definition of what exactly happened to make you “immortal”.

  156. PatrickG says

    @ Kagehi:

    If they can’t adapt, they are almost certainly going to get clipped out of the network, and it’s going to become fairly irrelevant

    As I understand it, the argument by cancer is an argument by analogy. Countering it with another analogy is going to lead to bad results.

    Specifically, if you’re going to substitute another analogy (i.e. adaptive cells), at least acknowledge that the previous analogy (i.e. cancer) will almost certainly not comply with your new proposed system. Cancer cells tend to not get clipped out of the network. The adaptive brain cells may protest, but cancer tends to overwhelm them.

    In short, cancer may not care about the future need to “still eat, or otherwise gain sustenance”. That doesn’t preclude it from taking over. That’s kind of the point of the analogy. Cancer doesn’t care.

    Perhaps I’m misunderstanding you? I’m reading your comment as overly optimistic, in that immortality leads to the ability to reflect on something 1000 years ago. If we got there, sure, everything would be more complicated. If we got there, i.e. if cancer didn’t interfere along the way.

    Y’know, immortal Donald Trumps. I dare you to come up with a better extinction event. :)

  157. says

    Specifically, if you’re going to substitute another analogy (i.e. adaptive cells), at least acknowledge that the previous analogy (i.e. cancer) will almost certainly not comply with your new proposed system. Cancer cells tend to not get clipped out of the network. The adaptive brain cells may protest, but cancer tends to overwhelm them.

    Actually, I thought the flaw in the original was both a) self-evident, and b) actually addressed by the person I was agreeing with – cancer cells don’t just sit there being immortal, they produce more identical copies of themselves. Brain cells are, up to a point, basically immortal. Yeah, once in a while they get replaced, not just clipped out, but there is almost certainly some level of information loss when it happens. What they don’t do, unless you are talking about a brain tumor, is start making hundreds/thousands of identical copies of themselves. That is what cancer does. It’s not “What if someone like Palin became immortal?”, but “And she also had a giant clone factory, which churned out thousands of identical Palins.” One of these is **not** the same problem, and, I would argue, isn’t even relevant at all to “individual, non-cloned, non-copied, single version” immortality, which is the most likely version, at least in any practical short-term sense. When you finally get to the point where biology itself is gone, and everyone is Agent Smith, then you can start talking about cancer again. lol

  158. Owlmirror says

    Identity is not an emergent property of neurochemistry, it is a social construction of language.

    Oh, I see. I misunderstood.
    Are there human counterexamples? People raised without language, or languages without that particular construction?

    If there were a language without identity construction, I am pretty sure that it would be listed here:

    http://idibon.com/the-weirdest-languages/

  159. says

    Re: “what if you are a clone made while the original slept”:

    This is a pointless argument. If you are an unaware clone of a sleeping person, that says nothing whatsoever about whether or not teleportation would kill you. In fact, it merely highlights what we already know: that you are you because consciousness is continuous. If it weren’t, then you wouldn’t have to specify the “asleep” part. If you “teleported” someone by building a copy and destroying the original, the continuity is lost; just because the new copy wouldn’t know this does not mean it didn’t happen.

    As for uploading consciousness, why would that be necessary? Suppose that, instead, I found a way to build a nanomachine which could replace a single neuron in the brain, functioning identically to the original for all purposes related to thought, but with the added advantage of automatically repairing damage which would ordinarily destroy the cell (or doing something else, such as flagging itself as damaged so that a replacement could be put in). Now, over a period of (say) ten years, I gradually have these nanomachines replace all my neurons. Then I slowly replace the other parts of my body with sci-fi prosthetics (which are gradually coming along and are much more plausible than any of the other things in this discussion) until every single part of my body is synthetic and capable of piecemeal replacement and repair as needed. I might not be immortal in the sense of “living forever”, but I’d be close enough to live until either thermodynamics shut me down, civilization totally collapsed so that I couldn’t get the parts any more, or I decided to end it. I think this is much more plausible in terms of technology than the “upload your brain into a computer” idea (but then, Kurzweil and his disciples tend to be kind of idiotic, so that’s not difficult). And it would maintain continuity as well — so where’s the flaw?

  160. Anthony K says

    Owlmirror, I don’t see why. Especially if such a feature were not that uncommon.

  161. PatrickG says

    @ Kagehi:

    When you finally get to the point where biology itself is gone, and everyone is Agent Smith, then you can start talking about cancer again. lol

    I apologize; I’d missed that you were responding to other comments upthread. I do stand by my argument that Analogy vs. Analogy is a terrible way to debate. If nothing else, it leads to people misreading you due to lack of time. ;)

    However, once you say something like “Brain cells are, up to a point, basically immortal”, I must protest.

    Have you seen the response of a brain cell to, say, anoxia? Fire? Cyanide? American Idol?

    Up to a point, my ass*. :)

    * Speaking of things that kill brain cells…

  162. aluchko says

    Why is it impossible? I’ll cite the laws of thermodynamics. Entropy rules. There is no escaping it. When we’re looking for ways to prolong life indefinitely, I don’t think there’s enough appreciation of the inevitability of information loss in any system in dynamic equilibrium, which is what life is — a chemical process in dynamic equilibrium. What that means is that our existence isn’t static at all, but involves constant turnover, growth, and renewal.

    Is the you alive right now the same you that was alive at the age of 10 or 20? What about 15 years from now? Why can’t we continue that existence in a meaningful sense in another medium?

    We already have a potent defense against death put in place by evolution: it’s called more death. That sounds contradictory, I know, but that’s the way it works. Every cell replication has a probability of corruption and error, and our defense against that is to constrain every subpopulation of cells and tissues with a limited lifespan, with replacement by a slow-growing population of stem cells (which also accumulates error, but at a slower rate). We permit virtual immortality of a lineage by periodic total resets: reproduction is a process that expands a small population of stem cells into a larger population of gametes that are then winnowed by selection to remove nonviable errors…but all of the progeny carry “errors” or differences from the parent.

    In all the readings from transhumanists about immortality that I’ve read, none have adequately addressed this simple physical problem of the nature of life, and all have had a naively static view of the nature of our existence.

    A generation of ‘immortals’ certainly can slow evolution, but it’s not clear to me that we can’t find alternative methods of renewal.

    The undesirability of immortality derives from the proponents’ emphasis on what is good for the individual over what is good for the population. There’s a kind of selfish appeal to perpetuating oneself forever, but from the perspective of a population, such individuals have an analog: they are cancers. That’s exactly what a cancer is: a unit of the organism, a liver cell or skin cell, that has successfully shed the governors on its mortality and achieved immortality…and grows selfishly, at the expense of the organism.

    I have no problem admitting that I’d rather keep a current human alive for 50 additional years than let that person die and create a new human for 50 years. The cost of this renewal is immense.

    but just their idea of individuals living for 10,000 years seemed naive and unsupportable to me. I don’t think it’s even meaningful to talk about “me”, an organic being living in a limited anthropoid form, getting translated into a “me” existing in silico with a swarm of AIs sharing my new ecosystem. That’s a transition so great that my current identity is irrelevant, so why seek to perpetuate the self?

    We may not be close to this, nor ever smart enough to accomplish it. But I see no fundamental reason why we couldn’t give you a virtual mind, body, and environment that keeps your current identity completely intact. We live in a materialistic universe; there’s no organic magic sauce that makes us ‘us’, and whatever we are now could be replicated in a machine that makes us effectively immortal.

  163. aluchko says

    @The Vicar (via Freethoughtblogs) #170

    If you “teleported” someone by building a copy and destroying the original, the continuity is lost; just because the new copy wouldn’t know this does not mean it didn’t happen.

    What if the teleporter uses a wormhole? Deconstructs and moves the energy? What if you’re cryogenically frozen or your brain just completely shuts down for a second? If it’s a perfect replication of your mind and body it’s you. The only way it’s not you is if you have a non-materialistic component (ie a spirit).

    Suppose that, instead, I found a way to build a nanomachine which could replace a single neuron in the brain, functioning identically to the original for all purposes related to thought

    I think this is much more plausible in terms of technology than the “upload your brain into a computer” idea (but then, Kurzweil and his disciples tend to be kind of idiotic, so that’s not difficult). And it would maintain continuity as well — so where’s the flaw?

    So your alternative to “upload your brain into a computer” is to slowly upload your brain into a computer?

  164. prae says

    The best (and the only valid IMHO) argument against immortality is that it would mean stupid people with bad ideas don’t die off any more. We would probably still burn lefties and gingers as witches if it weren’t for death.

    Also, yes, absolute immortality is impossible; only potential immortality makes sense.

  165. harbo says

    “You and I” don’t have to worry about immortality.
    It will first be available to the mega-wealthy (koch/murdoch etc) and thus it will only be available to them.
    The proles won’t get a look-in.

    80+/-10 functional years will do me, and then it is my duty to get out of the road.

  166. mikee says

    I find raging bee’s arguments regarding in silico versions of people and of teleportation make the most sense.

    Imagine an in silico version of you is produced while you are alive. Is the in silico version really you? Of course not. If you die it still exists, but it is a facsimile of the actual you.

    The same argument applies to a teleporter which destroys the original only to replace it with a “clone”. Is the clone made up of the same original atoms as you? No, therefore it is not you. You have been destroyed and replaced with a copy. This copy will go on to live as you probably would have pre-teleport, but it is not you – you are dead! So you wouldn’t catch me going anywhere near such a transporter.
    Look at it another way – imagine the transporter didn’t destroy the original – you would have you, the original, and a copy of you. Is the copy you? No, it is made up of different atoms. It may be a perfect copy, but it is not you.

  167. aluchko says

    @mikee

    So if I slowly swap every atom in your body over a year are you now a different person?

    If the transporter doesn’t destroy the original then I’d say it’s two yous, both equally valid as yourself and valid continuations of your consciousness (though they now become separate beings).

    I can’t see how you can justify an irreducible non-replicable identity without adding a spiritual component like a soul. If you accept a materialist universe, I don’t know how our consciousness gets a special status as a phenomenon tied to our bodies.

  168. consciousness razor says

    I can’t see how you can justify an irreducible non-replicable identity without adding a spiritual component like a soul. If you accept a materialist universe, I don’t know how our consciousness gets a special status as a phenomenon tied to our bodies.

    I have no idea what you mean by “special status,” but this makes no sense to me. If our consciousness isn’t a soul, then it is “tied to our bodies.” I’m fairly sure my consciousness isn’t “tied” in this way to my coffee cup or the Eiffel Tower or anything on the planet Neptune. And if you made a perfect duplicate of me somehow, he’ll have consciousness too because it’s tied to his body just like mine is to me. Or it was, until I got destroyed in the process, because there’s apparently no way to avoid that. I don’t think it’s really useful to talk about some “tie” or “connection,” since it’s more like a verb: it’s something we do, but if you get a different thing to do it that won’t affect whether the first thing is doing it. So instead of “immortality” or anything like that, you basically just get a different form of reproduction.

  169. Nick Gotts says

    So instead of “immortality” or anything like that, you basically just get a different form of reproduction. – consciousness razor

    The only criteria we have for personal identity, absent a soul, are spatio-temporal continuity and memory. The uploaded version would have both, although the s-t continuity would be of a novel kind. Now if you don’t see it as a continuation of the same person, fine, but I don’t think you’re going to persuade anyone who disagrees: it would be a radically novel situation, and in such situations, intuitions are likely to diverge.

  170. consciousness razor says

    The uploaded version would have both, although the s-t continuity would be of a novel kind.

    What would it mean to say it’s a novel kind of continuity, as opposed to a discontinuity? What would a real discontinuity look like?

    Now if you don’t see it as a continuation of the same person, fine, but I don’t think you’re going to persuade anyone who disagrees: it would be a radically novel situation, and in such situations, intuitions are likely to diverge.

    Agreed. I’m not trying to suggest it’s patently obvious, or people should feel dumb for seeing it another way. And I agree with the sort of Humean perspective others have had, of being unable to tell the difference (at least it could be very difficult) for the original/clone in an “identical room,” and so forth.

    But from a third person’s perspective, they really could tell the difference between the “two people.”* So why would the subjectivity of the person/people using the transporting/uploading machine be given priority over anyone else’s perspective? I mean, it seems like it’s just disregarding what the process would have to look like for someone else watching it: you’re going to see someone’s body disintegrated in the machine, then a new one just like it reconstituted somewhere else. It certainly may look like it’s magically transporting the same thing, if you don’t care to ask how and what causes this to happen; but for the person who mops up the floor in the original room, what happens to the original person is going to be pretty tangible, isn’t it? And the person who refills the tank of person-juice in the clone’s room would have a similar kind of perspective, wouldn’t they?

    *Or “organisms,” or I wish there were some more neutral terms I could use.

  171. dancaban says

    The closest I’ll ever get to immortality is my children. And what I write here.

  172. Nick Gotts says

    consciousness razor,

    You’re still trying to persuade people to see it your way, despite admitting it’s unclear. Why?

  173. consciousness razor says

    You’re still trying to persuade people to see it your way, despite admitting it’s unclear. Why?

    Because I think the other kind of interpretation is wrong. They can’t both be correct. Maybe they really can’t be persuaded, but I see no reason not to try. And when aluchko or others distort my sort of position so much that it’s made out to be dualist, I feel justified in trying to clear that up and give as close to an undistorted account as I can. At least then the reason they’re unpersuaded won’t be because they have drastically wrong ideas about it.

  174. says

    172:

    And you are?
    The person who wrote the original comment that was being replied to, a reply which you decided required a snarky rejoinder about post-modernism.

    Also known as Max Devlin, the guy Nerd likes to try to gratuitously insult because I’ve annoyed him in the past by kicking his ass royally up and down a thread or two. Obviously he likes to remember it differently. On that topic:

    daniella:
    without going through the trouble of writing a 10-page analytic philosophy paper. If you work with me and try to understand what I’m getting at[…]

    I’m afraid you’ll have to find a different web site if you think they’re going to let you get away with such sloppy behavior. “Try to understand?” There is no such thing to the robot-like intellectuals. If you have not made yourself clear you deserve to be insulted heartily for such failure. Such is the Pharyngula way.

  175. says

    I dispute your notion that what you said has anything to do with what I said. You can’t even begin to guess how such particles might actually work, and you’re entirely inventing this “low power consumption mode”, so your reference to logic is inappropriate.

    This statement means absolutely nothing. And no, no one is “inventing” the condition commonly known as “sleep.” maxdevlin, you have no clue what you’re talking about.

  176. says

    The person who wrote the original comment that was being replied to, a reply which you decided required a snarky rejoinder about post-modernism.

    Well, boy, you’re not exactly making postmodernism look all that useful here. If you don’t like the snark, stop earning it.

  177. says

    The only criteria we have for personal identity, absent a soul, are spatio-temporal continuity and memory. The uploaded version would have both, although the s-t continuity would be of a novel kind.

    How “different” a “kind,” exactly? You seem to be trying to have it both ways, saying “it would be the same, but not really,” while totally failing to provide any specifics. We’re talking about a “transportation” system that DESTROYS YOUR ORIGINAL BODY, so a little more clarity is kinda warranted here, doncha think?

    Now if you don’t see it as a continuation of the same person, fine…

    You’re really not a good salesman, are you? If you’re not going to show any respect for my own concerns, then you’re giving me no reason to use your “transportation” system, or recommend it, or support its development in any way.

  178. says

    If the transporter doesn’t destroy the original then I’d say it’s two yous, both equally valid as yourself and valid continuations of your consciousness (though they now become separate beings).

    “Equally valid” by whose standards? Yours? And why are your standards of who is the “real” me more important than mine?

    I can’t see how you can justify an irreducible non-replicable identity without adding a spiritual component like a soul. If you accept a materialist universe, I don’t know how our consciousness gets a special status as a phenomenon tied to our bodies.

    So accepting a materialist universe means I have no right to criticize a transportation system that DESTROYS MY ORIGINAL BODY? Who died and made you the Atheist Thought Police?

  179. says

    But from a third person’s perspective, they really could tell the difference between the “two people.”* So why would the subjectivity of the person/people using the transporting/uploading machine be given priority over anyone else’s perspective?

    Why is the “third person’s perspective” important to anyone else at all? Are you saying it’s perfectly okay to kill someone and replace him with some sort of a copy as long as some “third person” either doesn’t see a difference, or doesn’t care?

  180. consciousness razor says

    Why is the “third person’s perspective” important to anyone else at all?

    Because there are third persons, so it’s at least important to them. I think this is just another way of getting to the point that there’s an objective fact of the matter about what happens. You can’t just look at one person’s experiences and build everything from that. You ought to infer that there are other people’s experiences too, not just your own. So, looking only at what a specific individual can figure out (because they may not figure it out) isn’t giving you the whole picture of what anyone else might be able to figure out. Because there is a whole picture, with all of these other people and their experiences in it. So it looks groundless to me, as I was trying to say to daniellavine about maps and territories. We have maps of something, not just maps. The alternative is just hearing stories about perspectives, which don’t really get a grasp on the world those perspectives are supposed to be about. I don’t think I can refute a solipsist who won’t at least admit some sort of objective world which our inter-subjective ideas are about; but for anyone else, there needs to be more than just experiences. There needs to be some stuff underlying them.

    Are you saying it’s perfectly okay to kill someone and replace him with some sort of a copy as long as some “third person” either doesn’t see a difference, or doesn’t care?

    No, not at all. I’ve been saying the person who would be killed and the copy aren’t the same person, and it doesn’t matter much whether or not anyone else sees anything in particular. But for the people who think they are the same person, I don’t see how they could acknowledge a meaningful sense in which anyone is being killed, because they’re only looking at one first-person point of view. That’s why they’re saying they could (in principle) achieve immortality: because they think no one is dying in that situation.

  181. WharGarbl says

    I just noticed that all the arguments regarding what counts as immortality basically revolve around how much “change” at any moment a person can undergo and still be considered the same person.

    And obviously, everyone has a different idea of where that threshold is. So for different people, there will be a different threshold on how much change a technology could make for them to still view themselves as… well, themselves.

    Even disregarding immortality, we already make somewhat significant changes to the human body with prosthetics, organ transplants, and psychoactive drugs (antidepressants).

  182. Rob Grigjanis says

    Raging Bee @189: I think the selling point is that dead people don’t complain.

    Let’s think about the word ‘copy’, which we’re using rather lightly. Suppose a ‘sufficiently’ accurate scan could be done, leaving the original intact and alive, but horribly irradiated. The copy would hop off their cot, exclaim “piece of cake!”, and carry on happily, while a trained mortal coil counselor would go in to the original’s room and comfort them in their last agonizing moments, with a little “what is consciousness anyway?” speech.

  183. Nick Gotts says

    How “different” a “kind,” exactly? You seem to be trying to have it both ways, saying “it would be the same, but not really,” while totally failing to provide any specifics. We’re talking about a “transportation” system that DESTROYS YOUR ORIGINAL BODY, so a little more clarity is kinda warranted here, doncha think? – Raging Bee

    Well, the bit in all caps totally convinced me.

    Oh, wait, no it didn’t. I’m not “trying to have it both ways”. I’m saying there would be no fact of the matter as to whether it’s the same person or not.

  184. WharGarbl says

    Actually, on PZ’s OP, at what point does “prolonging healthy life” enter the territory of immortality?

  185. WharGarbl says

    @Rob Grigjanis
    #194
    I think that’s pretty much the issue with all “copy your consciousness” schemes of immortality. You’re essentially creating two of you, and one of you will die. To put it bluntly, you’re still dead.

    The key may be that the process needs to be gradual enough that the mind of the person involved doesn’t “detect” that a distinct copy has occurred. Vicar #170’s general idea solves that problem. It still creates a copy (in small chunks), but the copy is still connected to the source it copied from, so no “distinct” copy of an entire person exists. Hence, as long as the process manages to keep you alive, you never die.

  186. says

    Because there are third persons, so it’s at least important to them.

    So fucking what? If a coal company wants to kick me off my land so they can get at what’s underneath without paying for it, there’s plenty of “third persons” who would all agree that my needs and rights mean less to them than their need for energy. The (uninformed) opinions of (unspecified) “third persons” do not trump my legitimate concerns about the harm a new technology might do to me.

    Even disregarding immortality, we already make somewhat significant changes to the human body with prosthetics, organ transplants, and psychoactive drugs (antidepressants).

    And you’re actually saying that’s comparable to DESTROYING YOUR ENTIRE BODY AT ONCE?

  187. says

    I think this is just another way of getting to the point that there’s an objective fact of the matter about what happens. You can’t just look at one person’s experiences and build everything from that.

    You can’t just disregard one person’s experiences either — especially when it’s the one person whose life is on the line.

  188. consciousness razor says

    So fucking what? If a coal company wants to kick me off my land so they can get at what’s underneath without paying for it, there’s plenty of “third persons” who would all agree that my needs and rights mean less to them than their need for energy. The (uninformed) opinions of (unspecified) “third persons” do not trump my legitimate concerns about the harm a new technology might do to me.

    Whatever you’re reading into my comments, it’s totally out of left field.

    What coal company? What land? What if there’s nothing but your perspective? What if there’s no fact of the matter about whether someone is dead or alive?

    That’s the kind of bullshit I’m arguing against. That we’re both arguing against, if you haven’t noticed. I’m not making any kind of moral claim about what trumps what, or whose opinions are supposed to matter more or less. One moral claim I will make is that people shouldn’t be forced to use the machine or tricked (if I’m right) into believing they’re not going to die.

    You can’t just disregard one person’s experiences either — especially when it’s the one person whose life is on the line.

    Where the hell do you get the idea that I am? I think someone in the transporter/uploader will die. That person has a perspective, and at some time they cease to have one because they’re dead.

  189. kemist, Dark Lord of the Sith says

    Are you stating that as fact? (Citation?) I merely ask for the sake of clarity.

    They do not have a CNS – at least not something you’d recognize as such. They have nervous ganglia, which are bundles of nerves. Some people designate a loose group of those, the “nerve net”, as a CNS of sorts. It can detect touch by other animals and tidal flux to allow basic reactions. But nothing as complicated as memories, much less consciousness.

    Again, no expert here on sea life, but even something as ‘simple’ as a jellyfish must have some sort of ‘processing centre’/CPU/brain to control all its functions? Whether or not this reaches the point where it experiences ‘emotions’ and has memories etc., it must still process its motor functions, buoyancy, feeding mechanisms etc.

    Simple systems don’t necessarily need centralized controls, or memories, to function. That is true for mechanical/electronic devices as well as biological organisms. Simple reaction mechanisms (for instance, “get away if pressure detected” or “grab if pressure detected around mouth”) coupled with pressure sensors can give sufficient autonomy to an organism to survive quite well. Even complex organisms like us have such decentralized systems for, say, heartbeat and respiratory rate control. Some organisms have only those, and no brain.

    Brains and their capacities for memories, emotions and consciousness are completely optional.

  190. says

    That’s the kind of bullshit I’m arguing against.

    You seemed to be arguing FOR it. Sorry if I misread you, but perhaps you should have added something like “A person trying to sell a matter-transporter system would say…”

  191. says

    maxdevlin:

    The person who wrote the original comment that was being replied to, a reply which you decided required a snarky rejoinder about post-modernism.

    See, that’s interesting. I thought you were the person trying to tell me how to comment on my own fucking blog.

  192. ChasCPeterson says

    So about what it might mean to self-identify as “a postmodernist”: I consulted, of course, ‘kipedia, and so, of course, I am still not sure what it means. It seems to depend on what’s your medium and interest, with “postmodern” music seemingly identified with minimalism whereas “postmodern” architecture is depicted as a reaction against minimalism, but anyway.
    For myself, I recognize the names of most of the philosophers listed but have read none of them, so I am not knowingly influenced by “postmodern” thought and am no “postmodernist” in that sense. The music I’ve heard and don’t care for, qua music. Architecture what do I know. Seems to refer to a very broad spectrum of Art art, some of which I like for various reasons but I’m uneducated in Art art.
    I did notice, though, that almost all of my favorite novelists are listed. Who knew? Does that make me “a postmodernist” too?

  193. says

    I did notice, though, that almost all of my favorite novelists are listed. Who knew? Does that make me “a postmodernist” too?

    GABBA GABBA

  194. mikee says

    @aluchko 179

    So if I slowly swap every atom in your body over a year are you now a different person?

    That is not comparable to the instantaneous destruction of my body and its replacement with a duplicate.

    As an analogy, over time one might replace parts of a house as it gets old. Compare this with blowing up a house and building an identical one on another site – the new house is not the old house.

    And I would argue that my position relies on no spiritual component. I am not a set of data – I am a composite of billions of atoms working together in a complex fashion. Destroy those molecules and I no longer exist. Duplicate my composite and you end up with a duplicate, but if I am destroyed making that duplicate then I no longer exist.
    If someone invents such a transporter feel free to use it. The duplicate may be indistinguishable from the original but it is still not the original person.

  195. aluchko says

    #180 consciousness razor

    I can’t see how you can justify an irreducible non-replicable identity without adding a spiritual component like a soul. If you accept a materialist universe, I don’t know how our consciousness gets a special status as a phenomenon tied to our bodies.

    I have no idea what you mean by “special status,” but this makes no sense to me.

    If your consciousness gets no special status then the thing we care about is our memories and the continuation of our consciousness in a meaningful sense.

    I could obviously tell that the duplicate was created second, but if all the elements of our consciousness are duplicated then I don’t see any reason why that duplicate is a less valid continuation than you from a second ago.

    #206 mikee

    That is not comparable to the instantaneous destruction of my body and its replacement with a duplicate.

    As an analogy, over time one might replace parts of a house as it gets old. Compare this with blowing up a house and building an identical one on another site – the new house is not the old house.

    And I would argue that my position relies on no spiritual component. I am not a set of data – I am a composite of billions of atoms working together in a complex fashion. Destroy those molecules and I no longer exist. Duplicate my composite and you end up with a duplicate, but if I am destroyed making that duplicate then I no longer exist.
    If someone invents such a transporter feel free to use it. The duplicate may be indistinguishable from the original but it is still not the original person.

    Houses are a poor analogy since they’re physical objects. We know the second house is a different house, just like the duplicate body or the computer isn’t your original body; no one is arguing otherwise.

    But we don’t really care about the object: cut off Stephen Hawking’s body below the neck and give him a robot body and he’s still Stephen Hawking. What we care about is the phenomenon of a single person’s consciousness, and I don’t see why that’s fundamentally tied to a physical body, or why that phenomenon can’t be transferred or duplicated in a completely meaningful sense.

  196. consciousness razor says

    I could obviously tell that the duplicate was created second, but if all the elements of our consciousness are duplicated then I don’t see any reason why that duplicate is a less valid continuation than you from a second ago.

    My clone would have a valid form of consciousness, just like mine. And like I’ve said since my first comment, you could talk about my clone’s perception that its identity is the same as mine. But this is not the end of the story.

    The nice, clean transporters on Star Trek give people the wrong idea. Really think about how it would have to happen in reality. I’ll volunteer to be transported from our spaceship down to a planet. You’re the mad scientist who’s going to do something like shoot a fuckload of radiation at my body to get the information you need. Then the results from the spaceship are transmitted to another machine on the surface of the planet, so that it can be used to assemble a whole bunch of matter down there into a clone of me, so that my clone can have sex with a green alien. I won’t be having sex with the alien, because I’ll be a puddle on the floor in some room on the spaceship where you disintegrated me with a fuckload of radiation. I have no idea why I volunteered for that — seems like I’d need a better reason. But fuck it, why not?

    So now you want to ask things like, “does my clone look just like me, experience subjective continuity, remember all the things I remembered, and have experiences just as real as all of the ones I ever had, before I was turned into a puddle on the floor?” Who do you think gets to answer that? Not me. Maybe my clone will answer it for you. I’m willing to bet its answer will be yes: my clone does think it’s me, and it thinks just like me. But why the fuck would you let that kind of question guide your decision, if you’re thinking about doing the same thing I did?

    This isn’t a way to be immortal or to even live for longer than you would otherwise. It’s a way to kill yourself. But if something about the story I just told — the puddle on the floor, in particular — just isn’t right because you have some better idea of how to do it, please explain what that is.

  197. aluchko says

    If the technology was perfect, I’d be fine with it. But I’d only volunteer for the procedure once we actually understood the science behind consciousness, so we could be sure we’re not screwing up something fundamental or creating a philosophical zombie (seems identical, but not conscious). This is obviously something we don’t understand yet.

    What I’m saying is we have no reason to believe we can’t replicate it perfectly given sufficient technology.

    There are still some weird issues, like what happens if the copying and the destruction aren’t simultaneous, so that there’s a short gap of experience after you’re copied but before you’re destroyed. I don’t have a simple answer for that; I think the copy is still ‘you’, but it’s now a different you, and destroying the original might be murder.

  198. jagwired says

    consciousness razor, your last name wouldn’t happen to be McCoy, would it?

  199. Arawhon says

    Aluchko, you are still thinking of consciousness as something that transfers from one body to another. You, the continuity of flesh and mind, cease to be when you enter that transporter, and a copy who is not that continuity but a scratch-built version is created. You cease. The Other thinks it’s you, acts like you, has all the same memories as you, but still isn’t YOU.

  200. aluchko says

    Arawhon, but we don’t understand consciousness well enough to state that the continuity does cease.

    If consciousness is something reducible to a computer program then there’s no lost continuity, no death, when I pause that process and transfer it to another machine. If it’s something more complicated then maybe there’s a component or characteristic we can’t transfer.

    Now we don’t have a sufficient understanding of consciousness to know if that’s the case, but I don’t see any mechanism in a materialist universe that would allow for a non-transferable process of that nature.

  201. Arawhon says

    A new consciousness with all your memories and experiences is created. There is no transfer of consciousness, and we know enough about it to know it’s tied intimately to the specific brain that is creating it. So I reiterate, you will cease to be and a new version of you, which is not you specifically, is created. New consciousness, new body, new individual, old memories and experiences.

  202. aluchko says

    Well a computer program is intimately tied to the computer that’s running it, but that doesn’t stop us from transferring it. The only problem with our brains is that we know how to make neither the computer nor the ethernet cable.

    How do you know your current consciousness is even continuous? Maybe every time you sleep, every time you lose your train of thought, that consciousness is destroyed in a manner more effective than any transfer would accomplish. Maybe our brain just creates brand new consciousnesses every five minutes and they all think they’re part of some continuous train because of all the accumulated memories and experiences.

    We just don’t understand it well enough to assume that ‘we’ can’t be transferred around.

  203. cim says

    As an analogy, over time one might replace parts of a house as it gets old. Compare this with blowing up a house and building an identical one on another site – the new house is not the old house.

    But this is complicated by the “Ship of Theseus” thought experiment mentioned above. What happens if you, rather than blowing up the house, gradually replace the components – but rather than discarding the old ones, assemble them into another house with the same relative positions of components? Which one is then the original house? (The original experiment uses ships, so you can’t use “the one on the original plot of land” to decide)

    This all actually gets very close to a question I’ve been struggling with for a while: I believe I have free will, in that my future actions are not inevitable but depend on choices I make.

    I’m not sure it’s possible to believe that there’s a meaningful distinction between “humans” and “rocks” otherwise (which makes all the ethical discussions irrelevant… “Should we achieve immortality?” “We already have it. Also, in the sense you discuss it, it will happen or not, according to the physical laws of the universe. There is no meaningful sense in which a decision on ‘should’ can happen.”)

    On the other hand, I can see no evidence that this is actually true. It seems almost by definition to require a supernatural mechanism, because the concept of “free will” seems to require being able to do things other than those things which were inevitable by the strict physical laws of the universe. And there is of course no evidence at all for such supernatural mechanism. (And I’m not even sure it would solve the problem rather than boot it down a level if one somehow did exist)

    Does anyone have a good atheist resource on reconciling the two? It’s been bugging me for a while, but I’ve not yet found a convincing answer… and it seems to me that the question of “continuity of consciousness” is in a way the same question of “what does it mean to have an identity”

  204. mikee says

    Arawhon,

    Great explanation, that is exactly how I see it, even if I can’t articulate it as clearly and concisely as you have.

    aluchko,

    How do you know your current consciousness is even continuous? Maybe every time you sleep, every time you lose your train of thought, that consciousness is destroyed in a manner more effective than any transfer would accomplish. Maybe our brain just creates brand new consciousnesses every five minutes and they all think they’re part of some continuous train because of all the accumulated memories and experiences.

    And what (extraordinary) evidence do you have to prove such an extraordinary hypothesis?

    Well a computer program is intimately tied to the computer that’s running it, but that doesn’t stop us from transferring it.

    And when we transfer the programme, does the second computer become the first computer? No.

    I think with regard to teleportation which destroys the original, we both agree that an exact copy will have exactly the same consciousness. What I’m saying is that if I was the original, I no longer exist; a copy does.
    If, rather than destroying the original, the original was duplicated, then you would have two identical consciousnesses in identical bodies at the time of duplication. How does this affect your theory?

    So no matter how perfect the copy is, if a transporter is going to zap me out of existence, I’m not going anywhere near it.

    It’s an interesting argument

  205. aluchko says

    mikee,

    How do you know your current consciousness is even continuous?

    And what (extraordinary) evidence do you have to prove such an extraordinary hypothesis?

    I don’t consider that an extraordinary hypothesis at all. We know we have periods we consider ourselves to be unconscious during sleep or after a knock on the head. We don’t know whether our conscious periods are all the same consciousness, or some phenomenon our brain creates at certain periods, or if that statement would even make sense to someone with a scientific understanding of consciousness.

    Well a computer program is intimately tied to the computer that’s running it, but that doesn’t stop us from transferring it.

    And when we transfer the programme, does the second computer become the first computer? No.

    As I was saying earlier I don’t actually care about the computer, or the specific body. I care about my consciousness.

    No one is arguing the second body won’t be a copy, but that doesn’t matter. Just like nobody really cares if a house is the original house or if it’s a perfect replica since it performs the function identically.

    What we actually care about is our consciousness, which is a process or a phenomenon involving a group of atoms, but we don’t have any evidence that the specific atoms are important or that that process can’t be duplicated (other than the fact it’s absurdly beyond our technology and understanding).

    If, rather than destroying the original, the original was duplicated, then you would have two identical consciousnesses in identical bodies at the time of duplication. How does this affect your theory?

    They’re both completely valid continuations of the original, though they are from that point new people. You can make it weirder by moving the destruction time to a couple of seconds after the copy point, which creates a weird kind of death.

  206. says

    Well a computer program is intimately tied to the computer that’s running it, but that doesn’t stop us from transferring it.

    You actually equated a full-blown human person to a mere piece of software? Seriously?! There’s no reason for anyone to trust you to talk sensibly, or honestly, about human consciousness.

    One of the most common slurs used against atheists is that atheists think of people as nothing more than objects or machines. And now here’s aluchko apparently trying to reinforce that stereotype. And the fact that he/she’s doing this just to sell a hypothetical technology that may well never work, just makes the whole exercise all the more ridiculous.

    We know we have periods we consider ourselves to be unconscious during sleep or after a knock on the head.

    Neither sleep nor a knock on the head involves anything remotely comparable to the TOTAL DESTRUCTION OF THE BODY THAT HOUSES ONE’S CONSCIOUSNESS. So talking about sleep here is just plain stupid. All you’re doing here is giving mental masturbation a bad name.

  207. says

    They’re both completely valid continuations of the original…

    Define “valid” in this context, or your argument is vacuous. Who decides what is “valid,” and by what specific criteria do they decide this?

  208. consciousness razor says

    Well a computer program is intimately tied to the computer that’s running it, but that doesn’t stop us from transferring it.

    It certainly doesn’t. And that’s fine if the program is Solitaire or something, but what do you think happens when the program in question is giving feedback about the state of that particular machine? And what about when it’s producing in the machine an experience of being that machine, what its components are doing and how those feel? If the fan stops running on one machine with a program that monitors the fan, must all the identical copies of that program also say their machines’ fans have stopped running? What’s supposed to cause that to happen?
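
    To make that concrete, here’s a toy sketch (hypothetical code; the “machine state” is faked rather than read from real hardware, since the point doesn’t depend on how the sensing works). Two byte-identical copies of a monitor program can only ever report on the host each copy actually runs against:

        # Two identical copies of the same "monitor" program, each bound to its own host.
        # Copying the program copies its code and state, not the machine it reports on.
        class Host:
            def __init__(self, name, fan_running=True):
                self.name = name
                self.fan_running = fan_running

        def monitor(host):
            # The program's "report" is about *this* host, not about any other copy's host.
            return "%s: fan %s" % (host.name, "running" if host.fan_running else "stopped")

        ship = Host("ship-computer")
        planet = Host("planet-computer")

        ship.fan_running = False     # the original machine's fan dies
        print(monitor(ship))         # ship-computer: fan stopped
        print(monitor(planet))       # planet-computer: fan running

    Same code in both places; what each copy ends up saying depends entirely on which machine it’s hooked up to.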

    As I was saying earlier I don’t actually care about the computer, or the specific body. I care about my consciousness.

    Suppose there’s someone outside your light cone (however far) who has exactly the same brain state, thus exactly the same experience, as you do right now. Is that person you? If a moment later the other person* dies, are you going to die too or do you exist independently of each other? If you die, will the other die just because you shared a brain state?

    *Or if you insist you’re the same, maybe we can say the “other person” is the instance of you which is farther away from me right now.

    You can have the same pattern of brain activity, so you’re physically identical locally in your brain and whatever you’re aware of in your environment, but that doesn’t mean you’re in the exact same physical environment in every other respect. Maybe your clone doesn’t see or hear a bus coming up behind it, which kills the clone in an accident, so that bus wasn’t represented in the clone’s brain state or in yours. In your situation, there is no bus, so when the bus hits the clone, what are you going to experience? Not the bus, that’s for sure. Are you going to experience anything at all after the clone is dead? Or will your experiences stop, even though that couldn’t depend on any physical, causal influence? Or if it were a physical mechanism, something or other (I have no idea what — a pattern? a form?) would be sending the signal faster than light.
    ———
    cim:

    Does anyone have a good atheist resource on reconciling the two? It’s been bugging me for a while, but I’ve not yet found a convincing answer… and it seems to me that the question of “continuity of consciousness” is in a way the same question of “what does it mean to have an identity”

    Daniel Dennett’s Freedom Evolves, on compatibilist free will. I don’t think he’s substantially wrong about the facts, but I don’t really agree with him that we need to use a confusing phrase like “free will”, and I wouldn’t agree with some of his other ethical views. But that’s me.

    Thomas Metzinger’s The Ego Tunnel, which is focused on self-identity, but it’s pretty much a general representationalist account of what consciousness is.

    And I’ll mention The Mind’s I, edited by Dennett and Hofstadter. It’s a compilation of essays from various authors, along with the editors’ reflections on them. (So it’s a little biased in their favor, but the essays still give you a variety of different views, which is nice). It’s basically an overview of some of the bigger issues in philosophy of mind. It’s not too dry or too dense, either. It’s pretty entertaining at times.

  209. Rob Grigjanis says

    And I’ll mention The Mind’s I, edited by Dennett and Hofstadter.

    Hofstadter touches on the question in Gödel, Escher, Bach.

    Just for fun, I’d also mention Julian Jaynes’s The Origin of Consciousness in the Breakdown of the Bicameral Mind.

  210. aluchko says

    raging bee,

    You actually equated a full-blown human person to a mere piece of software? Seriously?! There’s no reason for anyone to trust you to talk sensibly, or honestly, about human consciousness.

    So all the people comparing consciousness to a house are fine, but compare it to a computer program and you become a nutjob?

    One of the most common slurs used against atheists is that atheists think of people as nothing more than objects or machines. And now here’s aluchko apparently trying to reinforce that stereotype. And the fact that he/she’s doing this just to sell a hypothetical technology that may well never work, just makes the whole exercise all the more ridiculous.

    Sorry, I guess someone forgot to mail me the atheist talking points.

    Define “valid” in this context, or your argument is vacuous. Who decides what is “valid,” and by what specific criteria do they decide this?

    It’s valid if whatever gives us what we think is a continuous consciousness in our current bodies is completely maintained in the transfer to the new bodies.

  211. says

    It’s valid if whatever gives us what we think is a continuous consciousness in our current bodies is completely maintained in the transfer to the new bodies.

    First, you now have to define “we.” The “we” currently discussing the issue here don’t seem to have a solid consensus, and I notice some of this “we” have repeatedly ignored certain valid concerns that a person actually USING a matter-transporter might have. If “we” doesn’t include actual transporter users, then “we” have no authority to rule on “continuous consciousness.”

    And second, you’re defining “valid” based on “we” reaching consensus on something that CANNOT BE VERIFIED. So all in all, your definition of “valid” is…well…suspect. And it makes me just a little more grateful that a Star-Trek-style matter transporter will never be possible in the foreseeable future, and maybe never be possible at all, ever. So at least we won’t have to worry about Diebold getting the contract to build the thing.

  212. aluchko says

    @consciousness razor

    but what do you think happens when the program in question is giving feedback about the state of that particular machine?

    We’re assuming that we’re moving the program from one machine to a different identical machine.

    Suppose there’s someone outside your light cone (however far) who has exactly the same brain state, thus exactly the same experience, as you do right now. Is that person you? If a moment later the other person* dies, are you going to die too or do you exist independently of each other? If you die, will the other die just because you shared a brain state?

    Hmm, neat idea. Going by my model, if the clone and I have an identical brain state (including memories) up to that point, I suspect we are the same person. If the bus instakills the clone, then that clone’s body is dead, but their consciousness is still intact inside my head.

    That being said, when you start talking about the clone dying affecting me in some way, it’s clear that you’re misunderstanding my argument in a fundamental way. For instance just now when I say “their consciousness is still intact inside my head” I don’t mean there was some mysterious link between them and myself. I mean my consciousness is represented by some information state or some phenomenon. Just like every identical copy of Hamlet is a copy of Hamlet, but destroying one copy doesn’t affect the others. If the information state creating my consciousness happens to be active in two places at once, those places are both “me”.
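
    The content-versus-instance distinction I’m leaning on is easy to show in code, for what that’s worth. This is only a toy Python sketch of that distinction (the string is a stand-in for the text of Hamlet); whether consciousness is relevantly like a text is, of course, the thing we’re arguing about:

        import hashlib

        # Two physically separate copies of the "same" text. Bytearrays are never
        # shared or interned, so these really are two distinct objects in memory.
        copy_a = bytearray(b"To be, or not to be, that is the question")
        copy_b = bytearray(b"To be, or not to be, that is the question")

        print(copy_a == copy_b)  # True:  identical information (same content)
        print(copy_a is copy_b)  # False: two separate instances (different tokens)

        # Destroying one instance leaves the content of the other untouched.
        before = hashlib.sha256(bytes(copy_a)).hexdigest()
        del copy_b               # "burn" one copy
        after = hashlib.sha256(bytes(copy_a)).hexdigest()
        assert before == after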

    I’ve sometimes wondered about the same thing from a quantum perspective. If the many worlds interpretation of quantum mechanics is true, there are near-infinite variations of ourselves being created every moment of time. What does that mean for the many close branches that die?

  213. aluchko says

    First, you now have to define “we.” The “we” currently discussing the issue here don’t seem to have a solid consensus,

    I’m sorry that I haven’t won philosophy by giving a solid definition of consciousness.

    Consciousness IS a scientific phenomenon. Assuming we’re smart enough, it’s a phenomenon that we’ll be able to fully understand at some point.

    My claim is that when we do reach that point I don’t see any reason to believe it’s not something we’ll be able to duplicate or transfer in a way that we’ll understand as being completely maintained.

  214. says

    If the many worlds interpretation of quantum mechanics is true, there are near-infinite variations of ourselves being created every moment of time.

    I wouldn’t worry about it, if I were you. The many worlds interpretation is cute, but for it to work you have to ignore the problem that you’re creating universes out of nothing, at a near-infinite rate, constantly. It’s one of those theories that works only on a very narrow axis – like religion. You may as well just postulate a soul or a god or whatever other skyhook you need.

  215. John Morales says

    Odd that most people think it’s laudable to seek to prevent death from privation, disease or injury, but baulk at the concept of preventing it from senescence.

  216. consciousness razor says

    For instance just now when I say “their consciousness is still intact inside my head” I don’t mean there was some mysterious link between them and myself.

    A (non-mysterious) link is exactly what you would need in order for some physical process to “transfer” someone in the brain-uploader or transporter machine or whatever it is. You would need a cause. It would be the mechanism which is doing something faster than light if the machine were supposed to produce the clone outside your light cone. (Of course I’m not expecting a real machine to be built to do that, but there would still be something you could point to which isn’t moving faster than light.) But you just keep giving me a lot of handwaving. You don’t seem concerned at all about how it’s supposed to work, just whether or not there’s a pattern. It could work by magic for all you care. Yet you still want to say that it could happen, because “we just don’t know.” But you won’t say what we don’t know, either. It’s bullshit.

    Odd that most people think it’s laudable to seek to prevent death from privation, disease or injury, but baulk at the concept of preventing it from senescence.

    Who says it’s to prevent death altogether?

  217. says

    My claim is that when we do reach that point I don’t see any reason to believe it’s not something we’ll be able to duplicate or transfer in a way that we’ll understand as being completely maintained.

    Your claim is unfounded, and your failure to answer certain hypothetical questions (such as what happens to a subject’s consciousness if he’s copied BEFORE the original is destroyed) doesn’t bode well for your case.

  218. aluchko says

    consciousness razor

    A (non-mysterious) link is exactly what you would need in order for some physical process to “transfer” someone in the brain-uploader or transporter machine or whatever it is. You would need a cause. It would be the mechanism which is doing something faster than light if the machine were supposed to produce the clone outside your light cone

    Not if our consciousness is simply a process or an information state without a necessary physical location. As I said, Hamlet is Hamlet everywhere; if two identical copies of Hamlet are created outside the light cone, they’re both still Hamlet. Nothing has to move faster than light because nothing has to move.

    raging bee

    Your claim is unfounded, and your failure to answer certain hypothetical questions (such as what happens to a subject’s consciousness if he’s copied BEFORE the original is destroyed) doesn’t bode well for your case.

    You mean the question I posed myself and answered in #209? (it’s problematic, seems to be some form of death but probably less meaningful than a regular death)

    I don’t expect you to have read everything I’ve written, but I’d rather you didn’t bash me for not answering something I’ve previously answered without even being asked.

    Maybe someone asked it again and I didn’t re-answer; if so, it’s either because I didn’t feel like retreading, or because I had an issue with an earlier part of their claim and didn’t want a giant unnavigable thread with multiple topics.

  219. consciousness razor says

    And the approach being taken is unfalsifiable, as has already been pointed out. The question is whether the person survives the process somehow, not simply if we can find a pattern. After it’s done, you can’t expect a valid answer from the clone. (Maybe they agree with your theory, maybe with mine, maybe they’ve never thought about it — but there’s simply no reason to believe they’ll have anything useful to say.) And the original is a stain on the floor, so you obviously can’t ask them.

  220. aluchko says

    consciousness razor,

    I’ve repeatedly said it’s not something I’d do unless we understand consciousness on a scientific level, so we can determine what consciousness is and whether we can transfer it. (Or maybe we can transfer it and I’m about to die anyway; if so, why not?)

    Your claim that consciousness is some nontransferable thing is just as unfalsifiable.

  221. consciousness razor says

    Not if our consciousness is simply a process or an information state without a necessary physical location.

    That’s a pretty big if, with no support at all, don’t you think? Would you want to test this out, to see if you’re going to be alive or dead?

  222. says

    Strictly speaking, this is both tested and *not* tested. It’s been tested in the sense that, so long as the hardware, i.e. the brain, is working, disconnecting it from the incoming data needed to determine “where” it is causes it to construct a false body, and position, in which to “exist”, hence “out of body experiences”. This doesn’t mean that the “software”, as it were, isn’t located in a specific place. It does, however, mean that, in principle, as long as the right inputs were available, artificial or otherwise, we would “transfer” to a new location without much problem, and even if they were not properly synced, we would nevertheless, as long as the hardware was sufficiently compatible, “construct” a perception, based on what data did come in, to explain the new state, and thus still adapt to the change.

    The problem, of course, is whether or not that state, and where we place ourselves “physically” as a result of the input, is real enough to still interact with the world coherently, or purely delusional.

  223. says

    (it’s problematic, seems to be some form of death but probably less meaningful than a regular death)

    “Less meaningful” to whom? To the person doing the dying?

    Your claim that consciousness is some nontransferable thing is just as unfalsifiable.

    True — and all the more reason to err on the side of caution here.

    And besides, if you want to allege that there is a transference, then you have to provide a mechanism for transference, not just for duplication. Your failure to describe what, exactly, is being transferred, or how it might be transferred, leads to at least a tentative conclusion that no transference can be reasonably expected to happen.

  224. says

    It’s been tested in the sense that, so long as the hardware, i.e. the brain, is working, disconnecting it from the incoming data needed to determine “where” it is causes it to construct a false body, and position, in which to “exist”, hence “out of body experiences”.

    Um, no, there’s no evidence that out-of-body experiences involve any actual movement or transference of anything to anywhere. I’ve always been interested in astral projection, and I’d love to believe it’s possible, but so far at least, the evidence doesn’t support it.

  225. aluchko says

    “Less meaningful” to whom? To the person doing the dying?

    Our intuition about death isn’t really designed to handle an adult with only a couple seconds of unique experience beyond a second continuation of their consciousness.

    True — and all the more reason to err on the side of caution here.

    And besides, if you want to allege that there is a transference, then you have to provide a mechanism for transference, not just for duplication. Your failure to describe what, exactly, is being transferred, or how it might be transferred, leads to at least a tentative conclusion that no transference can be reasonably expected to happen.

    I’ve consistently advocated waiting until we can answer it scientifically so I think I am being cautious.

    And I haven’t described anything being transferred because I don’t think there’s anything to transfer. If consciousness arises from a process, and we can perfectly duplicate that process so that it continues that particular consciousness, then consciousness isn’t transferred; it’s just there.

  226. says

    And I haven’t described anything being transferred because I don’t think there’s anything to transfer.

    Um…this is a TRANSPORTATION system we’re talking about here. If it doesn’t transport my consciousness, then what good is it? I don’t buy airline tickets just to fly my stomach around.

  227. aluchko says

    Um…this is a TRANSPORTATION system we’re talking about here. If it doesn’t transport my consciousness, then what good is it? I don’t buy airline tickets just to fly my stomach around.

    In a practical sense you of course need to transport all the information necessary to recreate the mind.

    But as I covered in consciousness razor’s thought experiment about the light cone, you’re not really transporting the consciousness, you’re transporting the information to restart that consciousness in another place. And if by some impossible fluke that happened spontaneously without information transfer, that other consciousness would still be you.
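
    In ordinary software terms, “transporting the information to restart a process” looks something like the toy sketch below (plain Python pickling, purely illustrative; the Counter class is just a stand-in for “a process with state”). Whether a mind is relevantly like this is exactly what’s in dispute:

        import pickle

        class Counter:
            """Toy stand-in for a stateful process: all it does is count."""
            def __init__(self):
                self.n = 0
            def step(self):
                self.n += 1
                return self.n

        original = Counter()
        for _ in range(3):
            original.step()               # state is now n == 3

        payload = pickle.dumps(original)  # capture the information, not the atoms
        # ...imagine shipping `payload` anywhere: another machine, another planet...
        restored = pickle.loads(payload)  # rebuild an object from that information

        assert restored.n == original.n   # same state, so it carries on from 3
        assert restored is not original   # but it is a distinct instance

    The restored object picks up exactly where the original’s state left off, but nothing physical “moved” from one to the other; only a description did.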

    To me it sounds like you’re saying that any break in consciousness is death. So how do you handle cryonics? If we could freeze you, then revive you a year later without issue, would that still be you?

  228. consciousness razor says

    Your claim that consciousness is some nontransferable thing is just as unfalsifiable.

    True — and all the more reason to err on the side of caution here.

    It’s unfalsifiable, just like “there is no afterlife” and “there is no god.” Is that a problem? Well, not for me it isn’t. In any case, I don’t need to commit to any such claim, simply to point out the problem.

    I’ve consistently advocated waiting until we can answer it scientifically so I think I am being cautious.

    What do you not understand about “unfalsifiable”?

    Let’s go over it again, and suppose your idea is true. If you survived an uploader/transporter, you would be the only one to have any reason to believe it. You could consider that “confirmation” of your theory for all I care. Nobody else would know. You may as well be claiming you’re a reincarnation of Caesar, or a time-traveler from the future who has amnesia and can’t make predictions. All of that might be the case, and you might very well be justified in knowing it yourself, but no one else would be able to tell the difference between you and a clone. So what are we supposed to learn from this?

    There’s not going to be a scientific resolution to this, because it assumes a non-physical theory of consciousness. I have no idea what you might think you’re waiting for, but I don’t see how it’s happening unless the whole story changes about what the “transfer” is supposed to be like. As it is, the uploader/transporter machine has no mechanism. You were definitely rejecting one, apparently because asking what causes brains to produce consciousness is just pure silliness. You just expect “information” to say it all somehow. Yet cognitive science (or information science … or who knows what) might validate you someday. I still want to know how.

    But think about the other picture, of people as physical organisms. It certainly doesn’t seem like we’re floating about as abstractions in some Platonic realm. And despite all of the extreme doubts you might want to throw up against relying on our common-sense intuitions about how the world “seems” to be, the scientific evidence in this case does an astounding job of making sense of it anyway, doesn’t it? That matters. Something, whatever we might learn about cognitive science, is causing a physical brain to produce consciousness which is itself a physical phenomenon, and something would likewise cause AI to do the same.

    As far as I’m concerned, this isn’t a “problem” at all for a definition of consciousness (or my definition), as being transferable or nontransferable, any more than it’s a problem for the definition of life that it’s “not the afterlife.” The whole thing leads down a blind alley, so why would it matter?

  229. aluchko says

    Let’s go over it again, and suppose your idea is true. If you survived an uploader/transporter, you would be the only one to have any reason to believe it. You could consider that “confirmation” of your theory for all I care. Nobody else would know. You may as well be claiming you’re a reincarnation of Caesar, or a time-traveler from the future who has amnesia and can’t make predictions. All of that might be the case, and you might very well be justified in knowing it yourself, but no one else would be able to tell the difference between you and a clone. So what are we supposed to learn from this?

    There’s not going to be a scientific resolution to this, because it assumes a non-physical theory of consciousness. I have no idea what you might think you’re waiting for, but I don’t see how it’s happening unless the whole story changes about what the “transfer” is supposed to be like.

    I’m not relying at all on the transported person claiming or not claiming they’re the same consciousness. I’m relying on future scientists coming up with a theory that entirely explains consciousness, and on them being able to firmly establish whether or not it’s the same consciousness that’s replicated (or whether that’s even a meaningful question). And I’m guessing they’ll state that a perfectly reproduced human will be the same consciousness.

    The fact that we don’t understand the question well enough for it to be currently falsifiable doesn’t mean it’s fundamentally unfalsifiable.

    My argument isn’t assuming a physical or a non-physical component. The only assumption my argument makes is that there isn’t a non-transferable soul.

    As it is, the uploader/transporter machine has no mechanism. You were definitely rejecting one, apparently because asking what causes brains to produce consciousness is just pure silliness. You just expect “information” to say it all somehow. Yet cognitive science (or information science … or who knows what) might validate you someday. I still want to know how.

    Well yes. I agree I haven’t invented an uploader/transporter machine. I think it makes sense to talk about it in the most abstract terms, such as “information transfer”, and to assume no bugs, because otherwise we end up debugging a mythical machine instead of discussing the actual point.

    But think about the other picture, of people as physical organisms. It certainly doesn’t seem like we’re floating about as abstractions in some Platonic realm. And despite all of the extreme doubts you might want to throw up against relying on our common-sense intuitions about how the world “seems” to be, the scientific evidence in this case does an astounding job of making sense of it anyway, doesn’t it? That matters. Something, whatever we might learn about cognitive science, is causing a physical brain to produce consciousness which is itself a physical phenomenon, and something would likewise cause AI to do the same.

    Why do you think I think we’re not physical organisms? Of course we’re physical organisms, and our brains somehow produce consciousness, but we don’t understand what consciousness is well enough to know whether we can recreate the same consciousness in a different place. I don’t see any reason why our common-sense intuitions would work any better here than they do with quantum mechanics: we consider a duplicate house a different house and a copied file the same file, but neither of those things is conscious.

    The only practical thing might be split-brain patients, but I don’t think they actually have multiple consciousnesses.
    http://en.wikipedia.org/wiki/Split-brain

  230. consciousness razor says

    The only assumption my argument makes is that there isn’t a non-transferable soul.

    So you’re assuming that it is transferable*, or just that it’s not a soul? Why pin the two together like that?

    *Whatever that’s supposed to mean to you, because it’s not at all clear to me.

    Why do you think I think we’re not physical organisms?

    You said some other organism outside your fucking light cone is “you.” For fuck’s sake, what the hell does that even mean?

  231. aluchko says

    So you’re assuming that it is transferable*, or just that it’s not a soul? Why pin the two together like that?

    *Whatever that’s supposed to mean to you, because it’s not at all clear to me.

    Well, one way we could be non-transferable is if we have some soul that we can’t duplicate.

    You said some other organism outside your fucking light cone is “you.” For fuck’s sake, what the hell does that even mean?

    When I say I’m a physical organism, I mean I’m entirely a product of the physical phenomena going on in my body, i.e. I’m not some radio receiver.

    But my actual consciousness, I don’t know exactly how to quantify it. I think the physical processes in my brain create an awareness that is “me”, and if a process outside my light cone somehow recreates my exact awareness, that can be me as well.

    Obviously I’m not aware of that other awareness, but why is that perfect replication not me, while my own self after being knocked unconscious is? Or my own self after being cryogenically frozen?

  232. says

    Overpopulation is a major problem right now–in countries with low life expectancies and high birth rates. People worry about underpopulation and lack of replacement level birth in many industrialized countries. Education, lower infant mortality (so people are willing to take the “risk” of only having a few children rather than having 10-20 and hoping one survives), and access to birth control will reduce overpopulation far more than limiting life expectancy.

    I saw this, and it was dumb, so I felt like pointing out some of the problems with it –
    Overpopulation is a global problem. ‘Underpopulation’ is a ‘problem’ because we designed social structures (such as social security in the USA) under the assumption of infinite growth. This isn’t ultimately sustainable, regardless of whether we ‘solve’ the underpopulation problem by generating more babies (Christ I hope not) or not. Pretending the problem can be solved with more babies is just fobbing a worse problem off on a future generation, whether it’s your figurative kids or your figurative great-grandkids.

    Secondary note: if the rest of the world were to live at the same standard of living that we do, global resources would be fucked. Unless you contend that we are allowed to unfairly reap the benefits of global resources, you have to grant that overpopulation is an issue for you too. It’s a global thing, not one we can put off on the third world.

    Christ, I’d think that’s obvious.

    @21: The argument that immortality or even modest life extension is a bad idea because the Koch brothers and Donald Trump will get it first is one of the more compelling ones in my opinion. But I’m not willing to commit suicide to kill Trump, so can’t really say I’m convinced by the argument that we should all commit passive suicide to get them either.

    This is just rock stupid. I’m perfectly willing to not do everything within the earth’s power to extend my life, just to make sure Trump does the same. You’re only calling it suicide because you know an accurate assessment hurts your case.

  233. says

    Um, no, there’s no evidence that out-of-body experiences involve any actual movement or transference of anything to anywhere.

    I think you badly misread what I said. I didn’t imply that something “could” transfer, just that, if you did manage to shift perception from the existing “body” to some artificial construct, presumably in a way that didn’t destructively disrupt synaptic processes (some sort of method to copy them, one at a time, so that the copy’s replacement is what fires in each case, instead of the original, for example), then the resulting “consciousness” would either never notice the difference, and/or would construct any necessary information needed to make up for missing inputs. None of which implies that the resulting construct won’t be, as it were, entirely “inside” the artificial head. lol

  234. says

    …you’re not really transporting the consciousness, you’re transporting the information to restart that consciousness in another place.

    …either before or after STOPPING my consciousness in the original place, by completely destroying its original vessel. Which means I’d be DEAD. That’s pretty much the definition of “death,” innit?

    It’s unfalsifiable, just like “there is no afterlife” and “there is no god.” Is that a problem? Well, not for me it isn’t.

    For the person who tries to use such a system, whose life is on the line, it kinda might be a problem, doncha think?

    You’re starting to sound like a corporate propagandist trying to brush aside real safety concerns, by pretending that the people most affected by your product are the ones whose opinions count the least (’cause their feelings are emotional and subjective, while the scientists who designed the system are totally unbiased and rational). And what’s all the more ridiculous is that the product you’re defending doesn’t even exist, so you have no revenue stream to protect, and thus no monetary incentive to engage in such disgraceful handwaving. So why do you do it? Is “Star Trek” such a deep-rooted part of your mindset that you can’t even bear to question one of its basic plot-devices? I loved the show too, but I can love a TV show and still see through its hasty contrivances (as long as there aren’t too many of them).

    But think about the other picture, of people as physical organisms. It certainly doesn’t seem like we’re floating about as abstractions in some Platonic realm. And despite all of the extreme doubts you might want to throw up against relying on our common-sense intuitions about how the world “seems” to be, the scientific evidence in this case does an astounding job of making sense of it anyway, doesn’t it?

    “The scientific evidence in this case?” Are you fucking kidding me? There is no scientific evidence to answer the most important question in this particular case. That’s the one thing both sides of this matter-transporter debate agree on.

    So how do you handle cryonics? If we could freeze you, then revive you a year later without issue, would that still be you?

    That’s a damn good question — one that hasn’t been answered, since it has yet to be proven that a person really can be revived after being frozen. If such revival proves possible, then the person revived is more LIKELY to still be me, since the original vessel for my consciousness would not have been COMPLETELY DESTROYED.

    There’s not going to be a scientific resolution to this, because it assumes a non-physical theory of consciousness.

    No, it doesn’t, as I and others have repeatedly said before. Do you really think I have to believe in a soul in order to be rightly worried about having my body COMPLETELY DESTROYED?

  235. aluchko says

    raging bee,

    You seem to have thrown in a lot of quotes of consciousness razor as well; I’m not sure whether you were agreeing with them, arguing with them, or talking to both of us or just to me… either way, I didn’t respond to those parts.

    …either before or after STOPPING my consciousness in the original place, by completely destroying its original vessel. Which means I’d be DEAD. That’s pretty much the definition of “death,” innit?

    Resume the consciousness in another location, then? If my consciousness keeps going, I’m not dead; the question is whether we can do this.

    That’s a damn good question — one that hasn’t been answered, since it has yet to be proven that a person really can be revived after being frozen. If such revival proves possible, then the person revived is more LIKELY to still be me, since the original vessel for my consciousness would not have been COMPLETELY DESTROYED.

    Cryonics is hypothetical for humans but completely practical for frogs, so it’s a real question of whether the frog that thaws out in the spring is the same frog that froze in the winter.

    http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=104104

    And I have a hard time rationalizing how the revived frog could be the same consciousness, but the duplicated human couldn’t. If we take apart the frozen frog piece by piece and copy it, do we have a new frog? What if we keep half of the frog’s original brain?

    One thing we can be absolutely sure of with cryonics is that the entity’s consciousness has stopped. So if you think cryonics preserves consciousness then you’re now arguing over the proper circumstances in which it can be resumed.

  236. says

    And I have a hard time rationalizing how the revived frog could be the same consciousness, but the duplicated human couldn’t.

    Because revival is different from duplication. You don’t need to “rationalize” anything — that’s just an observable fact, and an argument that ignores or minimizes such an obvious difference cannot be considered sound.

    So if you think cryonics preserves consciousness then you’re now arguing over the proper circumstances in which it can be resumed.

    Well, I think TOTAL DESTRUCTION OF THE BODY is a pretty sensible place to draw the line, don’t you? If you don’t think it’s reasonable for me to think of that as a sticking-point (it is, after all, a sticking-point for cryonics, which is why we currently consider it a hoax), then IMO the burden of proof is on you to explain why I’m the unreasonable one here.

  237. aluchko says

    So we freeze the body, everything is stopped, an unambiguous halt in consciousness.

    Then we duplicate that physical body exactly.

    Then we thaw both.

    Both bodies are going to create consciousness from unconsciousness, and both identical hunks of matter are going to do it in exactly the same manner. I can’t see how you can consider one consciousness to be a valid continuation and the other not to be.

    Let’s hop into the grey area by swapping out ever-increasing portions of the frozen frog and putting in duplicate tissue.
    Does the consciousness of the semi-duped frog become slightly more destroyed as you swap out more and more matter, or is there a tipping point? What happens if I dupe the frozen frog, cut it in half, then stick it to the original so I have two frogs, each 1/2 original and 1/2 dupe?

    My framework isn’t really troubled by this scenario.

  238. says

    Frozen frogs have nothing at all to do with a “transporter” system that DESTROYS THE USERS’ BODIES COMPLETELY. Your analogy is ridiculous and invalid, no matter how you fiddle with it.

  239. aluchko says

    You’re dodging the question.

    Take this system.

    1) Cryogenically freeze the person.

    2) Duplicate the person with a transporter

    3) Cut both people in half

    4) DESTROY ONE HALF OF THE USER’S BODY COMPLETELY. Then take the frozen half of the original, and the frozen half of the duplicate, stick them together and thaw them.

    Did the person’s consciousness survive?

  240. howard says

    If my consciousness is nothing more than a software running on the hardware of my body…

    And you clone me and I wake up, unaware of whether I am the clone or the original…

    Then I am an instantiation of the same, and it doesn’t matter to me, as I have all the right memories and I have the right body.

    But the original instantiation? Might be dead and have no further awareness of any sort, with no continuity to the new creation.

    So even if consciousness goes on, uninterrupted, with no halt in identity, with no real difference from the original me, if the original me is dead then who the fuck cares?

    It’s all word games and shell games to avoid dealing with the fact that it may be the same program running, but it’s not this instantiation, and I have a great fondness for the idea of continuity of this instantiation.

  241. consciousness razor says

    2) Duplicate the person with a transporter

    3) Cut both people in half

    In step two, the person’s body gets destroyed, like we’ve been saying over and over. You can call it “duplication” all you like, but when one of them is a stain on the floor, I’m pretty sure you can’t keep counting them as a person, or a frozen person, or even half of a frozen person. So by step three, you don’t have “both people.” You’ve got a clone. Put the clone’s two halves back together, if you’re getting bored with all of this pointless carnage, then take them to the original person’s funeral. I’m pretty sure people usually wait to do the cremation part until after someone’s death; but since you’ve already taken care of that, you could move straight into a memorial service, then have lunch or something.

  242. ChasCPeterson says

    the consciousness of the semi-duped frog

    less than zero.
    I’ve known a relatively high number of frogs. None were conscious, I’m certain.

    (unless they have perfected a state of stoic zen boredom that is behaviorally indistinguishable from not-conscious)

  243. aluchko says

    In step two, the person’s body gets destroyed, like we’ve been saying over and over. You can call it “duplication” all you like, but when one of them is a stain on the floor, I’m pretty sure you can’t keep counting them as a person, or a frozen person, or even half of a frozen person.

    Please re-read the post you replied to; there’s a reason I said ‘duplicate’ in step 2, talked about both people in step 3, and referred to both the original and the duplicate in step 4.

    I’ll rephrase step 2 to make the process more clear and call it a duplicator instead of a transporter.
    “2) Duplicate the person with a duplicator”

  244. consciousness razor says

    Please re-read the post you replied to; there’s a reason I said ‘duplicate’

    I can read just fine. A bunch of words isn’t going to make it happen.

    I’ll rephrase step 2 to make the process more clear and call it a duplicator instead of a transporter.
    “2) Duplicate the person with a duplicator”

    Okay… Explain that. I don’t want too many messy details, but could I just say “duplicate ChasCPeterson” and *poof* there’d be another one of him? Would it require, say, energy or time or anything like that?

  245. aluchko says

    Okay… Explain that. I don’t want too many messy details, but could I just say “duplicate ChasCPeterson” and *poof* there’d be another one of him? Would it require, say, energy or time or anything like that?

    Who cares?

    It’s magic technology that duplicates someone perfectly without harming them. You had me handle a perfect double existing outside of my light cone, so I think you should be able to handle a duplication machine. If you want to get a second ChasCPeterson with a single word then go for it, as long as you deal with the scenario and don’t start debugging that machine.

  246. consciousness razor says

    There’s nothing magical about stuff existing outside your light cone. The fact that you gave a bullshit answer to it is not my problem.

    And as long as you’re not still claiming science will ever do anything like that, be my guest and make up whatever you want. I bet the unicorns would help you become immortal, if you asked nicely.

  247. aluchko says

    Seriously?

    That’s your lame excuse for dodging the completely valid hypothetical question of what happens to the consciousness of the original if we make a person who’s half-double half-original?

  248. Rob Grigjanis says

    aluchko, since we’re all about the gory thought experiments now, here’s one. Let’s assume, against all the physical laws we know, that a copy of you can be made without damaging the original. And you know you’re not the copy because you’re conscious, and in a blue room, while the copy is made in a red room you can see through a glass wall. The copy gets up, walks out of the room, and a few minutes later walks into your room and shoots you dead (maybe because xe’s worried xe might be frozen and chopped in half :)). Is it murder? Did anyone die at all, since consciousness is, you might claim, continued in some sense?

    No hypothetical frogs were harmed in the production of this comment.

  249. aluchko says

    @Rob Grigjanis

    To put it simply, the person got forked. The instant the copy was made, they were both valid versions of the original consciousness, but from that point on they became distinct consciousnesses. The copy killing the original is murder, though not a hugely meaningful murder since there were only a few minutes of existence that differed from the original.

    ps. thanks for caring about the frogs.

  250. consciousness razor says

    That’s your lame excuse for dodging the completely valid hypothetical question of what happens to the consciousness of the original if we make a person who’s half-double half-original?

    I’m down with hypotheticals, so fire away with those, but you’re insisting on this one making no sense whatsoever. It’s fine if you drop all the unimportant details because it’s super-duper futuristic and such, but there’s not even a coherent story you can tell me of what is supposed to be happening. So what the fuck am I supposed to say? And how the hell are you coming to some sort of judgment, when you don’t have the faintest idea either?

  251. aluchko says

    I’m down with hypotheticals, so fire away with those, but you’re insisting on this one making no sense whatsoever. It’s fine if you drop all the unimportant details because it’s super-duper futuristic and such, but there’s not even a coherent story you can tell me of what is supposed to be happening. So what the fuck am I supposed to say? And how the hell are you coming to some sort of judgment, when you don’t have the faintest idea either?

    I gave a detailed four-step process and you spontaneously added a step liquefying the original, and then when I clarified you just ignored it.

    I gave an abstracted process and now you want a story. I feel like any way I pose the hypothetical you’ll just find a way to ignore it.

    It’s not a hard hypothetical concept. Use technology to duplicate the person, then somehow take half the matter away from the original and replace it with matter from the duplicate. If you think there’s a necessary detail lacking from that abstraction, then tell me what it is so I can specify.

  252. Rob Grigjanis says

    though not a hugely meaningful murder

    Well, pretty meaningful to you, I’d guess.

  253. aluchko says

    @Rob Grigjanis

    True dat.

    Drilling further gets into questions such as whether our consciousness is fully continuous or non-continuous, plus the question of how well our intuition about consciousness works for scenarios such as this. It’s weird and nuanced enough that I don’t want to go too far on the question, other than to say it’s problematic and not something I want to dive into at this point.

  254. says

    I have a great fondness for the idea of continuity of this instantiation.

    And there is the heart of this absurd argument being made. You have a “fondness”. Well, I can certainly understand that; I occasionally feel that my replacement Kindle seems a bit off, somehow, despite containing everything the old one did, but I realize that it’s probably not rational. I nevertheless have a “fondness” for the old one. Fondness, however, is not much of a logical argument, with respect to the actual factual realities of what is going on.

  255. howard says

    Fondness, however, is not much of a logical argument, with respect to the actual factual realities of what is going on.

    Right, that was totally the argument there.

    I have a fondness for continuing to be aware of the world around me.

    I’m pretty sure when this instantiation stops running on this hardware, then I’ll be gone. Dead.

    And if there’s another instantiation of this consciousness software running on another set of hardware, then it’ll be running around feeling and thinking exactly like me.

    Except that it won’t be me, in the sense that the word me refers to this sense of being alive that I have right now.

    But feel free to continue with the word games. If it makes you feel like you have a hope for life after death, then by all means.

    I mean, I used to believe in an invisible sky wizard to get the same effect, so I totally understand why it’s so important for you to believe that.