A philosopher agrees with me


I don’t know whether this is a good thing, or a bad thing, but at least he’s agreeing for a different reason. On the question of whether we’ll someday be able to download brains into a computer, I’ve said no for largely procedural reasons: there is no way to capture the state of all the complex molecules (or even the simple ions) of the brain in any kind of snapshot, and the excuses of the download proponents are ludicrously ignorant of even the current state of biology. John Wilkins says no for a different and interesting reason: a simulation (and that’s all a computer version of a person could be) is not the thing itself. The map is not the territory. So even surrendering the idea of a high-fidelity transfer and saying that you’ll just develop a model of a brain doesn’t get you anywhere close to solving the problem of immortalizing “me” in a machine.

Sorry, trans-humanists. I can believe that there can be a post-humanity, but it won’t include us, so I don’t know why you aspire to it. I can sort of imagine an earth transformed by human activities into a warmer, wetter, even more oceanic place that allows more squid to flourish, and it’s even a somewhat attractive future, but it’s not a human transformation.

Comments

  1. dianne says

    I agree with you, but not really with him. Assuming a perfect simulation were physically possible, what’s the difference between it and the original?

  2. says

    I remember having a similar discussion years ago over the problem of teleportation (particularly in conjunction with the Star Trek universe). The people I was talking to were saying that the teleporters take apart your molecules then assemble other molecules in the exact same arrangement elsewhere. My argument was that I would die so that a duplicate could be made of me just to save on some travel time. They thought that was fine because the duplicate of that person would essentially be that person in every respect. BUT IT WOULDN’T BE ME, I’d argue, BECAUSE I’D BE DEAD!

    I started making headway when I explained how a copy of the Mona Lisa, accurate down to the materials and brushstrokes used, isn’t the Mona Lisa.

  3. Howard Bannister says

    Dianne @ 1

    The difference to the copy, or the difference to the original?

    To the copy, there may be no difference.

    To me, it’s the difference between something that’s me and something that’s not me. The continuation of my personal sense of being alive.

    To an outside observer, the difference is trivial.

    To somebody trying to stay alive as long as possible, non-trivial.

  4. says

    “We actually made a map of the country, on a scale of a mile to a mile!”
    “Have you used it much?” I inquired.
    “It has never been spread out, yet,” said Mein Herr: “The farmers objected. They said it would cover the whole country and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.” — Lewis Carroll

  5. The Mellow Monkey: Non-Hypothetical says

    dianne @ 1

    Assuming a perfect simulation were physically possible, what’s the difference between it and the original?

    Perhaps there’s no difference to the observer, but the original me who dies would experience a difference. I don’t really give a shit if an observer feels like I still exist because there’s a perfect copy of me.

  6. says

    We have no real understanding of what subjective consciousness is and, for that reason, can’t explain what creates continuity of consciousness. If Bertrand Russell was correct that continuity of consciousness is an illusion brought on by memory (and I think that’s the best physicalist explanation), there is no enduring subjective “us” to transfer “into” a machine; our minds are themselves a series of simulations, and life annihilates us as consistently and as surely as death does.

    What’s more, this definition of consciousness reveals the futility of trying to escape death. If we are, subjectively speaking, emergent episodes of the brain, the only way to preserve our minds is by stasis—but since death and stasis are subjectively the same thing, that accomplishes nothing.

    If continuity of consciousness is not an illusion, then scientists have yet to demonstrate any external evidence that it is preserved from one moment to the next. We have no idea whether or not we would “be” the machines, because we don’t know what makes us what we are in the first place. But it certainly seems unlikely that there is any realistic model of consciousness that would allow us to simply “transfer” our subjective experiences over to machines; that’s an artifact of dualism.

  7. remyporter says

    If one could build a simulation of a thing, with perfect fidelity, how is the simulation different from the thing? If we could simulate a brain with perfect fidelity, the simulation would be indistinguishable from a real brain.

    The real issues I have with brain uploading are these:
    * You aren’t going to get perfect fidelity.
    * Most brain-uploading strategies are painfully compartmentalist- nobody stops to think about how gut flora impact cognition- and they do. And let’s not even get started on the endocrine system.
    * We already have an exceedingly practical way to make new human-level intelligences, and to simulate human intelligence- by building new humans. We’re quite good at it.
    * Even if we could build a perfect fidelity system, what would be the point? Why would we even want to? Because once we turn thought into a process that can be manipulated mechanically, we can do far more exciting things than build mere human brains in jars. Any mechanized consciousness would very quickly evolve into something completely unlike human consciousness.

  8. says

    I’m not sure this is a productive area of discussion. It seems to me that this whole subject says much less about whether this can be done and more about how helplessly incapable we are of thinking clearly about this.

    The discussion might end up shedding light on other subjects and that might be important, but as for the question of the viability of uploading itself, I’m happy to leave that one for our bionic squid overlords.

  9. doubter says

    But…Singularity! Our benevolent, infinitely intelligent A.I. Overlords will figure it all out for us!

  10. Jonathan, der Ewige Noobe says

    The only way we’ll ever do this whole digital immortality thing is to start plugging chips into our brains and letting Thesean drift work its sweet, sweet axe-of-my-forefathers magic. On a related note, I’ve been leaving memos every night for the person who wakes up the following morning.

  11. says

    But…Singularity! Our benevolent, infinitely intelligent A.I. Overlords will figure it all out for us!

    They’ll figure it out for themselves, you mean. But maybe, just maybe, they’ll keep a few humans around and amuse themselves by watching us try to follow their explanations.

    *snrk* Look! That ape thinks it understands identity. *facetentacle*

  12. remyporter says

    I sincerely doubt that super-human intelligences would be quite that interested in humanity. What’s the point of eradicating humanity? And think about the impact that would have on the ecosystem in which these intelligences evolved- they’re going to be dependent upon humanity in the same way humanity is dependent upon bees and earthworms.

  13. Geoffrey Brent says

    According to http://content.time.com/time/magazine/article/0,9171,936455,00.html 98% of the atoms in your body are replaced each year. (Calcium and phosphorus in bones last longer.)

    Does that mean that the “current you” (by the time you read this, no longer even remotely the *original* you) is doomed to die within a year? Do you lose sleep over the idea that over the next 12 months you would experience being dissolved and excreted/exhaled/perspired while those replacement atoms go on to live in your place?

    If not… well, what makes simulation or copying-based teleportation any worse? Does the difference between a year and a few seconds really help here?
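
    To put rough numbers on that, here’s a minimal back-of-the-envelope sketch in Python. It takes the 98%-per-year figure at face value and assumes, purely for illustration, that replacement compounds independently from year to year:

    ```python
    # Back-of-the-envelope, on the illustrative assumption that ~98% of
    # atoms are replaced per year and replacement compounds yearly, so
    # ~2% of the original set survives into each successive year.
    for years in (1, 2, 5, 10):
        remaining = 0.02 ** years
        print(f"after {years:2d} year(s): ~{remaining:.0e} of the original atoms remain")
    ```

    After a decade, essentially none of the original atoms are left, which is the point of the question above.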

  14. remyporter says

    My resolution to that sort of Theseus’s Ship Paradox, Geoffrey, is to point out that the ship only exists in our heads. We create the experience of a ship because our brains divide the world up into objects. It’s our way of understanding the world. There isn’t a ship there- it’s just a pile of matter. We call it “a ship”, and we assign it to this entity we call “Theseus” because that’s how our brains work.

    The same is true for our personal experience of self. Our continuity of experience arises from how we perceive our experience of the world.

  15. dianne says

    My argument was that I would die so that a duplicate could be made of me just to save on some travel time. They thought that was fine because the duplicate of that person would essentially be that person in every respect. BUT IT WOULDN’T BE ME, I’d argue, BECAUSE I’D BE DEAD!

    Wasn’t there a faction in the ST universe that held this opinion? IIRC, McCoy was a believer in this view. The duplicate would think of itself as the same person as you are. It would argue that it is you. For what that’s worth to the argument.

  16. joeschoeler says

    The transporter argument. I don’t really care about the distinction between me and a good enough copy. As long as I can avoid a messy situation where two of me’s exist simultaneously, I’m fine with me being destroyed and a copy continuing. I think it just means that there are some edge cases where what we call consciousness doesn’t make much sense.

  17. dianne says

    What’s the point of eradicating humanity?

    Well, we attempt to eradicate cockroaches, MRSA, and malarial parasites. With little success. But we might be as damaging to the super-intelligent beings, in which case…

  18. dianne says

    I don’t think it would be a simulation anymore if it were perfect.

    So if one could make a PZ2, would both of them be you? (I agree with your reasoning on why the perfect copy is impossible, so don’t think it is a problem that will ever come up, but if it did…)

  19. dianne says

    Even if we could build a perfect fidelity system, what would be the point? Why would we even want to?

    Because the current hardware wears out rather quickly. Might be nice for at least the descendants to have a shot at a reasonable life expectancy.

  20. Geoffrey Brent says

    Re. “perfect fidelity” (#7), that seems like a higher standard than what we achieve naturally. My brain today is noticeably different from what it was yesterday, and VERY different from what it was ten years ago.

    If you subscribe to certain religions, you can assume that each person gets a single indivisible soul at conception/birth/whatever and that’s what makes “you” you. (I talked to one lady who believed the soul resided in the physical heart… not sure where that leaves transplant recipients!)

    But outside those religions, maybe we need to question whether “identity” is really such a meaningful concept.

    On a tangent, anybody who’s interested in this stuff might care to read Charles Stross’ “Glasshouse” and/or “Saturn’s Children”. Both of these deal with settings where minds can be uploaded and copied, and where copies may then diverge from the original. Sometimes they get merged back together, pooling memories and (I guess) smooshing out minor personality differences; other times they go their separate ways.

    (And then sometimes people realise that their nemesis is actually a forked version of their own personality, operating under a different name, and the plot gets complicated enough that it really needs a tree diagram of the different minds involved…)

  21. ragarth says

    First, his argument eliminates more than just mind transfer; it would also mean that true AI is impossible. He’s saying that anything that is a simulation is not real, and assumed in this is that anything running on a computer is a simulation. In other words, any AI is not real and is not actually a living entity under this logic, because such an entity would be a mere simulation–a simulation of intelligence is not real intelligence, a simulation of life is not real life.

    A basic paradox that arises from this viewpoint would be if our intrepid philosopher were to suddenly discover that he, himself, is a mere simulation running on a computer. Would he then have to admit that he’s been wrong his entire life and is actually not alive and not intelligent? Would he not actually be alive because he’s not thinking (and what would he think about his nonthinking state?), or would “I think, therefore I am” not apply to virtual entities?

  22. Nathair says

    The duplicate would think of itself as the same person as you are. It would argue that it is you.

    You would only argue that because the original is (for no apparent reason) destroyed at the same time as the copy is created. Would you still argue that the copy is me if the original me was standing right there arguing that it was not? What if the new me killed the original me, are you saying that nobody would have died? What if the “transporter pattern” were preserved and hundreds of copies were banged off rather than just one, are they all “me”? Are they still me even though I have now had new experiences since the pattern was created? What if a backup of the pattern were preserved and a new me was assembled every time my red shirt number came up? Does that make me immortal?

    It’s only a very thin illusion of continuity.

  23. unclefrogy says

    the new “copy” may have a memory of the “original” and feel like they are the same, but the “original” would be dead. The experience may seem to be continuous but there would be a distinct discontinuity there.
    this whole idea smacks of an immortal soul.
    The subject does seem to have a fascination for us.
    it also has a very subjective idea of the nature of time involved somehow, which we experience as linear, constant, and finite, but which in reality is more relative.
    uncle frogy

  24. Randomfactor says

    Regarding the Star Trek transporter, it’s demonstrated that the system can move something (a malfunctioning half-breed space probe, for example) from one point to a point in “empty space.” It’s obviously not building the “duplicated” probe out of local materials…there aren’t any. So it’s GOT to be moving the entire mass of the object transported. It’s a fancier way of doing so than the car I drove to work this morning, but I don’t see any reason to see the person-at-the-far-end as different from me.

    (Yes, I know there are episodes where one person disappeared and two formed, sometimes years apart. I choose to ignore those and assume they’ll eventually be retconned.)

  25. stevem says

    re OP:

    My argument is this question: What do they think gets “uploaded”? There is nothing in us to “upload”. Being a monist, I take the “self” to be simply an “emergent behavior” of the bio-mechanism we call “the brain”. Even a “perfect copy” is still a copy, not the thing itself. We may, one day, be able to build perfect copies of human brains in computers; perfect copies of some individual person; but each will still be its own entity. The original will still exist, or be dead.

    In the book “Spock Must Die”, the author has the characters discuss this idea (briefly). Spock, agreeing that it is a “perfect” copy, retorts, “The difference is indistinguishable”. A third party will see the “same person” walk out of the transporter as who walked into it, and the person who walks out will have the memory of walking in, with the same personality; but what of the person walking into the transporter? *He* will be no more; only a “perfect” copy walks out.

    I sometimes think the flaw in this reasoning is the analogy of walking through a conventional door. How do I know that I wasn’t destroyed walking through and a “perfect” copy was created to walk out? Maybe so, there’s no way I can tell if I am the “perfect” copy, but that doesn’t imply that I can just get “uploaded” into a computer someday. If I am a copy, I still can’t be “uploaded”, “uploading” will just create yet another copy. Doesn’t help *me* at all.

    tl;dr. What are you “uploading”? There’s no separate “thing” inside your brain to “upload”.

  26. consciousness razor says

    tomhead, #6:

    We have no real understanding of what subjective consciousness is and, for that reason, can’t explain what creates continuity of consciousness.

    Well, brains create it. What, you wanted a better explanation? ;) I think we’ll have to wait a while.

    If Bertrand Russell was correct that continuity of consciousness is an illusion brought on by memory (and I think that’s the best physicalist explanation), there is no enduring subjective “us” to transfer “into” a machine; our minds are themselves a series of simulations, and life annihilates us as consistently and as surely as death does.

    Hume was one of the first in Europe, with lots of Buddhists before that. (Philosophy Bites actually just had an episode about how Hume may have been influenced by a few Jesuits who had brought back some knowledge of Buddhism to Europe.)

    It may be confusing to call it an “illusion,” because the implication seems to be that an illusion needs somebody to experience it. But there isn’t a separate somebody inside the brain watching what it does, because the “somebody” (our identity) is, like you said, a representation created by the brain. It also doesn’t seem as if our identity is “annihilated” from moment to moment, because no process seems to be required to get rid of it, just processes which are constantly creating new representations of our brain states.
    PZ:

    A “perfect simulation”? What is that? I don’t think it would be a simulation anymore if it were perfect.

    Come on. They mean a perfectly accurate one. (Of course, that would require your complete destruction; not sure why they left that part out.) If they said it wasn’t perfectly accurate, you’d have plenty of reason to complain about that.

  27. dianne says

    Would you still argue that the copy is me if the original me was standing right there arguing that it was not?

    I didn’t say I would argue that it was you but that IT would argue that it was you. At least, that’s what I said if I didn’t make a typo. Suppose you went to bed one evening and woke up the next morning with an exact replica of you lying beside you in bed. How do you know whether you’re the original or the replica (you both have a memory of having gone to bed last night…)?

  28. dianne says

    It’s obviously not building the “duplicated” probe out of local materials…there aren’t any.

    “Empty space” isn’t really empty, just low density. So in principle it could be just collecting matter from further and further away until it gets enough. Or maybe it’s using the vacuum energy of space after converting it to matter.

  29. consciousness razor says

    I didn’t say I would argue that it was you but that IT would argue that it was you. At least, that’s what I said if I didn’t make a typo. Suppose you went to bed one evening and woke up the next morning with an exact replica of you lying beside you in bed. How do you know whether you’re the original or the replica (you both have a memory of having gone to bed last night…)?

    Neither you nor the copy may not be able to tell. If the people designing the thought experiment were nice, they would probably tell you, because they would be able to know. (Also, one of you would actually have been destroyed, unless physics gets tossed out for the sake of argument.)

    So that doesn’t really matter, does it? (After all, you could be a brain in a vat right now, for all I know.) It certainly doesn’t imply “you” can be “uploaded” or “transferred” that way.

  30. consciousness razor says

    Neither you nor the copy may not be able to tell.

    Sorry for the extra “not.” Consider that a “yes, that’s correct: you might not know.”

  31. jamessweet says

    Perhaps there’s no difference to the observer, but the original me who dies would experience a difference. I don’t really give a shit if an observer feels like I still exist because there’s a perfect copy of me.

    This assumes that continuity of consciousness isn’t an illusion. That there is some “ghost in the machine” experiencing all of your qualia.

    I don’t buy Wilkins’ argument at all. I think PZ’s argument is sound for the foreseeable future (thus thwarting the transhumanists), but since there are no actual physical laws preventing it, I hesitate to rule it out entirely. It seems dubious on practical grounds, but then again lots of things that have seemed to be dubious on practical grounds turned out to be quite doable after all. The majority haven’t, of course, but enough have that I will only say that brain downloading seems unlikely, not impossible.

  32. brucegee1962 says

    From the reading I’ve done on the Turing test, my understanding is that trying to mimic the way the brain functions doesn’t really get you very far towards a machine carrying on a convincing conversation. Modelling how a conversation works, though, does a much better job.

    I suspect that the path towards simulating humans will lie through Siri. Over time, our phones are going to get smarter and smarter, better and better at holding conversations with us, and better at passing Turing tests. I predict that by 2050, when you call up a company’s customer service line, only trained experts will be able to tell whether they’re speaking to a real person or a simulation.

    And then, once you can simulate generic humans, the next step is simulating individuals. In Japan, everyone will be downloading simulations that allow them to carry on convincing and realistic conversations with their favorite pop stars. You can talk to Brad Pitt or Angelina Jolie for as long as you want — and he/she will act just like the real person would if he/she was your boyfriend/girlfriend!

    It won’t actually be Brad Pitt or Angelina Jolie. But if most people can’t tell the difference, then, Turing suggests, does it really matter?

  33. daniellavine says

    Agree with jamessweet.

    There isn’t necessarily a difference between an information process and a simulation of an information process. “A simulation of Newton’s method” is not actually different from “an implementation of Newton’s method.” “A simulation of neurotransmitters crossing a synapse” is obviously not the same as “neurotransmitters actually crossing a synapse” but that’s not really what we’re talking about. We’re not talking about simulations of brain but of mind.
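
    To make that concrete, here’s a minimal sketch (the code is mine, purely illustrative). There’s no separate artifact that “really” runs Newton’s method while this one merely “simulates” it; the computation below just is the method:

    ```python
    def newton(f, df, x0, tol=1e-10, max_iter=50):
        """Newton's method: iterate x <- x - f(x)/f'(x) until the step is tiny.
        Call this a 'simulation' of the method or an 'implementation' of it;
        either label picks out the same computation."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("did not converge")

    # Example: sqrt(2) as the positive root of f(x) = x^2 - 2.
    print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # ~1.41421356...
    ```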

    Another way to think of it: isn’t a simulation of Microsoft Windows really just a copy of Microsoft Windows?

    So if mind is an information process similar in relevant ways to Microsoft Windows or Newton’s method, then a simulation of the thing is really just an implementation of the thing. That’s a big “if”, granted, but there are some good circumstantial reasons for thinking that’s the case. I think Wilkins is making an ontological error by assuming mind is the sort of thing that must be simulated (like a physical process) rather than implemented (like an information process).

    The transporter paradox already mentioned by others in the comments here is still in force, but I tend to agree with jamessweet that continuity of consciousness is an illusion; all we have at any moment is the lived present and memories of a lived past, and our “transporter clones” have their own lived present and the exact same memories of a lived past. The protests of “but it’s not really me!” seem to me to make a lot of implicit assumptions about the meaning of “me” — the problem of identity when it comes to sentient beings is a tough one, and what it means to be “really me” is still up for debate. The conclusion that my “transporter clone” is not “really me” is based on unreliable human intuition rather than rigorous philosophical analysis or empirical data.

  34. daniellavine says

    brucegee @32:

    A friend of mine asked Siri what her favorite movie is with rather amusing results (try it!).

  35. consciousness razor says

    First, his argument eliminates more than just mind transfer; it would also mean that true AI is impossible. He’s saying that anything that is a simulation is not real, and assumed in this is that anything running on a computer is a simulation. In other words, any AI is not real and is not actually a living entity under this logic, because such an entity would be a mere simulation–a simulation of intelligence is not real intelligence, a simulation of life is not real life.

    This is a little slippery, so I get where you’re coming from, but I don’t think that’s right. He’s saying that we are organisms. (Or if we were robots, we’d be robots.) That is what we are. Our personal identities and experiences, however, are representations of those same organisms. Everything you ever experience is a simulation of the world, not the “physical” world itself. I am not an experience of myself, just myself … which is a particular organism. With me so far?

    The confusion arises because the thought is that merely having that same representation floating around in representation-space (or whatever) is all you actually need to preserve or transfer the same sense that I’m me or that you experience yourself as you. But even though the representational theory of mind seems to be right, that isn’t so. The representations are caused by a physical substrate: our nervous systems. As long as there is a substrate that can do it, we’re fine. An AI could do the same thing, because it likewise has an appropriate substrate (potentially, since we haven’t done it yet).

    What he means is that you need to look at what is actually happening in the physical world, not with respect to some abstract representation of it created in our own minds, to see whether it’s really the original you or really a copy. He’s not claiming the representations don’t exist altogether (because they’re not “real” — admittedly the language gets pretty elliptical here), but that they’re not causing everything all by themselves, so we shouldn’t confuse representations with the tangible stuff that we are.

  36. prae says

    The map might not be the territory, but if you start making it more and more precise, at some point it will be an identical copy of the territory, and then you could ask yourself: what’s the difference?

    Also, the simulation doesn’t have to be perfect in every aspect, just close enough. I strongly assume the current biochemical implementation has a certain error margin, behaving differently depending on temperature, air pressure, the things you ate recently etc. So, as long as the simulation does not make errors greater than that, everything should be fine.

    And of course you would need to either simulate a body as well or build an artificial one; everything else would be stupid.

    That being said, I do share the opinion that a Star Trek transporter kills you, and if mind uploading involved shutting you down in order to dissect your frozen brain or something like that, I wouldn’t do it.
    I suppose you would have to begin with prosthetic neurons which replace your brain over an extended period of time. If that works, you could extend them with a feature to communicate their state to a computer, etc., but I think at that point you could save yourself the trouble and just move your new artificial brain into a new robot body and enjoy your potential immortality already.

  37. voidhawk says

    I do believe that AI has a significant part to play in our future, but it won’t come through brain-uploads; it will develop the way infant minds do, that is to say, a number of pre-programmed ‘instinct’ responses with a learning and developing algorithm on top. All the best AI systems are learning ones which grow almost organically from the ground up.

    The post-human future could very well be electronic, rather than biological, but it won’t be a nice smooth transition from gooey flesh humans to cyber-humans. If we succeed in creating a sentient AI, it will be far more different from us than cephalopods or cetaceans are.

  38. nogginscratcher says

    How we attach a persistent identity to a particular clump of matter is a tricky question, quite possibly a futile goal, but we can try.

    You can draw a boundary line around the cells of my body for a first attempt, and make an arbitrary decision as to whether my gut flora is a necessary part of me (I’ll be bolding things to signify the difference between ‘me’ in the normal sense, and me, the slightly-mysterious attached identity).

    But then, I don’t feel like I’d be a different person if I lost a toe, or an arm, so maybe me is embodied in my brain. But we typically don’t actually feel attached to the particular cells and atoms – losing or gaining a few brain cells doesn’t change who I am, and if I had a magic device to instantaneously swap a single randomly selected atom in-place with another atom of the same element… well, I expect I could do that until I statistically had none of my original atoms left and still feel like me.

    So does the ship of theseus save us? Only if we allow our minds to be clouded by old dualist notions of souls, or any other sort of non-physical mind that needs to be slowly poured like a fluid from one vessel to another. If the brain I have now, and the brain I have after a full atom-swap are both me at either end of a gradual process, they ought to both be me if we go in a single step.

    What we really seem to care about is the mind: not in a magic soul-based way, but the patterns embedded in my brain that represent my personality, thoughts, memories, desires, intentions and so forth. But then… those can also change over time; I identify myself as the same person as the me of 10 years ago, and 10 minutes ago, despite the subtle and not so subtle differences. I propose that the determining factor, in a world without souls or magic mind-fluid or bizarre attachments to specific atoms, is in truth the immensely strong causal connections from one me to another, across time and space and changes of atoms.

    I resemble my past self, not by coincidence, but because how I am today is causally determined by how I was yesterday.

    So, coming back around to the real question – a simulation of my brain, or (if we can sufficiently unpick the algorithms) of my mind, running on a computational substrate instead of a biological brain. If it has truly been flawlessly copied from my real brain then it shares the same degree of causal connection to my past self as my surviving brain would. It thus becomes an equally valid successor-me, just as entitled to call itself me as I am.

    The problem then is that there are two me’s, and only one of them gets to live in an immortal computer brain. I would see that more like leaving a descendant than uploading myself into the cloud – there may be a surviving me in the future, but not the same specific instance of me as is writing these words. I’d really rather not leave any me behind, so I’d be more comfortable with a ship-of-theseus approach that slowly blended one me into the other, but I’m aware that that’s a sentimental hangup rather than a logically defensible argument.

  39. A Masked Avenger says

    Long time lurker, this one finally prompted me to sign up and comment. Hello, folks!

    PZ cites two objections, one technical and one philosophical. Could it ever be technically feasible to simulate PZ’s mind in silicon? Hard to say; he’s right that the complete state of his physical brain is mind-bogglingly complex. On the other hand, it may not be necessary to simulate its complete physical state. Who knows how relevant stray ions are to his thought process? It may not even be necessary to simulate his exact neural mapping; it might be possible to abstract the brain’s function using some mathematical structure that doesn’t closely resemble a neural net.

    Overall, you won’t go broke in your lifetime betting against it, but we have way too little information to confirm or deny that it could ever be feasible.

    The philosophical objection is much tougher. I’m no philosopher, so I welcome correction, but I’m pretty sure we have no real definition of what a “self” even is. You’re convinced that you’re you, and that you’re sentient for that matter–and you’re some 99% the same physical material as the thing that yesterday claimed to be you–so we give you the benefit of the doubt. But most skeptics would agree, unless I’m missing something, that your “self” is essentially an emergent property of software running in your head. Which makes it highly debatable whether this “self” thing is objectively real in the first place. Which of course makes the whole discussion moot.

    In my childhood I believed in a soul. Since then I still can’t shake the mystical conviction that my consciousness is a real thing. That I’m not just a machine that convincingly pretends at consciousness. My own conviction that I exist is what convinces me: I don’t just say I’m self-aware; I firmly believe it myself. There’s got to be some sort of thing doing the believing, right? Right?

    Dawkins severely undermined this with a suggestion in (I think) the Selfish Gene. He suggested that my brain is a computer which contains a model of itself, so that I can forecast my own future state if I choose X or Y action. He suggested (IIRC) that this might account for “self-awareness.” It doesn’t prove anything. It’s just a ghost of an idea. But it seems so plausible to me that it makes me question my own existence (as more than a software artifact).

    (On a very tangential note, this might account for religiosity as well. If we have self-awareness machinery, which pointed at others becomes empathy machinery, then pointed at everything/nothing/inanimate objects it might well be the religion machinery. Projecting human consciousness into the universe at large.)

    If there’s anything to all that rambling, then it seems entirely possible that an artifact could be constructed that thinks just like PZ, and is actually convinced that it is PZ in a fully sentient way. It might also pass Turing-like tests, so that it would be indistinguishable from PZ except that one happens to have custody of PZ’s body. And if the artifact is created by a teleporter, Star Trek fashion, then it might be impossible to determine which is in fact PZ’s original body.

    Apologies for such a long first post–please be gentle. My last question is, why do we assume that the “self” has continuity in the first place? You go to sleep–or enter a coma–and awaken convinced you’re the same “self” as before. Your prior “self” is unavailable for comment. If the download process involved entering an induced coma, is it more than a mystical belief that convinces us that the thing that awakens isn’t the same “self”? If so, then I’m a mystic too, because I believe it. But personally I can’t come up with a way to test this hypothesis that doesn’t engage in heavy circular reasoning.

  40. A Masked Avenger says

    @nogginscratcher #38:

    Wow, we seem to have typed up remarkably similar posts at approximately the same time. Jinx!

    I very much enjoyed your post.

  41. consciousness razor says

    The philosophical objection is much tougher.

    Indeed, PZ’s is not much of an objection.

    I’m no philosopher, so I welcome correction, but I’m pretty sure we have no real definition of what a “self” even is.

    Definitions are things we all make, so if you lack one, that can be changed at any time. I’m sure you basically do understand it: it’s your experience of being something which has some kind of perspective in the world.

    But most skeptics would agree, unless I’m missing something, that your “self” is essentially an emergent property of software running in your head. Which makes it highly debatable whether this “self” thing is objectively real in the first place.

    Being emergent certainly wouldn’t make it fake or nonexistent. So what exactly would it mean to debate whether it’s “objectively real”? There’s no denying that there objectively is something which constitutes your self or your experiences. The old problem is mainly just that there isn’t a “soul” or a “little man” inside your head — those aren’t real. That of course doesn’t mean you aren’t real.

    In my childhood I believed in a soul. Since then I still can’t shake the mystical conviction that my consciousness is a real thing. That I’m not just a machine that convincingly pretends at consciousness. My own conviction that I exist is what convinces me: I don’t just say I’m self-aware; I firmly believe it myself. There’s got to be some sort of thing doing the believing, right? Right?

    Why could machines only pretend? Who’s saying that? You’re definitely conscious. And that doesn’t require magic. It doesn’t need to be any more complicated than that.

    Apologies for such a long first post–please be gentle. My last question is, why do we assume that the “self” has continuity in the first place?

    Depending on what is meant by “continuity,” we don’t. But in an important sense, my body isn’t continuous with Julius Caesar’s in the way it is with the younger version of CR that I remember being. I only have memories to go on, so I could be wrong, but that’s the best anyone can do. But if there are any causal connections between myself and Julius Caesar, they are extremely remote, and they’re not the causes of who and what I think I am. If I lose a skin cell or grow a new one, I certainly do change, but that sort of thing doesn’t matter.

  42. jack lecou says

    I think this discussion happened here before a while back. I still find it weird that PZ and others, who reject magical dualism in other spheres, accept it implicitly in arguments like Wilkins’.

    I mean, the transhumanists are definitely grasping at straws technologically and scientifically on the whole brain uploading business. It’s obviously very likely to be totally impossible, and even if not, a better guess for its realization would be centuries or millennia from now, not “next decade” or whatever.

    But while it may be a total fail on a practical level, it’s on much firmer ground “philosophically”. If you’re rejecting the silly idea of dualism, and you’re prepared to believe that “you” are not some intangible bit of essence with an existence and continuity separate from the arrangement and interaction of the purely tangible and law-abiding atoms and molecules making up your body and brain, then it seems to me you’ve got to accept that various kinds of snapshots and re-creations of those patterns (with *sufficient* — not perfect — fidelity assumed) have equal claim to being continuations of the “real” you. Even virtual ones.

    Our language and conception of ourselves is really invested in this whole “me” concept, so it’s hard to swallow, but this business of “I die” if my pattern gets copied or chopped up and shuffled around in unsettling ways just doesn’t fly in a non-dualist world.

  43. transenigma32 says

    I have a metaphor that I like to use about consciousness:

    Imagine a car in motion. That car is moving from point A to point B at a varying speed, slowing down, stopping, and starting again until it gets to its destination, where it shuts off for good.

    Consciousness is not this car. Consciousness is the motion of the car. It’s the velocity, the direction, the inertia of the car. Even when the car is stopped at a stop light, the engine is still going; the car is still turned on. The continuity of consciousness is better compared to the continuity of motion of a car, with the car being the brain. Your consciousness does not exist as a single entity at any given point in time; it’s a whole continuity of events created by the brain, your genetics, and external factors. A snapshot of the car in motion is only going to show the car, not the motion. You can run it through simulations that might guess where the motion came from and what the motion would be, but none of that is the consciousness. None of that is you. It’s just a snapshot of the brain. You are the whole continuity of motion, past, present, and future.

    When you are asleep, the car is at a stop sign. When you’re awake, the car is driving. When you begin to develop consciousness (around 24 months, iirc), you go through the steps of turning the car on. When you die, you go through the steps of turning the car off. There is a time before you truly die where you have the car sitting in idle; when physicians bring people back from the dead, they’re generating movement out of a car that was in idle and set to neutral rather than drive. When you turn the car off, the motion goes away; the car is no longer moving nor has the potential to move until it’s turned back on again (yes, I know it’s still technically moving, but I’m talking about what an observer would see watching it via camera from beginning to destination).

    If you were to take a highly detailed, fine-resolution picture of every part inside of the car in three dimensions at any point from point A to point B, you’d get a high-resolution picture of every part inside of the car in three dimensions at that point in time – but not the car’s movement. I’m sure there’s a way to capture the movement and then simulate it on a computer, but that’d require something vastly more complicated than what most people imagine. A single MRI image isn’t going to cut it, even if you go down to the molecular state of the brain, since that’s no different than going down to the molecular state of the car. It doesn’t get you the movement.
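
    To translate the metaphor into a toy sketch (everything here is illustrative, not a model of brains): a position-only “photo” of a moving point underdetermines its future, while the full dynamical state (position plus velocity) determines it exactly:

    ```python
    # Toy illustration of the snapshot problem. A "photo" captures position
    # alone; the dynamical state also carries velocity. Only the latter is
    # enough to continue the motion. (Names and numbers are illustrative.)

    def step(pos, vel, dt=0.1):
        """Advance a trivially simple 1-D 'car' by one time step."""
        return pos + vel * dt, vel

    pos, vel = 0.0, 5.0
    for _ in range(10):
        pos, vel = step(pos, vel)

    photo = pos          # static snapshot: position only; the motion is lost
    state = (pos, vel)   # the full dynamical state

    # From `photo` alone the next position is undetermined (any velocity fits);
    # from `state` the trajectory continues exactly:
    print(step(*state))
    ```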

    I’m willing to leave room that I could be wrong and that there’s more to it than that. I’m living proof a little knowledge is a dangerous thing; I know just enough about a wide array of subjects to make people who know nothing about them think I know a lot, and make people who know more cringe and wish I’d shut up since I don’t know anything. I think it’s premature to say it’s impossible, but it’ll be far more technologically challenging than the computer boys whose only brush with biology is the evopsych they use to justify their misogyny think it will be. The brain is not a traditional computer. The mind is not software. Uploading the mind is not going to be a walk in the park. Any AGIs that we create are not going to be achieving sapience the same way that humans did it, if they can at all (Gödel’s specter and all that).

    But I’m willing to keep an open mind, even if it’s not going to be easy (hell, a brain transplant into a synthetic, vat-grown biogenic body might be easier). After all, physicists at MIT and Harvard just created a new type of matter based off of light – photonic molecules. I mean, if you can make lightsabers real, I’m willing to accept that just about anything is possible, but without evidence or proof, it’s not plausible.

  44. Ingdigo Jump says

    Unless the original is somehow consumed and deleted in the process of uploading (for example, rather than uploading into a computer, infecting a host with nanites that gradually replace organic cells with synthetic replicas), I’d consider any “uploaded” being to be an offspring, not me: the mental equivalent of a plant runner or clone.

  45. says

    Indeed, PZ’s is not much of an objection.

    Oh, really? It’s the objection that matters. It’s like listening to people arguing that if only we had perfect and absolute resolution of the position of every molecule in a 1.5 kg sphere we could…and I interrupt to say that that is actually physically impossible, and so you tell me to sit down, that’s our premise, so you have to accept it.

    Right. Go on. Keep your discussion in the plane of total batshit unreality, and then you can pretend whatever you will.

  46. Ingdigo Jump says

    My philosophical argument is more that, even granting it’s possible, I don’t think it would be DESIRABLE.

  47. consciousness razor says

    Our language and conception of ourselves is really invested in this whole “me” concept, so it’s hard to swallow, but this business of “I die” if my pattern gets copied or chopped up and shuffled around in unsettling ways just doesn’t fly in a non-dualist world.

    I don’t understand. Getting chopped up in a non-dualist world usually involves death, does it not? It’s typically only when you have a soul that can go to an afterlife that getting chopped up doesn’t mean certain death.

    So maybe what you ought to do is explain exactly what you think is supposed to happen. Say how that’s (1) physically possible, and (2) how it’s me which survives, not a copy of me after I am destroyed. (Because that may well be possible assuming dualism’s false, but being a copy doesn’t imply it’s me.)

  48. A Masked Avenger says

    consciousness razor,

    Why could machines only pretend? Who’s saying that? You’re definitely conscious. And that doesn’t require magic. It doesn’t need to be any more complicated than that.

    You’re right: there’s an assumption of mine sneaking in. For a sci-fi reference, think D.A.R.Y.L., or of course Do Androids Dream of Electric Sheep?. We can make a robot that swears that it’s conscious, and if threatened, begs for its life. But it’s impossible to objectively confirm or deny that it is in fact conscious. The scientist in the movie cited the Turing test when she said, “A machine is alive when you can’t tell the difference.” But that’s a convention we adopt; in fact it’s impossible (so far as I know) to answer that question.

    As a software engineer, I’d always have said that a deterministic program is more or less by definition not conscious. Back when I said it the most, though, I had no knowledge of non-deterministic programs, nor of how they might achieve non-determinism by other than virtual coin-flipping. I’m still somewhat doubtful, though less than before, that software can truly be “sentient”–whatever that even means. But I’m confident that it can emulate sentience well enough to pass the Turing test.

    Similarly, I know I can pass a Turing test. I’m doing it, by having this conversation. But you have no way of objectively knowing whether I’m actually sentient, or just a futuristically clever emulator. Which once again calls the existence of sentience itself into question: it’s impossible to verify empirically. I believe it exists only because I’m so damned sure that I’m sentient, and I derive everyone else’s sentience from the assumption of my own, by means of empathy.

    The core “mystical” assumption I’m making is roughly that I’m not a deterministic machine–specifically, that I have “free will.” That my decisions are in some meaningful sense my own, and are not derivable from a complete, godlike knowledge of the complete state of my brain and all external stimuli.

    Depending on what is meant by “continuity,” we don’t. But in an important sense, my body isn’t continuous with Julius Caesar’s in the way it is with the younger version of CR that I remember being.

    Agreed. But there’s circularity lurking here. In effect we’re privileging the fact that the physical artifact in your head is more or less the same artifact that was there yesterday. If physical custody of your brain defines “you,” then by definition no copy can be you. But that’s a question-begging definition. Faced with two otherwise-indistinguishable consciousness razors, is the “real” one the one that has the special meatball?

  49. consciousness razor says

    Right. Go on. Keep your discussion in the plane of total batshit unreality, and then you can pretend whatever you will.

    I haven’t disagreed with you. There’s probably no way to do it (you left out a key term there, if we’re being “realistic”). But people will insist, no matter how unlikely it sounds to either of us, that maybe future technology will make it so people can do “procedures” you never dreamed of. What then? Well, then reality is not quite what you think it is, but they still don’t get uploading, because it destroys the original. I’d say that’s as practical and as to-the-point and as down-to-Earth as it gets.

  50. says

    We could also argue that maybe future technology will get around the laws of thermodynamics. But then we’re not actually talking about technology anymore, but magic.

  51. jack lecou says

    I don’t understand. Getting chopped up in a non-dualist world usually involves death, does it not? It’s typically only when you have a soul that can go to an afterlife that getting chopped up doesn’t mean certain death.

    Well, stressing that this is all purely hypothetical – and that I agree that granting the premise of various magic technologies is all a bit masturbatory. But if we’re philosophizing…

    Suppose I’ve got a carbonite machine from Star Wars, and I freeze you solid, let you sit for a century or two, then unthaw you. Your life and consciousness effectively stopped completely all that time. And normally you’d be long dead after a couple hundred years. Is the “you” that wakes up the same as the “you” that got frozen? Did you “die” at some point?

    Suppose I freeze you solid as above, then chop your corpsicle up into a lattice of 1 inch cubes with a magic subatomic laser knife. I UPS the cubes in separate packages a couple thousand miles, stack them back up in the right order, and thaw you out (I guess after a pass over the joints with the magic rebinder beam). A body wakes up with a heartbeat and says “hello, where am I? Did it work?” Is that not you? Did “you” die along the way just because you were in different pieces for a while? Did someone else wake up in your place?

    Etc., etc., ad infinitum.

  52. Ed Seedhouse says

    I’ll just arbitrarily assume “perfect copy” because I want to talk about the philosophical part.
    I’ll also assume the copy process is fast and done under anesthesia (or instantaneously and painlessly which amounts to the same thing).

    First take the “transporter” situation where the first copy is destroyed and the second identical copy is recreated elsewhere. I say “first copy” because we all seem to be copies already. I don’t know where my “self” went overnight and actually think it didn’t exist and so couldn’t go anywhere anyway. Seven hours later “I” woke up with a memory of going to bed last night. But much of me changed during that night, so I seem to already be a rough copy of the “me” that went to sleep last night.

    Generally I don’t think there is any evidence that we are or have “selves”. The Buddhists are probably right. For example, the bit of matter typing this message seems to have a memory of consciously experiencing life for short periods without that “sense of self” that it normally carries around, and remembers that as being pleasurable for the short time it lasted.

    Talking that way is complicated because of the structure of our language so I’ll go back to the “normal” conventions, just for convenience. Anyway if “I” am killed by the perfect copy process I won’t know it. Something that no longer exists can’t “know” anything, can it?

    Now I’ll assume that the “original” remains alive. Two exist that are identical in every way and both think they are the original. But wait, they are NOT identical in every way; they can’t be. They MUST at the very least occupy different positions in space. So if I go under anesthesia in one bed and remain there during the duplication process while the copy is made in another location, the doctors or technicians who make the copy will know which is the original, and they could tell “me” that I was the original. Or they could switch our locations while we were unconscious. Then perhaps “I” would wake up in a different spot than I remember being in and be told I was the copy. But if we weren’t switched, the copy would be told the same thing, and both of us would still think we were the original. How then can I tell for sure whether the technicians were lying and I am really the copy? And even if I wake up in the same place and am told I am not the copy, how can I be sure that I am not being lied to?

    But still there are two copies who are constrained to remain in different locations in space, and our thoughts and histories would diverge from there. We would be “different people” from that point on. But as I said before, the “original” is really just a copy of an earlier version anyway.

    All these confusions arise from the belief in what I think is a nonexistent “continuing self”. Of course that sentence is formally incoherent in English, but I think you’ll know what I mean.

    Our language isn’t designed to cope well with this kind of discussion. One implication of the “I don’t exist” position is that no one has a “sense of self”, since senses are designed to give us information about what really is there. If my position is correct it’s more like “illusion of self”.

    But illusions can be very useful. After all you are not looking at letters and words on your computer screen, but merely an arrangement of dots (which are themselves arrangements of molecules) that our brains conveniently delude themselves into seeing as letters and words. Yet using this illusion we are enabled to convey thoughts from one to the other.

    Evolution didn’t produce us as beings who see the “actual” world but as beings who interpret the information that we take in well enough so that we are able to pass this ability to our children. Whether we see the “world” as it actually is is irrelevant to evolution.

    All in all I think it may be just as well if it turns out that making copies or “uploading” my “mind” is impossible for physical reasons.

  53. A Masked Avenger says

    Oh, really? It’s the objection that matters. It’s like listening to people arguing that if only we had perfect and absolute resolution of the position of every molecule in a 1.5 kg sphere we could…and I interrupt to say that that is actually physically impossible…

    I agree that’s indeed “much of an objection.” If a perfect, quantum-level copy of your brain is required in order to have a working copy of your mind, then it’s probably even provable that this copy can never be generated. I’m no physicist, but I strongly suspect that uncertainty and exclusion principles team up to guarantee that it can’t be done.

    The potential counterargument that I offered was that it’s unproven whether such a physical copy is actually necessary in the first place. Neural nets, for example, are mathematical abstractions which simulate only a very particular aspect of your brain: that (1) pairs of neurons are “connected”; (2) a signal “fires” only when a threshold is exceeded; and (3) the thresholds are adjustable. If a sufficiently complex neural net can simulate your brain–which is far from proven–then massive amounts of your brain’s complexity can be ignored.

    Neural networks themselves, though, can be represented with other mathematical abstractions that look nothing like neurons. Depending on the particulars, for example, matrices can be used, and neural (sub)networks can be reduced to linear algebra.
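
    As a minimal sketch of that reduction (weights and inputs arbitrary, chosen only to illustrate the abstraction), a whole layer of threshold “neurons” collapses into one matrix multiply:

    ```python
    import numpy as np

    # A layer of threshold "neurons" reduced to linear algebra: W holds the
    # connection strengths, b folds in the adjustable thresholds, and a
    # neuron "fires" when its weighted input exceeds its threshold.
    # Nothing here looks like a neuron, which is the point. (Illustrative values.)

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))   # 3 inputs feeding 4 neurons
    b = rng.normal(size=4)        # per-neuron thresholds, as biases

    def layer(x):
        return (W @ x + b > 0).astype(float)   # fire iff threshold exceeded

    print(layer(np.array([1.0, -0.5, 0.2])))   # a 0/1 firing pattern for the 4 neurons
    ```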

    It may still be “batshit unreality,” in that we’re certainly light years away from anything approaching the complexity of a human brain. But it’s batshit unreality on the order of interstellar travel or teleportation. It’s not in the same class at all as sky fairies or voodoo dolls.

  54. troybmason says

    PZ: “A perfect simulation? What is that?”

    Well, for the purposes of this discussion, let us assume it is a simulation so close to the original as to be virtually indistinguishable. This is a “what if” scenario after all. If humanity manages to stick it out for a few thousand more years, or even a million, there is a chance we could achieve the technology required. (I suspect that the simulation could be way short of perfect and it would still work – our meat brains are far from perfect when it comes to memory.) Then it becomes an argument, as outlined above, about whether there actually is a “me” or just the illusion of me created by an uninterrupted string of memories. If you go to sleep, then die, then a simulation is woken with your memories intact, then I believe that will be you (virtually), as long as any illusion of continuity is maintained. The belief that identity (me-ness if you will) is somehow dependent on the vehicle is a fairly non-atheist viewpoint. It almost implies a soul or separate spiritual entity.

    The master of this stuff in SF is the Australian author Greg Egan. If you haven’t read Permutation City or any other of his fine books, I urge you all to do so.

  55. jack lecou says

    We could also argue that maybe future technology will get around the laws of thermodynamics. But then we’re not actually talking about technology anymore, but magic.

    Brain uploading falls apart on practical grounds, yes. But you’re posting Wilkins’ argument approvingly, and, for the sake of argument, assuming away the technical problems. Wilkins’ philosophical argument stands or falls in the realm of silly technological hypotheticals, so it’s no fair defending that argument by falling back to the much more solid ground of physical or technical impossibility.

  56. ricepad says

    Ugh. This philosopher really has no clue. Lots of words, but I don’t really see any substance.

    Have you ever heard of a virtual machine? It’s basically a simulator of the computer that it’s running on. A computer program can run on the computer itself, or in a VM, and it doesn’t have to even notice any difference. The VM can be moved to different hardware, with the program still running.

    It’s exactly the same with our brains, it’s just that we’re not able to extract enough information from a running (or even dead) brain. If this were ever possible (advanced MRI, whatever, who knows, though I doubt it), then a copy of me running on a computer would initially be exactly the same as me in every respect. And then we would rapidly diverge – still both being conscious beings, probably both claiming to be me.
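    To illustrate the idea (a toy sketch in Python, with an invented three-instruction machine; real hypervisors are vastly more involved): the entire “machine” is just data, so it can be paused mid-program, copied to different hardware, and resumed without the running program noticing:

        import json

        # A toy "virtual machine": its whole state is plain data, so it can
        # be snapshotted, moved, and resumed. The instruction set is invented.
        def run(vm, steps):
            for _ in range(steps):
                if vm["pc"] >= len(vm["program"]):
                    break
                op, arg = vm["program"][vm["pc"]]
                if op == "push":
                    vm["stack"].append(arg)
                elif op == "add":
                    vm["stack"].append(vm["stack"].pop() + vm["stack"].pop())
                vm["pc"] += 1
            return vm

        vm = {"program": [["push", 2], ["push", 3], ["add", None]],
              "pc": 0, "stack": []}

        run(vm, 2)                       # run partway on "hardware A"...
        snapshot = json.dumps(vm)        # ...capture the complete state...
        resumed = json.loads(snapshot)   # ..."move" it and resume elsewhere
        print(run(resumed, 1)["stack"])  # -> [5]; the program never noticed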

  57. consciousness razor says

    Suppose I’ve got a carbonite machine from Star Wars, and I freeze you solid, let you sit for a century or two, then thaw you out. Your life and consciousness effectively stopped completely all that time. And normally you’d be long dead after a couple hundred years. Is the “you” that wakes up the same as the “you” that got frozen? Did you “die” at some point?

    I’d say I was in suspended animation or frozen or whatever. I might have been dead, then came back alive. No big deal, I guess. And if you somehow chop me up and put me back together, same thing. I’d be alive again, and I’d still be me.

    But that’s not like brain-uploading, which is destroying me, in order to make some other thing that thinks it’s me. The fact that it believes that is pretty neat I guess, but it isn’t my problem, because I’ve been destroyed.

  58. A Masked Avenger says

    But that’s not like brain-uploading, which is destroying me, in order to make some other thing that thinks it’s me. The fact that it believes that is pretty neat I guess, but it isn’t my problem, because I’ve been destroyed.

    “Uploading” is a computer-science term, and it always implies a copy. The original is not destroyed in the process. A “destructive upload” probably needs a different name.

    If, as PZ implies, a perfect copy of the brain is required, then “destructive upload” is really something like quantum teleportation, which actually raises similar philosophical questions: is a quantum teleport of my favorite mug still the same object? It’s indistinguishable in every way–no contradiction will ever present itself if I assume that my mug “jumped” rather than being “destroyed and then duplicated.”

    Even if a perfect copy is not required, PZ’s objection has some weight: somehow or other we need pretty darn complete information about the state of his brain, before we can attempt to simulate it. That information is unobtainable today. It might be unobtainable even in principle, if it depends in any important way on processes governed by the uncertainty principle.

  59. says

    #61, a Masked Avenger:

    That’s the thing about Wilkins’ argument. I say, you can’t make a perfect copy, it’s physically impossible. Then the transhumanists reply that they aren’t going to, they’re going to model the brain with a simpler simulation that may use a very different substrate…and boom there’s Wilkins waiting to bash you. The map isn’t the territory. You’ve made a map and model of me, for instance, but it’s not me; it’s only going to mimic me within a narrower range of experience over which you’ve determined the parameters of its operation.

    How are you even going to test the accuracy of your model? Because I can tell you that a lot of people have very divergent views of who I am, and what you’re actually going to build is a reflection of the creator’s perception of PZ Myers, not an actual copy of PZ Myers.

  60. consciousness razor says

    “Uploading” is a computer-science term, and it always implies a copy. The original is not destroyed in the process. A “destructive upload” probably needs a different name.

    I get that, but we’re talking about actual human beings being uploaded, so the question is how that is supposed to work. Tell me one way to do it that isn’t destructive. Many people have already tried and failed. Before you even got to the point of worrying about quantum uncertainty, you’d need to shoot a fuckload of radiation at my body to see what everything is, where it is, where it’s going, etc. What do you think happens then? I just get a sunburn during the upload? Does software ever get a sunburn when it’s uploaded/downloaded? So how analogous do you expect the situation with software to be?

  61. says

    John Wilkins’ objections to this idea are not good ones. The map is not the territory except when it is. If in fact a simulation of your brain was made with enough fidelity to be functionally equivalent it would have the same mind as you do. What would actually happen is that after the inputs to the computer were changed from the sensory inputs that your brain receives there would be two entities who claimed to be PZ Myers. They would both be equally right. The biological version of you would still not be wanting to die any time soon, but resigned that it is inevitable. The computer version would get to live for a very long time. I see no reason to favor the blog posts of the biological Myers. None of this is really possible anyway because a functionally equivalent computer would be impractical. But it does raise interesting points about the personal identity of an AI since it would have much more capacity to change who it was and could replicate itself very easily.

  62. says

    If in fact a simulation of your brain was made with enough fidelity to be functionally equivalent it would have the same mind as you do.

    What is “enough fidelity”? What is “functionally equivalent”? I read those as “not quite identical” and “mostly similar, but with some differences I’m unable to test”, and they don’t quite align with “the same mind”.

  63. says

    First off, I don’t bash. I use Ockham’s Razor to cut.

    Second off, I am no dualist. Information is not a physical property (as no less than Norbert Wiener observed). If the brain causes mentation, as I think it does, then it does so physically. So a simulation of the informational properties of the brain is not the brain, and it is only because we now have this thing about computers the way we once had a thing about clockwork that we think information is objectively real.

    Third off, did any of the critics actually read my argument? It seems not…

  64. says

    I’d say I was in suspended animation or frozen or whatever. I might have been dead, then came back alive. No big deal, I guess. And if you somehow chop me up and put me back together, same thing. I’d be alive again, and I’d still be me.

    It would seem to me that it would follow that if the transporter took your original molecules and reassembled them at your destination, you’d agree that that was you. Correct?

  65. Rob Grigjanis says

    daniellavine @40:

    The conclusion that my “transporter clone” is not “really me” is based on unreliable human intuition rather than rigorous philosophical analysis or empirical data.

    If the copying process leaves the original a dying mass of raw flesh, is there still any doubt about “really me”? To the person in question, gasping their last “what the fuck was I thinking?”, there would be no doubt whatsoever. I guess the philosophical discussion could continue after the mess has been cleaned up :)

  66. consciousness razor says

    Information is not a physical property (as no less than Norbert Wiener observed).

    Interesting. Pretty sure I don’t understand what that’s saying or what you mean by “objectively real.” It’s a non-physical property? It’s caused (or embodied) by physical stuff, but it’s not a property of that stuff?

    It would seem to me that it would follow that if the transporter took your original molecules and reassembled them at your destination, you’d agree that that was you. Correct?

    Possibly. I’d want more specifics before I signed up. And if it’s cheaper, I’d rather take the bus.

  67. says

    I could explain again (as I did once years ago) about my own experience with anterograde amnesia.

    Bottom line is, whether you’re talking teleportation or downloading your mind to a computer, the ONLY way the result is the same “you” is if you think there’s something to you besides the collective result of all that makes you.

    If you don’t think there’s something else, then the downloaded copy or the teleported copy are just that – copies, perfect and everything, but you’re still DEAD.

    If you think somehow your personal consciousness gets teleported or downloaded too, then there must be something else.
    Sounds very “soulish” to me.

    When the mechanism in my brain that stored new memories was not working, there was no continuity.
    I was talking, interacting, but I wasn’t there. There was no ME. I didn’t exist. BLANK. I had no experience of existing.
    Those weeks are still blank.

    When it started working again, I started having snatches of consciousness.
    After it came back fully, I still had to learn to experience myself in my life despite having “knowledge” of who I was and what had happened in the years before my accident.

    The following year was like getting used to living in the identity I had “stored” from before. I’m told my personality changed “like night and day.”
    And my memories of the 18 years before my accident feel distant, feel like “downloaded” memories.

    From my point of view, my life feels like it began in March 1984, when my body was 18 years old, and I started (again) having a continual thread of memory. The “before” is like a stored memory from someone very like me, with gaps in it.

    No way in hell you’d ever get me to step into a teleporter.
    I’ve done that once, and I don’t want to have to start from scratch again.

  68. says

    Entropy is a physical property (of ensembles of microstates). Structure is physical. Mass is physical. etc. Basically whatever is studied by physics is physical. But information is not physical. It is an abstract property that obtains in the relation between observer systems and the objects or states being observed. And that depends rather crucially on what aspects of the states observed the observer system finds salient or to which it responds. If there were a god or Laplacean Demon, then presumably all states would be observed, and so the information would coincide with the physical reality, but we can’t do that. So information involves us abstracting away from the (causal) physical world to (acausal) formal relations.
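    A toy way to see the salience point (a Python sketch; the example string and the two “observers” are invented purely for illustration): Shannon’s formula assigns different amounts of information to one and the same physical arrangement, depending on which distinctions the observer system treats as salient:

        import math

        def entropy_per_symbol(symbols):
            # Shannon entropy: -sum(p * log2(p)) over the symbol frequencies.
            probs = [symbols.count(s) / len(symbols) for s in set(symbols)]
            return -sum(p * math.log2(p) for p in probs)

        msg = "squidsquidsquid"  # one and the same arrangement of matter

        # Observer A finds individual letters salient:
        print(entropy_per_symbol(list(msg)))                      # ~2.32 bits
        # Observer B only distinguishes vowels from consonants:
        print(entropy_per_symbol(["v" if c in "aeiou" else "c"
                                  for c in msg]))                 # ~0.97 bits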

  69. says

    Jafafa Hots @ 75, thanks for that. I think in discussions like these, the wish for continuation overrides reality, in particular, the reality of our brains, which can do some damn strange stuff when damaged or changed.

  70. says

    @consciousness razor
    In the carbonite example, you said that suspending you, chopping you up, putting you back together and then reanimating you, would leave the same “you”. If the transporter takes you apart and reintegrates the same molecules at the new location, how is that any different?

  71. Ingdigo Jump says

    @Jafa

    Yes it does sound Soulish. Reminds me of the series Ghost in the Shell where the setting has “Ghost” as a philosophical (and apparently legal) term to address that. It doesn’t seem to presume an actual thing, just a social construct

    *************************************

    People saying this means AI is impossible are way way way WAY missing the point.

    No; if an AI could imitate a person or mind well enough, it would be “conscious” as it says it is. What it would NOT be is the original person. Saying this print is not the painting doesn’t mean that the print isn’t real or doesn’t have artistic value.

  72. consciousness razor says

    johnwilkins, #77:
    Thanks, that helps clear it up.

    LykeX, #80:

    In the carbonite example, you said that suspending you, chopping you up, putting you back together and then reanimating you, would leave the same “you”. If the transporter takes you apart and reintegrates the same molecules at the new location, how is that any different?

    Yes, and I meant to say it could, since it was just a hypothetical after all. I don’t assume that any way of chopping me up and putting me back together must work. It might not. But it also might. So I think it’s possible. (Though it may not be, and anyway it’s extremely improbable.)

    Chopping me into 1 inch cubes is a little different from disintegrating me atom-by-atom, though. One process is a bit more “invasive” than the other. (If not outright “destructive,” you know?) And I don’t really get why I’m being disassembled and reassembled, if this is a “transporter” which would ordinarily mean keeping me intact while moving me through space.

  73. says

    Chopping me into 1 inch cubes is a little different from disintegrating me atom-by-atom, though. One process is a bit more “invasive” than the other.

    Why?

    Sure, we can agree that one is more extreme than the other, but it’s more a difference in degree, not quality. You’re still being taken apart to a degree utterly incompatible with normal function.
    Besides, then we just get into a big discussion about at what point there’s too much disintegration. E.g. What if it’s half-inch cubes? What if it’s cell-by-cell? What if half your brain is dematerialized, but the other half is kept intact and transported whole?

    Furthermore, once you accept that your mental processes can be completely suspended and your physical self disintegrated (to whatever degree), what’s the problem with reassembling the body from new materials? If doing that means it’s a new “you”, then why isn’t it a new “you” when you’ve been frozen and diced?

  74. says

    To avoid being trapped in conceptual loops or recursions, start with an experiential definition of consciousness: “Consciousness is that which is hearing these words now.” It is conventional to assume that consciousness is localized, limited and personal, but if you examine the evidence for these assumptions you won’t find any. Remember, I’m not talking about the observation that “consciousness” as characterized by the purely pragmatic Turing test, i.e. when viewed as an object, is clearly associated with brains. The problem with this is that consciousness isn’t an object, and no amount of specious reasoning can make it so. Return to the definition: “Consciousness is that which is hearing these words now.” This consciousness certainly exists, and yet the way in which it is known to exist is quite unlike the way I know that this computer exists, in fact I cannot readily explain the source of the certainty.

    In this light, therefore, most of what’s been written here, and more generally, on issues of downloadability concerns minds and bodies rather than consciousness. If it were possible to copy a body-mind, would consciousness also be present therein? I don’t find this question particularly interesting, since in any event I wouldn’t be able to directly experience that consciousness, just as the consciousness of people around me is also not accessible. But there is a way to establish the true nature of consciousness, and the first step is to be open to the possibility that consciousness is not localized, limited and personal.

  75. says

    No; if an AI could imitate a person or mind well enough, it would be “conscious” as it says it is.

    Except maybe not. I had no consciousness during the weeks in hospital when I was talking to my family (somewhat nonsensically).
    They certainly treated me as though I did, and despite knowing that I had the amnesia, I’m sure they didn’t think of me as a non-conscious “thing.”
    Even though I was.

    But of course since you can’t know another being’s experience, the only answer is to treat that being as conscious anyway.

    Incidentally, I sometimes use this as an example of what death is like when talking to religious people. I know what death is, I’ve BEEN there.

    There’s not much to see.

  76. says

    Jafafa Hots:

    Except maybe not. I had no consciousness during the weeks in hospital when I was talking to my family (somewhat nonsensically).

    I found The Tell-Tale Brain by V.S. Ramachandran enlightening when it comes to the vagaries of consciousness and just how strange and nonsensical things can get when it comes to our brains.

  77. says

    But there is a way to establish the true nature of consciousness, and the first step is to be open to the possibility that consciousness is not localized, limited and personal.

    Souls.

  78. Ingdigo Jump says

    @Jafafa

    For the sake of the argument, presume I meant something that is basically a person in behavior, i.e. all external signs indicate that it is, as much as they would for a person.

  79. says

    Sorry if my response #88 was flippant.

    To elaborate, if consciousness is not localized and limited and personal, you have to propose a mechanism. Something to carry that information beyond your physical brain.

    Quantum!

  80. evandrofisico says

    johnwilkins @77, on complex systems: similar emergent behavior may arise from systems of very different natures.

    Such systems (networks, fractal surfaces, synchronization systems, etc.) are usually way simpler than a brain, but they are a good indication that total awareness of a system’s state is not necessary to replicate its emergent properties.
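    A minimal sketch of the sort of thing I mean (Python, with made-up parameters): Kuramoto-style oscillators, where each unit follows only a simple local coupling rule and nothing tracks the global state, yet collective synchronization emerges anyway:

        import numpy as np

        # Kuramoto-style oscillators: each phase is nudged only by a simple
        # local coupling rule; no element "knows" the full system state.
        # All parameters are arbitrary illustrative choices.
        rng = np.random.default_rng(0)
        N, K, dt = 100, 2.0, 0.01
        omega = rng.normal(0.0, 0.5, N)        # natural frequencies
        theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases

        for _ in range(5000):
            coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
            theta += dt * (omega + (K / N) * coupling)

        # Order parameter r: near 0 = incoherent, near 1 = synchronized.
        print(abs(np.exp(1j * theta).mean()))   # typically close to 1 here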

  81. consciousness razor says

    LykeX, #84:

    Why?

    Sure, we can agree that one is more extreme than the other, but it’s more a difference in degree, not quality.

    There could be a qualitative difference, I guess, but I’m not arguing that. I’m just saying it is different, and it’s hard to tell whether that’s relevant with so little information. I don’t know how I’m being disintegrated and put back together, and it may be just as sloppy as chopping me into big chunks. I’m not saying the big chunks are a better option (probably not if I had to guess), just different.

    Furthermore, once you accept that your mental processes can be completely suspended and your physical self disintegrated (to whatever degree), what’s the problem with reassembling the body from new materials? If doing that means it’s a new “you”, then why isn’t it a new “you” when you’ve been frozen and diced?

    I want to know how it is that I am supposed to have access to the experience of this new material. Is anything supposed to cause that to happen? With the continuity of having the same parts, it seems like there could reasonably be a causal pathway wherein I could still exist when it’s reassembled as it was before (not existing as some “new” me, but the same old me that was existing before). But if it’s just manipulating some material into the same abstract pattern that I was in, then I’m not seeing any sign of a physical mechanism. Where’s the causation supposed to be in that kind of a set-up? I think my experiences are caused to be what they are, not just floating out in Platonic heaven or whatever. So having a copy of my pattern isn’t having me. Maybe with it you could make another person just like me, but that wouldn’t cause me to be him. Clearer?

    Jafafa Hots, #86:

    Except maybe not. I had no consciousness during the weeks in hospital when I was talking to my family (somewhat nonsensically).

    You at least don’t remember whether you were conscious. You wouldn’t be able to tell the difference between not remembering and not being conscious during that time. If they say you were, I’d believe them.

  82. Holms says

    #1
    Assuming a perfect simulation were physically possible, what’s the difference between it and the original?

    I have never understood the case argued by the transhumanists, given that we have reason to believe that what it assumes is physically impossible, rendering the point moot. All I ever see in discussions with the True Believer type is hand-wave after hand-wave, wherein every difficulty is magically overcome by the vaunted (and equally implausible) technological singularity.

    #18
    As long as I can avoid a messy situation where two of me’s exist simultaneously, I’m fine with me being destroyed and a copy continuing.

    This thinking is altogether bizarre and foreign to me. The fact that another copy of you continues won’t make you any less dead nor your oblivion any less complete.

    #26
    A basic paradox that arises from this viewpoint would be if our intrepid philosopher were to suddenly discover that he, himself, is a mere simulation running on a computer.

    In a similar vein as the reply made to #1, the paradox falls apart if that condition never arises. Since we have no reason to believe it could possibly arise… the paradox kinda vanishes.

    #32
    Of course, that would require your complete destruction; not sure why they left that part out.

    Why does it require that? This entire topic requires us to first assume that we can break reality. If we are granting ourselves machines that can make perfect simulations, brain uploads etc. then why not go a tiny step further and assume that it doesn’t destroy us in the process? Either way, we are essentially assuming magic.

  83. says

    Only @43:prae seems to have touched on this so far, but the way to avoid the problem of making a copy is to perform a gradual replacement of the brain in situ — i.e. maintain the continuity of experience and uniqueness of identity. People who keep the same car for decades still treat it as the same car they originally purchased, even if they have had most of it replaced, piece by piece, over the years.

    I don’t think there is much doubt that artificial implants will eventually be used to enhance the brain, or to replace damaged parts/functions like, say, the ability to generate dopamine, thus preventing Parkinson’s disease.

    Now, whether we could eventually find some way to replace the parts responsible for the higher functions of the brain, I have no idea, but if it was ever to become possible, then a gradual replacement of the brain, in situ, over a period of weeks or months would bypass the problem of creating a duplicate through downloading or by some other means. Once the transformation was complete, it would then presumably be better suited to be placed in a more secure and durable “container” for the long haul.
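    For what the analogy is worth, the distinction between in-situ replacement and duplication even shows up in miniature in programming (a Python sketch, purely illustrative; the names and “parts” are invented): swap out every part of an object one at a time and it remains the same continuing object, while a copy, however exact, is a distinct one:

        # In-place gradual replacement vs. copying, in miniature.
        # The list stands in for the "vehicle" of identity.
        brain = ["neuron%d" % i for i in range(5)]
        original = id(brain)                 # identity of the continuing object

        for i in range(len(brain)):
            brain[i] = "implant%d" % i       # replace one piece at a time

        duplicate = list(brain)              # an exact copy, made all at once

        print(id(brain) == original)         # True:  same continuing object
        print(duplicate == brain)            # True:  indistinguishable parts
        print(id(duplicate) == original)     # False: a distinct duplicate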

  84. Holms says

    #64
    Have you ever heard of a virtual machine? It’s basically a simulator of the computer that it’s running on. A computer program can run on the computer itself, or in a VM, and it doesn’t have to even notice any difference. The VM can be moved to different hardware, with the program still running.

    It’s exactly the same with our brains, it’s just that we’re not able to extract enough information from a running (or even dead) brain.

    Uhhhhhhhhhhhh say what.

    Ugh, this topic always gets me cranky. Adios from Santiago Badass, before I start yelling at people.

  85. says

    You at least don’t remember whether you were conscious. You wouldn’t be able to tell the difference between not remembering and not being conscious during that time. If they say you were, I’d believe them.

    I don’t think you’re following me.
    I was “conscious” as in awake and talking to them. I just wasn’t experiencing it myself.

    I don’t find that kind of consciousness personally useful.

  86. says

    Wilkins: And even then it won’t be exactly like me, unless you also simulate the body, the environmental affordances for that body that match the world I do inhabit, and every last possibly important biological function or process that might affect me. I look forward to hearing from the person who has to simulate my lower colon.

    I agree that simulating the body could be important (I’ve actually written a short story based on that idea), but why would they have to simulate a specific lower colon to produce an accurate “silico-me”?

    After all, would having surgery or taking a drug that fixed his gastroenterological issues suddenly mean he was no longer John Wilkins? I know someone who’s had his entire colon removed, and he’s still very much the same person. So, you can indeed simplify to some extent. Modeling one or two well-functioning human colons may be all you need for all simulated human beings. And even then, there are people who live pretty normal lives without any sense of smell, touch, sight, etc. If Wilkins suffered a stroke and was deprived of one of those senses, he would still be him, even if his daily life experiences had changed.

  87. says

    @Jafafa Hots #90

    If consciousness is not localized and limited and personal, you have to propose a mechanism.

    No I don’t: you’re the one making a positive claim, that consciousness is localized and limited and personal, so the onus is clearly with you. But I don’t mind helping you out a little: to the extent that you can free yourself from the notion that consciousness is thought-like, you’ll be more available to the possibility that consciousness is also space-like and matter-like.

  88. Nerd of Redhead, Dances OM Trolls says

    that consciousness is localized and limited and personal, so the onus is clearly with you.

    If it isn’t localized and personal, it is a claim by you, and where the fuck is your evidence after months of presuppositional evidenceless whining? Your unevidenced word isn’t good here….

  89. says

    No I don’t: you’re the one making a positive claim, that consciousness is localized and limited and personal, so the onus is clearly with you. But I don’t mind helping you out a little: to the extent that you can free yourself from the notion that consciousness is thought-like, you’ll be more available to the possibility that consciousness is also space-like and matter-like.

    We can measure brainwaves. We can see them go away when the brain dies. We can tell that people, once dead, are more difficult to have conversations with.
    We have the experiences of people like myself who have had brain damage.

    Consciousness is space-like and matter-like?
    We can measure and detect space, we can measure and detect matter. We have theories that explain space and time and matter.

    WHO is positing something?
    WHO is the onus on?

    The onus is on ME to disprove your “thought particles” or “consciousness waves” or a “space-time-smartness continuum?”

    What a load of BS.

  90. says

    @Jafafa Hots #103

    If you check my original post at #85 you’ll see that I’ve already addressed what consciousness is and is not: you really need to get over this obsession with brains.

  91. says

    you really need to get over this obsession with brains.

    My brain is where I keep my consciousness. I dunno about you.

    And when my brain was injured? My consciousness was too! Imagine that. It’s almost like consciousness is the product of a functioning brain!

  92. Nerd of Redhead, Dances OM Trolls says

    If you check my original post at #85 you’ll see that I’ve already addressed what consciousness is and is not: you really need to get over this obsession with brains.

    And your EVIDENCE is where? That has been your problem for months. Nobody believes your unevidenced WORD.

  93. says

    Vijen #105, if you read my comment you were replying to, you would see that I not only referenced your comment #85, I quoted part of it.

    Anywho, I just clicked through to your blog… and I think I can safely say we aren’t going to agree.

    From the top post explaining miracles:

    In fact, close observation suggests that we experience an intermittent stream of discrete moments, each projected from an unseen source (as discussed in another post)

    The other post includes your anecdote about how you started tripping once BEFORE you took your dose of LSD.
    Uh huh.

    I think we’re done here.

  94. says

    @Jafafa Hots #110

    If I were seeking agreement I wouldn’t be posting here. “I” am the form which consciousness is taking now to remind “you” that consciousness is all there is. Thoughts don’t think – who is it that does?

  95. consciousness razor says

    Jafafa Hots:

    For what it’s worth, one time I saw my asshole before I sat on the mirror. You can’t explain that.

  96. lpetrich says

    This issue reminds me of a certain Ed on the messageboard FRDB.

    He believes in a metaphysic that I call “stuffism”, the idea that an object’s identity is exclusively dependent on what constituent parts it has. That seems like a caricature of metaphysical materialism, but he believes that we have immaterial souls, and that those entities are essentially static.

    When I asked him about waves, he claimed that waves are made of energy.

    He has also maintained that time travel is impossible, because there’s no way that two copies of any of us can coexist at some time. That’s because of our Souls.

  97. says

    If I were seeking agreement I wouldn’t be posting here.

    Happy to please, then.

    “I” am the form which consciousness is taking now to remind “you” that consciousness is all there is. Thoughts don’t think – who is it that does?

    You tell me.

    My brother in law had the same ideas as you do.
    It killed him when he tried to apply it to an otherwise treatable cancer.
    (Well, it killed his BODY, his consciousness is apparently still out there, hopefully kicking itself in its ass for leaving my 4 nieces fatherless.)

    I am the form consciousness is taking now to tell you you’re full of it.

    “Consciousness is all there is.”
    What egotism. “God made me in his image,” replaced by “thinking, like my thinking, is all there can be!”
    Because it’s so awesome, my thinkiness.

  98. says

    @Jafafa Hots #117
    But you are also asserting that there is only one real substance: you call it matter, I call it consciousness. And clearly, if there is experience there must be an experiencer. Your terminology obliges you to pretend that you don’t exist! The very incoherence of this position should help you to realize that “scientific materialism” is a religion as absurd as any other, its overwhelming popularity notwithstanding.

  99. Nerd of Redhead, Dances OM Trolls says

    I call it consciousness.

    We can provide evidence matter exists. Can you provide any evidence that your alleged consciousness exists anywhere and everywhere you claim it does? We all know you have nothing but your presuppositions and delusions…So, where is YOUR EVIDENCE?

  100. Ed Seedhouse says

    Vijen says “clearly, if there is experience there must be an experiencer. ”

    Just how is this statement different from “if there is creation there must be a creator”?

    Just because we say “it is raining” does not mean that there is a mysterious immaterial “it” that is responsible for the rain. There is a process we call “raining” and the process does not require a separate “it” to do the raining. There is another process we call “experiencing” but there is no need for there to be a separate “experiencer” apart from that process.

    This seems to me to be the same fallacy as the common fallacious argument for the existence of God.

  101. mankoi says

    I’m not a transhumanist myself, but I’m not sure I buy the second argument. I don’t really have the biology to refute the first argument, so… I won’t try. The second argument really comes down to the hard problem of consciousness, for which we don’t have an answer. Why a system of cells, electrical potentials, and chemicals should be able to have experiences, or be aware, is something we don’t know yet. To say that a simulation of a brain couldn’t be conscious seems to rest on the idea that consciousness requires a biological system. That may be true, but it seems unlikely to me that there’s something special about a biological network that would allow it to be conscious, where a theoretical artificial network would not. Rather, it’s seemed more likely to me that the complexity of the system is what leads to consciousness. Even a simple model may be able to have the consciousness, or a copy of the consciousness, of another person. I understand we’re arguing a simplified system, but we also don’t know how much needs to be replicated to capture consciousness.

    I find that last part to be the best argument against transhumanism. We really don’t know how consciousness comes about, or how much of the brain we would need to replicate to capture it. I’d also argue that a perfect replica of me doesn’t matter much to me. I’ll still be dead when I die. But that argument seems alive and well among others here. I’m not sure if we’ll ever be able to understand consciousness to the degree needed to replicate that of a person, when consciousness itself is so abstract. Even if we did replicate it, we wouldn’t be able to tell for sure, much as I’m not technically able to determine if anyone else is conscious.

    While I think we may one day understand enough to create a system that can become complex enough to be conscious, or mimic consciousness to the degree where we’d be forced to give it the same benefit of the doubt we give everyone else, I doubt we’ll ever understand it enough for transhumanism. I actually doubt we’ll ever be able to build a system that’s conscious straight out of the box. A system capable of learning and growth to a sufficient degree, without having to be directly programmed with new information after its original parameters are set, might be able to in time.

  102. Ed Seedhouse says

    Vijen says “clearly, if there is experience there must be an experiencer. ”

    If there is an “experiencer” then this experiencer must be able to be observed. If it can’t in principle be observed then we have no basis to claim that it exists. But if I can experience the thing that experiences, then I must also be able to observe the process of experiencing the experiencer. And then I must be able to observe the thing that experiences the experiencer experiencing the experiencer. And so on forever.

  103. mankoi says

    Oh, as a further argument that came to me while getting a cup of tea, there is also the issue that, even if a copy is not perfect it may still functionally be the same person.

    Let’s take it as granted that a person five minutes ago, and the same person in the present are, indeed, the same person. I know this is arguable, but it seems reasonable to me to use the common idea that a person is, indeed, the same person over time. After all, if we don’t accept that, no one is ever accountable for any action they take, because that action isn’t really theirs. And it would make any discussion of transhumanism pointless, because it would be impossible to copy anyone. So, let’s assume that there is a personality that is fairly stable over time that, combined with memories and consciousness, can be called a self.

    Now, for the sake of argument, let’s say we can copy PZ’s brain, down to the very last atom, and make an artificial replica. Let’s say we put it in the body of a Terminator that’s been given a flesh coating to look like PZ Myers as well, because I see nothing that could go wrong there.

    Well, from the moment the artificial PZ is created, the two will immediately have different experiences, in spite of being the same person. The original PZ will be looking at an inert replica of himself. The artificial PZ will, from his point of view, have suddenly gone from normal everyday life to being blind and deaf, as his brain is unable to process the new and different information being brought in by his electronic eyes and ears. Unable to speak, as his brain won’t be wired to control the synthetic voice box of his new body, or even feel the existence of said body, he will be trapped in a nightmare of incomprehensible sensation. Obviously this will lead to changes in the brain. I can’t say for sure how the original PZ would react, as I don’t know him, but let’s say he’s miffed and slightly peckish, whereas the artificial PZ has no mouth and must scream.

    In short, not all the tiny, unattainable details may be needed to replicate a person, as many of them change so often anyway. In theory, a replica of White Sands need not have every single grain of sand in the exact same place as the real deal. White Sands doesn’t stop being White Sands as soon as the wind picks up.

  104. says

    @Ed Seedhouse #122, #124
    Go back and read my original post at #85 wherein consciousness is experientially defined: Consciousness is “the experiencer”. This “I” you mention is a body-mind. Again: thoughts don’t think, body-minds don’t experience.

    From the perspective of consciousness experiencing objects – whether gross like brains, or subtle like minds – each and every object is in fact a form imagined by consciousness and made of consciousness. Thus in an absolute sense your assertion that the experience and the experiencer are one is true; but it’s false from the relative perspective in which consciousness imagines itself to be localized, limited and personal to a particular body-mind.

    When “it” rains, consciousness is raining. When “you” think, consciousness is thinking. “It” and “you” are one and the same.

  105. =8)-DX says

    Wow long thread, but from the OP:

    Sorry, trans-humanists. I can believe that there can be a post-humanity, but it won’t include us, so I don’t know why you aspire to it.

    This is always true for all humans living at any given time: there will be some future time that doesn’t include any of us. The future belongs to the children. If the children are robots or squid, it changes nothing for us apart from our particular love of the idea of passing on our genes. This is also why nationalism is such a stupid idea (who cares if most humans are of Chinese descent in 100 years, who cares if they speak Russian? What is your problem with tentacles?)

    And it’s been argued before: if we had proper brain-computer interfaces, we could actually become partially conscious inside a computer (after learning to use the computer part as our brain), and theoretically one could then slowly move to using *only* the computer brain and at last discard the body, similarly to this comic.

  106. chigau (違う) says

    Vijen
    I offer the notion/hypothesis/theory that you are a dipshit.
    I offer this with my meatbrain and my soul.

  107. ricepad says

    @96 yeah, it’s the same. The physics underlying our everyday life, including all biochemical stuff, is completely understood. From that it follows that your brain is ‘just’ a computer running a hardcoded program (though it’s entirely unlike any computer and program we’ve managed to build so far).

    This is not something that people like, because it seems to take away free will, consciousness and all that. But as long as we deny it, we’ll never understand these things.

  108. says

    @Nerd #102
    Even though you’ve never listened to a word I have to say, there are others present, and at least you have the courage to make a fool of yourself, so I’ll reply to this.

    You say: your consciousness is localized, limited and personal. I say: suck it and see. You haven’t done the experiments which would disabuse you of this delusion, obviously, or you would share my experience that consciousness is much more remarkable. I’ve already alluded to the preparatory work, it’s simple: just be open to the possibility that consciousness is not localized, limited and personal. Those who prefer to continue to pretend that they are separate beings are, of course, welcome so to do; indeed life would be no fun without you.

    :-)

  109. mmLilje says

    It took me an uncomfortably long time to dig up this (http://www.schlockmercenary.com/2001-03-20) old Schlock Mercenary strip on the subject, so I may as well post it although it’s already been said a hundred times so far in this discussion. I always found it to be a quick, concise argument on the subject.

  110. says

    Those who prefer to continue to pretend that they are separate beings

    I’ve been waiting for you to come out with the “baboons on different islands all learned to wash their food simultaneously despite the lack of contact” story… is this the closest we’re going to get?

  111. says

    Mankoi @125, you just said what I was going to say, except I was going to put it more like:

    Looking back along my own timeline, and speculating about the different NelC’s that might have evolved in other realities who would all insist that they were the real me despite their differences, it’s clear to me (this one, in this reality) that the phase-space of ‘me’ covers a wide range of possibilities.

    It also seems to me that human minds have a tendency towards dynamic stability whereby we try to reconstruct our identities after breaks in consciousness or minor injuries to our brains. Sleep, coma, retrograde amnesia, strokes, the ongoing depredations of dementia, our minds seek to rebuild themselves after or during all of these, to recreate the person we think we are. Not a perfect system, of course.

    Therefore, my personal conclusion — which I won’t hold anyone else to, because as far as I’m concerned this is a discussion about science fictional ideas that won’t ever affect me (barring Transhumanist terrorism, perhaps) so it’s not worth getting terribly, all-caps excited about — is that an imperfect copy of me can be as much me as the consciousness generated by this collection of atoms tapping this post (or rather, the original post, a duplicate of which you’re reading now).

    What am ‘I’ after all, but a collection of ideas that thinks it’s NelC? If I was replaced mid-sentence by an imperfect copy that remembers the specifics of certain non-traumatic events slightly differently to how I remember them (which happens with the passage of time, anyway), who would know? You wouldn’t, my closest friends and relatives wouldn’t, the surveillance state wouldn’t, even I wouldn’t — assuming a destructive replacement, the new me would think he was me, and the old me wouldn’t think anything, being dead.

    Now, there’s the thing: the old me would be dead. But does that matter? I don’t put much stock in continuity as a guide to identity; I’m often not here, what with sleeping, the occasional anaesthetic experience, and those moments when I’m working on something creative where there is consciousness of the work but not of myself. Now, my genes and culture inform an attitude where I feel compelled to keep this meat alive, which is not a bad thing, but it’s an idea, a meme; there’s nothing actually sacred about the idea of staying alive as long as possible. It’s just a way of propagating patterns of genes and memes across spacetime; it lasts because it lasts, and not for any other reason like God’s will or manifest destiny or ego.

    Saying all that, I would be reluctant to undergo an upload myself, but I’m only about five-eighths progressed in this terminal condition we call life. Call me back when I’m more like an eighth from the end, or a sixteenth, and everything is breaking down and wearing out. A destructive upload will probably look like a good bet by then, and even ‘forking’ to a copy while this instance of me keeps going for a while won’t be much of an inconvenience.

    BTW, I wish my Japanese (and yours) was good enough to write this in that language, so I could drop all the confusing pronouns and not raise the question of which particular definition of ‘I’ is being used in any particular sentence.

  112. says

    A destructive upload only seems like a good idea if you’re convinced you’re special enough that a future needs a you in it even if you don’t experience it yourself.

    Flattering, but if the future without me needs to have, or can tolerate having, a copy of a person in it that I don’t personally experience, I can think of better people than myself to replicate.

    Copy someone more talented, more intelligent, more compassionate, harder working, whatever.
    Because I am not the best person I can think of in this world.

    Again, ego.

  113. astro says

    i have nothing to add to the debate about whether a person’s consciousness can be transferred to another medium.

    i just wanted to note myers’ endorsement of R’lyeh as “a somewhat attractive future.”

  114. Nick Gotts says

    I continue to be amazed at all those who think there is a fact of the matter, as to whether an uploaded or simulated or copied or matter-transported version of you is “really you”, given that none of us believes in a soul.

  115. Nick Gotts says

    life would be no fun without you – Vijen

    OTOH, your tedious, evidence-free stupidity adds nothing of value to any discussion.

  116. Rob Grigjanis says

    Nick @137: Another thought experiment, just for larffs. You watch as a copy of a loved one is made (and for the sake of argument, we’ll assume the impossible; that no harm is done to the original). Only one of them can go home with you. Are you saying you’re not bothered which one?

  117. Barkeron says

    Assuming a perfect simulation were physically possible, what’s the difference between it and the original?

    That’s the very point the followers of the Church Of Transhumanity don’t get (small wonder, given that this cult is predominantly made up of white, male, technocentric computer manchildren).

    Basic biology: the human mind is an artifact of our organic brains. PERIOD. Ergo, a copy is a copy, and a copy is not you.

    And besides, you’d have to create an artifact that recreates the complexity of the brain down to the last prion. Yeah, good luck with that using silicon and platinum…

  118. consciousness razor says

    I continue to be amazed at all those who think there is a fact of the matter, as to whether an uploaded or simulated or copied or matter-transported version of you is “really you”, given that none of us believes in a soul.

    Why would it follow that if there aren’t souls, there’s no fact of the matter? The non-existence of souls is a fact, no? A claim to that effect isn’t an opinion or vacuous or meaningless or I don’t know what….

    So I don’t understand what you mean. Are you saying that you aren’t “really you” because that would have to refer to a non-existent “soul-you”? Why would that be the only thing that could be “real”? (I doubt that’s what you think, but I’m having a hard time coming up with anything else you might mean instead.)

    To reiterate, I don’t think there’s some distinct, tangible self that sits inside my head being me. That experience of being myself, the sense of having ownership of my body and what happens to it and the sense of having some perspective or relationship to everything else in the world, is a model (or representation or simulation) of its internal states which is created by my brain to do what it does. That’s an empirical claim, and it’s one of the reasons why there is a fact about whether that is (or could be) transferred to some other brain or computer or what-have-you. I think the answer is no: it could create a representation for itself, which is distinct (even if identical in form) from the original organism (me) making my self-representations. It wouldn’t be me because it would lack the necessary causal connection between the model and the organism; and that causal connection, whatever the details may be about exactly what it is and how it works, is entirely a factual matter.

  119. A Masked Avenger says

    #67, PZ Myers

    I don’t have the heart to argue very hard, because we’re talking about the hypothetical limitations of imaginary technology. And although I disagree with you on certain details, I still agree with your bottom line conclusion that it’s impossible anytime soon, and more than likely it’s provably impossible ever. With that disclaimer…

    Then the transhumanists reply that they aren’t going to, they’re going to model the brain with a simpler simulation that may use a very different substrate…and boom there’s Wilkins waiting to bash you.

    If the brain is a completely deterministic machine, producing predetermined outputs for a given set of inputs, then it is in principle reproducible. It doesn’t matter if the reproduction is a bunch of matrix equations instead of actual neurons in actual jelly.

    Even in this case, though, I’d place my bet that it’s a chaotic machine. Chaos is everywhere. Connect two oscillators with a feedback loop, and you’ve probably got chaos. If that’s true, then infinite precision is necessary. Make an error in the millionth decimal, and RoboPZ will start out seemingly indistinguishable from RealPZ–but twenty years out, we’ll see RoboPZ convert to Orthodox Judaism or something.
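    Here’s that sensitivity in a nutshell (a Python sketch; the logistic map stands in for any chaotic system, since nobody can simulate a brain): two “copies” that agree to the twelfth decimal place part ways completely within a few dozen iterations:

        # Chaos demo: the logistic map, with two trajectories whose initial
        # conditions differ only in the twelfth decimal place.
        x, y = 0.4, 0.4 + 1e-12
        for step in range(1, 61):
            x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
            if step % 20 == 0:
                print(f"step {step}: diff = {abs(x - y):.2e}")
        # diff grows from ~1e-12 to order 1: the copies are now uncorrelated.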

    The map isn’t the territory. You’ve made a map and model of me, for instance, but it’s not me; it’s only going to mimic me within a narrower range of experience over which you’ve determined the parameters of its operation.

    It’s hypothetically possible, again assuming a purely deterministic brain, to make a complete model that will mimic you in all circumstances. Except that it will immediately start to diverge from you: the fact of not having to urinate will start to change how RoboPZ sees the world.

    How are you even going to test the accuracy of your model? Because I can tell you that a lot of people have very divergent views of who I am, and what you’re actually going to build is a reflection of the creator’s perception of PZ Myers, not an actual copy of PZ Myers.

    THIS! 1,000 THIS!

    Yes, that’s an unanswerable objection. I can’t even prove objectively that I’m anything but a deterministic machine, cranking out (in principle) predictable outputs, even though I believe I’m a bit more than that. I can’t prove there’s such a thing as “free will.” It’s not remotely possible to prove whether or not two selves are the same. The closest thing we have is the Turing test, and it’s woefully inadequate.

    If some of those questions are answered in this imaginary future, then it might hypothetically become possible to develop ways to prove the fidelity of a copy, but that’s so… hypothetical.

  120. Nick Gotts says

    Rob Grigjanis@139,
    Yes.

    consciousness razor@141

    It wouldn’t be me because it would lack the necessary causal connection between the model and the organism

    That’s only true because that’s the way you are choosing to define “self”. If you choose to define it simply as memory-plus-spatio-temporal-continuity, then it’s false. There’s no fact of the matter because we’ve never had, in practice, to choose between such alternative definitions, because there is no uploading or simulating or copying or matter-transporting. For that reason, intuitions differ; mine happen to be different to yours.

    As an aside, people can lose that “the sense of having ownership of my body and what happens to it and the sense of having some perspective or relationship to everything else in the world”, as a result of brain injury. Are you saying they are no longer the same person?

  121. A Masked Avenger says

    #68, consciousness razor

    I get that, but we’re talking about actual human beings being uploaded, so the question is how that is supposed to work. Tell me one way to do it that isn’t destructive… you’d need to shoot a fuckload of radiation at my body to see what everything is, where it is, where it’s going, etc.

    Like I just said in #142, we’re arguing about hypothetical properties of nonexistent technology, so we’re pretty far out on a limb and sawing vigorously. But since we’re here… It seems awfully unlikely that the nonexistent technology in question would hypothetically resemble an X-ray. It seems at least equally plausible that it would resemble an MRI instead. Or even a futuristic version of the EEG. Assuming that the nonexistent technology would be fatal seems a bit much, doesn’t it?

  122. jack lecou says

    I think the Schlock Mercenary cartoon posted by mmLilje at #132 sums up what I see some folks here arguing pretty well – but the reverend’s argument is flawed.

    The argument in panel 3, “From my own personal point of view, I’m dead,” only seems to make sense to me if you are still imagining that “you” have a point of view when you’re dead. That “you” can experience death somehow, if only momentarily. That somehow, rather than actually just ceasing to exist, “you” will experience a tiny moment of blackness or something, and it will matter to “you”.

    In short, dualism.

  123. says

    @Nick Gotts #138

    When you (and many others here) say “evidence”, what you mean is ideas. But ideas are navel fluff, which every idiot has. When I say “evidence”, what I mean is experience. Until you are capable of distinguishing consciousness from mind your life will remain a problem to be solved – instead of a mystery to be lived.

  124. says

    Rob Grigjanis @139, speaking only for myself, why should I? Because only one of them is the “real” one and therefore “deserves” to go home with me? Whereas if the other went home with me it would be a massive inversion of justice?

    Sure, it would bother me that there are now two people in the world who are everything I love and who love me, and one of them is going to be prevented from going home with me by a cruel thought-experimenter’s version of Sophie’s Choice, and I would do everything in my power to win her freedom so that we could all go home together and discuss how best to deal with this unusual problem. But, no, as stated there is nothing to choose between my spouse and the doppelganger, so why should the fact that I might pick “wrong” bother me at all?

  125. consciousness razor says

    That’s only true because that’s the way you are choosing to define “self”.

    Well, okay, but I think it’s a reasonable definition, given the nature of the philosophical problem.

    If you choose to define it simply as memory-plus-spatio-temporal-continuity, then it’s false.

    Explain why it’s false. There’s certainly not spatio-temporal continuity. The uploading scenarios require discontinuity. There could be (if it works as advertised) continuation of memory, and I’ve acknowledged that the “new” person would remember being the “old” person, meaning they wouldn’t be able to know the difference. (Unless additional information other than memory is allowed, of course: maybe someone gives them evidence or they notice other clues to that effect.)

    There’s no fact of the matter because we’ve never had, in practice, to choose between such alternative definitions, because there is no uploading or simulating or copying or matter-transporting.

    There’s a fact of the matter about whether the Sun will become a red giant in the distant future. We’ve never had to deal with it, true, but that doesn’t change anything. So I’m not following how the foreignness of the hypothetical situation is supposed to support your argument. Just being a hypothetical or potential situation (which we have no “practice” with) doesn’t mean there’s no such fact.

    For that reason, intuitions differ; mine happen to be different to yours.

    How are yours different from mine? You don’t think it could work, right? It’s not our intuitions which affect whether it’s an empirical claim about empirical objects undergoing an empirical process which would have some definite empirical result. After the uploading procedure, either I’m dead or I’m alive. That’s as much a fact as my life or death right now, just as it is in any other situation.

    As an aside, people can lose that “the sense of having ownership of my body and what happens to it and the sense of having some perspective or relationship to everything else in the world”, as a result of brain injury. Are you saying they are no longer the same person?

    Yes. People can have different kinds of selves. It’s not a uniform phenomenon for every person, or static throughout one person’s life. If they’re in altered states or suffer from various disorders, they have a different sense of self than they would otherwise. To take a really mundane example, your sense of self even when you’re asleep is nonexistent or at least radically different from your experience when awake. The fact that brains change implies our experiences do as a result, because they are products of the brain not independent of it (however intuitive it may seem that they are independent). That’s the relevant concept of a “self” in this case, because I want to know whether I am going to consciously experience this new copy’s life (after my body’s destroyed in an uploader). And I don’t think I would.

  126. jack lecou says

    Jafafa Hots – Your experience is fascinating. (Well, to me. To you I’m sure it has a lot of other meanings.) I’m not sure how it supports the idea that hypothetical uploads would be less ‘you’ though.

    I gather that, during the amnesia, there was a sort of period of not-you-ness, and then you experienced a certain distance from your former self once you regained use of your memories. So it seems to me that just means that a clone with your thoughts and memories – uploaded just before the onset of amnesia – would almost have greater claim to being the former you than the current you does.

    Perhaps I am not following your argument correctly. But to me it sounds like you are extrapolating your experience of a relatively long discontinuity in memory function and assuming that a hypothetical upload would experience a similar experiential discontinuity. And I’m not sure that follows.

    In your case, there was a period of some weeks where your brain was ‘awake’, functioning, and changing – not well enough to be “you”, apparently, but enough that when normal function started to come back, the substrate of “you” had changed sufficiently that you couldn’t quite feel that the new “you” and the “old” you matched up.

    But it doesn’t seem obligatory that uploads, of whatever hypothetical sort, would have the same experience. Presumably, the copies would be made either instantly, or while one’s brain was in at least a mild state of suspended animation. There might be an external time lag for one or both of the copies – maybe one of you thinks you went to sleep in March and suddenly wakes up and it’s August – but the continuity of memory would presumably be effectively unbroken. There need not be any intervening period of alienation where one’s brain has been conversing with people on its own.

  127. Nick Gotts says

    consciousness razor@148

    Explain why it’s false. There’s certainly not spatio-temporal continuity. The uploading scenarios require discontinuity.

    Um. Evidently we define “spatio-temporal continuity” differently! There’s an information transfer between the original version and the upload, which causes the latter to be a copy of the former in certain respects. If a new version of you simply materialized without any such connection with the original, but with all the same memories, physically identical, and claiming to be you, there would be a lack of spatio-temporal continuity.

    Just being a hypothetical or potential situation (which we have no “practice” with) doesn’t mean there’s no such fact.

    No, but what situations we have practice with affects how determinate our concepts need to be. If uploading or whatever became routine, this would affect our concept of “self”; I would guess it might lead to the concept being split, so that in one sense of “self” the upload would be the same self, and in the other sense, not.

    How are yours different from mine?

    My intuition is that the upload would be me, at least to the extent that I’d consider it worth being uploaded (setting aside any moral considerations) if the alternative was dying without being uploaded.

    After the uploading procedure, either I’m dead or I’m alive.

    No, supposing the upload is destructive, whether you’re dead or alive depends on how you define self; there’s no fact of the matter.

    The fact that brains change implies our experiences do as a result, because they are products of the brain not independent of it

    Suppose it were possible, not to upload you, but to integrate artificial neurons, or the functional equivalent, into your brain (much more feasible, in my opinion). As your own neurons die, these arti-neurons reproduce themselves to take over their functioning. Eventually, you have no bio-neurons left, but you still have all your memories, encoded in the connections between the arti-neurons. Are you still the same person?

    I want to know whether I am going to consciously experience this new copy’s life (after my body’s destroyed in an uploader). And I don’t think I would.

    It depends how you want to define “I”.

  128. Nick Gotts says

    Vijen@146

    When you (and many others here) say “evidence”, what you mean is ideas. –

    No, I don’t mean ideas, or experience; I mean evidence: publicly accessible grounds for considering that your claims might be true. But anyone stupid enough to be taken in by a fraud like Osho clearly wouldn’t know evidence if it bit him in the bum.

    Until you are capable of distinguishing consciousness from mind your life will remain a problem to be solved – instead of a mystery to be lived.

    You fuckwitted, arrogant arsehole, I know very well the difference between mind and consciousness, because I know that many of the operations of the mind are unconscious; and my life, about which you know nothing, is neither a problem to be solved nor a mystery to be lived. Until you realize that the size of your ego does not imply that you have anything worthwhile to say, you will continue to annoy people with your pointless drivel. I will ignore it henceforth so far as this thread is concerned.

  129. Rob Grigjanis says

    NelC @147: The only firmly arguable difference would be if the two versions knew which was the original, and which the copy. I suspect the sense of abandonment would be rather more acute for the original, if the copy were chosen.

  130. Nick Gotts says

    Rob Grigjanis@152,

    Then the “copy” isn’t, which undermines the point of the thought experiment.

  131. consciousness razor says

    No, supposing the upload is destructive, whether you’re dead or alive depends on how you define self; there’s no fact of the matter.

    It depends on what a self is, which is a fact. My definition of what a self is might be wrong (you haven’t demonstrated that), but a self is what it is. I’m alive now. That’s a fact. One day I’ll die. That’s also a fact. It’s just absurd that you’d claim that sort of thing depends on a definition, for fuck’s sake.

    Are you still the same person?

    In one sense no, in another yes. Just the same as the situation is now, as I’m changing all of the time under “normal” circumstances. That doesn’t mean there’s no fact of the matter about whether I’m the same self (as you, for example). I’m not in fact the same self as you. I don’t experience your life. If we were identical (except for our positions), meaning we had the exact same brain states, I would still not experience your brain states. They would be yours, and mine would be mine. Simply having that formal relationship doesn’t imply there must also be the needed causal relationship. I don’t see how any such machine could produce the causal relationship. Definitely not just by waving our hands and talking about “definitions.”

  132. says

    Rob Grigjanis @152, in that situation, I can imagine the copy feeling the loss as acutely as the original, and also suffering the confusion of identity that would result as a consequence of being told she isn’t the original, whereas if I picked the copy to take home, the original would feel the loss, and the righteous feeling of injustice as a consequence of being told she is the original and not being picked. It’s a wash, really. I would hope that whichever one I picked would help me free the other (else I would not love them).

    But as Nick Gotts says, that’s adding a difference between the two, so not the original thought-experiment. But thought-experimenters are cruel like that, so it’s only to be expected.

  133. jack lecou says

    That’s the thing about Wilkins’ argument. I say, you can’t make a perfect copy, it’s physically impossible. Then the transhumanists reply that they aren’t going to, they’re going to model the brain with a simpler simulation that may use a very different substrate…and boom there’s Wilkins waiting to bash you. The map isn’t the territory. You’ve made a map and model of me, for instance, but it’s not me; it’s only going to mimic me within a narrower range of experience over which you’ve determined the parameters of its operation.

    But the counterargument is that, while we certainly don’t begin to understand it well enough to be sure, the functioning of the brain itself may ultimately just be another sort of map. And one map can be copied to another map with no real loss of information.

    In other words, the position and state of every single subatomic particle may not (probably does not?) matter. Perhaps just something much higher level, like a list of neurons and a map of their connections. (I am not saying this IS the case, only that IF you cannot rule out a possibility of this sort, then the argument about the map not being the territory loses force.)

    If it helps, perhaps when we are, arguendo, assuming away the practical difficulties in brain uploading, we should imagine that we are not talking about human beings at all, but something more like, say, the robot humans in Neptune’s Brood, with well defined and reproducible neural architectures. Because that is the arguendo situation. (It’d be foolish to claim with certainty that we will ever have a similarly well defined model of the human neural architecture, but saying “the map is not the territory” seems a very weak way to argue that we never will.)

    (It’s fairly possible I’m not familiar with some of the transhumanist proposals. By “model the brain” do they mean they’re going to do something like build an AI (first, assume a can opener…) and then randomly twiddle the parameters or something until it says “I am PZ”? If that’s the case, it does seem a bit flimsy. I’m working under the assumption that “model” means something like (a) understand in detail how the brain works, (b) measure all the (billions of) “important” individual parameters off a working brain somehow, and then (c) use those as input to some kind of simulation of the brain’s functioning as understood in (a).)

    How are you even going to test the accuracy of your model? Because I can tell you that a lot of people have very divergent views of who I am, and what you’re actually going to build is a reflection of the creator’s perception of PZ Myers, not an actual copy of PZ Myers.

    This doesn’t seem outrageously hard. I mean, if you’re building a “model” of someone’s brain based on mapping out their neural network or whatever, and then you switch the simulation on and it says “Oh, wow did it work? My name is Ralph, and I had a cat named Fluffy when I was 11, and oh, the weather looks nice today, how about a picnic” rather than “[clank] gibber gibber oook 010111110000000000000000 [initialization error 4a5fbb3#]” or something, then you’re probably pretty damn close.

    There’s also all the arguments about piecemeal replacement – you go in every week and somebody micro-surgically peels off another layer of your brain and replaces it with silicon until it’s silicon all the way down, and at no point did you cease to be you. (You don’t even have to do that with everybody – maybe they do that to a few people, or test animals, as part of the research process, and then use the resulting hardware simulation to test the accuracy of other less invasive techniques.)

    And finally, at some level, if everyone has different ideas of who you are, even you, who’s to say what’s really you anyway? That’s a bit postmodern, but I do feel it tangentially touches on my core point, which is that a lot of the clinging to the “continuity problem” in evidence here is just deeply held dualistic thinking. It’s based on the idea that on some level “you” aren’t just a machine made out of meat clockwork going through the motions, but something more, with continuity.

  134. foolish wolf says

    @14 Geoffrey Brent

    I have to say I agree with this part. All our atoms are replaced all the time, so in this case what’s wrong with them being replaced all at once?

    There seems to be far too much certainty when talking about consciousness. How do you know your consciousness isn’t being constantly eroded and replaced as your atoms change? How do we know this isn’t a simulation?

    “What is “enough fidelity”? What is “functionally equivalent”? I read those as “not quite identical” and “mostly similar, but with some differences I’m unable to test”, and they don’t quite align with “the same mind”.”

    How do you define “the same mind”? The brain that houses your mind isn’t identical to the brain that housed your mind yesterday, and since the two are inextricably linked, you don’t have the same mind on a day-to-day basis.

    I view the mind as the pattern of energy in the brain (not spiritual energy, but the connections and strengths of the various electrical and chemical signals, just in case that isn’t clear), so in this case the energy pattern in the original body could be recorded and your mind saved.

    So if that energy pattern isn’t duplicated by the process, then I would be happy to say I hadn’t died even if the original body was destroyed by the process. I think being in this state would be essentially like blacking out, but instead of just your conscious mind becoming dormant, all of your energy pattern would freeze in place, and my “body” would be the system that is storing my consciousness. Duplication becomes trickier, but I generally see all the clones as me – just me in different parts of space-time. So, just as there’s a me in a different space-time one day ago and (hopefully) one day in the future, there could also be a me 100 metres across to the left. So they would all be me. Past me has no memory of future me, and left me has no knowledge of what right me is doing.

    Anyway, I agree that this is essentially impossible at the moment, and the whole uploading-your-brain-to-a-computer idea seems to fail for even more reasons. I can see a snapshot image of a mind being stored, but to allow that mind to interact with its new electronic environment seems much more difficult. I wouldn’t mind that though, not for ego reasons or to continue my life as a human, but to see what it’s like to live as an electronic entity.

    Damn, just read this through and I sound way more certain than I actually am…way too lazy to rewrite it though, so just begin every sentence with something like “I believe it might just be possible..” or “I think there’s a chance that”
    Thank you for reading this line.

  135. says

    Consciousness razor @154, if your heart stops, are you dead? Prior to the definition of brain death, you would have been. Now, however, you’d still be alive. Your state hasn’t changed, just the definition of “alive”.

    You can be particular about your definition of “self” such that it includes, say, you in your present state, and not during your amnesiac episode, but we can disagree with that without disagreeing about what actually happened during that period. It can only be a matter of opinion, or definition. Perhaps a third party, a neurologist, say, might define your amnesia as a third state of self-hood.

    Similarly, you can say with perfect logic and reiteration of the facts of the case that the upload of you is not you, and that you are dead, and that that matters. And others can say that their upload is them, that one version is dead, and it still matters, though in a different way, with equally good logic and grasp of facts. In the end, lacking legal or exact scientific definitions, the selfhood and identity of the upload is down to our opinions.

  136. says

    I gather that, during the amnesia, there was a sort of period of not-you-ness, and then you experienced a certain distance from your former self once you regained use of your memories. So it seems to me that just means that a clone with your thoughts and memories – uploaded just before the onset of amnesia – would almost have greater claim to being the former you than the current you does.

    Perhaps I am not following your argument correctly. But to me it sounds like you are extrapolating your experience of a relatively long discontinuity in memory function and assuming that a hypothetical upload would experience a similar experiential discontinuity. And I’m not sure that follows.

    To some degree the first part is true, but that’s not what I base my conclusion on, it’s more of just an interesting aspect of what happened to me.

    More important is simply the fact that for weeks I was awake, interacting with people, but had NO EXPERIENCE of it. I was not THERE.

    If someone stepped into my vision, I would greet them as if I had just seen them, despite the fact they had been in the room with me all along and stepping into and out of my vision throughout the day.
    Of course, I only know these things because I’m told them after I regained my ability to form memories.

    I don’t think a hypothetical upload would necessarily experience that aspect of things, it might think it’s me (and actually be me in a sense) and think the lights happened to flicker.

    But THIS me? I’d be GONE. Forever.

    There was a documentary I saw about a man who had the same condition permanently. A British man who’d been in a motorcycle accident.
    He kept a diary, but when shown his entries moments later he would deny ever having written them. Every time his wife stepped into view, he greeted her like he hadn’t seen her in ages.

    He didn’t have an existence, he had a series of frozen moments.
    Of course, nobody can say how HE experienced it, and since brains differ and brain injuries differ his experience may well have been different from mine.

    But I do know that during those weeks that my brain wasn’t functioning properly, I had no experience of myself.
    And it’s not just that I DID but don’t remember it, because the process of regaining function served to underline it.

    When things started to work again, it was in snippets.
    A few seconds here, a few seconds there.
    So during that transitional time, I had consciousness that was like a series of snapshot moments.
    A word or two from my father… then suddenly he was gone and it was hours later and other people were in the room.
    And then suddenly it was dark, nighttime, and I was alone.
    etc.
    No feeling of time missing or having been lost (until I pieced things together later).
    Just discontinuity.

  137. says

    As far as the piecemeal brain replacement approach goes, to know if that works (and I guess it might theoretically, but technically it’s very likely unachievable) I think you would have to do the process to individuals who are awake, interview them during and after (in the same way they do now with some brain surgeries) and then draw your conclusions based on that.

    If they are conscious as you incrementally replace parts and they experience no disturbance of any kind, then I guess at the end the result would still be the same “them.”

    We do that in a sense already, don’t we?

  138. says

    I should also point out that when I interacted with people, I didn’t make much sense. I said some pretty bizarre things, and reportedly had my father and sister laughing out loud in the ICU.

  139. says

    NelC, I have no argument with the idea that the copy or download (if ever technically possible) would be a real conscious being, and be a real Jafafa Hots.
    I just mean that (assuming a destructive method of transfer) the Jafafa Hots I most care about, the one I currently am, would at that point stop.

  140. consciousness razor says

    Consciousness razor @154, if your heart stops, are you dead? Prior to the definition of brain death, you would have been. Now, however, you’d still be alive. Your state hasn’t changed, just the definition of “alive”.

    Well my state certainly has changed, but sure I get what you’re saying. And if I’m resuscitated, there’s no sense whatsoever in asking whether I’m actually now a robot created from the pattern of my state just prior to brain death. Because that’s not what happens in that kind of situation: there’s no robot or any other relevant person, nor any of the accompanying “disassembling and reassembling” of the body I had prior to the incident.

    If my experiences are “different” after being resuscitated, well that’s good to know, but it doesn’t mean there’s a complete change in who is doing the experiencing. There is still someone, that same someone, having those different sorts of experiences. That’s what some people don’t seem to get. Yes, who I am and how I experience myself can change, but that doesn’t mean replacing me with a different person altogether (even one with identical properties) means I’m suddenly transformed into that new person as if by magic. And it is purely magical thinking, unless there’s some actual, specific physical process involving me becoming that new person. Not just looking like them or being unable to tell the difference. There are epistemological issues with it, certainly, but puzzling our way through those sorts of games is not figuring out what metaphysically is actually the case.

    Similarly, you can say with perfect logic and reiteration of the facts of the case that the upload of you is not you, and that you are dead, and that that matters. And others can say that their upload is them, that one version is dead, and it still matters, though in a different way, with equally good logic and grasp of facts. In the end, lacking legal or exact scientific definitions, the selfhood and identity of the upload is down to our opinions.

    If they think the person they are prior to it will continue to have experiences, they’re wrong. That’s not opinion. If they’re right, it’s not an opinion. That’s the same kind of fact as whether I’m experiencing my own life right now, rather than yours or anyone else’s. With some sophistry, sure, you can twist it around to talk about it being “the same person.” The thing is, people don’t just think “their upload is them” in some abstract sense that doesn’t have any significant impact on the world. They think there’s a point to doing it because they believe uploading is a way to go on living in some new form, when that is not the case. It’d be a copy who is having that life, not you having that life. Maybe that’s fine and you’d do it anyway. Whatever. The substantial and significant point right there is still a fact, no matter what words or legal or scientific definitions we come up with to talk about it.

  141. says

    Ugh. PZ, this guy is peddling bullshit mystic supernaturalism, don’t buy it. His argument forces the listener to conclude that either (a) physics (which is computable, i.e. you can do science on it) isn’t enough to explain the mind, therefore the mind is supernatural (and beyond the reach of science); or else (b) the person being simulated on the computer isn’t you, but it’s still a real person (see Star Trek: TNG episode 6×24 “Second Chances”). He doesn’t spell this out because he’s trying to pull a fast one on his audience.

    (Note that the second fork of the argument, “the simulated person is not you”, applies equally well to a person who undergoes general anesthesia. Anesthesia is not like natural sleep or even drugged sleep; it disrupts most brain function entirely, and the brain activity A’ you have when you wake up from anesthesia is discontinuous from the brain activity A you had when you went under. But A and A’ are similar enough that we still consider you to be the same person, except arguably in the philosophical sense.)

  142. jack lecou says

    If my experiences are “different” after being resuscitated, well that’s good to know, but it doesn’t mean there’s a complete change in who is doing the experiencing. There is still someone, that same someone, having those different sorts of experiences. That’s what some people don’t seem to get. Yes, who I am and how I experience myself can change, but that doesn’t mean replacing me with a different person altogether (even one with identical properties) means I’m suddenly transformed into that new person as if by magic. And it is purely magical thinking, unless there’s some actual, specific physical process involving me becoming that new person. Not just looking like them or being unable to tell the difference. There are epistemological issues with it, certainly, but puzzling our way through those sorts of games is not figuring out what metaphysically is actually the case.

    This just seems hopelessly dualistic to me.

    After all, the “copyist” faction has described the specific physical process involved in moving or copying “you” around: copying and reproducing all the relevant brain/body state. (Obviously, we are assuming that knowledge of exactly what’s ‘relevant’ and a technique to copy and reproduce it has become available.)

    So I’d contend it’s on you to explain precisely what magical property is NOT being copied by this (by assumption, complete) physical process such that “you” are not copied.

  143. consciousness razor says

    After all, the “copyist” faction has described the specific physical process involved in moving or copying “you” around: copying and reproducing all the relevant brain/body state. (Obviously, we are assuming that knowledge of exactly what’s ‘relevant’ and a technique to copy and reproduce it has become available.)

    Copying is exactly what that is: making another one. What they don’t say is a process that’s supposed to “transfer” the person, instead of making another version of the person. I’ve explained this at fucking length. Once more with the “dualism” bullshit and I won’t even bother responding.

  144. jack lecou says

    Copying is exactly what that is: making another one. What they don’t say is a process that’s supposed to “transfer” the person, instead of making another version of the person. I’ve explained this at fucking length. Once more with the “dualism” bullshit and I won’t even bother responding.

    Transfer process described: put unit A to sleep. Copy mental state to unit B. Wake up unit B and not unit A.
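
    (To spell out the bookkeeping, here’s a toy sketch in Python – the MindState class is purely hypothetical, just a stand-in for whatever state the machine would record. The copy ends up with identical contents but is a distinct object; which of those two facts matters is exactly what we’re arguing about.)

        import copy
        from dataclasses import dataclass

        # Purely hypothetical stand-in for whatever a scanner would record.
        @dataclass
        class MindState:
            memories: list
            awake: bool

        a = MindState(memories=["5th birthday", "first bicycle"], awake=True)

        b = copy.deepcopy(a)   # copy mental state to unit B
        a.awake = False        # put unit A to sleep
        b.awake = True         # wake up unit B and not unit A

        print(a.memories == b.memories)  # True: identical contents
        print(a is b)                    # False: two distinct objects all the same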

  145. consciousness razor says

    Transfer process described: put unit A to sleep. Copy mental state to unit B. Wake up unit B and not unit A.

    You said it yourself: A is not awake. If that’s the sort of question being asked, there’s the answer. That’s what I’ve been saying this whole time. If there’s some other question which you think requires an answer, I’d like to know what you think it is. I probably also need to know how it isn’t implicitly dualistic (for starters, also whether it’s coherent, physically possible, etc.).

  146. Ed Seedhouse says

    Vijen @126: ‘Go back and read my original post at #85 wherein consciousness is experientially defined: Consciousness is “the experiencer”. This “I” you mention is a body-mind. Again: thoughts don’t think, body-minds don’t experience.’

    But your definition is incoherent.

    If consciousness “is the experiencer’ you are merely using two words to refer to the same thing. So you are really saying no more than “consciousness is consciousness”. Well, duh. But not a helpful definition, if it is a definition. I can’t define consciousness either, but the difference is I am not pretending that I can.

  147. jack lecou says

    You said it yourself: A is not awake. If that’s the sort of question being asked, there’s the answer.

    I don’t understand. What you asked was how to transfer a “person”. That’s how you transfer a person: there’s no longer a(n active) person in body A, but now the same person that was in body A has woken up in body/substrate/whatever B.

    I suppose you have to start with a definition of a person: I’m operating under the idea that it’s the sum of the memories, experiences, reflexes, emotions, etc. that arise from the workings and changing of my brain and body.

    If you freeze me or drug me the right way, the running “experience” of that, the “me”, I suppose, stops, but the physical structure which gives rise to all that is still there. Assuming it’s all in order, then if you thaw me out or whatever, it restarts. And everyone seems to agree that after the restart, it’s still the same person.

    If you freeze me and replace every molecule in my body before thawing me out, most people still seem to agree I’m the same person.

    If you freeze me, cut all the molecules apart and reassemble them on another slab two meters to the left, then thaw me out, I still say I’m the same person (though a few people start having real difficulties, and maybe claim that there was a death at some point along the way, though they can’t ever quite say who died).

    If you freeze me, cut the molecules apart one by one, simultaneously replacing them with new ones and reassembling them on the other slab, and then wake both up, I say both are equally “me”.

    If you only wake one up, and incinerate the other, whichever one wakes up is still me. Maybe I’m “transferred” at that point.

    If you freeze me, make a careful note of where everything is, and build up a duplicate on the other slab and wake it up, it’ll be me. The same person in a new “place”. Which is a transfer, near as I can tell.

    If you throw the other still frozen piece of meat in a shredder, I’m still transferred.

  148. says

    Transfer process described: put unit A to sleep. Copy mental state to unit B. Wake up unit B and not unit A.

    Try it this way.
    Transfer process described: put unit A to sleep. Copy mental state to unit B. Wake up unit B and unit A.

    Now there’s two of you. Is that hopelessly dualistic?
    How can unit B be the original if unit A is?

    What is magic about the “not waking up of unit A” that makes unit B be the very same being, the very same experiential consciousness?

    What’s magic about throwing away the meat of unit A that makes unit B the “original,” if it wouldn’t if you DIDN’T throw out unit A?

    Magical thinking.
    Wishful thinking.

  149. jack lecou says

    Now there’s two of you.

    Yep.

    Is that hopelessly dualistic?

    Not any more than having two (more or less) identical toasters or pocketwatches or palm pilots.

    How can unit B be the original if unit A is?

    Who’s talking about “original”? What difference does that make? Does the universe care in any way? Is there something magical about “originalness”?

    What is magic about the “not waking up of unit A” that makes unit B be the very same being, the very same experiential consciousness?

    There’s nothing magic about it. If you wake one up, you’ve got one. If you wake the other one up, you’ve got one. If you wake both up, you’ve got two. In all three cases, the consciousness is preserved.

    What’s magic about throwing away the meat of unit A that makes unit B the “original,” if it wouldn’t if you DIDN’T throw out unit A?

    What’s magic about an “original”? You’re the only one using that word.

  150. consciousness razor says

    I don’t understand. What you asked was how to transfer a “person”.

    And you gave yet another (extremely simplified, handwavey) version of copying a person. There are now two people: A and B. It’s not my problem that it’s incoherent to talk about them as if they’re “one person.” You may as well try to explain trinitarianism to me while you’re at it. If you need to destroy A, so that A’s out of the picture and I can stop answering questions about A, you need to say so, not just put A to sleep.

    If you freeze me or drug me the right way, the running “experience” of that, the “me”, I suppose, stops, but the physical structure which gives rise to all that is still there. Assuming it’s all in order, then if you thaw me out or whatever, it restarts.

    Sure, it restarts. You don’t restart. It does, in a copy of you we’ll call “B.” What is it? It’s a pattern of brain states. You’re not simply and only a pattern; you are a physical organism. Having that “program” running again will make B believe it’s jack lecou. But I don’t care at all about what B believes or doesn’t believe.

    I’d like to know whether jack lecou still exists, which requires knowing whether person A still exists (even if that means they’re “transferred” to B somehow [waves hands]). Maybe we can’t get any input from them. That person might be dead or asleep or up and walking around like usual, but whatever the case is they’re not person B. If you can ask them, they’re going to say they’re A not B. And they’d be right: that’s who they are.

    When they give an answer like that, they’re not claiming their brain state isn’t identical to some (hypothetical) copy of them on the other side of the universe. There might well be an identical brain state (in an alien on some duplicate planet), but that has nothing to do with who they are and how they go about having that sense of their own identity. If they die, you can say in some metaphorical sense that they live on in the form of their alien “copy,” but they’re nevertheless literally dead, because that alien’s experiences (by hypothesis, being sufficiently distant) have no causal relationship to their own experiences. They just share the same pattern, and that by itself is not enough.

    And everyone seems to agree that after the restart, it’s still the same person.

    Not even close to “everyone.” Some (confused) transhumanists, not everyone.

  151. jack lecou says

    Sure, it restarts. You don’t restart. It does, in a copy of you we’ll call “B.” What is it? It’s a pattern of brain states. You’re not simply and only a pattern; you are a physical organism.

    We’ve duplicated the organism though. Down to the molecule in the, admittedly absurd, hypothetical. Sure, it’s not the same, in the sense that one identical lego brick is not the “same” as another identical lego brick, but you need to explain to me why I should care about that kind of notional continuity. Especially given that our bodies and brains aren’t static and unchanging anyway, or made up of the same stuff the whole time, and that identical molecules, e.g., are literally indistinguishable anyway.

    This is the purpose of the thought experiments about taking someone apart and piecing them back together, but out of replacement pieces. Or taking them apart and making two copies out of a scrambled mixture of old and new pieces. Does someone die along the way? Which one’s the original? Why does it matter?

    Not even close to “everyone.” Some (confused) transhumanists, not everyone.

    I’m neither a transhumanist, nor do I think I am confused about this. The alternative to not being the same person is that someone else wakes up. I.e., I get frozen, I’m later revived, still feeling like myself except maybe feeling cold, groggy and peckish, and you think I’m a different person now? (Different not just in the sense that I’m always a slightly different person than I was yesterday or last week, but different like I was Jack and now I’m Steve or at least Jack 2.0 or something.)

    Explain to me how that works.

  152. jack lecou says

    If you need to destroy A, so that A’s out of the picture and I can stop answering questions about A, you need to say so, not just put A to sleep.

    Either way, there’s nothing in A that is feeling or experiencing anything anymore. Why does it matter to you whether it’s actually destroyed or not?

    Do you maybe feel like, no matter how frozen or anesthetized A is, there is still some fundamental part of A that will “die” only when the body is destroyed…?

  153. says

    Consciousness razor @164, we both agree that something has happened to the original me in a destructive upload; those are the facts we can agree on. We can both call it ‘death’ without self-contradiction. Where we part, I think, is in our opinion of the significance of the original dying (or surviving, come to that). What I think is that I have no objection to the copy calling himself me and laying claim to all my vast wealth, stupendous debts, loving friends and ferocious enemies, if original-me is dead. Yes, I’m dead, so long live me, I say.

    As I’ve said before, I don’t regard continuity of body or thought to be vital to my identity. Even having a duplicate around doesn’t threaten my identity. Yeah, there’s another body over there that thinks it’s me, so what? This body here also thinks it’s me, and both of us agree that being the original collection of atoms doesn’t give whichever one of us is the original any privileges in the consciousness or identity stakes. The problem before us is purely social, psychological, and legal, in our opinion, and the question of which is ‘really’ me is one in the same class as the one about who shaves the barber. That’s my opinion, yours may be different, but it is an opinion, not a fact.

  154. jack lecou says

    Consciousness razor @164, we both agree that something has happened to the original me in a destructive upload; those are the facts we can agree on. We can both call it ‘death’ without self-contradiction. Where we part, I think, is in our opinion of the significance of the original dying (or surviving, come to that). What I think is that I have no objection to the copy calling himself me and laying claim to all my vast wealth, stupendous debts, loving friends and ferocious enemies, if original-me is dead. Yes, I’m dead, so long live me, I say.

    Exactly. I would just add that while we certainly can use the word “death”, it’s maybe not the most intuitive. It doesn’t have the same connotations as it usually does.

    If I fall asleep tonight, and aliens come into my bedroom, make a copy of me, and then destroy one of the bodies — it doesn’t matter which one — all while I (both of me) am insensate and unconscious, nothing of importance has really changed. There is no “continuity of me-ness” that we need concern ourselves with except in the most abstract way, and in the morning I’ll still be me. If we use “death” for what happened to the unconscious body that was destroyed, it’s pretty different from what we usually mean, because he certainly didn’t notice, and I’ll get up and go to work in the morning, and my family and girlfriend won’t miss me a bit…

  155. consciousness razor says

    Sure, it’s not the same, in the sense that one identical lego brick is not the “same” as another identical lego brick, but you need to explain to me why I should care about that kind of notional continuity.

    You should care, because some people care enough to believe they’ll survive an uploader and come out the other end in a different body (or a computer). Because there’s no afterlife, they want to make one, and this is how they think it can be done. There’s no reason to believe you would survive, so don’t believe their claims. Consider that “consumer protection,” I guess, because it’s false advertising. Of course we probably won’t be anywhere close to even attempting it for a very long time, so I admit it probably has no practical significance to you.

    I’ve already given lots of explanations. You are not that “identical” alien in a distant galaxy. It doesn’t make any difference whether it has the same exact brain state as you. If it dies, you don’t die, and vice versa.

    So the point is that just saying a “transfer” happens (somehow) is circular. You need to show it. And only ensuring the process involves the two sharing the same pattern also isn’t sufficient. The machine, however magical, would have to do more. It’s not for me to explain what would make it happen, because there might not be a way and I don’t claim there is. It’s up to you to explain how it’s supposed to work or to stop claiming that it does. Or that “everyone” thinks it does, as if that matters.

  156. jack lecou says

    There’s no reason to believe you would survive, so don’t believe their claims.

    But you would survive. At least for practical purposes.

    Like imagine I make a copy of my computer’s hard disk, and just as it finishes, the first disk crashes.

    “Oh nooo! The data is dead!”

    I say, “No, it’s ok. I’ve got a copy right here.”

    “No, it’s not the same. The disk died; those bits are gone forever.”

    “No, really. I’ve got a copy right here. It’s exactly the same pattern of bits.”

    “Sure, but those are different bits now. The first bits are dead.”

    [Shrugs, plugs in new hard disk and boots up same as ever.]

    Unless you believe there is something magical in those particular bits beyond their arrangement and the information they carry, they really have survived the copying process.
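
    (To make that concrete, a toy sketch – the file names here are hypothetical: hash the contents of both disks, and nothing that looks only at the information can tell the “dead” disk’s bits from the copy’s.)

        import hashlib
        import shutil

        # Hypothetical image files standing in for the two disks.
        shutil.copyfile("disk_a.img", "disk_b.img")  # make the copy

        def digest(path):
            # The hash depends only on the bit pattern, not on which disk holds it.
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        # Identical contents give identical hashes: no test that examines only
        # the information can distinguish "original" bits from "copied" bits.
        assert digest("disk_a.img") == digest("disk_b.img")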

  157. says

    Jack lecou @179, if ‘death’ in a Transhuman world means something slightly different, or gains some additional meanings, then that’s just language being language. I’m comfortable with calling it death if one of me dies, say if one of my ferocious enemies kills one of us just after the copying. A person has died, a version of me, but a version lives on. The remaining me will mourn that loss, and will see myself avenged, even if it’s only by being a witness at the trial.

  158. says

    “I have George Washington’s axe he used to chop down the cherry tree. Long ago the handle needed to be replaced. A while back the head needed to be replaced too.”

    Theseus’s paradox:
    http://en.wikipedia.org/wiki/George_Washington%27s_axe#George_Washington.27s_axe

    You still need to explain how a consciousness in a separate body – one that is a DUPLICATE – is the same consciousness as the one from which it is copied. Not the same as in identical – but as in the ONE.

    Because if there are two of me running around, we may not care – but we are NOT the same people, and if one of us dies, we do not continue on through the experiences of the other.

    You need to explain what causes this, or else you’re just making the same argument as Vijen, you’re just leaving the imagined mechanism unnamed instead of calling it some as-yet-undiscovered particle or whatever.

    May be fine with you to be replaced. Great for the kids.
    But you’re still dead.
    Your doppelganger won’t believe a word of it though. He’ll insist he remembers your 5th birthday. And he will, just not from personal experience as he thinks.

    You’ll stop.

    Face it – you’re going to die.

  159. jack lecou says

    And only ensuring the process involves the two sharing the same pattern also isn’t sufficient. The machine, however magical, would have to do more.

    If the pattern of my memories and personality (and if you like, even the entire pattern of my body) isn’t me, what the heck is?

    You say the machine would have to transfer more. But we’ve already copied everything real and physical. The only thing left is the metaphysical. I find it hard to imagine that you have in mind anything other than putting in some plumbing so we can pour a soul, or at least the mystical thread of existential continuity, from one body to the other.

    It’s not for me to explain what would make it happen, because there might not be a way and I don’t claim there is. It’s up to you to explain how it’s supposed to work or to stop claiming that it does. Or that “everyone” thinks it does, as if that matters.

    The only thing I’ve said “everyone” agreed on was the idea that someone waking from a coma or suspended animation is (in the everyday sense, which allows normal sorts of change and discontinuity) the same person. I certainly stand corrected on the “everyone” part (there’s always one!), but I’m still waiting on the explanation of how it is you think they’re new people, exactly.

  160. says

    Jafafa Hots @183, the doppelganger’s experience of my fifth birthday will be just as personal as the original’s. Very few of the five-year-old’s atoms will be present in the fifty-year-old original me, and the chances are that just as many will be present in the copy’s, owing to redistribution in the biosphere of all that breath, excrement and old skin cells.

    Neither I nor Jack are claiming anything spooky about the identity of the copy; it’s just that we’re not claiming anything spooky about the identity of the original, either. The original dies, the copy lives. The death is not an experience, it’s a lack of experience; once the original is dead, the copy has exactly as much claim to be me as the original did, and there is nothing to gainsay him.

  161. jack lecou says

    Not the same as in identical – but as in the ONE.

    Just the same as in identical. (Identical patterns of whatever it is that makes up memories and personality – if necessary, for the sake of thought experiments, identical bodies down to the molecule.) The ONE with capital letters is meaningless.

    Consciousness does not exist apart from the pattern of matter that forms your brain and body. Duplicate the pattern of matter and you duplicate the consciousness. The copy is, for all practical purposes, the same consciousness. The same person. (It won’t be exactly the same for long, of course, if experience begins to diverge.)

    If you believe there is some other mystical property that also needs to be copied, or some invisible, unmeasurable thread of continuity that is fundamental to your experience of being you…

  162. says

    You say the machine would have to transfer more. But we’ve already copied everything real and physical. The only thing left is the metaphysical.

    EXACTLY Jack, that’s why you’re now DEAD.
    Unless you can explain how a different physical object with an implanted copy of your memories becomes the same consciousness as YOU, unless you can explain how killing your brain and letting a copy live means you do NOT “go to sleep and never wake up,” unless you can explain how that is compatible with TWO of you being around if the original is not destroyed, then YOU are presupposing something metaphysical.

    And let’s not get hung up on the word “original.”
    You are here alive today, Jack. If a new copy of you is made and YOU, the Jack here today, still exists, you are the original. You came FIRST. Let’s not ignore the obvious meaning and implication of words because they ruin some happy fantasy.

    I swear, it’s easier dealing with a creationist asking “why are there still monkeys.”

    Your entire argument depends either on some unproven metaphysical force, OR denying and muddying the meanings of words so that you can twist the argument any which way you choose to fit the problem you’re trying to ignore.

    Copy Jack, and you have Jack II.
    You, Jack I, may not care if you go to sleep and never wake up and Jack II wakes up thinking he is you and goes about living your life so that you can shorten your commute time. And then Jack III, Jack IV, etc.

    Me? I want Jafafa I to NOT go to sleep and never wake up so that someone just like me can live out the rest of my life.

    The computer program metaphor is poor.
    Programs are not run from the hard drive. They are loaded from the hard drive into the CPU and run there.
    Some let you run multiple instances of the same program.

    Shut the program down, all you have left is the record of what the program should be doing when it’s in the CPU running. Start it up, that’s a new instance. Start up another instance of it running, there are two instances.
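
    (Roughly, in code – a toy sketch, nothing more: one program definition, stored once, and two running instances, each with its own state that diverges the moment they start.)

        # One "program": a single definition, stored once.
        class Counter:
            def __init__(self):
                self.count = 0   # each running instance gets its own state

            def tick(self):
                self.count += 1

        # Two "instances" started from the same stored definition.
        first = Counter()
        second = Counter()

        first.tick()
        first.tick()
        second.tick()

        print(first.count)   # 2
        print(second.count)  # 1: already diverged; neither is "the" program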

    Not that any of that is a good metaphor to use anyway.

  163. says

    Neither I nor Jack are claiming anything spooky about the identity of the copy; it’s just that we’re not claiming anything spooky about the identity of the original, either. The original dies, the copy lives. The death is not an experience, it’s a lack of experience; once the original is dead, the copy has exactly as much claim to be me as the original did, and there is nothing to gainsay him.

    How nice for him.

    You? You’re still DEAD.

    I dunno about you, but that little niggling detail is kind of important to me.

  164. consciousness razor says

    Unless you believe there is something magical in those particular bits beyond their arrangement and the information they carry, they really have survived the copying process.

    You know what’s magical? Claiming we are arrangements of information. I’m claiming we’re physical objects. Magic and physics don’t mix, and I don’t need them to. Magic and airy notions about abstract patterns “existing” in some Platonic realm, caused by some indescribable super-machine which may not obey anything we know about physics and may not even make sense logically – those definitely mix just fine.

    Sorry, no, we’re not arrangements of information in a phase-space, just physical objects in plain old space, like everything else there is. So I think you have it backward.

  165. A Masked Avenger says

    #183, Jafafa Hots

    Face it – you’re going to die.

    Face it: the state of consciousness that you experience right now will never exist again. Consciousness will cease for you, sometime tonight, and resume tomorrow sometime–but while the consciousness tomorrow will believe itself to be the same entity that went to sleep the night before, the entity that went to sleep will not be available for comment.

    If you’ve ever wanted to kick the butt of past versions of yourself, or imagined the reaction of past versions of yourself to present-day you, then you’re basically acknowledging that the being that existed back then is distinct from yourself, and is now gone.

    Semantics? Possibly–but I think it speaks to the question whether a duplicated, or teleported, or resurrected, or uploaded version of me is “really me.” Given the right-wing fundamentalist that once considered itself to be me, contrasted with the being typing this post, I question whether I’m in any really meaningful sense him. His memories are a subset of my own, and it’s his fault I’m fat–but if we were ever to meet, he would hate me. I’m certainly glad he’s gone. And I think he’d agree that my existence implies his death (though a specific date of death can’t be determined).

    That being the case, I’m not too fussed about what walks out of the teleportation room on Alpha Centauri space station. I wish him well, just as I wish the one who wakes up tomorrow in my body.

    (Luckily I don’t always speak this way. I say, “I used to be an asshole.” I don’t generally say, “An asshole used to inhabit this body,” or some such. Hopefully that’s somewhat reassuring that my body isn’t presently inhabited by a loon.)

  166. nogginscratcher says

    I think I’m in agreement with some of you, but to offer another articulation of the thought… It seems like a chunk of the remaining disagreement could be resolved by differentiating between persons and identities – in the normal course of things the two are linked inextricably, one person to one unique identity, but with the theoretical upload/copy machine it seems you can create a scenario where two persons (physical bodies, or computronium substrate, or whatever) have the same identity (memory, personality, etc.).

    At which point one faction says “They’re different, only one is the same person, the other is just a spooky clone who claims to remember things that never happened to them” and the other faction retorts “But just look at those causal connections, they both have equal claim to be the continuation of that identity from the past”.

    And both are at least a little right – killing one person doesn’t kill any other person, even if they share an identity, but both really do seem to have a good claim to identify themself as the person who stepped into the copier. They’ll diverge away from each other rapidly, but that doesn’t give us a good footing to declare one more real than the other.

    For my money, if I were copied, I’d have to consider both of the resulting people to be ‘me’ to the same degree as I am the same ‘me’ as went to bed last night, and yet I can only personally be one of the potentially several me’s. I’ll always be the ‘me’ stuck in a silly mortal meat brain, even if there is also a ‘me’ roaming free in cyberspace-immortality-land.

    Apologies for any uncaught typos; typing this from a phone has been a torturous experience – it refuses to match the input box width to the screen width, so half the time I’m typing blind, and occasionally it reverts about a second’s worth of typing before deciding to snap back to showing whatever edit I made before. So if I type something then delete it, it briefly pretends I didn’t delete it, waits for me to try to delete it again, then catches up and deletes a chunk of the word before.

  167. jack lecou says

    EXACTLY Jack, that’s why you’re now DEAD.
    Unless you can explain how a different physical object with an implanted copy of your memories becomes the same consciousness as YOU, unless you can explain how killing your brain and letting a copy live means you do NOT “go to sleep and never wake up,” unless you can explain how that is compatible with TWO of you being around if the original is not destroyed, then YOU are presupposing something metaphysical.

    Let me try to come at this another way. To start with, phrases like “you’re dead” are actually kind of problematic. They imply that “I” am out there experiencing “death” somehow.

    One cannot experience death. In a sense, “I” never will die – assuming I see it coming, then I will experience some situation that seems pretty grim, and then maybe a dimming of vision and loss of consciousness. Then I won’t be.

    I’m just a meat machine, humming along. The “I” is some kind of illusion created inside that meat as neurons fire and ions wander around bumping into stuff. It’s just what happens when the meat gears are arranged on particular pegs and turn in particular ways. One day something is going to get a little too far out of tolerance, and the gears will stop. “I” will stop. I’ll stop experiencing, and the information in my brain will rot away.

    There is nothing spooky about this. It’s not really fundamentally different from what happens when a lightbulb filament breaks, or pocket watch gets dropped and falls into a hundred pieces. Just another machine that has worn out or broken. A squishy, complicated one, but still a machine.

    Of course, with a pocket watch, one can pick up the pieces, put them back together, wind it up again, and have the gears turn and mesh in the same old way. If the meshing of the watch’s gears was complicated enough to produce the same kind of consciousness side effect the meshing of my squishier gears does, maybe it would have some thoughts about this.

    And here’s the thing. If I drop the watch, then pick it up and put the pieces together again, those little hypothetical clockwork thoughts would start up in exactly the same spot again, ticking out from the same little gears.

    And if I stop the watch, take it apart, take measurements off all the gears and springs and machine a brand new watch, when I put it back together, the little clockwork thoughts of the second watch will start up in the same place.

    And when I put the first watch back together, its little thoughts will start there too. And if I take it apart again, its thoughts will stop, but the thoughts of the second watch will continue along in the same way the first’s would have.

    Are these the same little clockwork thoughts? All I have to offer you is a big shrug. They’re not happening on the same watch, but they’re the same thoughts.

    Is the first watch “dead”? Big shrug. Its little gears aren’t turning, so it doesn’t really care. It’s not having any thoughts on that matter or anything else. Yet the thoughts it would have had, had it not been disassembled, keep on getting ticked out on another set of gears by the other watch.

    You and Consciousness Razor keep making a fuss about the first watch not ticking anymore. But I just don’t see anything spooky about it. There is nothing magical about the continuity of exactly the same gears or watch. You can stop the gears and then start them back up again. (Our consciousness gets interrupted all the time.) You can take them apart. Replace some of them. Put them back together. Copy them. As long as the gears have the right number of teeth and are put together in the right way, they’re going to mesh out the same little thoughts.

    Whether they’re the “one true” thoughts of that little watch just seems like a really poorly defined question. And no matter what, the first little watch has certainly not had any experience of death, regardless of whether it’s currently lying in pieces.
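
    To put that in software terms, here’s a toy sketch in Python – the “Watch” class is entirely made up for illustration – showing that copying the state of a deterministic machine really does give you two machines ticking out the same “thoughts”:

        import copy

        class Watch:
            """A toy deterministic 'mind': the next thought depends only
            on the current state of the gears, nothing else."""
            def __init__(self, state=0):
                self.state = state

            def tick(self):
                # Fixed rule: same state in, same thought out, every time.
                self.state = (self.state * 31 + 7) % 1000
                return self.state

        first = Watch()
        for _ in range(5):
            first.tick()  # let the first watch run for a while

        # "Take measurements of all the gears": duplicate the state.
        second = copy.deepcopy(first)

        # Both watches now tick out identical thoughts from the same spot.
        assert [first.tick() for _ in range(5)] == [second.tick() for _ in range(5)]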

  168. jack lecou says

    Sorry, no, we’re not arrangements of information in a phase-space, just physical objects in plain old space, like everything else there is. So I think you have it backward.

    Playing cute little semantic games or putting nonsense words in my mouth isn’t going to help your case.

    Sure. We’re arrangements of matter in physical space.

    Just like bits on a disk are arrangements of matter (well, of magnetization).

    And yet the latter represents information.

    And if you have the information it represents, you could create another arrangement of magnetic charge on a different disk that was functionally equivalent to the first.

    And either way, there’s nothing mystical about the magnetic patterns or the media of one particular disk. Create the same pattern of information on another disk, and the programs or spreadsheets or whatever that it represents are going to start up and run without ever caring.
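
    A hedged sketch of the same point in Python (the file names here are made up; any byte-for-byte copy would do):

        import hashlib
        import shutil

        # Some information, written as one physical arrangement on disk.
        with open("original.dat", "wb") as f:
            f.write(b"spreadsheets, programs, whatever")

        # Reproduce that arrangement on a different patch of the medium.
        shutil.copyfile("original.dat", "copy.dat")

        def digest(path):
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        # Physically distinct media, functionally identical information.
        assert digest("original.dat") == digest("copy.dat")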

  169. says

    @jack lecou #181

    No-one supposes that hard disks possess an independent consciousness. But this assumption is made about people. And it’s false, hence so much confused and pointless speculation on this thread.

  170. says

    Consciousness will cease for you, sometime tonight, and resume tomorrow sometime–but while the consciousness tomorrow will believe itself to be the same entity that went to sleep the night before, the entity that went to sleep will not be available for comment.

    Ever heard of the subconscious?
    Ever heard of dreams?
    Ever heard of lucid dreaming? I do that.

    My brainwaves do not stop when I sleep, I dunno about you.

  171. says

    Jafafa Hots @195, brainwaves aren’t consciousness. I rarely recall my dreams (if I even have them every time I sleep), and even when I do, the details fade away more quickly than everyday experience does, so by the amnesia standard they can’t count as consciousness either. Even with lucid dreaming, there are periods of sleep when you aren’t there at all.

    We get that the destruction of this body of yours disturbs you in ways that the gradual replacement of atoms and molecules doesn’t. I used to be that way myself. I’m not saying that you shouldn’t feel that way; there are good reasons for it. I’m just trying to represent that there are people who don’t feel that way and explain why. You don’t have to believe as I do, I just ask that you meet us halfway and stop referring to this difference in philosophy as “airy-fairy” or “magical thinking”.

    The kind of airy-fairyness you’re referring to is the same kind, in my mind, as the airy-fairyness of love, justice, truth, beauty and so on, the kind Terry Pratchett’s Death calls “the big lies”. There is no arrangement of matter you can point to that is justice; and yet no-one wants to deny that justice exists, that it is real, and that there are times and places when justice is lacking. “I” am just as immaterial as justice, for all that I wouldn’t exist without the meat to create me. But this particular collection of atoms — this collection that changes with every breath — isn’t vital to my existence. If swapping atoms or molecules or neurons a bit at a time doesn’t stop me being me, then why should swapping them all at once be different? Because there’s a distinct line across my timeline instead of a broad and fuzzy band?

    Yes, we’ll be dead, with or without all-caps; we’ve acknowledged that several times. Move beyond that fixation, and see that we have what we think are valid reasons for thinking that we will also be alive.

  172. John Horstman says

    Dear Jebus, but essentialist philosophers are tiresome. The map isn’t the territory, but we’re not talking about a map, we’re talking about a perfect 1:1 scale replica. That is also not the original, but it can do all of the exact same things. Hell, if we could actually map every atom in the brain and accurately model their behaviors, I could generate a conscious entity in software on my phone. It would think veeeeeeeeery slooooooooooowly because my phone isn’t up to anything near a realtime simulation of that sort, but it would absolutely exhibit the same emergent phenomenon of consciousness. We’re just talking about an information processing system, and my phone absolutely is one.
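
    (To make the “just an information processing system” point concrete, here’s a toy sketch: a single hypothetical leaky integrate-and-fire unit, nowhere near a real neuron, let alone a brain. The arithmetic gives the same answer on a phone as on a supercomputer; only the wall-clock time differs.)

        def simulate_neuron(input_current, dt=0.001, tau=0.02,
                            threshold=1.0, steps=1000):
            """Toy leaky integrate-and-fire unit: pure arithmetic, so the
            computed spike times are substrate-independent."""
            v, spike_times = 0.0, []
            for step in range(steps):
                v += dt * (input_current - v) / tau  # leaky integration
                if v >= threshold:                   # fire and reset
                    spike_times.append(step * dt)
                    v = 0.0
            return spike_times

        print(simulate_neuron(input_current=1.5))  # same output everywhere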

    As a side note, such a simulated brain would be deeply unethical in practice – the functional equivalent of severing a person’s sensory and motor nerves, keeping hir brain alive but entirely isolated from hir environment in every way. When many consider solitary confinement to be torture, this potentially defines the maximum limit of the torture scale.

  173. consciousness razor says

    John Horstman, #197:

    I don’t think anyone here is disagreeing that it’s possible to make a conscious entity. But if the talk is only about replicating something like the structure of a neural network, that’s not enough. There’s obviously a whole lot more to the biology of brains (since that’s the only intelligent thing we have to work with), other than the fact that neurons form networks. Like you said, any physical differences (internal or environmental or whatever) would obviously change the performance in some way, and would have to be taken into account. There’s only so much approximating you can do before you have something very, very different.

    As a side note, such a simulated brain would be deeply unethical in practice – the functional equivalent of severing a person’s sensory and motor nerves, keeping hir brain alive but entirely isolated from hir environment in every way. When many consider solitary confinement to be torture, this potentially defines the maximum limit of the torture scale.

    Definitely. Creating any AI (from scratch, simulating a person, whatever) would involve some serious ethical problems. How do we know what kind of life it’s having and would want to have? What sort of quality of life will it have, and what sort of rights? Why are we making it? To be a slave? To entertain ourselves? Some AI people do seem to understand that, and it may be part of their own research. But a lot of times when people are just speculating about this stuff, it’s pretty much assumed to be a non-issue.

  174. Holms says

    #129
    The physics underlying our everyday life, including all biochemical stuff, is completely understood.

    Oh my god no.

  175. ricepad says

    @holms #200 well, that seems to say that you don’t believe in science, and that you do believe in some entity that is not detectable in any scientific way but can nonetheless influence our world. At least that is consistent.

  176. jack lecou says

    Oh my god no.

    I don’t want to put words in ricepad’s mouth, but I understood that comment to be trying to say something along the lines of this post by Sean Carroll.

    Basically: the Standard Model is correct and complete, at least in a large window entirely encompassing the energy and interaction levels encountered in everyday life. There is simply no room for invisible ghosts or spirits or telepathy, etc.

    “Completely understood” is a bit ambiguous – it’s certainly true that we don’t have the capability to predict everything – say, protein folding. And there may be mysterious biological functions that will surprise us – think compasses in birds. And physicists are still doing all kinds of interesting new things with, e.g., photonics.

    But no ghosts. No spirits. No telekinesis. Etc.

  177. ricepad says

    @jack lecou #202 yes. I’ve basically always believed this. Sean Carroll wrote it down much better and more authoritatively than I could have done.

  178. Holms says

    See, that’s a significantly better explanation, and also avoids ridiculously broad declarations such as ‘completely understood’. Also, ricepad, if I was of the godly delusion, don’t you think I’d capitalise the ‘g’?

  179. ricepad says

    @holms #204 re the godly delusion: yeah, that would have surprised me. Re completely understood: I understand your point, but I meant (and wrote) the physics _underlying_ our everyday life. We can calculate the easy cases, ignoring details that we think are not very relevant (like: I drop a ball from five meters; at what speed will it hit the floor?). Calculating all the chemical processes in my body would theoretically be possible, but practically too big a challenge. Perhaps I should have made my point more explicit in my original post, but it’s a bit hard to write a somewhat long coherent text on a smartphone with a Chrome Beta that seems a bit buggy in its text field handling.
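
    (For the record, that easy case works out to v = √(2gh) = √(2 × 9.81 m/s² × 5 m) ≈ 9.9 m/s, ignoring air resistance.)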

  180. A Masked Avenger says

    consciousness razor, #199:

    Creating any AI (from scratch, simulating a person, whatever) would involve some serious ethical problems. How do we know what kind of life it’s having and would want to have? What sort of quality of life will it have, and what sort of rights?

    Something about your wording hit me forcibly. And, curiously, it reminded me powerfully of Stephen Donaldson’s ur-viles. They’re clearly a riff on demons. But they’re motivated entirely by self-loathing, simply because they are manufactured beings rather than born. They’re tormented by the mere fact that they could have been made other than they were, which makes their every flaw or limitation a conscious choice of their maker.

    It’s an interesting psychology, which I’ve never encountered anywhere else. Usually AIs have the same old Pinocchio syndrome – wishing and wishing that they were a real boy. It also resonates with my own midlife crisis: I realize that although I can do anything, I can’t do everything, and the choices I’ve made earlier in life now make various other choices effectively impossible.

    It would be an interesting premise for a novel: someone builds an AI which, objectively speaking, has very little to complain about, but which is tormented by the knowledge that the builder made various tradeoffs that could easily have been decided differently, resulting in an entirely different entity.