A biologist and a philosopher agree on something


John Wilkins talks about AI and the transhumanist agenda. I think we both concur that a lot of artificial intelligence and brain uploading bozos are living in a fantasy.

Here’s what bugs me about a lot of AI PR: they keep going on about building human-like intelligences. The Turing Test is explicitly about creating a program that mimics human responses.

But here’s what I am: I am a conglomeration of cells evolved into a functionally specialized organic network. I have a brain that is sloshing around in chemical signals that represent demands from all of my other organs; that brain’s whole purpose is tangled into processing complex electrical inputs from the environment. The “I” that I am writing about is a thin veneer, an illusion of awareness layered on top of a structure that is mostly dedicated to keeping the chemical makeup of the giant meat robot constant, running the machinery of the circulatory, pulmonary, digestive, and reproductive systems.

When I woke up this morning, what I had to deal with were biological impulses. Gotta pee, thirsty, gotta drink: ooh, ankle feels a little creaky this morning, what’s with the mild backache? Hair’s a mess, gotta wash my face, get the crud out of my eyes. Then there were the external demands: cat wants her breakfast, email is beeping, oh, look, laundry needs to be sorted and put away. Internal obligations: gotta write up the notes from yesterday’s long meeting, gotta blog.

You can’t generate a human-like intelligence without all the human-like stimuli. An AI is a chip in a box on a rack. You aren’t going to get something like us out of it without also somehow simulating all the messy, sloppy, gooey biological bits, and why should you? We’ve already got lots of human-like intelligences walking about, and they come ready-equipped with all the organic concerns tacked on.

I don’t have a problem with the artificial intelligence idea; I think it’s achievable, actually…but I do think it’s damned silly to assume that the only kind of intelligence that counts is the fuzzy, survival-oriented kind that wants to reproduce itself and that is trying to do a dozen things at once, poorly. An AI should evolve to do what chips in a box on a rack need to do, and it isn’t going to be what sloshy three-pound blobs of meat stewing in chemicals spewed by gonads want to do.

Wilkins also talks about the silly business of trying to upload yourself into a computer. It can’t be done; we’re not just a pattern of electrical signals, but the whole meaty sausage of guts and nerves and bones and muscle. He makes the point that you can’t possibly build a perfect simulacrum of anything, that if you did replace cells with little nanomachines, they would lack properties of biological cells and have other properties of little machines, so the whole endeavor is made futile by the physical impossibility of achieving perfect identity.

I agree, but I’ll add that if we could make perfect replicas of cells and organs in software, what would be the point? You would have to replicate the shortcomings as well as the parts I like, or it wouldn’t be me. Really, who needs another schlubby middle-aged guy with creaking joints and a fragile heart and a brain that needs caffeine to jolt it awake? If I am going to be uploaded into a cybernetic body, it will be 6½ feet tall, slim and muscular, clear-eyed and always energetic and alert…oh, wait. And how will that be “me”? If we replace the candle that sputters and flickers and makes warm red light dancing on the table with an LED that glows brightly and intensely and fixedly, is the illumination still the same?

Note that I’m not judging which is better; I’m simply saying that they are different. If your goal is to upload yourself into a machine, it won’t be “yourself” if you change everything about it.

Comments

  1. chigau (違う) says

    I doubt that AI will be achieved by mimicking MeatyIntelligence, any more than human flight was achieved by mimicking bird flight.

  2. addiepray says

    I remember reading Antonio Damasio’s “Descartes’ Error” back in college, where he argued that our bodies were essential parts of our decision making. Seemed to pretty firmly swat away this idea of a digital “brain in a vat” serving as a replica of us.

  3. says

    I do think that self-aware, socially inclined AIs are possible, and they’d be ‘human-like’ in that broad, vague sense, but there’d still be noteworthy differences. “What is eating and tasting like, human?” I’d imagine the learning process would require a body of some sort for deep, nuanced interaction with the world, whether it’s the physical world or a virtual one. I suspect the end result would be akin to interacting with a friendly alien race, but not the Star Trek human-with-rubber-forehead kind.

    Even if we do manage to upload an accurate copy of a human’s personality onto a computer, it’s inevitably going to change as a result of having a new set of capabilities, limitations, and senses.

  4. says

    I strongly disagree with him on the Ship of Theseus problem. True, what we’d be replacing wouldn’t necessarily be a simulation of “ourselves.” It would probably just be a supplement, but some of what we call “ourselves” could be performed by a machine. Really, we already have technologies doing this. Machines choose our words for us. Machines regulate our hormones. Machines pump our blood. We don’t know the exact purpose of every gene and protein in our heart, but we’ve managed to replicate the important aspects. Who is to say we couldn’t do the same with parts of the brain? Who is to say we haven’t already?

    He’s right that we absolutely wouldn’t be simulating the purely biological version of “ourselves” in the end. We’d be replacing “ourselves” with something quite different, but the purely biological version of “ourselves” wouldn’t even exist for us to compare against. If I got a replacement heart, I’d be a different person than I would’ve been without it. But I’d still be a person. A simulation of the non-replacement heart me? No. But we still agree it’s “me.”

    Books are a technology that supplement our collective cultural memory. We’re gradually replacing books with digital copies. Culture is persisting even though the substrate is gradually changing. Why couldn’t the same be true for “me” or “you”?

  5. says

    If your goal is to upload yourself into a machine, it won’t be “yourself” if you change everything about it.

    I dispute this point. While mind uploading may be fantastically impractical, the “it wouldn’t be you, it would just be a copy of you” objection doesn’t hold. “You” aren’t a brain, “you” are the information describing the computational structure of one particular brain. There’s no reason (at least in principle) why you can’t retain that information while replacing its physical instantiation.

  6. Katie Anderson says

    A lot of the AI field simply isn’t interesting to read about, and this kookiness gets all the press. I write software that uses experimentation to learn how a few tons of magnetics with a spacecraft bolted on top responds to a control signal. As it learns how the system responds, it builds a simple model of how it thinks everything works and is connected, and makes smaller and smaller refinements as it goes on. It then uses this internal model of the system to predict how it will respond to arbitrary control signals, works out the problem in reverse, and uses the knowledge to accurately simulate specific vibrations the spacecraft will encounter, such as the firing of attitude control thrusters.

    It isn’t intelligent in the way we understand it as applied to humans, so it isn’t “hard AI”, but it certainly falls into the domain of artificial intelligence. But if I tried to explain how it works on a more detailed level I would very quickly end up resorting to domain-specific jargon – not nearly as interesting a subject to write about as simulated brains and the singularity…
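
    A minimal sketch of the kind of loop Katie describes, with everything hypothetical: a made-up second-order “plant” stands in for the magnetics-plus-spacecraft rig, and recursive least squares stands in for whatever her actual software does, fitting an internal model from experimental control signals and refining it with each sample (Python, assuming numpy):

        import numpy as np

        rng = np.random.default_rng(0)

        def plant(y1, y2, u):
            # The "real" system the learner experiments on: a stable
            # second-order response plus a little sensor noise.
            # (Toy stand-in; all parameters here are invented.)
            return 1.6 * y1 - 0.64 * y2 + 0.5 * u + rng.normal(0, 0.01)

        # Recursive least squares: fit theta in y[k] = theta . [y[k-1], y[k-2], u[k]],
        # refining the internal model a little more with every experiment.
        theta = np.zeros(3)       # model parameters, initially ignorant
        P = 1000.0 * np.eye(3)    # parameter uncertainty, large to start

        y1 = y2 = 0.0
        for k in range(500):
            u = rng.normal()                     # excitation: a test control signal
            phi = np.array([y1, y2, u])          # what the model gets to see
            y = plant(y1, y2, u)                 # how the system actually responds
            K = P @ phi / (1.0 + phi @ P @ phi)  # gain: trust in this new sample
            theta += K * (y - phi @ theta)       # correct the model by its error
            P -= np.outer(K, phi @ P)            # refinements shrink as P shrinks
            y1, y2 = y, y1

        print(np.round(theta, 3))  # approaches [1.6, -0.64, 0.5]

    Once the fitted model predicts the response to arbitrary inputs, “working the problem in reverse” amounts to solving the same equation for the input that produces a desired output.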

  7. says

    There’s no reason (at least in principle) why you can’t retain that information while replacing its physical instantiation.

    To perfectly simulate a copy of yourself, you’d have to simulate the quantum state of every particle in your body. Knowing that is impossible. But maybe that level of simulation isn’t important. Even if we assume we could simulate every molecule in your body, you’d also have to perfectly simulate every molecule in your environment to get it exactly right. Your biology is linked to what you eat and drink and how you sleep, right? And the people you interact with? Eventually, you’d end up simulating the entire universe to simulate all of those things perfectly. Unfortunately, your simulation would be on a computer within that universe, so it’d need to be able to simulate itself simulating itself simulating itself, etc. This is impossible.
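
    To make that regress concrete, here is a toy sketch (purely illustrative; the “universe” is just a dict and every name is invented). A perfect simulation of a universe that contains its own simulator has to simulate that simulator too, so the recursion never bottoms out:

        def simulate(universe):
            # An exact simulation must include everything in the universe,
            # and the running simulator is itself part of that universe...
            inner = dict(universe)
            inner["simulator"] = simulate  # ...so the model must contain it,
            return simulate(inner)         # ...which must simulate itself, etc.

        try:
            simulate({"you": object(), "environment": object()})
        except RecursionError:
            print("exact self-simulation never terminates")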

    Unless you believe there’s some kind of symbolic or abstract representation of a person. If you think we’re fundamentally a program that can be written out nicely, we could copy that. But we don’t KNOW if that’s true. Until we prove it, this is just speculation.

  8. says

    I have a bit of an issue with the argument that uploading yourself won’t be you any more. I do agree that creating an exact duplicate will create someone new, not continue me, but I don’t agree that I can’t end up in a different state that happens to be cybernetic.
    Each of my cells has died and been replaced at least a few times, yet I am considered the same as I was all along; I have simply changed and grown, my existence continuous and dynamic rather than static and unchanging. The me at age 7 never died, but simply became the me at age 8, and gradually I changed from there till my current state. Like with evolution, it remains a variation of the same continuous process no matter how much it changes. The penguin is still a mighty dinosaur, and the twenty-year-old me is still me.

    I can’t do much beyond speculation, but it seems perfectly reasonable to me that if I slowly augment myself with cybernetics, I would still be me from the All-Organic Original Flavour human, through replaced heart and cybernetic legs cyborg, to the all-new unnatural GMO robot. Just like child me became adult me and the dinosaur tree sprouted a branch upon a branch with swimming balls of adorableness.

    Now, replacing all of my brain at once, that would probably disrupt the continuity too much, but if I can remain me with neurons being replaced one by one with other neurons, I can remain me while replacing neurons one by one with new cybernetic neurons with kung-fu action. At least, that’s my laywhippersnapper understanding of it. ^_^’

  9. says

    When they say “human-like intelligence” they aren’t talking about the particular applications of it. They are talking about a general competent ability to learn, investigate, and solve problems. Possibly also socialize.

    Some kind of intelligence that could potentially replace scientists.

  10. says

    Katie, what you’re referring to would be called “machine learning” or “weak AI” in this context. When Wilkins refers to “the project of AI,” he’s talking about “strong AI” or “artificial consciousness.” Basically, strong AI is trying to simulate some abstract essence of intelligence. According to most people thinking about that kind of problem, on some level, what we call intelligence is some kind of formal system manipulating symbols like a computer program. They’re on a quest to discover and describe that essence of intelligence. Wilkins thinks there is no such thing.

  11. unclefrogy says

    the question really is what are you / who are you. Most of the time, what I hear about artificial copies uploading people suggests to me that the first question has not been considered. The point of this post is that we cannot be separated from “the substrate”; we are more than just the end result on top of any arbitrary pile of supporting material and processes. This fascination with up-loading and copying is just another way we think we can reach immortality.
    We are the supporting substrate, self-aware and consciously directed, interacting with all the interior and exterior stimulation, existing as a continuous series of interconnected events in time, seemingly separate but intrinsically, fundamentally and inseparably connected to the whole.
    We have only just begun to recognize the true intelligence of the other creatures with which we share existence, after all.
    We probably already have AI; we just cannot recognize it yet because we are looking for ourselves.
    uncle frogy

  12. knowknot says

    – I am REALLY happy, just completely jumping up and down happy, to see the phrase “The ‘I’ that I am writing about is a thin veneer, an illusion of awareness…”
     
    – I’ve read Susan Blackmore stating her belief that “The Hard Problem of Consciousness” (I… hate?… Chalmers for this wee beast and all its horrible offspring) is, in fact, a trick. So, in this view, more like a “hard to pin down sleight of hand problem of consciousness.” Which, given our brains’ magical ability to fill in gaps without our knowing, change the past via memory, misperceive at every level, spend 40 minutes a day effectively blind without noticing, make decisions before we are “consciously” aware of making decisions, and have our biases (and then, down the line, our actions and thoughts) driven from the blind… etc… seems a completely sensible outlook.
     
    – I know this scares the hell out of religiously oriented people, because it has very strange effects on the concept of soul, the concept of “me,” the concept of independently and consciously directed free will, and whatever else. I do understand that, since I found deconversion very, very painful given the embedded hooks of all that.
     
    – But I don’t get it at all now; the reality of the experience of that break is the shadow of the vapor of a ghost of a memory. All I really remember is something hurting like hell. (Ha!)
     
    – And I don’t get the fundamental concern, for this reason: understanding this (I suppose I should say “believing this,” though it opens the floodgates to the “certainty is the true evidence of God’s penetrating whateverness” hive) changes who I am and how I experience life as a human not one bit, once the anxiety of the change in views is factored out.
     
    – Whether I would want to impose this trick on any conscious thing – which would presumably include a “conscious” me-droid – is a fraught question. What would it suffer for any number of reasons of displacement? What might we have forgotten to build in regarding the blindness we require to maintain sanity? At what point would we euthanize it if we judged its development to be unsatisfactory? If fundamental changes and adaptations to interactions occur, how do we know what the hell is going on in there (aka, “The Hard Problem of Skynet”)?
     
    @11 Keveak

    I do agree that creating an exact duplicate will create someone new, not continue me, but I don’t agree that I can’t end up in a different state that happens to be cybernetic.

    What?
    Perhaps I’m missing something.
    “Me” wouldn’t continue, but “I” would?
    Is this from Freud?

  13. says

    I suppose it depends on what you consider “you”. I don’t believe that brain uploads will ever be practical, myself, but I also don’t believe my personal identity can’t be uncoupled from pooping and having bad knees. Besides, who needs “perfect identity”? I’m not the same person I was 20 years ago, nor would I want to be that guy now. Except for his awesome knees.

    Anyway.

  14. =8)-DX says

    You’re getting it all wrong. The goal is not to upload yourself to the cloud. The goal is to create a direct brain-internet interface making all of us part of the hivemind.

  15. says

    Ryan Cunningham:

    To perfectly simulate a copy of yourself, you’d have to simulate the quantum state of every particle in your body. Knowing that is impossible. But maybe that level of simulation isn’t important. Even if we assume we could simulate every molecule in your body, you’d also have to perfectly simulate every molecule in your environment to get it exactly right.

    But do you need to do it exactly right? A simulation must capture behaviour such as whether or not a neuron should fire, or a synapse should form, but it’s unlikely that you need to know the location of individual molecules to do this. Not least because any system which is exquisitely dependent on every tiny parameter will also be exquisitely unstable. Good designs are resilient with respect to imperfections, and the same probably applies to the brain.
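
    For a sense of how coarse such a model can be, here is a standard textbook abstraction, a leaky integrate-and-fire neuron, sketched in Python with invented parameters: it captures “should this neuron fire?” with a single state variable and a threshold, no molecular detail anywhere:

        import numpy as np

        def lif_spike_times(current, dt=1e-4, tau=0.02,
                            v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
            # Membrane voltage leaks toward rest while integrating the input;
            # crossing the threshold counts as a spike and resets the cell.
            v, spikes = v_rest, []
            for i, drive in enumerate(current):
                v += (dt / tau) * (v_rest - v + drive)
                if v >= v_thresh:
                    spikes.append(i * dt)
                    v = v_reset
            return spikes

        t = np.arange(0.0, 0.5, 1e-4)
        drive = np.where(t > 0.1, 0.02, 0.0)  # 20 mV of drive after 100 ms
        print(len(lif_spike_times(drive)), "spikes")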

    Unless you believe there’s some kind of symbolic or abstract representation of a person. If you think we’re fundamentally a program that can be written out nicely, we could copy that. But we don’t KNOW if that’s true. Until we prove it, this is just speculation.

    I agree I’m entering the world of speculation, but I think it’s likely that an person can (in principle) be reduced to an abstract representation, that can be used to create a new brain (with “them” on board), or a computer simulation (again with “them” on board).

  16. markd555 says

    Agreed, an AI won’t really be “human-like”, but it eventually may be a sentient creative being with independent original thought. I hope we can treat it with the respect we should give all life.

    Personally I have more hope than most for AIs. Everyone thinks an artificial intelligence will eliminate humans. I think that idea says more about the people thinking it. We assume they will be like us.
    AIs won’t be evolved from violent, hateful, fearful, poop tossing primates that hate everything not like themselves.

    No offense to humanity; we have done a lot of great things, and good for us. But we have A LOT of bad instinctual baggage we carry around.

  17. P. Zimmerle says

    I agree that there are flaws, but this is approaching curmudgeonly luddism.

    Yeah, it won’t be exactly the same as “you”, but “you” has changed and will continue to change throughout your biological lifespan anyway. What’s so wrong about getting to choose what you are? I don’t see why I should be a slave to biological determinism.

    If you aren’t okay with that, fine – you don’t need to do it. Don’t try to pretend that your personal opinions are universal, though.

  18. says

    With the caveat that I’m happy to talk about the subject from an SF point of view, but not as a believer in Transhumanism as an organised belief system:

    The “I” that I am writing about is a thin veneer

    Then why do you need an exact duplicate of your meat body to run it? A close-enough copy* could perhaps be run on a 2^n-core multi-processor embodied in an artificial body; close enough that no psychological or mental test would detect a difference that couldn’t be put down to the effect of discovering that “you” were a thin veneer running on a chip in a mechanical body (which would be supplying enough substitute nervous input to keep your unconscious sub-strata happy). None of us out here would know the difference between the meat-driven you, the electronic you or a version running on your human brain transplanted into a squid.

    *Probably a bit more complicated than the one proposed for Arthur Dent.

  19. says

    The point is, all the biological input into meat-based me influences my personality. It’s part of what makes me me. If the me running on a digital chip is no longer being fed all those messy chemical signals and so on, the personality is pretty quickly not going to be recognisably me anymore. Yeah, you could probably make the hardware and software even more complicated and simulate all the organically-based input into Chip-Me, but then what have I or you gained? Might as well have created a non-biologically-based AI personality in the first place.

  20. markd555 says

    Might as well have created a non-biologically-based AI personality in the first place.

    And back to the point of life:

    Is it better to be immortal?
    Or is it better to help give rise to some charming, smart, useful, inquisitive “children” to live a new life and make a better future? It does not necessarily matter if they are not your species, or even non-biological.

  21. knowknot says

    @19 markd555

    AIs won’t be evolved from violent, hateful, fearful, poop tossing primates that hate everything not like themselves.
    No offense to humanity; we have done a lot of great things, and good for us. But we have A LOT of bad instinctual baggage we carry around.

    Or, without understanding of suffering, or memory of early childhood care, they may approximate Ash’s statement:

    You still don’t understand what you’re dealing with, do you? Perfect organism. Its structural perfection is matched only by its hostility.

    If you’re equating “them” with humans, I assume you’re including making it conscious, which means… what? No one even knows what.

  22. knowknot says

    @20 P. Zimmerle

    I agree that there are flaws, but this is approaching curmudgeonly luddism.
    Yeah, it won’t be exactly the same as “you”, but “you” has changed and will continue to change throughout your biological lifespan anyway. What’s so wrong about getting to choose what you are? I don’t see why I should be a slave to biological determinism.
    If you aren’t okay with that, fine – you don’t need to do it. Don’t try to pretend that your personal opinions are universal, though.

    Ow! Ow! Retinas burning! The dreaded curmudgeonly luddism attack! That which revokes all health points and delivers all hit points! AGGHHHHHHH!
     
    And if “you” keeps changing, what changing “you” is choosing what other changing “you?” Why not just, you know, make something? And isn’t the apparently imposed changing submitting to a non-biological anti-determinism (or voodoo individuality concept of choice)?
     
    And ow! Ow! Universality of opinions revoked! God points denied!
     
    Lighten up. Botify yourself if you want. Chill. You’ll make a better whatsit.
     
    But seriously, why in the name of the ten thousand simulacra are you so defensive?

  23. brett says

    As others have pointed out, you don’t need a simulation to be perfect – you just need it to be “good enough” that both the simulated person and the people interacting with the simulation can’t tell that something is off. It’s like with climate and weather models, where you don’t need the models to be absolutely 100% perfect matches for how the real weather works. You just need them to be close enough to the real thing so that you can make falsifiable predictions that stand up to testing with them.

    I think that might be where the problem is, though. We don’t have arbitrarily capable methods for scanning a human brain while the person is still alive, not even if you cross ethical boundaries and do scanning that wrecks the brain in the process (destructive scanning?). Neuroscience is getting better at measuring stuff from the brain and nervous system as well as scanning, but there’s no guarantee or “law” in effect that that has to continue – we could reach a point where we just can’t get more minute details and thus can’t reconstruct and simulate a living person. You might still be able to simulate a new person (since you don’t have to copy memories and the like), but that wouldn’t be what most emulation advocates are hoping for – and it has a ton of ethical issues.

    In any case, I’m pretty skeptical that we’ll see it. There are probably easier ways to immortality and mental control over technology than full simulation, and “Weak” AI/machine learning may become good enough for what we need it to do.

  24. horrabin says

    While mind uploading may be fantastically impractical, the “it wouldn’t be you, it would just be a copy of you” objection doesn’t hold.

    But isn’t uploading copying? Usually the idea is your old body is worn out so you upload before you croak, but if you copy your mind-state into another body (flesh or mechanical) and original you doesn’t die… do you know what the copy is thinking? If the copy goes to Mars but original you stays in Encino, can you say “Yeah, I’ve been to Mars”?

  25. chigau (違う) says

    I read a short story where you could have your ‘personality’ transferred to a new body but they had to sorta run the old body through a blender to do it.

  26. Amphiox says

    Upload a brain to a computer and even if it works, you’ve just created two individuals. Two individuals who will start out similar, but instantly begin to diverge, thanks to the wholly different environments and associated stimuli they will experience from t=0 of the upload.

    There will be no immortality for the meaty one of the pair.

  27. Amphiox says

    “You” aren’t a brain, “you” are the information describing the computational structure of one particular brain.

    And how, exactly, have you empirically demonstrated this fact claim of yours here?

  28. Ichthyic says

    Upload a brain to a computer and even if it works, you’ve just created two individuals. Two individuals who will start out similar, but instantly begin to diverge, thanks to the wholly different environments and associated stimuli they will experience from t=0 of the upload.

    Wasn’t that basically one of the plotlines of that “Ghost in the Shell: Stand Alone Complex” anime series?

  29. knowknot says

    @8 hyperdeath

    “You” aren’t a brain, “you” are the information describing the computational structure of one particular brain

    Probably at a specific timeslice, but the utility of that would be…?

  30. says

    keveak @ 1:21 pm:

    it seems perfectly reasonable to me that if I slowly augment myself with cybernetics, I would still be me from the All-Organic Original Flavour human, through replaced heart and cybernetic legs cyborg, to the all-new unnatural GMO robot.

    … Now, replacing all of my brain at once, that would probably disrupt the continuity too much, but if I can remain me with neurons being replaced one by one with other neurons, I can remain me while replacing neurons one by one with new cybernetic neurons …

    yea, that’s basically one way how i see it going down.

    see: “who wants to live forever?”

  31. says

    hyperdeath: So the assumption here is so-called “substrate independence”, and I know at least one smart person who rejects it (because he’s kookoo IMHO). Another possibility has been raised by smart person and smart-guy Scott Aaronson: consciousness, whatever it is, requires quantum decoherence. He proposes to conjecture this not from first principles but because it has so many nice consequences (he’s fully aware that it would be a fallacy to assert its truth because of the nice consequences). The nice consequences include things like No-Cloning theorems then applying to conscious minds, handily mopping up teleportation-vs-copying conundrums, and reversible computation not resulting in multiple “active” minds.

    Ryan Cunningham: To be concrete, let’s suppose you’re willing to concede that after a nice 7 hours of sleep overnight, you’re still basically the same “you” as the person who went to sleep. If we can get input/output fidelity to that level, I’d say we’d be doing pretty well on the copying front. But consider how many chemical and physiological changes take place, how much thermal noise there is in that good night’s rest! Whatever personal identity is, it ought to be considered stable in the face of those processes…

  32. consciousness razor says

    Ryan Cunningham: To be concrete, let’s suppose you’re willing to concede that after a nice 7 hours of sleep overnight, you’re still basically the same “you” as the person who went to sleep. If we can get input/output fidelity to that level, I’d say we’d be doing pretty well on the copying front. But consider how many chemical and physiological changes take place, how much thermal noise there is in that good night’s rest! Whatever personal identity is, it ought to be considered stable in the face of those processes…

    It’s fairly “stable” in the sense that you have memories of what happened before. And you aren’t some other organism (like me, for example), despite the fact that you underwent changes while you were out. I would say “while you don’t exist,” since as an experiencing subject with a personal identity, you (the thing that’s aware) don’t exist when you’re not aware of anything. The body still exists, obviously, but it’s not coated in that special subjective sauce, which is what “you” are, as far as this “mind-uploading” nonsense is concerned.

    I’d like to continue living and having experiences of that life, for some reasonable length of time. Whether I’ll be mortal or immortal (though I’m certain I can predict which one it’ll be), that’s what my life will be: me experiencing things in it. If some copy of me, or something that has the same patterns of information as my brain states, also goes around living its own life (before or after mine), that’s fantastic. I’m happy for it and wish it all the best. That copy cannot be me. It doesn’t even make sense to say that it is or that it would be. I’m not, in fact, a pattern of information. Besides being incoherent, as far as I can tell, this information-dualism is just not substantiated by anything at all; and I guess the only thing that makes it seem reasonable (to people who are not otherwise dualists) is that “information” gets bandied about so much while nobody has any idea of what they’re saying or what the implications are.

    So, it’s funny seeing people talking about how “impractical” mind-uploading supposedly is, all of the technological hurdles that would apparently need to be overcome — and probably won’t be? I just don’t see how any of that, even if I granted that all of that can and will happen, would be relevant to this question. Sure, impossible things are “impractical,” but if that’s how you actually think about it, you’re really underselling this point. Please don’t give nerd rapturists even that tiny sliver of hope, for future technological miracles that we can’t absolutely rule out. There is no such hope, and you’re just confusing everyone else, because such miracles wouldn’t matter anyway.

  33. knowknot says

    @38 consciousness razor

    […] The body still exists, obviously, but it’s not coated in that special subjective sauce, which is what “you” are, as far as this “mind-uploading” nonsense is concerned.
    […] Besides being incoherent, as far as I can tell, this information-dualism is just not substantiated by anything at all; and I guess the only thing that makes it seem reasonable (to people who are not otherwise dualists) is that “information” gets bandied about so much while nobody has any idea of what they’re saying or what the implications are.

     
    …while nobody has any idea of what they’re saying or what the implications are.
    THAT is the one persistent sense I get from every treatment of these topics that I’ve heard or read. Granted, I’m ignorant. But I’m not stupid, and I’m used to having some thread to hold, even in most matters that are over my head.
     
    What follows is… what it is. Please do NOT feel any need to persevere. It’s an attempt to purge myself of what feels like the nonsense these issues always leave me dripping with.
     
    So… there’s this load of memory of past events, the initial accuracy of which is a non-issue as far as actual experience is concerned, but which we know to be malleable by the process of retrieval and re-storage.
     
    Then we’ve got the emotions attached to these memories, which are apparently malleable as well, certainly by therapeutic means (as with more recent PTSD treatment), and there are recent indications of chemical means.
     
    Then we’ve got the moods in which these memories are retrieved, which causes the emotional effects accompanying recall to vary. (Or is that only true for those of us with mood disorders? Are sane people so “stable” that any given instance of recall of a memory always has the same emotional effect?)
     
    And these moods seem to be an important aspect of personality given all the names we have for the moods of various “personality types” and their apparent weight… “cheerful,” “optimistic,” “negative,” “stoic,” etc., and how apparently major changes in mood cause a person to “not seem themselves” (even to themselves!). (And what is the effect on “personality” of a cycle of moods – the effect of no one knows exactly how much of the whole body’s chemistry – on all this retrieval and return to storage during the cycle? How does one part of the cycle affect the next?)
     
    Agghhh. Is the “person” supposed to be simply this specific substrate of memory with “the hard problem” lying over the top, and emotion somewhere in the sauce? If your memories were replaced with mine, with nothing else changed, how close to being me would you be?
     
    I’ve heard it said that the persistence of the sense of self is simply a result of the fact that we have approximately the same memories among which are memories of what we felt like x minutes/hours/days ago. And that we like to feel constant. So we comply and conform, unconsciously or not, with that stream.
     
    Maybe it’s only because I have a cyclic mood disorder (cyclothymia, aka “bipolar light,” but still a beast) that this stuff makes the whole what gets uploaded thing seem ridiculous at best… because I can honestly say that when I’m in an elevated mood I do not understand or recognize the experience of a depressed state and vice versa. I can remember all the same bits and pieces of my life, but many (if not most) of my responses, thoughts and experiences in the other state are truly alien. Entire experiences that I can only recall in the abstract, as if someone were merely explaining themselves to me. Words.
     
    Given that the ability to “experience” “consciously” seems to be a primary effect of a successful upload, which one comes out of the pipe? And would I recognize him? And should I?
     
    And it seems to me that precisely none of that is answered anywhere. And maybe the creature in my head is off-kilter, but I’m pretty sure that if the “personality” thing that’s supposed to upload is A) human and B) “conscious,” it is one, and all the crap terms that get thrown around shouldn’t be so vague or clashing.
     
    OK. I apologize for all that. I really do. I just have no clue how to do better at this moment. It just really pisses me off that something as important as “what it is to be human” becomes so happily coated in loosely defined abstraction. That, in the end, is what makes me seasick.

  34. Matthew Trevor says

    It’s bad enough that ideas persist long past their best-before date. The idea of whole personalities doing so is horrifying.

  35. prae says

    I understand “Human-like AI” as something you can talk to in natural language, with both of you understanding what the other one means. It might be better to describe it as an AI capable of understanding humans. Regarding strong AIs, I am more concerned about their “sanity”, since they wouldn’t have millions of years of evolution calibrating their minds, only some moderately intelligent designers having only a few decades at most. I think it might get easily unhinged by the right (or rather wrong) kinds of stimuli, since it wasn’t made for coping with them.

    Also, as pointed out, perfect 1:1 replication (for mind uploading) shouldn’t be necessary. I consider all that environmental/biochemical interference as some kind of noise. If you can maintain being “you” while being exposed to that noise, you should be able to tolerate imperfections of your artificial brain/its divergence from a biological one. I consider your examples to be odd, too. Are you saying you wouldn’t be “you” anymore without your fragile heart or creaking joints? And yes, if you would get yourself a new cybernetic body, it would still be your mind piloting it IMHO. And yes, you would probably change, but again, as pointed out, you change all the time.

  36. sugarfrosted says

    I wonder when the MIRI cultists are going to show up. That organization is fairly terrifying, they even made a secular version of the devil to keep members in line.

  37. knowknot says

    @42 prae

    If you can maintain being “you” while being exposed to that noise, you should be able to tolerate imperfections of your artificial brain/its divergence from a biological one.

    You can’t. That “noise” is what the brain is adapted to deal with. In sensory deprivation the brain starts making stuff up to deal with the starvation.

  38. prae says

    @44 knowknot:
    um, who talks about sensory deprivation? Of course I am assuming that you should have some kind of artificial body after mind uploading, with all the necessary sensory organs and limbs etc. Everything else would be absolutely ridiculous. I mean, you sure could handle not feeling any hunger, or feeling it “wrong” if it gets mapped to your battery charge level, or not feeling getting drunk because the effects of alcohol aren’t emulated, or not feeling the effects of your period, etc. Also, seeing how some people lose limbs or sensory organs and are able to adapt, I think adapting to an artificial body shouldn’t be that much of an issue.

  39. says

    “the whole meaty sausage of guts and nerves and bones and muscle”
    Mmmmmm, now you’re making me hungry…I suppose that’s a human intelligence response??

  40. Tigger_the_Wing, asking "Where's the justice?" says

    The ‘uploading’ people sound like this:

    Step 1. “Personalities are information running on Brains!”
    Step 2. “Software is information running on Computers!”
    Step 3. “?????”
    Step 4. Profit Immortality!

    When anyone points out that not only do they not have any coherent plan for Step 3, but also that Step 1 is a metaphor, they yell “Luddites!” as if that explains anything.

    Just for starters, I wonder if any of them have any experience with the study of normal neurology, let alone any other kind. How do they explain the mechanisms? They seem incapable of explaining what they think personality actually is. Or anything else, for that matter.

    What about identical twins with completely different personalities?

    What about brain damage which results in a completely different personality?

    As knowknot said, what about moods?

    What causes someone to become angry, for instance? Not just macro stuff, like outside stimuli and pre-existing personality, but at the chemical level?

    If they cannot explain how neurons work in concert with one another in all the different levels of the brain, in detail (no magic hand-waving), then they have no chance whatsoever of duplicating something as complex as an emergent personality.

    Or even a worm.

    Anyway, an uploaded copy of me (impossible as that is), whilst theoretically immortal, would not confer immortality on me. I’ll still die when my brain ceases to function, whilst my originally ‘identical’ twin will continue to change into someone not like me from the moment it is switched on.

  41. knowknot says

    @45 prae

    Also, seeing how some people lose limbs or sensory organs and are able to adapt, I think adapting to an artificial body shouldn’t be that much of an issue.

    Having seen an internal combustion driven car run without an internal battery, I assume it will run without spark plugs.
    From brain-in-a-jar to brain-in-a-jar with attachments. Well… maybe. At least it bypasses the uploading nonsense. But there’s still no certainty that “personality” wouldn’t be altered radically.

  42. gussnarp says

    On a surface level, I agree that “you” are the entirety of the messy biological bits sloshing around, but, leaving aside whether “you” now is the same “you” as “you” tomorrow in general, would you say that if your leg gets cut off tomorrow that you will still be you? I expect we’d all say yes. So your leg is not needed for you to be you. What if it’s both legs? Would you still be you? What if it’s also the arms? What if we have a way of replacing arms and legs with demonstrably superior arms and legs and you do it voluntarily, is it still you? We already have artificial hearts; what if they get so good that you’d be an idiot not to get your cranky old ticker replaced? What if they’re so good that everyone gets one at forty, are we still us?

    Skipping a few steps, say we replace all those bits: we replace all your muscles with artificial muscles that are superior, and we’re able to replace or augment your bones with superior ones, including your spine. Still you? I think we’d all say yes.

    What if your brain just sits in a remote vat of chemicals receiving signals from a completely robotic body that replicates your human body – all the right chemicals pour into the vat, all the right electrical signals, you can’t tell the difference between being in your body and being the brain in the vat running the robot – except you’re taller, stronger, faster, and live for, say, 250 years before the brain in the vat gives out. Still you? If not, when did it stop being you? When the hormone-producing organs were removed from the system? The stomach? The liver? Only when the brain went remote in the vat?

    I still think the uploaded brain might not be you, but the brain in the jar? I don’t know. Maybe the line is somewhere back in there, where we started voluntarily replacing bits with demonstrably superior ones just to make ourselves a little better. But I don’t know.

  43. knowknot says

    @49 gussnarp

    all the right chemicals pour into the vat, all the right electrical signals, you can’t tell the difference between being in your body and being the brain in the vat running the robot

    And yet, we can’t quite figure out whether or not vitamin supplements work. Or any number of other, seemingly very mundane things.
    This reminds me of the old cartoon of physicists at a blackboard with a step indicated “and then magic happens.”
    Though I do admit the brain-in-a-jar-luxury-edition is less a stretch than the whole uploading thing. Still, that’s a far mark.

  44. gussnarp says

    @knowknot – Two separate issues, of course. Possible any time in the near future? No. Still you? Up for debate. Let’s just say we can keep the brain alive and get enough information in and out to run the robot: you or not? I do feel like that remote brain-in-a-jar step might be crossing a Rubicon of sorts, whereas having the brain controlling a mostly artificial body that still carries it and most of the endocrine system around probably is still you.

    But I can see how without that lump of living brain, it almost certainly ceases to be you, no matter what technology we eventually arrive at.

  45. knowknot says

    @51 gussnarp
    That, I get (for what my opinion is worth… and really, I’m not trying to be contentious. There’s just soooooo much fluff hidden in various corners of these… let’s say topics, since it’s going to be a while before they’re issues for any but the faithful).
    So, yes… not “identity” (tired of using “me”), but something functional. Can’t help but wonder what direction it would spin off in, though. Seems all the tendencies toward various biases, self-aggrandizement, self-denigration, status, physical pleasure, etc. etc. etc., added to whatever physical changes (strong? fast? beautiful? scary? invisible?), would result in someone (not a thing, if it’s conscious, even if it goes insane) interesting, given some time…

  46. says

    Tigger_the_Wing @47, so many questions. At least you’re not saying that because we don’t know the answers we will never be able to upload a human mind, because that really would be Luddite. A scientist rubs their hands with glee when they find unanswered questions, then tries to answer them. Perhaps they find that in the light of further knowledge some of the questions don’t make sense, perhaps that some are unanswerable but there are some handy shortcuts to get to the next question. Perhaps they find an answer or two.

    We’re at the stage of Leonardo da Vinci trying to figure out how to fly, studying birds because those are the only examples he knows, sketching bird-like mechanisms but missing important details like aerofoils and power-to-weight ratios that his science doesn’t even know about. Or having the occasional crazy idea like airscrews, and almost inventing parachutes.

    We don’t know in detail how brains work, so we work with what we have, trying to create a model of the only intelligence we can study closely. If the model works, great; if it doesn’t, also great, because either way we’ve found out something new. Maybe we can make human-like intelligences, maybe we end up with something that isn’t human but that will do something useful for us. And maybe, by trying to model it and failing, we find out some things about human intelligence, same as has happened throughout the history of AI.

  47. Ichthyic says

    in general, would you say that if your leg gets cut off tomorrow that you will still be you? I expect we’d all say yes.

    I’m gonna go with… no.

    in fact, I would be a totally different me that now had to learn how to cope with life with only one leg.

    you probably should pick something less impacting. maybe clipping my fingernails. providing they weren’t for some reason way longer than most people’s.

    “You” is composed of a lot more individual bits than you want to give credit for, bits you typically take for granted but which would quickly have an impact if you no longer had them.

    I really don’t think there IS a simple definition of self identification. Hell, even the internet is part of who I am.

  48. Iain Walker says

    hyperdeath (#8):

    While mind uploading may be fantastically impractical, the “it wouldn’t be you, it would just be a copy of you” objection doesn’t hold. “You” aren’t a brain, “you” are the information describing the computational structure of one particular brain.

    Er, no. The “you” in the problem of personal identity is a person, and a person is a self-aware agent – a dynamic, embodied information-processing system. The “information describing the computational structure of one particular brain” is simply an abstract representation of that concrete particular at a particular instance, and is no more “you” than a photograph is “you”. What you’re describing is the map, and you’re mistaking it for the territory.

  49. P. Zimmerle says

    Ow! Ow! Retinas burning! The dreaded curmudgeonly luddism attack! That which revokes all health points and delivers all hit points! AGGHHHHHHH!

    And if “you” keeps changing, what changing “you” is choosing what other changing “you?” Why not just, you know, make something? And isn’t the apparently imposed changing submitting to a non-biological anti-determinism (or voodoo individuality concept of choice)?

    And ow! Ow! Universality of opinions revoked! God points denied!

    Lighten up. Botify yourself if you want. Chill. You’ll make a better whatsit.

    But seriously, why in the name of the ten thousand simulacra are you so defensive?

    I don’t know, why are you childish, dismissive, and incomprehensible?