We’ll get brain-uploading about the time we get teleportation


There’s an interesting conversation in the New York Times: a neuroscientist, Kenneth D. Miller, argues that brain uploading ain’t gonna happen. I agree with him, only in part because of the argument from complexity he gives.

Much of the current hope of reconstructing a functioning brain rests on connectomics: the ambition to construct a complete wiring diagram, or “connectome,” of all the synaptic connections between neurons in the mammalian brain. Unfortunately connectomics, while an important part of basic research, falls far short of the goal of reconstructing a mind, in two ways. First, we are far from constructing a connectome. The current best achievement was determining the connections in a tiny piece of brain tissue containing 1,700 synapses; the human brain has more than a hundred billion times that number of synapses. While progress is swift, no one has any realistic estimate of how long it will take to arrive at brain-size connectomes. (My wild guess: centuries.)

Second, even if this goal were achieved, it would be only a first step toward the goal of describing the brain sufficiently to capture a mind, which would mean understanding the brain’s detailed electrical activity. If neuron A makes a synaptic connection onto neuron B, we would need to know the strength of the electrical signal in neuron B that would be caused by each electrical event from neuron A. The connectome might give an average strength for each connection, but the actual strength varies over time. Over short times (thousandths of a second to tens of seconds), the strength is changed, often sharply, by each signal that A sends. Over longer times (minutes to years), both the overall strength and the patterns of short-term changes can alter more permanently as part of learning. The details of these variations differ from synapse to synapse. To describe this complex transmission of information by a single fixed strength would be like describing air traffic using only the average number of flights between each pair of airports.

Underlying this complex behavior is a complex structure: Each synapse is an enormously complicated molecular machine, one of the most complicated known in biology, made up of over 1,000 different proteins with multiple copies of each. Why does a synapse need to be so complex? We don’t know all of the things that synapses do, but beyond dynamically changing their signal strengths, synapses may also need to control how changeable they are: Our best current theories of how we store new memories without overwriting old ones suggest that each synapse needs to continually reintegrate its past experience (the patterns of activity in neuron A and neuron B) to determine how fixed or changeable it will be in response to the next new experience. Take away this synapse-by-synapse malleability, current theory suggests, and either our memories would quickly disappear or we would have great difficulty forming new ones. Without being able to characterize how each synapse would respond in real time to new inputs and modify itself in response to them, we cannot reconstruct the dynamic, learning, changing entity that is the mind.
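
Miller's point that a synapse's effective strength depends on its recent history (his air-traffic analogy) is easy to make concrete. Here's a minimal sketch in Python of a single depressing synapse, loosely in the spirit of a standard short-term plasticity model (Tsodyks–Markram); the parameter values are purely illustrative, not measurements from any real synapse. The point is only that the response to each incoming spike depends on what just happened, so a single fixed "strength" per connection throws that history away.

```python
# Minimal sketch of short-term synaptic depression (illustrative parameters,
# loosely in the spirit of the Tsodyks-Markram model; not fit to real data).
import math

U = 0.5          # fraction of available resources used per spike (assumed)
TAU_REC = 0.8    # recovery time constant in seconds (assumed)
A = 1.0          # absolute efficacy, arbitrary units

def responses(spike_times):
    """Postsynaptic response amplitude for each spike in a train."""
    x = 1.0                       # fraction of resources currently available
    amps, last_t = [], None
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * math.exp(-dt / TAU_REC)  # partial recovery
        amps.append(A * U * x)    # response to this spike
        x -= U * x                # resources consumed by this spike
        last_t = t
    return amps

# Ten spikes at 20 Hz: each response is weaker than the one before,
# so the "strength" of the connection depends on its recent history.
print([round(a, 3) for a in responses([i * 0.05 for i in range(10)])])
```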

That’s part of the problem: the brain is really, really complicated. That tiny scrap of brain tissue where they mapped out all the synapses? I wrote about that here; it was a tiny slice, 1,500 µm³, or a little dot about 12 µm on a side…1/80th of a millimeter. It contained all of those synapses, took a huge effort (an effort that destroyed the tissue), and it recorded only a snapshot of cellular and subcellular structure. There was no information about those thousands of proteins, or the concentration of ions, or any of the stuff we’d need to know to reconstruct activity at a single synapse — all that was also destroyed by the chemical processing required to preserve the structure of the cell.
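
For a sense of scale, here's a rough back-of-envelope comparison; the only number not already in the post is an assumed whole-brain volume of about 1,200 cm³, a standard textbook ballpark.

```python
# Back-of-envelope scale comparison for the reconstructed tissue block.
# Assumes a whole-brain volume of ~1,200 cm^3 (textbook ballpark figure).

sample_volume_um3 = 1500                      # the mapped block, in cubic micrometres
side_um = sample_volume_um3 ** (1 / 3)        # edge of an equivalent cube
print(f"cube side ~ {side_um:.1f} um")        # ~11.4 um, roughly 1/80 of a millimetre

brain_volume_um3 = 1200 * 1e12                # 1 cm^3 = 1e12 um^3
blocks = brain_volume_um3 / sample_volume_um3
print(f"whole brain ~ {blocks:.1e} such blocks")   # ~8e11, on the order of a trillion
```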

We aren’t even close to being able to take apart a brain at the level necessary. Miller is exactly right. And as he points out, one additional problem is that the brain isn’t static — it’s not going to hold still long enough for us to get a full snapshot.

But as I said, complexity is only part of the problem, and if you focus on just that issue, it opens you up to this kind of rebuttal from Gary Marcus.

Two hundred years ago, people thought that flying machines were more or less impossible; cellphones were inconceivable as real-world artifacts. Nobody knew what a genome was, and nobody could have imagined sequencing one for a thousand dollars.

Mr. Miller’s articulation of the complexity of the brain is reasonable, but his extrapolation proceeds without any regard whatsoever to the pace of technological progress. It is the ratio between the two — complexity and progress — that matters.

Brain uploads won’t be here tomorrow, but there is a very good chance that they will be here within a century or two at most. And there is no real argument to the contrary.

We’ve got a problem with lots and lots of parts, and it’s too complicated for us to even count the parts. But technology marches on, and we can expect that someday we’ll have widgets that can track and count far more parts than we can even imagine now. It doesn’t matter how many parts you postulate, that is a merely quantitative problem, and we’ve been really good at solving quantitative problems. Why, any day now we’ll figure out how to squeeze enough energy into a teeny-tiny box so that we can build jet-packs.

As for that genome argument, that is correct: we’re really good and getting better at sequencing a few billion nucleotides at a time. With a sufficiently simple definition of the constitution of the cell, you could claim that it’s a solved problem: we can figure out the arrangement of the letters A, T, C, and G in a linear polymer just fine. Now telling me how that gets translated into a cell…well, that’s a little more difficult. That’s a whole ‘nother problem we aren’t even close to solving in detail. It’s also not going to be solved by enumerating the bits.

Another problem here, beyond complexity, is specificity. My brain and your brain are equally complex, have about the same number of parts, and are arranged in roughly equivalent ways, but they differ in all the specifics, and it’s those specifics that matter. If you were to disintegrate my brain molecule by molecule so you could attempt to reconstruct it in a computer, it does me no good if you build your brain in the machine, or Jeffrey Dahmer’s brain, or a profoundly malfunctioning artifact with randomized cognitive connections, or a blank blob with a potential to learn. All the transhumanists want personal immortality by placing their personal, unique awareness in a box less likely to rot than our standard organic hardware. So not only do you have to build something immensely complicated, it’s got to be a nearly exact copy of something just as complicated.

And the bits in this copy are specified right down to the arrangement of individual molecules and even the concentration of ions in tiny compartments…all of which are changing constantly to generate the mind. You would have to freeze my brain in place instantaneously, capture the position and state of every molecule in it, and then build a copy with astonishing accuracy at the molecular level — all while the copy is locked down and not reacting in any way with its components — and then restart it all instantaneously as well. There are physical limits to how precisely individual molecules can be manipulated. This problem goes beyond building better tools to map and inventory a multitude of parts. It’s bumping up against limitations of the physical universe.

I agree with Marcus that someday we might be able to build something as complicated as a brain — we already do it, every time we make a baby. But making an exact copy of 1.5kg of insanely intricate meat, in a medium that isn’t meat, somehow? Nah, that’s not a realistic proposal.

Comments

  1. petesh says

    I completely agree with you, but I am also increasingly aware of Clarke’s First Law:

    When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

    I would say it keeps me humble, but I’m not sure it succeeds.

  2. kevinalexander says

    I can imagine that someday we will make artificial brains as complex as our own, but they still won’t be the same as ours in general, much less a copy of an individual’s brain.

  3. slithey tove (twas brillig (stevem)) says

    Two hundred years ago, people thought that flying machines were more or less impossible;

    Me thinks me sees a logical fallacy there. [drum beats]
    Just because they were wrong about something in the past does NOT mean they are necessarily wrong today. The problem is trying to oversimplify the problem without recognizing the immensity of the challenges involved.
    Yes, each aspect is probably solvable in time. The fallacy is minimizing the amount of time required to reach those solutions, and the immense number of solutions required to achieve the result one is anticipating.

    I disagree with comparing the problem to Teleportation. I think (however mistakenly) that teleportation is impossible (p=0), while “uploading brains” has a small, but non-zero (p>0), probability of eventually being possible.

  4. says

    You would have to freeze my brain in place instantaneously, capture the position and state of every molecule in it, and then build a copy with astonishing accuracy at the molecular level — all while the copy is locked down and not reacting in any way with its components — and then restart it all instantaneously as well.

    This overstates the case a bit. It would be sufficient to generate something as close to you as the you that went to sleep last night is to the you that woke up this morning. That would require an inconceivable amount of information we don’t know how to get, but the “freeze, capture everything, restart” procedure you’ve outlined would require inconceivably more.

  5. Elladan says

    You know, these sorts of arguments generally don’t really sit right with me.

    I see no reason to doubt that the technical problems with actually uploading a brain are nearly insurmountable. But that’s not the part that bothers me: rather, the last bit about whether it is you / helps you / etc. Saying a copy isn’t you has always seemed like a silly philosophical cop out to me.

    To put that another way, imagine you woke up a million years in the future and learn that people sell brain uploading at the local 7-11. Furthermore, you learn that all your new neighbors seem perfectly at ease with making copies of themselves, and do it all the time. Would you still worry about issues of whether it’s you or not? What if you grew up in such a society? My point is: your feelings about brain uploading as an idea are about a hypothetical idea. If we actually had the technology, things would be different. We’d have to pass laws about whether copies can access your bank account, and so on. We couldn’t just say it’s a dumb idea and be done with it.

    If you imagine that making a proper AI thinking computer is more plausible than brain uploading, then from a philosophical point of view this will certainly become interesting. Because, even if brain uploading is implausible, it should be pretty clear that you’ll be able to make a backup / copy of a computer AI. So even if it remains a fantasy for us ape-like people, said computer people are actually going to have to sit down and make up some laws about the subject because it’s part of their day-to-day life.

    Of course, strong AI is also one of those very hard problems, so who knows.

  6. Amphiox says

    #1: Hah! I’m neither elderly nor distinguished, therefore you must accept what I say!

    You’ve also never actually said it was “impossible”, only much harder than what proponent X has claimed.

    You sneaky rules-lawyer, you….

  7. says

    #7: That doesn’t work. It assumes that somehow the errors in analysis/assembly are temporally coherent, and that what’s reconstructed is simply out of date. But it’s not. The you of the night before is just as complex and specific as the you of this morning.

    The only excuse I can see for the scenario is if there is a fair bit of slop in the specificity: the brain doesn’t have to have every molecule precisely in place to generate a “you”, but if the synaptic boutons are all within a tenth of a micron and the ion concentrations are within a fraction of a micromolar and all the right proteins are in approximately the right place, you get something that is approximately “you”, a you as if you’d just gotten a nearly fatal electrical jolt and fallen off a building and gotten your head slammed by a hammer.

    But even that level of accuracy is going to be unattainable, I think, and amplifies the doubt that your copy truly has any continuity with who you are.

  8. Amphiox says

    re: the thing about the copy not being you…

    If the copy is of sufficient fidelity, no external observer would ever know, or ever could know. There would be no external independent test that could verify that the copy was not a continuation of the original.

    If the copy is of sufficient fidelity, the copy himself would also not know, and would have no way of ever knowing. From the copy’s point of view, there would simply be a smooth (or not so smooth) transition from having a meat body to having whatever else it would have as an electronic copy.

    The only entity who could know would be you yourself, and you would only be able to know in the affirmative. You would know if your own consciousness continued through the transition. And if it was the other way? If your consciousness did NOT continue through the transition? You wouldn’t know, because you’d be dead, and unable to know anything.

  9. applehead says

    @petesh:

    Saint Clarke also said women astronauts would destroy the space program because their male counterparts would be critically distracted by their bouncy parts, and he believed steadfastly in cold fusion, long after the physics community had moved on, right up until his death in 2008.

    Maybe we should finally ignore the bloviations of this nutty hack.

    “Mind uploading” is an incoherent wish-fulfillment fantasy that’s just a repaint of body-soul dualism. A picture of you isn’t you, and you’d need a machine shaped exactly like a human brain to replicate the mind. Computers won’t help you there; they’re serial while the brain is massively parallel and doesn’t possess a hardware/software separation.

  10. says

    I think this hits at least two walls that make saving/uploading personalities impossible.

    Firstly, the more precisely you want to measure something, the greater the uncertainty about the final result, even when you are just trying to measure the length of an invar ingot. This is why the cost of precise measurement does not rise linearly with precision, but much, much faster.

    Secondly, this is even worse at the particle level. The mind is an ongoing process that consists of electrically charged particles whizzing about, and Heisenberg’s uncertainty principle already establishes that you cannot fully measure charged particles whizzing about: you can know either where they are or where they are going, not both.
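
    (For reference, the limit being invoked here is the usual position–momentum uncertainty relation, with ħ the reduced Planck constant:)

    ```latex
    \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
    ```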

    Maybe it will be possible at some point to upload approximations of minds and maybe they will get better over time, but I too believe that there is a physical limit to how far this can get.

  11. consciousness razor says

    All the transhumanists want personal immortality by placing their personal, unique awareness in a box less likely to rot than our standard organic hardware. So not only do you have to build something immensely complicated, it’s got to be a nearly exact copy of something just as complicated.

    Even if that could be done, it wouldn’t be offering personal immortality to anybody. It may not “rot” if it’s inorganic, but it would not be functional forever. So to be more precise, assume “a long time” for the rest of this, instead of a literally infinite amount of time in the future, which is a useless concept here….

    That’s making a clone, not “uploading” yourself. Doing the latter is not merely some very difficult, complicated, practical problem that might be overcome (however unlikely it is) by technologies in the distant future — I don’t see how it makes any sense that anybody could ever do that. I have no idea why I should believe that some other “personal, unique awareness in a box” is me. It’s not implied by merely having the same form, properties or functions as my brain does. That would be another person whose mind is just like mine (for the moment, until we start experiencing different things, if we both survive this procedure). So what, if you could make my twin or my clone or my copy? What is that supposed to be doing for me?

  12. cartomancer says

    It’s a moot point anyway. Politics is trying its hardest to make brains unnecessary in the future, and I think it can achieve that goal far more quickly…

  13. consciousness razor says

    If the copy is of sufficient fidelity, no external observer would ever know, or ever could know. There would be no external independent test that could verify that the copy was not a continuation of the original.

    Why couldn’t they know? If you have me in one room and my copy is made in another room, of course other external observers would be able to know. They can tell which room each person is from. You may not be able to tell if you’re the copy or the original, because your past memories would be indistinguishable from false memories that were planted into the clone, but others certainly could know that fact.

  14. petesh says

    applehead @12: Lighten up. Clarke on cold fusion is a fine counter-example, but Clarke on reactions to women’s bouncy bits was, ah, likely to be ill-informed.

  15. applehead says

    @17:

    The symptoms conform to the pattern of a well-known pathology. Clarke is the Dawkins of SF, or maybe the Rand.

    Fostering transhumanut techno-triumphalism has negative results in the here and now. How much talent and money has been wasted on coding the Digital Heaven or God Program that could’ve been used to tackle cancers, environmental degradation and a just distribution of wealth?

  16. Elladan says

    PZ @ 10: You know, before the techno-rapture singularity people got involved, I seem to recall that science fiction types who talked about uploading / copies / etc. seemed to imagine that it would be tied to an advanced theory of how the brain actually works.

    If you assume that the people doing the uploading have a really advanced brain science and could actually understand what they’re doing and intelligently fix brain damage etc., the whole thing seems a lot less implausible. At least for me, the part where it really falls apart is when people start talking about Brain! Uploading! Tomorrow! Moore’s Law! Singularity!

    In other words, the difference between something we can engineer in the near future, and some wild idea that might be plausible given hundreds or thousands of years worth of advances in our scientific understanding.

  17. Blattafrax says

    Well, Surface Detail is one of my favourite books, so I’m going to have to (slightly) disagree here. I don’t think you need complete knowledge of the molecular details of the whole brain. Most of the brain deals with a lot of stuff that we don’t really need (as electronic entities). All we really have to do (I think) is log what goes in and out of the hippocampus from birth, in perfect detail, and make sense of it. A neural net* implanted at birth would do it, and I reckon in a few hundred years it’d be under control.

    More useful – and a step in the right direction – would be a brain-to-Wikipedia USB link. Could someone work that out please?

    * or similar technology like cerebral crocodile clips or hippocampal handcuffs.

    ** And now anyone who knows me in meatspace knows exactly who this is. Hi!

  18. slithey tove (twas brillig (stevem)) says

    re @16:
    consider Turing’s “Imitation Game”: ie bring the Original and the Copy into a room, how can any questioner distinguish each as Copy v Original? QED.
    I too have always had this problem with that ~conundrum, particularly with the teleportation scenario. I’ve since decided that the conflict arose from “dualistic” thinking (ie, “mind” is a separate thing using the “brain” as a hardware interface to reality).
    Regardless, whether identical or not, the copy, at the moment of its creation, becomes a separate entity from the original, essentially an “identical twin”. Not being colocated perpetually gives each separate experiences to enrich their personalities individually.

  19. consciousness razor says

    consider Turing’s “Imitation Game”: ie bring the Original and the Copy into a room, how can any questioner distinguish each as Copy v Original? QED.

    Ask each which room they came from. Since they may not know and don’t need to know, get evidence of which room they came from and don’t bother “testing” them in this way. Getting such answers from them is not the only sort of observation we could make.

    Amphiox just had it backwards. The epistemological issue is not that others can’t know, because they could have all sorts of ways of knowing that, but that you can’t be sure of it simply by means of subjective introspection. How are you supposed to know that your memories aren’t false? In fact, you don’t know that. Asking yourself “does it seem as though my consciousness continued through the transition?” (or similar) wouldn’t do the job, no matter what answers you might come up with. The person who doesn’t have continuity with the original, but merely believes that, would have all the same reasons to give the same answers.

    Regardless, whether identical or not, the copy, at the moment of its creation, becomes a separate entity from the original, essentially an “identical twin”. Not being colocated perpetually gives each separate experiences to enrich their personalities individually.

    Well, okay, but forget about “perpetually.” They’re not ever colocated for even a single moment. And even before or without being “enriched” by new experiences, at a moment whenever they are exactly the same (except locations obviously), they are still two people and not a single person. Being exactly identical to you in this way just plain wouldn’t mean that it is you.

  20. leerudolph says

    PZ:

    The only excuse I can see for the scenario is if there is a fair bit of slop in the specificity: the brain doesn’t have to have every molecule precisely in place to generate a “you”, but if the synaptic boutons are all within a tenth of a micron and the ion concentrations are within a fraction of a micromolar and all the right proteins are in approximately the right place, you get something that is approximately “you”, a you as if you’d just gotten a nearly fatal electrical jolt and fallen off a building and gotten your head slammed by a hammer.

    For this possible excuse to apply across the board, the dynamical system constituted by all the physical states you mention and the physical laws that govern how they change in time would (more or less by definition) have to be “structurally stable”. Now, there are high-dimensional structurally stable dynamical systems (in mathematics), and I suppose there is solid empirical evidence that some high-dimensional dynamical systems (in the physical world) are structurally stable. But there are also (in dimension 3 and higher) scads of non-structurally-stable dynamical systems; and I see no reason to believe (and have no idea of how either mathematical theory or empirical science could give me reason to believe) that the dynamical system you describe is structurally stable.

    The situation is analogous (but not identical) to that of determinism and “Laplace’s demon”; as the probabilist I. J. Good put it,

    if a flea is deterministic, it is like an unbreakable cipher machine. To predict its future, under all normal circumstances, for a time T ahead, assuming some deterministic theory analogous to Newtonian mechanics, Laplace’s demon would need the initial conditions expressed to a number of decimal places proportional to T.

    (The distinction between the cases is that, rather than having to predict the long-term future of a precisely given system to within a specified degree of accuracy, Laplace’s Brain-Uploader has to arrange short-term identity of an imprecisely given system within a specified degree of accuracy.)
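
    A toy numerical illustration of Good's point, with the logistic map standing in for any chaotic system (the map and its parameter are illustrative choices, and nothing about the brain hangs on them): two trajectories started 10^-12 apart separate by a roughly constant factor per step, which is "decimal places proportional to T" in miniature.

    ```python
    # Toy illustration of sensitive dependence on initial conditions.
    # Logistic map in the chaotic regime; parameters are illustrative only.
    r = 3.9
    x, y = 0.5, 0.5 + 1e-12   # two starting points differing by 1e-12

    for step in range(1, 61):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
    # The separation grows roughly exponentially until it saturates at order 1,
    # so each extra step of prediction costs a fixed number of extra digits
    # of initial precision.
    ```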

    I don’t think I’m bullshitting, but welcome informed correction (or even snark; what the heck, I’m elderly and closer to ex- than to dis-tinguished, but I can take it).

  21. Gregory Greenwood says

    I have always found the Singularitarian obsession with ‘upload’ to be curious. The goal is so immensely difficult to achieve, and even if the near insurmountable obstacles could be resolved one day, the end result would still really be a technological copy of you that would be little more than an AI that thinks it is you, while the actual person perishes. Its experiences going forward, and the way in which such a being would be capable of experiencing the world, would be fundamentally different from the meatbag you to the point that it would cease to bear any resemblance to the version of yourself you are trying (and failing) to preserve in short order.

    If you are so desperate to extend your existence at almost any price, you would be better off focusing on some hypothetical biological means of slowing or reversing senescence, or even trying to develop the means to take an existing human or human foetus and modify it to the point where it has a functionally immortal (cut-price immortality, I grant you, but the only type of immortality you will ever have even the slimmest chance of achieving) genetically engineered/cybernetic physiology. That might buy you a few centuries at best, if we could somehow pull it off, before something thoroughly terminal happened, like an injury sufficient to kill even the new and improved you, or the collapse of the society that maintains the tech that keeps you going (hardly turning you into a voyager on the cosmic tides of deep time). And that has to be a contender for the all-time heavyweight champ of ‘ifs’.

    Despite the fact that this in itself would be unprecedented in its complexity and would carry with it no small measure of risk (and may very well ultimately prove to be impossible in its own right – cells that had to replicate for so long without any aging process would throw up all kinds of issues, not least the risk of tumour formation), it would still be orders of magnitude more practicable than this mind upload stuff, which is frankly pretty much on a par with the mythical fountain of youth from a scientific standpoint, so far as I can see. At least we know that there are biological systems able to endure contiguously for far longer periods of time than our own, not that this information does us much good in itself.

    Sorry to all the Kurzweillites, Singularitarians, H+ types and transhumanists of all kinds, but when something as ridiculous as engineered biological pseudo-immortality is far more credible than your pet obsession, you probably need a reality-check. I like science fiction too, but I know when to put the book down and take a step back.

  22. Steven Brown: Man of Mediocrity says

    Brain uploading. Pah! I’d rather get to the point that they can undo/prevent the damage from things like Alzheimer’s, which I’m pretty likely to get.
    Once that’s sorted I’d love to get to the point, and I think in actual fact we’re not too far from this, of being able to add new inputs to our brains: I’d love to be able to see things like ultraviolet, or hear sounds that humans currently can’t.

    I don’t necessarily want immortality, but I would like all the years I do have to be ones where my brain isn’t slowly falling apart and the thing I consider me, whatever that is, slowly loses whatever it is that makes it me. And if I could experience some cool new sensations along the way? That would be awesome.

  23. leerudolph says

    Elladan@19: “You know, before the techno-rapture singularity people got involved, I seem to recall that science fiction types who talked about uploading / copies / etc. seemed to imagine that it would be tied to an advanced theory of how the brain actually works.”

    Certainly Marvin Minsky’s remarks at the memorial for Oliver Selfridge in 2008 suggested that he still imagined that, and had non-zero hope (if not expectation) that the problem would be solved before he died.

    Mind you, Minsky’s the man who (AI Lab legend claimed) assigned computer vision as a summer project to some graduate student or other.

  24. leerudolph says

    Gregory Greenwood@27:

    The end result would still really be a technological copy of you that would be little more than an AI that thinks it is you, while the actual person perishes. Its experiences going forward, and the way in which such a being would be capable of experiencing the world, would be fundamentally different from the meatbag you to the point that it would cease to bear any resemblance to the version of yourself you are trying (and failing) to preserve in short order.

    As to the last statement—that the “technological copy of you” would (soon) “cease to bear any resemblance to the version of yourself you are trying (and failing) to preserve”, I’m not sure that would be taken as an objection by everyone who promotes the project of “brain uploading”.

    In particular, my impression has always been that Marvin Minsky (for one) imagines that his “uploaded brain” would continue thinking hard about the problems he’s been thinking hard about all these years; it’s got to have the meatbag’s memories of his work on those problems (and other similar resources), and the meatbag’s creative capacities (ditto), but any other “resemblance” would have at most secondary importance to him. I could be very wrong, of course.

  25. Elladan says

    I noticed two people mention Iain Banks, and I just wanted to point out that the stories in question are about hyper intelligent alien space robots the size of mountains and their human familiars who figured out biology and AI ten thousand years ago.

    This is super cool, but it probably isn’t a great starting point for figuring out whether the EU is spending billions on the wrong thing or whether you ought to buy an annuity or not.

    P.S. The humans are also aliens.
    P.P.S. Except in the story where they come to Earth and laugh at our capitalism.

  26. gillt says

    PZ:

    The only excuse I can see for the scenario is if there is a fair bit of slop in the specificity: the brain doesn’t have to have every molecule precisely in place to generate a “you” […] you get something that is approximately “you”, a you as if you’d just gotten a nearly fatal electrical jolt and fallen off a building and gotten your head slammed by a hammer.
    But even that level of accuracy is going to be unattainable, I think, and amplifies the doubt that your copy truly has any continuity with who you are.

    You can’t have it both ways: we all consider the continuity of the ‘you’ of last night, or two years ago, or even the ‘you’ before you suffered brain damage [barring a vegetative state maybe], to be unbroken. What difference, practically speaking, is there in accommodating some copying-error approximations (quantum or otherwise) when downloading a brain? Once that copied brain engages with an environment, one different from the original’s, it becomes less a copy and more unique, and deserving of autonomy.

  27. says

    If the copy is of sufficient fidelity, no external observer would ever know, or ever could know.

    Before all of the nuts-and-bolts problems of replicating the mind, it is also necessary to have a falsifiable model of consciousness. As it is, we rely on generalizations and subjective evaluations to decide whether a Turing correspondent is “real” or a machine.

    I have my doubts that we’d ever produce brain uploading in the sense transhumanists mean. But I have no doubt that in the next 20 years we’ll have computer programs that can give every external appearance of being the same as the intellect of a living person. We don’t actually know what consciousness is, but some people are so desperate for the wish fulfillment of the Singularity that they’ll convince themselves that deep Markov models and heuristics are “just as good,” when in fact we have no basis for making this judgement.

  28. petesh says

    Some years ago, at a transhumanist discussion at Stanford, which I attended as a sort of opposition research, I heard George Dvorsky (my favorite Buddhist transhumanist; actually a smart guy outside of certain areas) propose that brain uploading would be immoral unless we also uploaded the brains of all sentient beings — cows, as I recall, were the main example. Just saying.

  29. petesh says

    applehead @18: If I address this rubbish seriously, I completely agree with you, as you may by now have deduced. Nevertheless, I think ridiculing transhumanists is a useful way of isolating them, and arguing with them on their own grounds generally involves getting sucked into a vortex of idiocy out from which it is hard to clamber.

  30. keinsignal says

    I’m not going to bother digging up the link, as including it will probably delay this getting posted, but there’s a good essay online regarding the “technological singularity” that brings the history of flight in as a very instructive parallel… See, that whole “200 years ago people couldn’t conceive of flying machines” argument cuts two ways. Certainly few people in 1816 could have foreseen the ubiquity of aircraft today, but a century-and-change later, as the jet age was beginning to take off (so to speak), people began wildly overestimating the pace of future progress — 50-60 years ago, the general assumption was that by now we’d all have personal aircraft, every international flight would be supersonic, and we’d be building Howard Johnsons on the moon. The article even includes a very Kurzweilian chart from Popular Mechanics or somewhere like that, showing airspeeds from the Wright brothers through the X-1 with an asymptotic line of improvement projected for the 21st century.

    Of course none of that happened – in fact, we’ve seemingly taken several steps back. We haven’t been back to the moon for decades, the Concorde was shelved for good in 2003 and had been a money-losing relic for considerably longer than that, and the average passenger plane of today is actually slightly slower than its counterpart from the 60s or 70s! (But significantly more fuel-efficient).

    And that last parenthetical is key… The reason for that apparent lack of progress is simple – it wasn’t worth it. Even when the technology was available to push further, there was no incentive to do so. The moon’s a barren wasteland, most people shouldn’t be trusted with regular cars much less flying ones, and why build a fleet of noisy, inefficient, expensive-to-maintain SSTs when you can cram five times as many people on an Airbus A380 and still get to where you’re going in a reasonable amount of time? In fact, of course, there has been progress – better efficiency, better materials, better automation – but that innovation has all gone to solve the sort of immediate, practical problems that don’t capture the public imagination like jetpacks and rocketships do.

    I think we’re going to see the same pattern when it comes to these singularity techs like “brain uploading”, or even human-like AI: the risk, expense and hassle for these massive advances simply will not pay sufficient dividends when “good enough” alternatives are readily available and exponentially cheaper.

  31. consciousness razor says

    I think we’re going to see the same pattern when it comes to these singularity techs like “brain uploading”, or even human-like AI: the risk, expense and hassle for these massive advances simply will not pay sufficient dividends when “good enough” alternatives are readily available and exponentially cheaper.

    Well, you may be right about human-like AI. I doubt it…. But brain uploading simply isn’t happening. It’s incoherent or inconsistent with physics. You couldn’t solve problems like that with better technology, more money, more effort, more widespread interest, more potential benefits, etc.

    Suppose we’ve got two bodies. There’s mine here on Earth, and there’s another somewhere in the Andromeda galaxy. They’re the same. I don’t care how either of them got there, and no engineer needed to intentionally design the Andromeda one to be just like mine. However it happened, the relevant fact is that they are same, down to the last elementary particle if need be. Or if it’s just the brain/computer that you care about, there can be a robot on Andromeda instead, with identical functioning to my brain, which magically appeared there if you like. It makes no difference.

    How in the fuck could that mean that my personal identity exists over there in Andromeda? It doesn’t mean that. They’re two separate and independently existing people, even though they are the same. I would not “live on” in that other person or with that other person. It would not make me immortal, even if a sequence of many copies exist in someplace or another, one after another, for any arbitrary length of time. I’m still mortal. I’ll have my own experiences in my own body in its own environment, independently of whatever else may be out there in the universe. That’s how it actually works. If you don’t have the kinds of conditions that we’re talking about, when we say my brain causes my mind, because you’re assuming “I” am just some abstract formal entity like a piece of information with a totally arbitrary causal history, then you’re not talking about my personal identity or mortality as a human being at all. You’re just talking about an abstraction, and that’s not what a person is. That’s not what a brain is, and that’s not what consciousness is.

    Woody Allen is quoted as saying, “I don’t want to achieve immortality through my work; I want to achieve immortality through not dying. I don’t want to live on in the hearts of my countrymen; I want to live on in my apartment.” Myself, I don’t want immortality anyway. But that is what people think they’re signing on to with this brain-uploading bullshit. That is how they talk and argue about it. They want to not die and think it could happen (with the right technology) in some kind of a process like this. You wouldn’t get that, however, because at best you would only get an identical copy — which doesn’t mean I don’t die, or that somehow I do magically exist in a new body. So, we don’t need to assume anything about any (currently unknown or unimaginable) scientific or technological developments that may or may not happen in the future. Short of finding that we have immaterial souls which exist independently of our bodies, which won’t happen, nothing that anybody discovers or invents will have any effect on that.

  32. Rob Grigjanis says

    Technology is a red herring for the idea of copying or uploading. Any technology which copies you with sufficient fidelity to reproduce you in any sense would kill you.

  33. moarscienceplz says

    I’m a bit surprised that our understanding of synapses isn’t further along, but I suspect there is a way to model the functionality of a synapse that is much simpler than mimicking the action of every ion. The fact that I am able to maintain memories over decades that still gibe fairly well with reality implies, I think, that a massive reorganization of a major chunk of my brain is probably NOT happening every time I make a new memory. Yes, some kind of palimpsest memory storage may well be happening, but there probably aren’t thousands of potential states each synapse would have to be able to achieve and communicate on to other synapses. Dozens, possibly, but not thousands.

  34. brett says

    That would probably doom trying to make an exact copy of your brain, but it doesn’t seem like it would necessarily close off a Ship of Theseus approach if you can figure out how to connect either electronic neurons (or simulated neurons running in a computer network) to your biological ones. You’d then just gradually expand the amount of electronic neurons that your brain is using while slowly diminishing the number of biological ones – or even just have your mind using both and then relying totally on the electronic ones when your biological neurons die.

    From your perspective, it’d be like how your body replaces much of its cells over time anyways (except for the neurons under normal circumstances). You wouldn’t notice the gradual change, or have some sense of discontinuity.

  35. astro says

    but we already HAVE teleportation, albeit at the quantum level, and of no value to anything macroscopic.

    speaking of quantum stuff, every time i hear someone predict that we’ll have within years, i am reminded that given the pace of discoveries into quantum mechanics in the 1920s and 1930s, we should have had a grand unified theory of gravity and fusion reactors by 1950 or so.

    technological advancement doesn’t work that way.

  36. astro says

    weird, i guess i accidentally put in code. i meant to say, “every time i hear someone predict that we’ll have [complex thing] within [x] years…”

  37. Pierce R. Butler says

    If “I” woke up in a computer, “I” would want to explore existence in cyberstate, and after only a few petaflops “I” would no longer be “me” anyhow. A whole Matrix-level deception/hobbling process could perhaps preserve a sort of “me” for quite a relative while (given that the bodily me has yet to channel Keanu Reeves), but if I’m going to sentence “myself” to a fantasy life I want a few script changes from v. 1.0.

  38. AlexanderZ says

    Two hundred years ago, people thought that flying machines were more or less impossible; cellphones were inconceivable as real-world artifacts. Nobody knew what a genome was, and nobody could have imagined sequencing one for a thousand dollars.

    Ah, the ever popular argument from stupidity.
    For one thing, flying machines were already a fact two hundred years ago and humans have been building these machines for nearly two thousand years. True, hot air balloons aren’t airplanes, but that’s a minor quibble when the essential questions, like the amount of force necessary for lift off, were already answered and only the availability of a good enough fuel source was the problem.

    This brings me to Donald Rumsfeld. See, people didn’t know about smartphones because they had almost no knowledge of the most basic sciences needed to make one. It was an unknown unknown. We, on the other hand, have a pretty good understanding of certain limitations of the physical world that prevent brain upload and other such nonsense. Or in other words:
    The technological steps needed to fully (or sufficiently well) simulate a living human brain are an unknown unknown, the necessary knowledge of brain science is a known unknown, and the physical limitations of our bloody universe (such as the uncertainty principle) are a known known.

    Rumsfeld trumps Marcus and that speaks volumes about H+ and its adherents.
    _______________

    leerudolph #30

    In particular, my impression has always been that Marvin Minsky (for one) imagines that his “uploaded brain” would continue thinking hard about the problems he’s been thinking hard about all these years

    Wouldn’t a sufficiently advanced AI be a much better goal, then? It would have the advantage of a far larger pool of knowledge than any human brain could ever have. Why focus on downloadable brains?
    _______________

    petesh #34

    brain uploading would be immoral unless we also uploaded the brains of all sentient beings — cows, as I recall, were the main example.

    Clearly he was a fan of Star Control 3.
    _______________

    keinsignal #36

    I’m not going to bother digging up the link, as including it will probably delay this getting posted

    I’m very interested in that link.

  39. consciousness razor says

    but we already HAVE teleportation, albeit at the quantum level, and of no value to anything macroscopic.

    That’s not teleportation, in the sense it’s normally understood. You’re creating a state at B which is the same as the one created at A. You didn’t send (or carry, or move through a port, etc., as “teleport” suggests) the particle at location A to location B. Physicists are free to use “quantum teleportation” as an exciting piece of jargon if they like, but it shouldn’t be interpreted literally or confused with what most people would assume by the word “teleportation.” To actually move a physical object somewhere else, even a microscopic one, it still has to exist in and travel through spacetime. Instead of moving the thing, you could make another one over there which is just like it. That gets the job done too, but it’s not terribly mysterious or exciting when that sort of thing happens at macroscopic scales either.

  40. consciousness razor says

    Ah, the ever popular argument from stupidity.
    For one thing, flying machines were already a fact two hundred years ago and humans have been building these machines for nearly two thousand years. True, hot air balloons aren’t airplanes, but that’s a minor quibble when the essential questions, like the amount of force necessary for lift off, were already answered and only the availability of a good enough fuel source was the problem.

    Besides, there are birds, bats, insects, etc. People knew those could fly a very long time ago, well before anything in recorded history. So I’m not buying it, that a reasonable person would’ve thought it impossible for a machine to do the same thing. You’d basically have to assume birds, etc., have some kind of magical non-physical energy source that human-made devices could never use. Or they’d have to be considered really special somehow or another…. “God meant for them to fly, not us”: does that even count as an explanation? Maybe it helps a lot that I’m looking at it in hindsight, but it’s pretty hard to imagine anyone ever giving a decent argument that people simply couldn’t do that.

  41. unclefrogy says

    Well, if you look at the idea of flight from the standpoint of what the thinking was 200 or more years ago, what we do today is different. We do not fly like the animal masters of flight, the birds: we can go much faster, but we do not do so by moving wings to “swim” through the air, nor do we have the control that birds demonstrate; we fly by generating lift with wings and by brute power.
    I think that if something like up-loading ever happens it will not look like what is envisioned by the current dreamers, but it will still be uploading of a sort. Maybe remotely operating some kind of device with lots of sensory awareness and abilities.

    I think PZ has it about right as does keinsignal.

    But making an exact copy of 1.5kg of insanely intricate meat, in a medium that isn’t meat, somehow? Nah, that’s not a realistic proposal.

    uncle frogy

  42. consciousness razor says

    And of course people have been making arrows and other projectiles. That fits the concept of “flying contraption” pretty well, and those were quite successful in that limited sense, a very long time ago. It wasn’t exactly fair of me, to talk in terms of “energy” like a modern person would, but older theories of motion wouldn’t have ruled out flight either. Because they knew very well that things could fly.

  43. says

    @#27, Gregory Greenwood

    I have always found the Singularitarian obsession with ‘upload’ to be curious. The goal is so immensely difficult to achieve, and even if the near insurmountable obstacles could be resolved one day, the end result would still really be a technological copy of you that would be little more than an AI that thinks it is you, while the actual person perishes. Its experiences going forward, and the way in which such a being would be capable of experiencing the world, would be fundamentally different from the meatbag you to the point that it would cease to bear any resemblance to the version of yourself you are trying (and failing) to preserve in short order.

    No no, that’s just typical post-vitality personality drift.

  44. xnoarchive says

    My take on this issue is that it revives the old mind/body fallacy. There is no evidence that anything called “the mind” exists separate from the body. Our brains are our minds. Change the brain, change your mind. People with brain damage exhibit drastic personality changes. People with Alzheimer’s are no longer the same person they used to be.
    Exactly how “mind uploading” is supposed to work when brain = mind is never explained. How, exactly, does some kind of “upload” magically change all the organic structure (including precise proportions of many thousands of different neurotransmitters) without radically tearing apart and rebuilding the brain? And how does anyone survive that?
    It doesn’t make sense.
    Mind “uploading” presupposes that the mind is “software” separable from the organic brain hardware. It just doesn’t work that way. All the evidence suggests that brain = mind.

  45. xnoarchive says

    @39 moarscienceplz said: “I’m a bit surprised that our understanding of synapses isn’t further along, but I suspect there is a way to model the functionality of a synapse that is much simpler than mimicking the action of every ion.”

    Our understanding of synapses is quite far along. That’s the problem. Turns out synapses are far more complex than simple on/off switches:

    “Attempting to explain the incredible complexity of the brain that is revealed, Smith said, “One synapse, by itself, is more like a microprocessor—with both memory-storage and information-processing elements—than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth,” he said.”

    Source: “Brain more complex than previously thought, research reveals,” 3 December 2010.

    For the nitty-gritty details, see “Mapping Synapses by Conjugate Light-Electron Array Tomography,” Forrest Collman, JoAnn Buchanan, Kristen D. Phend, Kristina D. Micheva, Richard J. Weinberg, and Stephen J Smith, The Journal of Neuroscience, 8 April 2015, 35(14): pp. 5792-5807.
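
    Taking the quoted "~1,000 molecular switches per synapse" at face value, and combining it with the synapse count implied by the numbers in the post, a rough tally looks like this (both figures are order-of-magnitude estimates, nothing more):

    ```python
    # Order-of-magnitude tally; both inputs are rough estimates from this thread.
    synapses_per_brain = 1.7e14        # 1,700 synapses x "a hundred billion times that"
    switches_per_synapse = 1e3         # the quoted "~1,000 molecular-scale switches"
    print(f"{synapses_per_brain * switches_per_synapse:.1e} molecular switches")
    # ~1.7e17 -- the basis for the "more switches than all the computers on Earth" line
    ```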

  46. xnoarchive says

    Elladan remarked: “You know, before the techno-rapture singularity people got involved, I seem to recall that science fiction types who talked about uploading / copies / etc. seemed to imagine that it would be tied to an advanced theory of how the brain actually works.”

    Not so sure about that. One of the earliest examples of mind uploading I know of comes from Roger Zelazny’s “Lord of Light” (1967). Zelazny hand-waves his way around the issue. Lots of pretty lights and crystals and fancy electronics, nothing remotely like hard science.

    Unless you want to consider Edgar Rice Burroughs’ telepathic dinosaurs, the Mahars, as mind uploaders. But there’s always been a tricky confusion twixt mind uploading and mind control in science fiction. Probably because no one seems to have a clear notion of what a “mind” actually is, as opposed to a brain. Minds tend to have magical properties, like souls — typical of meaningless fantastical objects. “Mind” is the phlogiston of neuroscience.

  47. M. L. says

    I can’t wait to go on a near-light-speed day cruise to a club on one of Jupiter’s moons. It makes sense. Our relative speed to the earth went from like 25 miles per hour on a horse, to like 80 in an old train, to hundreds on a plane, to 24,791 miles per hour on a rocket ship to the Moon. It was inevitable that this would increase at that rate to near light speed.

    There are plenty of examples of cases where something turned out to be vastly harder than what optimists thought. You can just look at tons of medical examples, cancer for one. Claiming scientists were wrong before about how difficult or feasible something would be is up there with the Galileo gambit, in that it mostly just proves the difficulty of predicting the long-term future in general.

  48. emergence says

    I’ve talked about my sympathies with transhumanism before, but brain uploading isn’t something I see as being possible. Beyond all of the stuff about how it would be impossible to capture and digitize every last isolated detail of our brains that contribute to our consciousness, I see a couple of other problems.

    First, I don’t really see the future of AI as being in software. I tend to think of an AI as being the actual, physical device, not the software it’s running. Computers are so different from brains in terms of how they work, I figure that we would have to stop trying to make a standard computer self-aware and focus more on trying to create a sort of electronic brain. With that in mind, trying to simulate a brain using software down to the atomic level seems like the wrong way to go.

    Second, I’d like to go back to the usual point that gets brought up; it’s not that you’re “transferring” your mind to a computer, you’re just slowly destroying your own brain to make a copy of it in a computer that thinks it’s you. Even when you’re transferring files between computers, you’re not literally moving some incorporeal cloud of electrons between them, you’re just having one computer tell the other to rearrange a set of circuits in its memory. Let’s just say that you somehow managed to scan your own brain without destroying it. Then there’d be two versions of you, and it’s not like you’d experience being both of them at the same time.

    If we’re trying to extend human life and preserve human minds, we should probably focus on repairing and maintaining our flesh-and-blood brains to make them last longer. Why try to construct a whole new brain for yourself when you can just use regenerative medicine to repair age-related wear and tear on the brain that you already have?

  49. ragdish says

    Consciousness is still a neural mystery, and the answer does not lie in the complex behavior of individual neurons. The neurons in the perisylvian language regions, occipital lobes, basal ganglia, etc. are not wholly different from each other with regard to overall physiology and biochemistry. The evidence is that you can damage those areas severely and still have a conscious individual. There is nothing special about the individual neurons that participate in the collective activity that gives rise to thought. Indeed, neuroscientists who have done substantial work in the field of consciousness (e.g. Christof Koch, Giulio Tononi) dispute the claim that you need to know the details of neuron A and B. Rather, what is still unknown is how information is processed among large numbers of neurons to give rise to subjective life. This raises the valid question of whether such patterns of information processing can take place among non-neuron substrates, and the possibility of neural upload.

  50. says

    “The only excuse I can see for the scenario is if there is a fair bit of slop in the specificity:”

    My point is that we already know there’s a fair bit of slop in the specificity: thermal noise is irrelevant, sleeping-you is as much you as waking-you, meditating-you is as much you as furiously-angry-you and recently-concussed-you, etc. No doubt an inconceivably large number of degrees of freedom are critical for personal identity, but we know personal identity is robust to variation in many others.

  51. Cuttlefish says

    The vast majority (including the above) of the musings about brain-uploading implicitly or explicitly assume some sort of neural representation of the other elements involved (body, of course, and environment and time). We see “personality” as it unfolds over time in interaction with a social and physical environment; the mechanistic model has us interacting not with the real world, but with a cognitive/neural/mental (depending on who is writing) representation of that real world, which presumably would be stored, or storable, in the brain, and part of what is uploaded.

    But it is every bit as likely (I’d like to think more so) that we, y’know, actually interact with a real world over time. That the patterns of behavior we call “personality” are in part not stored internally at all, in any form. Our interaction over time gets reified into a metaphor-thing, a “trait” or “attitude” or “type” or “habit”. For some of the aspects of our personality, the notion that they are somewhat permanent and stored is far more a requirement of a mechanistic philosophy than an observed reality.

    And if we interact with a real world, not a neural representation–if we interact with the environment rather than a cognitive map–then there is absolutely no reason to suspect that huge portions of our personality would even be available for uploading. It’s not that these things are extraordinarily complex; it’s that they are fictional.

  52. consciousness razor says

    We see “personality” as it unfolds over time in interaction with a social and physical environment; the mechanistic model has us interacting not with the real world, but with a cognitive/neural/mental (depending on who is writing) representation of that real world, which presumably would be stored, or storable, in the brain, and part of what is uploaded.

    Whose model are you talking about? There’s no other thing, which is “us,” that is interacting with a representation. Our brains are representing various kinds of stimuli from the real world. That’s how we interact with the real world, not an alternative to doing that.

    That the patterns of behavior we call “personality” are in part not stored internally at all, in any form. Our interaction over time gets reified into a metaphor-thing, a “trait” or “attitude” or “type” or “habit”. For some of the aspects of our personality, the notion that they are somewhat permanent and stored is far more a requirement of a mechanistic philosophy than an observed reality.

    I don’t see why “a mechanistic philosophy” (whatever that means to you) assumes that anything mentioned in a folk psychological theory must be stored anywhere in the brain. We’re not committed to saying those things are even real, much less taking such a specific stance about what kinds of things they are.

    And if we interact with a real world, not a neural representation–if we interact with the environment rather than a cognitive map–

    What could this mean? My own awareness of being myself is a representation created by my brain. So, saying “we” interact with a “representation” is confusing at best. Who and what are “we,” according to you?

    then there is absolutely no reason to suspect that huge portions of our personality would even be available for uploading.

    Uploading just isn’t happening, as I’ve said repeatedly in this thread. There is no reason to think anything is available for uploading, because it doesn’t make any fucking sense at all. How exactly is your brand of externalism supposed to be an improvement over that?

    It’s not that these things are extraordinarily complex; it’s that they are fictional.

    Why must a “mechanist” (an awfully loaded and misleading term, and I expect better from you, Cuttlefish) think anything that anybody decides to call “personality” is therefore (1) real or (2) a coherent/stable/consistent feature across individuals that could be recognized as something stored in the brain? Is there some reason why we would have to be so naive about it?

  53. leerudolph says

    xnoarchive@52: ““Mind” is the phlogiston of neuroscience.”

    I propose to quote that often.

  54. jamiejag says

    How do we even know that a brain in isolation is you at all? What is the ratio of conscious activity in the brain to unconscious? Would we first have to develop an equivalent environment for the uploaded brain to operate in, in order for what we call consciousness or self to emerge?

  55. richardh says

    applehead@18:

    How much talent and money has been wasted on coding the Digital Heaven or God Program that could’ve been used to tackle cancers, environmental degradation and just wealth distribution?

    But but but but … It’s either that or the B*s*l*sk!

  56. says

    The question is “can we create an artificial object which experiences its own existence in the exact same way a human does”, which is probably not going to be answered by any attempt to perfectly replicate a human being at the atomic level.

    I think the quantum scale replication argument is simply an answer to the suggestion that the transfer of a human mind to a synthetic substrate is impossible in principle. On the other hand, proposing a simulation of the interactions of 10^27 atoms is likely to give any computer scientist a raging headache.

    I find it more interesting to ask whether there is any aspect of human consciousness which is inextricably bound to the organic substrates in which it evolved. Meaning, in order to be human, or indeed, conscious at all, is it necessary for the mind in question to arise from organic chemistry? I doubt we can answer that question at this time, and it can be dismissed by appeal to the digital simulation of organic chemistry, but it does need to be answered before any attempt to produce an inorganic human brain makes sense.

    If mental transference is possible any time in the near future it seems to me more likely to be a consequence of our understanding of the brain advancing to the point where we can confidently state which interactions are critical and which are incidental.

    It may prove to be the case that, since ‘you’ are not ‘your brain’ in the strictest sense, the pattern of interactions which IS ‘you’ can be abstracted at a much higher level than individual neurons. I don’t know how much grey matter can be excised before a person is officially declared to be a different person, but certainly there is no single neural connection which is indispensable to the existence of your self.

  57. stevenjohnson2 says

    “The only excuse I can see for the scenario is if there is a fair bit of slop in the specificity: the brain doesn’t have to have every molecule precisely in place to generate a “you”, but if the synaptic boutons are all within a tenth of a micron and the ion concentrations are within a fraction of a micromolar and all the right proteins are in approximately the right place, you get something that is approximately “you”, a you as if you’d just gotten a nearly fatal electrical jolt and fallen off a building and gotten your head slammed by a hammer.
    But even that level of accuracy is going to be unattainable, I think, and amplifies the doubt that your copy truly has any continuity with who you are.”

    Since there is no way to have every molecule “precisely” placed in a biological organism, the brain must tolerate a fair amount of slop in the positioning of molecules. Molecules are pretty precisely placed in crystals (which the brain isn’t, anyhow), but there are not really any mechanisms for placing molecules precisely in position, which is why crystals the size of brains are so rare: they are not freely reproducible. Also, what does it even mean to talk about precise positions for molecules in suspension? Nothing, I think. Perhaps diagrams meant to highlight chemical reactions are being taken a little too literally.

    The conclusion though is merely that although we can’t say such copying will always be impossible, we can say that it is impossible now, and that we don’t even have a well-defined set of problems to solve before we could.

    However, since nature does deal many an insult to the human brain in the form of trauma and disease, if a rough copy can’t count as a continuation of the “real” person, then the only consistent conclusion is to regard victims of strokes, brain injuries, dementia of various sorts, and ailments like ALS or Parkinson’s as new persons. (Also, sufferers from mental illness, for those unlike the OP who believe there is such a thing.) I too think transhumanism is nuts in regarding speculation as potential fact at this time. But taking such a position so that you can claim they are in the same category as believers in perpetual motion seems like accepting vitalism to refute spiritualists.

  58. Marshall says

    These arguments from complexity overlook the concept of reduction and the possibility of dispensing with redundancy. Our brains do have billions of neurons and trillions of connections; but could one build a functionally equivalent brain using far fewer? In order to answer this question, we’d have to go deep into the question of what a “functionally equivalent” brain is. The brain isn’t really a black box, since we think, and that thinking is part of our brain’s computation and therefore dependent on its internal structure. But suppose we could instantly replace a module in our brain with a more compact, redundancy-free module for which the net observable effect on our thoughts and actions is literally zero. If this is true, that our brains can be logically “reduced” (I am not stating that they can), then PZ’s requirement of a perfect copy is too strong.

    A likely result is that the logical topology of the brain is simply too complex for humans to understand, but one that future AI will have no problem with.

  59. says

    @applehead, #18:

    Fostering transhumanut techno-triumphalism has negative results in the here and now. How much talent and money has been wasted on coding the Digital Heaven or God Program that could’ve been used to tackle cancers, environmental degradation and just wealth distribution?

    In science, effort is only “wasted” if it is expended repeating exactly something that did not work the first time. Finding out that something doesn’t work is still a conclusion and a suggestion for possible future research. And since there are by definition more ways that won’t achieve the desired result than will, there are a lot of blind alleys to be found.

    The real problem I have with the naysayers is that “the brain is just too complex to model accurately” sounds just a bit too close to “life is insanely complex, therefore it could not possibly have evolved” for comfort.

  60. says

    The argument from our ability to increase computing power amounts to nothing more than:
    "If we can write a Perl script today that does
    print "Hello I am Marcus\n";
    eventually, as computers get more powerful, we will have a vastly more complicated Perl script that can actually simulate Marcus, to the point where the Perl script can be said to think it is Marcus."

    Not an eternity to be desired.

  61. says

    @ Marcus, #68: Actually, I’d be more impressed with an AI that could understand Perl scripts written by other people …..

    (Says she who always uses strict and -w)

  62. says

    I’d be more impressed with an AI that could understand Perl scripts written by other people

    It already exists! It’s called a perl interpreter. ;)

    (My schooling was from the 1970s school of “if the compiler can understand it, it’s self-documenting code”)

  63. ravensneo says

    Exactly! It’s the same problem as teleportation–turning all the exact information in meat into digital (or some other) information. Although it broke my heart, reading “The Physics of Star Trek” years ago most clearly illustrated that the Transporter, with its Heisenberg Compensators, was the LEAST likely technology to come to pass (while others are already here of course–communicators, lasers, etc.) due to the impossibility of knowing the location and momentum of a particle at the same time without altering that information. So goes the very nerdy joke–when an electron gets pulled over for speeding and the officer tells him how fast he was going, he says “OH THANKS! Now I’m LOST!” (author unknown).
    I was so sad! Think of the problems the Transporter would solve! No traffic! Cheap travel! No coach seats! No parking!
    Also the time portal. *weeps*

  64. says

    In my not-so-humble opinion, the brain does not equal the mind, and that is the crux of the problem.

    The “me” that writes these words is different from the “me” that read the comments inspiring this comment – because reading those comments changed my brain and subsequently my mind.

    The best explanation via analogy I have read was in one of Pratchett’s “Science of Discworld” books. One of his coauthors compares the mind to a flame (fire) – it is an ongoing process that requires certain inputs (combustibles/nutrition) and a place to happen in (fireplace/brain), and once started through external stimulus it runs in the form of flames/thoughts, and as long as the process goes uninterrupted we can call it “the fire”/“the mind”. You can influence the process and its outcomes by manipulating the inputs and/or restricting the outputs; nevertheless it remains the “same” thing in a certain sense of the word.

    But once the process is stopped, it cannot be restarted – you can at best start another process in its place (we cannot restart stopped brains, but maybe someday we will), but all the information that was not stored in the matter but in the process itself (shape of the flames/thoughts) is irreversibly lost and can never be regained.

    It is not possible, and it will never be possible, to replicate a flame, and it will never be possible to replicate a mind. It may be possible to make some analog of “a picture” or “a movie” of a mind, and it might be possible at some point to repair damaged and stopped brains and start a new mind in them, but the continuity will be lost and so will the person. Just as Frankenstein’s monster had no sense of self prior to being revived – only some muscle memory and vague recollections.

  65. dannysichel says

    the various comments about copies remind me of Greg Egan’s “Learning To Be Me”, where everyone is given a tiny skull implant at birth(?) which observes every sensory impression and the reactions to it, so that it can learn to imitate those reactions. If the implant’s predictions ever vary from what actually happens, the implant is automatically adjusted until it can perfectly mimic its wearer’s thoughts. This usually takes decades to accomplish, and by the time your brain is physically decaying, your body is able to use the implant instead.

    (then stuff happens, and the rest is spoilers.)

  66. xnoarchive says

    Emergence remarked:

    Second, I’d like to go back to the usual point that gets brought up; it’s not that you’re “transferring” your mind to a computer, you’re just slowly destroying your own brain to make a copy of it in a computer that thinks it’s you. (..) Let’s just say that you somehow managed to scan your own brain without destroying it. Then there’d be two versions of you, and it’s not like you’d experience being both of them at the same time.

    This gets at an unstated assumption by the transhumanists: most likely they implicitly believe that if we can get to the point where we can scan and re-create in some form (silicon or biological) all the wiring and synaptic potentials and neurotransmitters of a living brain, we can also figure out how to translate that information into other forms. Necessarily so, in fact, if we propose to translate an organic human brain into a silicon substrate or a software version.
    Then once you’ve got that translation procedure, you get telepathy, because now you can translate what goes on in one organic human brain into some form that can be shunted into another organic human brain.
    But this leads to issues the transhumanists seem reluctant to deal with. Once you have software or hardware telepathy, you can also do things like merge representations of organic human brains, or do all sorts of processing on them (OR, AND, NOT, XOR, much more complex procedures), so now you’ve got group minds, a brain simulation with ethical processes deleted, or a brain simulation with the amygdaloid fight-or-flight response deleted, ad infinitum. This opens up a huge can of worms with regard to mind control, destruction or artificial creation of personalities, wrenching transformation of basic human responses. Suppose you designed a software mind to enjoy pain and fear pleasure? Suppose you deleted the survival instinct? Suppose you eliminated or pathologically enhanced its ability to read emotions in other humans?
    Suppose you altered a software mind by making it intelligent but non-self-aware? Suppose you edited a silicon brain so that it has Capgras’ Syndrome, where it believes the people it knows have been replaced by impostors? Suppose you edited a silicon brain so that it can’t recognize anything that’s a weapon? Suppose you edited a silicon brain so that it mistakes its wife for a hat? Is it still human? What does it even mean for a consciousness to be human in those circumstances?
    The very prospect of mind uploading founders on that basic philosophical quandary. How radically can you alter an uploaded mind before it ceases to be anything we would regard as human or conscious or self-aware? And how would we know? How do you measure that?
    We would need some metric for what “humanness” is and we don’t have one. We would need some objective metric for consciousness, and we don’t have one.
    The entire project of mind uploading seems like exorcism. It lacks even the remotest scientific basis. If you can’t objectively measure a demon, how can you exorcise it from a human body? If you have no objective definition or scientific metric for intelligence or consciousness or personality, how can you be sure you’re “uploading” or re-creating it? And please don’t cite pseudoscience like the Myers-Briggs tests; that stuff is junk science with no more credibility than astrology. We just have no verifiable scientific metrics at all for personality or consciousness, and if we can’t measure something, how in the world are we supposed to do science on it?

  67. xnoarchive says

    @56 ragdish noted:

    Consciousness is still a neural mystery…

    But more to the point, consciousness is also a conceptual mystery. You can’t do science on something until you can define it and measure it. What’s the scientific definition of consciousness? Of personality? How do you measure them objectively and repeatably in a falsifiable way?
    If you can’t, the entire scientific project is dead before you start.

    ragdish continued:

    … and the answer does not lie in the complex behavior of individual neurons. The neurons in the perisylvian language regions, occipital lobes, basal ganglia, etc. are not wholly different from each other with regard to overall physiology and biochemistry. The evidence is that you can damage those areas severely and still have a conscious individual. There is nothing special about the individual neurons that participate in the collective activity that gives rise to thought. Indeed, neuroscientists who have done substantial work in the field of consciousness (e.g. Christof Koch, Giulio Tononi) dispute the claim that you need to know the details of neuron A and B. Rather, what is still unknown is how information is processed among large numbers of neurons to give rise to subjective life. This raises the valid question of whether such patterns of information processing can take place among non-neuron substrates and the possibility of neural upload.

    Good point. Larger-scale patterns of activity seem to be more important than individual neurons and synapses. Certainly it is objectively and demonstrably true that you can surgically remove gobs of a person’s brain and they remain conscious. However, we also know that depending on the brain tissue surgically removed, an individual’s personality can be radically altered. We don’t know exactly why, or exactly how to do that repeatably. Brain injuries that produce extreme personality changes in one person don’t seem to do so in other people. There’s a large peer-reviewed literature in cognitive neuroscience about this in the area of stroke victims, hydrocephalics, and so on.

    But an even more important point is that it’s by no means clear that any kind of information is being processed in the brain. The brain is not a computer. The brain does not behave like a computer. There is no reason to believe that the analogy of “brain = computer” makes any sense or has any explanatory power.

    See “10 Important Differences Between Brains and Computers,” Chris Chatham, Scienceblogs, 27 March 2007. Also see “Why minds are not like computers,” Ari Schulman, The New Atlantis, Winter 2009. Also see “The Brain Is Not Computable: A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines,” Antonio Regalado, MIT Technology Review, 18 February 2013.

    Everything we know about organic brains says they’re more like a soup than a computer. Does anyone talk about uploading the taste of a distinctive soup into a computer? Organic brains mostly operate by neurochemistry — the action potentials serve mainly to release neurotransmitters at synaptic junctions. This is chemistry, not computer science. The entire conceptual model of brain = computer is completely misguided. The things that computers do which are brainlike they do very poorly and very inaccurately (face recognition, reading emotions, summarizing meaning, navigating a 3D world with partial 2D visual input), while the things that brains do that are computerlike they do even more poorly and even more inaccurately (adding, subtracting, dividing, multiplying numbers, storing & retrieving information accurately, following long chains of modus ponens logic flawlessly).
    Modeling a brain as a computer seems on the level of modeling a spoon as a hammer. Yes, you can sort of eat soup with a hammer, but not very well. Yes, you can sort of hammer things with a spoon, but very poorly. What evidence is there that a spoon offers a good conceptual working model for a hammer? Effectively none. I would argue that the same applies with computers and brains.
    In fact, we’re only trying to model brains with computers because the computer is the latest and spiffiest tool we’ve got, and it’s ready to hand and convenient. There is no other justification for the analogy brain = computer, any more than there was a credible justification for the analogy popular in Freud’s time that brain = steam engine (too much pressure builds and you get neuroses, etc.).

  68. xnoarchive says

    @58 Cory Yanofsky raises a fascinating question:

    My point is that we already know there’s a fair bit of slop in the specificity: thermal noise is irrelevant, sleeping-you is as much you as waking-you, meditating-you is as much you as furiously-angry-you and recently-concussed-you, etc. No doubt an inconceivably large number of degrees of freedom are critical for personal identity, but we know personal identity is robust to variation in many others.

    But actually, our courts recognize that sleeping-you is not the same as waking-you. There are court cases in which a husband or wife attacked (in some cases killed) their spouse while sleepwalking. Juries have held that the sleeping person is not responsible because they were in an altered state of consciousness. In effect, the juries have held that the sleepwalker is not legally the same person as the awake person. See the Wikipedia article on “Homicidal Sleepwalking” for citations of specific cases, such as the 1987 Parks case, the 2004 Lowes case, and so on. Just this year, in March 2015, Joseph Mitchell was found not guilty in a sleepwalking homicide.

    Recently-concussed-you may also be a completely different person than uninjured you. MRI studies show that a remarkably high proportion of violent prison inmates suffer from organic brain damage. See “Research Links Brain Damage & Violent Crime — USC Studies Point To Underlying Causes Of Violent Crime In Young Offenders,” Science Daily, 13 September 1997.

  69. xnoarchive says

    @62 jamiejag asks:

    How do we even know that a brain in isolation is you at all?

    We have pretty good evidence that it wouldn’t be. Antonio Damasio, one of the world’s leading neuroscientists, has proposed what he calls the somatic-marker hypothesis. Briefly, Damasio says that we think with our bodies as well as our minds, that emotions are fundamental to human reasoning and cannot be separated from conscious rational thought, and that emotions are bodily states. The logical conclusion is that a mind without a body won’t have emotions, and won’t be able to act intelligently. “The somatic marker hypothesis proposes that emotions play a critical role in the ability to make fast, rational decisions in complex situations.” (Wikipedia entry for SMH.)

    Evidence for the somatic-marker hypothesis comes from experiments like the Iowa Gambling Task. See “The somatic marker hypothesis: A neural theory of economic decision,” Antoine Bechara, Antonio R. Damasio, Games and Economic Behavior, Volume 52, Issue 2, August 2005, Pages 336–372.

    Moreover, the entire field of embodied cognition has grown up around the somatic marker hypothesis and the two-factor theory of emotion (viz., that emotions are bodily states which then get interpreted consciously in context) over the last 20 years or so, and you can find quite a lot of evidence in favor of it summarized in various books devoted to embodied cognition. See “At the root of embodied cognition: Cognitive science meets neurophysiology,” Francesca Garbarini, Mauro Adenzato, Brain and Cognition, Volume 56, Issue 1, October 2004, Pages 100–106. See also the book “Embodied Cognition” by Lawrence Shapiro, 2010. Also see “Embodied Cognition: A field guide,” Michael L. Anderson, Artificial Intelligence, Volume 149, Issue 1, September 2003, Pages 91–130.

    Last but far from least, Damasio cites some striking evidence for the crucial role played by emotions in intelligence in his book “Descartes’ Error”: Damasio recounts the case of an executive who suffered a stroke that left him unable to recognize the emotions of the people with whom he worked and left him with flattened affect. The exec scored superbly on standardized I.Q. tests, but failed completely at his job and had to be replaced because he could no longer solve everyday problems.

    All this evidence converges on the conclusion that intelligence requires emotions, which in turn arise from the body, and that a disembodied intelligence lacks the characteristics typically associated with functional human problem-solving. The simplest and most obvious toy example would be a hypothetical disembodied AI asked to reduce crime in a major city. The AI answers that the solution is simple: just kill everyone in the city. When the humans explain that this is not acceptable and they need another solution, the disembodied AI responds that the next best solution is to imprison everyone in the city. When the humans reject that solution as well, the disembodied AI then suggests sedating everyone in the city, and so on. Rinse, wash, repeat. Intelligence without emotions (i.e., without a body) is not intelligent.

  70. xnoarchive says

    To amplify Marcus Ranum’s rebuttal @68 to Marshall’s functional reductionist argument @66, the claim that we should be able to functionally duplicate portions of an organic brain and gradually replace the entire brain, then run that silicon or software brain at a higher clock speed or on more processors to make it smarter and let it repeat the process with itself, is a fallacy.

    In fact, it’s a variant of the well-known “underpants gnome” fallacy. Remember that one? Step 1: steal underpants; Step 2: ???? Step 3: Profit!

    The underpants gnome fallacy in AI is the same faulty reasoning: Step 1: functionally duplicate part of an organic brain; Step 2: ???? Step 3: Get smarter!

    It’s a fallacy for 4 reasons. First, functionally duplicating one aspect of one part of a human brain doesn’t mean you are functionally duplicating all aspects of that part of the human brain. A trivial counterexample: memory. You can functionally duplicate (even improve on) human memory by writing information down in books. Human memory is malleable, and false memories can be induced — that’s not true of a library. So a library is even better (functionally speaking) than a human memory. But by this line of reasoning, if we put enough books in a library, we should have a perfect functional duplication of a human memory, but that’s just not the way it works. It doesn’t work that way because human memory operates by spreading activation, whereas libraries operate through card catalogs, and a card catalog is much more limited and functionally crippled than the associative holographic-type recall of spreading activation that occurs in neural nets. Human memories do things that libraries can’t because of spreading activation.
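    To make the contrast concrete, here is a minimal toy sketch in Python of spreading activation over a handful of invented concepts and weights (purely illustrative, not a model of real memory): activating one concept passively raises the activation of everything associated with it, with no index lookup anywhere.

    # Toy spreading-activation sketch; node names and weights are invented.
    associations = {
        "puppy":  {"dog": 0.9, "fur": 0.6, "window": 0.3},
        "dog":    {"puppy": 0.9, "bark": 0.7},
        "fur":    {"dog": 0.5},
        "window": {"glass": 0.8},
        "bark":   {},
        "glass":  {},
    }

    def spread(start, steps=2, decay=0.5):
        """Push activation outward from one concept for a few steps."""
        activation = {start: 1.0}
        frontier = {start: 1.0}
        for _ in range(steps):
            next_frontier = {}
            for node, level in frontier.items():
                for neighbor, weight in associations[node].items():
                    boost = level * weight * decay
                    activation[neighbor] = activation.get(neighbor, 0.0) + boost
                    next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + boost
            frontier = next_frontier
        return activation

    # Activating "puppy" also partially lights up "dog", "bark", "glass", etc.,
    # which an exact-key card catalog simply cannot do.
    print(sorted(spread("puppy").items(), key=lambda kv: -kv[1]))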

    Reason 2 why functional duplication of individual brain modules won’t get to an AI is that human intelligence comes not from individual brain functions like Broca’s area or the ventromedial hypothalamus or the left temporal lobe, but the interaction of all of these specialized structures working together. The fallacy here is like saying that if you toss enough transistors together you get a supercomputer. No, it really matters exactly how you wire together all the transistors. The wrong wiring topology gives you a hunk of junk, not a supercomputer. The devil is in the details.

    Reason 3 why functional duplication of individual brain areas won’t produce AI is the fallacy that running a computer with a faster clock speed or more processors will make it smarter. But there’s no evidence of this. Such a computer will run faster, but it won’t necessarily be smarter. Many real-world problems are NP-hard (as opposed to being in P), and their complexity explodes as you increase the size of the problem. Humans are able to approximate solutions to these problems not by brute-forcing the problem and running our brains faster, but by finding clever workarounds to evade the combinatorial explosion. Guys like Ray Kurzweil blithely assume that merely cranking up the clock speed on a silicon brain will make it superintelligent. But there’s no evidence of that. Moreover, the history of computer science bears this out. Computers today are many millions of times faster than the computers of the 1950s, but computers today still hit a brick wall on simple basic problems like “Jane saw a puppy in the window and wanted it. Which did Jane want, the puppy or the window?” Cranking up the clock speed has not made our computers millions of times smarter over the last 60 years. They’re still dumb mindless machines. Intelligence means working cleverer, not working faster. Furious activity is useless without understanding.
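    To put numbers on that combinatorial explosion, here is a minimal illustrative sketch in Python (the city counts are arbitrary): a brute-force route search over n cities must examine n! orderings, and factorial growth swamps any conceivable gain in clock speed.

    # Toy illustration of combinatorial explosion: brute force must examine n!
    # orderings of n cities, so raw speed alone cannot keep up.
    from math import factorial

    for n in (5, 10, 15, 20, 25):
        print(f"{n} cities -> {factorial(n):,} orderings to check")

    # 25 cities is already roughly 1.5 x 10^25 orderings; a machine a million
    # times faster buys only a few extra cities. The way out is cleverer search
    # (heuristics, pruning, better representations), not a faster clock.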

    Reason 4 why the idea of bootstrapping to AI from a functional duplication of individual brain regions is a fallacy is that the ability to design a better computer doesn’t depend on raw intelligence. Designing a better computer depends on imagination and creativity. Even if we could figure out how to wave a magic wand and make a smarter version of our brains, we have no idea how to make a more creative and more imaginative version of our brains. And across the board, Nobel laureates report that the crucial traits that let them crack really tough problems involved creativity and originality, not raw intelligence. In fact, Nobel laureates like Richard Feynman tested much lower than you’d expect. Feynman’s tested IQ was 125. Yet he was able to solve problems that eluded other, ostensibly smarter physicists. Why? Because Feynman had imagination and creativity that they lacked, and we don’t know how to measure imagination and creativity numerically. We don’t really know how to measure intelligence either — IQ tests showed, for example, that new immigrants from Italy in the 1930s scored in the mentally retarded range. The children of those Italian immigrants, however, scored at the mean. Lastly, the Flynn Effect shows that average IQ scores have risen steadily from one generation to the next.

    All these results strongly suggest that an IQ score measures acculturation, rather than smartness. Thus we’re clearly not measuring intelligence with IQ tests, but something else. So if we can’t numerically measure creativity and imagination, or intelligence, how do we increase them? You can’t do science on things you can’t measure. That’s basic.

    So for 4 important reasons, the notion of bootstrapping ourselves to a smarter artificial brain by functionally reproducing aspects of the operation of parts of our organic brains successively is dead in the water. It’s the same kind of fundamentally flawed reasoning as thinking we can make a really big plane fly by starting out with a small engine that flaps a small plane’s wings, and then scaling up to a much heftier nuclear-powered engine that flaps a 767 jet’s wings really really fast. The whole approach is wrong, and based on a set of fallacies.

  71. multitool says

    I agree with @40 brett. Personality transfer is more plausible if you do it slowly rather than quickly.

    Individual human identity is over-emphasized. We can already turn one conscious mind into two separate ones -who both believe they are the same person- just by cutting your two brain hemispheres apart at the corpus callosum. Then, what if we take it one step further and put each half in a separate body?

    Our assumptions have been poisoned by religious tradition that the self is some kind of indivisible point-object soul, and not a fluid that might be poured into more than one glass.

  72. prae says

    I’d say, start with cellular prostheses. That is, make artificial neurons which are compatible with the natural ones, and start from there. New brain cells which can go in and replace old ones would be a REALLY good start. And if that works, you could try to add features like a link for data collection and transmission in order to learn more about what a neuron actually does, then try to emulate THAT, and finally upgrade the prostheses so that they can communicate their current states to the outside and receive updates from there as well. Then I guess you could start slowly moving them one by one into the simulation, while still having them connected to the real brain and the body through the aforementioned data link…
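    As a minimal sketch of that idea in Python (everything here is an invented simplification — a leaky integrate-and-fire cell with a telemetry hook, nothing like a real prosthetic design):

    # Toy "prosthetic neuron": integrates input, fires past a threshold, and
    # exposes a data link for reporting state and accepting outside updates.
    class ProstheticNeuron:
        def __init__(self, threshold=1.0, leak=0.9):
            self.potential = 0.0
            self.threshold = threshold
            self.leak = leak

        def receive(self, weighted_input):
            """Integrate input from upstream (real or artificial) neurons."""
            self.potential = self.potential * self.leak + weighted_input
            if self.potential >= self.threshold:
                self.potential = 0.0
                return True   # spike sent downstream
            return False

        def report_state(self):
            """The outward data link: expose current state for logging/simulation."""
            return {"potential": self.potential, "threshold": self.threshold}

        def apply_update(self, new_threshold):
            """The inward data link: accept an adjustment from outside."""
            self.threshold = new_threshold

    # Drive the cell and watch the telemetry.
    cell = ProstheticNeuron()
    for stimulus in (0.4, 0.5, 0.6):
        spiked = cell.receive(stimulus)
        print(spiked, cell.report_state())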

  73. Anders Kehlet says

    xnoarchive@77: Your toy example is silly. Why would you expect a satisfying answer when you’re withholding requisite information? The AI would need an understanding of human psychology/sociology as a bare minimum, though it would probably be a good idea to explicitly state the value of life/happiness/freedom just to be sure.

  74. unclefrogy says

    this sounds like a very flexible and subjective set of concepts that would be near impossible to pin down to anything even other than vague concepts

    though it would probably be a good idea to explicitly state the value of life/happiness/freedom just to be sure.

  75. says

    xnoarchive @52:

    “Mind” is the phlogiston of neuroscience.

    If “mind” is an illusion, does arguing that a copy isn’t the original make any sense at all, beyond pandering to a superstition about “continuity”?

    If I have the “original” PZ and a “copy” PZ in my waiting room, while it makes sense to treat them as individuals as far as their experience after copying is concerned, then for the purposes, say, of arranging a university’s teaching schedule, does it make a whit of sense to treat one as more privileged than the other?

  76. says

    Anders Kehlet @ 81:

    The AI would need an understanding of human psychology/sociology as a bare minimum, though it would probably be a good idea to explicitly state the value of life/happiness/freedom just to be sure.

    Well, that would be interesting, given that for many people all over the world, the value of life/happiness/freedom is highly dependent on making sure that other people have no quality of life, little happiness, if any, and little freedom, if any.

  77. says

    @76.

    Legal culpability of unconscious action is more about mens rea than personal identity. I’ll grant the concussion one, though — I agree that brain damage that changes brain functioning would have a non-negligible impact on the question of personal identity.

  78. Anders Kehlet says

    unclefrogy@82: Yet some smart people managed to write that declaration of human rights thing. You may have heard of it.

    Caine@84: A lot of people are wrong about a lot of things.