Singularly silly singularity


Since I had the effrontery to criticize futurism and especially Ray Kurzweil, here’s a repost of something I wrote on the subject a while back…and I’ll expand on it at the end.



Kevin Drum picks at Kurzweil—a very good thing, I think—and expresses bafflement at this graph (another version is here, but it’s no better):

[Image: Kurzweil’s “Countdown to Singularity” chart]

(Another try: here’s a cleaner scan of the chart.)

You see, Kurzweil is predicting that the accelerating pace of technological development is going to lead to a revolutionary event called the Singularity in our lifetimes. Drum has extended his graph (the pink areas) to show that, if it were correct, these changes ought to be occurring at a still faster rate now…something we aren’t seeing. There’s something wrong in this.

I peered at that graph myself, and the flaws go even deeper. It’s bogus through and through.

Kurzweil cheats. The most obvious flaw is the way he lumps multiple events together as one to keep the distribution linear. For example, one “event” is “Genus Homo, Homo erectus, specialized stone tools”, and another is “Printing, experimental method” and “Writing, wheel”. If those were treated as separate events, they would have inserted major downward deflections in his chart a million years ago, and about 500 to a few thousand years ago.

The biology is fudged, too. Other “events” are “Class Mammalia“, “Superfamily Hominoidea“, “Family Hominidae“, the species “Homo sapiens“, and the subspecies “Homo sapiens sapiens“. Think about it. If the formation of a species, let alone a subspecies, is a major event about a million years ago, why isn’t each species back to the Cambrian awarded equivalent significance? Because it wouldn’t fit his line, of course. As he goes back farther in time, he’s using larger and larger artificial taxonomic distinctions to inflate the time between taxa.

It’s also simplifying the complex. “Spoken language” is treated as a discrete event, one little dot with a specific point of origin, as if it just poofed into existence. However, it was almost certainly a long-drawn-out, gradual process stretched out over hundreds of thousands of years. Primates communicate with vocalizations; why not smear that “spoken language” point into a fuzzy blur stretching back another million years or so?

Here’s another problem: cows. If you’re going to use basic biology as milestones in the countdown to singularity, we can find similar taxonomic divisions in the cow lineage, so they were tracking along with us primates all through the first few billion years of this chart. Were they on course to the Singularity? Are they still? If not, why has the cow curve flattened out, and doesn’t that suggest that the continued linearity of the human curve is not an ineluctable trend? This objection also applies to every single species on the planet—ants, monkeys, and banana plants all exhibit a “trend” if you look backwards on it (a phenomenon Gould called “retrospective coronation”), and you can even pretend it is an accelerating trend if you gin it up by using larger and larger taxonomic divisions the farther back you go.

Even the technologies are selectively presented. Don’t the Oldowan, Acheulian, and Mousterian stone tool technologies represent major advances? Why isn’t the Levallois flake in the chart as a major event, comparable to agriculture or the Industrial Revolution? Copper and iron smelting? How about hygiene or vaccination?

I’ll tell you why. Because not only is the chart an artificial and perhaps even conscious attempt to fit the data to a predetermined conclusion, but what it actually represents is the proximity of the familiar. We are much more aware of innovations in our current time and environment, and the farther back we look, the blurrier the distinctions get. We may think it’s a grand step forward to have these fancy cell phones that don’t tie you to a cord coming from the wall, but there was also a time when people thought it was radical to be using this new bow & arrow thingie, instead of the good ol’ atlatl. We just lump that prior event into a “flinging pointy things” category and don’t think much of it. When Kurzweil reifies biases that way, he gets garbage, like this graph, out.

Now I do think that human culture has allowed and encouraged greater rates of change than are possible without active, intelligent engagement—but this techno-mystical crap is just kookery, plain and simple, and the rationale is disgracefully bad. One thing I will say for Kurzweil, though, is that he seems to be a first-rate bullshit artist.

I don’t think he’ll be sending me a copy of his book to review.


I got one thing wrong in my original article: he did send me a copy of his book, The Singularity Is Near! I even read it. It was horrible.

Most of it was exactly like the example above: Kurzweil tosses a bunch of things into a graph, shows a curve that goes upward, and gets all misty-eyed and spiritual over our Bold Future. Some places it’s OK, when he’s actually looking at something measurable, like processor speed over time. In other places, where he puts bacteria and monkeys on the Y-axis and pontificates about the future of evolution, it’s absurd. I am completely baffled by Kurzweil’s popularity, and in particular the respect he gets in some circles, since his claims simply do not hold up to even casually critical examination.

I actually am optimistic about technological progress, and I think some of the things he talks about (nanotechnology, AI, etc.) will come to pass. But I do not believe in the Singularity at all.

Nanotech is overhyped, though. Its boosters seem to be aspiring to build little machines that do exactly what bacteria and viruses do right now…and they don’t seem to appreciate the compromises and restrictions that are a natural consequence of multifunctional systems. I also don’t believe in the gray goo nightmare scenario: we’re already surrounded by a cloud of minuscule replicating machines that want to break our bodies down into their constituent molecules. We seem to cope, usually.

I think we will develop amazing new technologies, and they will affect human evolution, but it will be nothing like what Kurzweil imagines. We have already experienced a ‘singularity’ — the combination of agriculture, urbanization, and literacy transformed our species, but it did not result in a speciation event, nor did it bring quite the abrupt change an Iron Age Kurzweil might have predicted. Probably the most radical evolutionary changes would be found in our immune systems as we adapted to new diets and pathogens, but people are still people, and we can find cultures living a Neolithic lifestyle and an information-age lifestyle, and they can still communicate and even interbreed. Maybe this information age will have as dramatic and as important an effect on humanity as the invention of writing, but even if it does, don’t expect a nerd rapture to come of it. Just more cool stuff, and a bigger, shinier, fancier playground for humanity to gambol about in.

Comments

  1. says

    The singularity cannot physically exist. The most obvious reasons include bandwidth and storage.

    In order to store infinite information we would need infinite energy. Since we cannot obtain infinite energy, the “singularity” will just be a spike in the graph.
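    (One standard way to make that precise, not something spelled out in this thread, is the Bekenstein bound: the information \(I\) storable in a region of radius \(R\) containing total energy \(E\) satisfies

    \[ I \;\le\; \frac{2\pi E R}{\hbar c \ln 2}, \]

    so finite energy in a finite region means finitely many bits.)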

  2. says

    Am I the only one starting to suspect that PZ is working through a list of groups who’ll flood his blog with traffic if he pisses them off?

    Creationist Protestants, Catholics (and sort of Muslims), AGW deniers, libertarians, singularitarians…

    Judging by the high-traffic blogs on WordPress, I’m guessing he’ll be taking a swipe at PUMAs next, or possibly Arsenal fans.

  3. says

    Thank you! So many people here are on a Kurzweil kick. And they’re all really defensive about it, too. The whole Singularity concept is one of the dumbest ideas I’ve heard come out of smart people in years. It’s not even good sci fi.

  4. says

    The main point about the advancement of computing power, the actual future of it, is simply not addressed here. And no, I am not talking about the technological singularity.

  5. says

    Actually, when you get right to the point of his description of what will happen to humans at the point of Singularity, it’s completely absurd. I say this as a person who’s worked with computers on a professional basis for many years now. Stints in IT and all. Kurzweil assumes that because a computer has X processing power and the human brain has a comparative processing power of X or below (how he gets those numbers is a meaningless guesstimate by the way), you can magically cram a human brain into a computer circuit.

    That’s his vision in a nutshell.

    Reminds me of Max Heindel’s spiritual alchemy and Madam Blavatsky’s Ancient Mysteries tomes, just peppered with modern technobabble. My shameless plug of a complete write-up on how many levels Kurzweil is wrong below:

    http://worldofweirdthings.com/2008/10/22/ray-kurzweil%E2%80%99s-digital-pipe-dreams/

  6. Geb says

    The singularity has been happening ever since the Industrial Revolution; it’s just that it happens more slowly than people think, mostly because of the economics of new projects.

    Technological developments don’t happen instantaneously: they start out as barely functional prototype processes and then become better and cheaper until they are usable. Almost by definition they are brought into the world to be used once they pass the point at which they will make money, and because of the way money works, this means that the work saved by a new project is almost exactly the same as the work expended on it. Later on, technology does get better, of course.

    An alternative way to look at this is that we have lots of people willing to use new stuff but very few innovative researchers.

  7. Steve says

    Yep, there’s a big disconnect between biological and technological evolution. Because we built cities, were we a new species? Frankly, messing with the Homo genotype should be well baselined and cautious.

  8. Brownian says

    I was under the impression that everyone knew the development of stone tools was a drawn-out process involving numerous innovations, many of which were nothing short of revolutionary.

    I guess if you’re the type whose jaw drops every time Apple releases a new ‘generation’ of iPod, then yeah, you’d really be tempted to think the development of stone tools was a relatively minor, “Hey Oog. Look what happens when I bang these two not-plants together!” “Neat-o! Well, now that that’s done, let’s sit around for the next 2.5 million years until someone invents the city-state”-type thing.

    Maybe I’d be tempted to fudge the numbers too if I were blinded by the idea of a Cool World future in which I get to sleep with an AI Kim Basinger.

  9. says

    If there is any such “singularity” it will most likely be our extinction.

    And Kobra #4,

    Why shouldn’t technology regress? Technology has regressed many times in human history.

  10. says

    P.S.: I hear a lot about people who consider today’s leaps in technology to be the fastest in human history. The Industrial Revolution was a big deal, yes, but what it really did was build on a base that allowed more and more inventions to be derived from it: electricity.

    Throughout our past there have always been turnkey inventions which spun off all kinds of useful derivatives and created their own little bursts of hyper-innovation. In the classical world there were actually steam engines for crying out loud:

    http://www.smith.edu/hsc/museum/ancient_inventions/steamengine2.html

    I wonder where that is on Kurzweil’s chart…

  11. Chris says

    Yes, thank you, Kurzweil is definitely a kook. As someone who has worked in AI, albeit briefly, I just don’t think he understands the complexity of what he’s talking about. I’m not saying that there’s a magical barrier to making a computer artificially intelligent – there isn’t, and I’m pretty confident we’ll get there eventually. I just wouldn’t bet on it in my lifetime.

    Additionally, there are still lots of questions to be answered regarding what having AI even means: does it mean it can have a conversation with someone? Pass the Turing test? Is there a certain measurable level of complexity it has to reach where it is equal to the level of complexity of our own brains? Do we wait until we have simulated an entire human brain…and does simulating a human brain even make sense outside the context of a human body and our own world?

    I just think it’s far too complex for us to see it soon…at least for us to see walking, talking Terminator-like robots, or things like the Matrix and the other sci-fi stuff. People also blow relatively small discoveries in AI out of proportion – just because we can simulate a human-like characteristic mostly means we developed an algorithm to APPROXIMATE a human characteristic, leaving behind all of the complexity.

    I could be wrong here – I only worked with AI for a short time, and there could be some game-changing discovery or insight that I haven’t thought about or heard of…

  12. Alex Besogonov says

    I hope that we’ll see singularity.

    First, it _might_ be possible to somehow build infinite speed computers (or some other forms of hypercomputers). Maybe with some clever wormhole engineering. It might be possible to exceed the speed of light.

    Second, I fully expect human-like computer AIs. Also, AIs should be much more efficient in colonizing the Solar System and other stars.

  13. says

    Kobra@10: In fairness, if they want to define “singularity” as a term of art with a different meaning than in mathematics, that’s fair enough. After all, mathematicians redefined “topos” with a different meaning than literary theorists were already using.

  14. says

    They always know that if they can proclaim a great revolution, or other dramatic change, people will buy the book and try to figure a way to make money off of it.

    Essentially it’s the same problem as ID and other bullshit like it. It’s dealing with a kind of metaphysical claim, not the actual causes and effects, none of which really fit neatly on a line (other than a causal one, of course) leading through history. History is linear, but it is contingent, not a working out of some god/inexorable fact behind it all.

    The fact is that information processing may not continue on the same path through time. More importantly, it is not likely to have similar effects through time, as even now home computers don’t need much boost in power to do what people want them to do, except for the gamers.

    Kurzweil’s nonsense does fall under what some disparagingly call “scientism,” a kind of belief in science as providing redemption and heaven for those who no longer believe the old fables. No, that’s not what will happen, it’ll just slog along improving knowledge, and generally helping us to cope with the accumulating problems. There’s no guarantee that it will truly be able to cope with global warming, etc., however, although it is all we really have to attempt to do so without some massive die-off of humanity.

    Glen D
    http://tinyurl.com/6mb592

  15. Eric says

    PZ:

    I also don’t believe in the gray goo nightmare scenario: we’re already surrounded by a cloud of minuscule replicating machines that want to break our bodies down into their constituent molecules. We seem to cope, usually.

    Yes, we’ve coped with replicating machines that evolved in parallel with us. We haven’t ever had to cope with replicating machines that were intelligently designed. Evolution is stupid and blind and bad at searching design space for efficient designs. Humans are much better, and most likely would not have too much trouble coming up with a design our bodies can’t cope with.

    @1:

    The singularity cannot physically exist. The most obvious reasons include bandwidth and storage.

    *facepalm*
    Has anyone ever claimed that we’ll have infinite bandwidth and storage? I don’t think even Kurzweil claimed that.

    Full disclosure: I would put a high probability on a singularity occurring within the next 100 years, but not for the reasons Kurzweil lists. His work is somewhat interesting, but not all that relevant, in my opinion. If you want to attack an idea, attack its strongest arguments, not its weakest. Eliezer Yudkowsky‘s writing would be a place to start.

  16. CJO says

    I fully expect human-like computer AIs.

    Why?
    Perhaps you mean something like massively parallel neural-net architectures capable of approximating or even exceeding human-CNS-level processing speed.

    But “human-like” I doubt. Too much of what makes us “human-like” is a result of embodiment and embeddedness of that body in a social world from day one.

  17. Reader5000 says

    I’d like to echo Geb’s comments @10, and give another example.

    Scientific discovery is another area that won’t become accelerated to the point of verticality. Sure, a super-AI could come up with a million new hypotheses in a second, but then it will take time to build the experimental apparatuses (?) to test them. Don’t forget the replication, peer review, and other things that business types don’t find sexy.

    And what happens when some of those hypotheses are shown to be mistaken? I guess Mr. Cyberbrain isn’t omniscient.

    And then you need time for the engineers to incorporate the new scientific understanding into new technologies. Which is further slowed by the environmental and ethical brakes. Will the AI’s be able to understand the complex interaction of technologies, ecologies, and sociologies? And then there’s the rate at which new toys become part of our lives, or don’t. Think of all the new gadgets of the last fifty years that no one was interested in. Monorails? The Clapper? Oh, yeah, they’re ubiquitous.

    And even if we could accelerate more and more, does anyone seriously expect to upgrade their brain-chip at exponentially shorter intervals? Why bother? Faster-faster-forward isn’t living.

  18. Andrew says

    I had a post late in the other thread, and now PZ stole most of my thunder, but the version of this graph that I’ve seen around most is here:
    http://singularity.com/charts/page19.html

    It’s got 15 different “data sets”, while the graph in this post only has 1. The thing is, the “data sets” aren’t independent. The same event gets repeated over and over, with the date of the invention fudged to make it show up at the appropriate point in each “data set”. Writing shows up in 9 of the 15 data sets, with dates ranging from 40,000 years ago to 4000 (and an amusingly precise 4907).

    Most of the time-line covered in the chart is fairly arbitrarily chosen evolutionary milestones, as PZ points out. History since the Neolithic Revolution occupies a tiny chunk of the time-line, and is replete with important events such as “Mayan civilization; Sung Dynasty China; Byzantine empire; Mongol invasion; crusades” occurring 1000 years ago (the Mongol invasions were 800 years ago, and Maya civilization was at its peak ~2000 years ago and had collapsed completely 1000 years ago).

    If the singularity is really getting closer, we should be seeing developments that exert a revolutionary change on society every couple of years now. All that’s happened in the 30 years I’ve been alive has been widespread use of home computers (and increased processing power), widespread use of the internet, and lots of cell phones. And that’s all been pretty much around for 15 years now with no further revolutionary developments. My grandmother lived through the widespread adoption of: automobiles, airplanes, radio, electric lights, telephones. She was alive for both the invention and the wide adoption of: television, antibiotics, nuclear weapons, satellites, computers, internet and cell phones. There’ve basically been no revolutionary technologies adopted in the last 50 years, except the internet.

  19. Helioprogenus says

    Though most of the singularity ideas are bullshit, some of the possibilities may be prescient. We know that the neuronal networks in our brain are immensely complicated, but perhaps, with the increased power of computing and systems analysis, we will come to a point where we can replicate networks that mimic those of our neurons. Perhaps even the creation of something that approximates the complexity of our brains, which may develop what we consider sentience or even consciousness. Now, would it be too difficult to imagine somehow mapping the neuronal circuitry in our brains, and then extrapolating it into some kind of matrix, with algorithms that mimic our senses? The point is, wouldn’t it be possible, perhaps a thousand years from now, to somehow preserve our consciousness? In a way, that would transcend biological evolution.

    A lot of Kurzweil’s support for this comes from his immense fear of death. His contrived nonsense is all a result of his fear of his own demise. He wants to believe that within our lifetime, we’ll be able to preserve consciousness, and in doing so, he rejects critical thinking. Yet, just because his notions are flawed does not mean that preserving human consciousness sometime in the future is impossible. The road we travel is predicated on envisioning the possibilities. We just have to be honest with ourselves, unlike Kurzweil’s futile fantasies.

  20. George says

    It is one thing to write things down in such a way that it seems to expose a trend and linkage. It is quite another thing to actually find a relationship among the things.

    He writes down events – but does nothing to confirm that these events are related in a way that allows meaningful predictions about the future.

  21. Colin says

    Just another millennial ideology, no? Dollars to donuts you could find the same thing 100 years ago – ZOMG, electricity! But it’s an attractive misuse of Darwin — a student said to me “hasn’t science proven that everything evolves” by which she meant that there was a spirit in everything making everything better all the time…

  22. Lurkbot says

    First of all: no argument that Kurzweil’s “Singularity” is a bunch of hooey. I’m willing to cut him more slack on his defense of Strong AI, though, such as in his book The Age of Spiritual Machines.

    I’ll always have a soft spot for him in my heart, however, for the way he took the challenge of defending Strong AI against the Discovery Institute’s litany of losers, Searle, Dembski, et al., in their own book Are We Spiritual Machines?: Ray Kurzweil Confronts the Critics of Strong AI. He states his position in the first part of the book, then each of these maroons contributes a chapter criticizing it, and he takes them to the woodshed in the last third.

    What’s delicious is, they had no idea how thoroughly they’d been eviscerated, and actually published the book! Gotta love it that a demonstrated loon like Kurzweil could take them down so thoroughly, and they didn’t even realize it! So, yeah, Ray gets a few Brownie Points in my book for that.

  23. Eric says

    @23:

    Sure, a super-AI could come up with a million new hypotheses in a second, but then it will take time to build the experimental apparatuses (?) to test them.

    The whole point of intelligence is that it’s a more efficient search of answer-space. Computers can already do what you’re talking about – that’s basically how Deep Blue played chess. Intelligence is about optimizing the searching so you’re not coming up with a million obviously false hypotheses, but rather focusing on the few that are likely to be true.
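    As a toy illustration of that difference (an invented example, nothing to do with Deep Blue’s actual internals), here is a sketch in Python: blind generate-and-test barely dents a 32-bit search space in thousands of tries, while even a dumb hill-climber prunes its way to the target.

    ```python
    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4            # arbitrary 32-bit target

    def score(cand):
        # Number of positions matching the target: the "hypothesis quality".
        return sum(a == b for a, b in zip(cand, TARGET))

    def blind(trials=10_000):
        # Unoptimized search: generate random candidates, keep the best score.
        return max(score([random.randint(0, 1) for _ in TARGET])
                   for _ in range(trials))

    def hill_climb(steps=10_000):
        # Focused search: flip one bit at a time, keep non-worsening moves.
        cand = [random.randint(0, 1) for _ in TARGET]
        for _ in range(steps):
            i = random.randrange(len(cand))
            trial = cand[:i] + [1 - cand[i]] + cand[i + 1:]
            if score(trial) >= score(cand):
                cand = trial
        return score(cand)

    print(blind(), hill_climb())   # hill_climb reliably reaches 32/32
    ```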

    I’d like to point out too that there are different schools of the singularity. Yudkowsky explains them pretty well here. It looks like pretty much everyone here uses the fourth definition, though.

  24. says

    Some places it’s OK, when he’s actually looking at something measurable, like processor speed over time.

    Just because marketers have stamped numbers on it doesn’t mean that the number shown means something real. Also, there are some very real limits we’re starting to hit already in the processor speed race.

    We’re nearly tapped out as far as simply making the processors go any faster. The processor makers know this already, and it’s rare to see a processor with a single core these days – what this means is that your new CPU chip capable of N megaflops is actually two (or four, or more) processors welded together, each contributing its fraction of those megaflops.

    The overly optimistic answer to this is “but we’ll process more in parallel!”. The two reality-buzzkill answers to this are that we humans so far have a lousy track record at developing parallel algorithms for most problem domains and that the extra on-chip speed doesn’t help if you can’t move data to and from the chip fast enough to take advantage of it. Data pathways haven’t been getting faster at anything even vaguely near the rate of processor speed.
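    A back-of-the-envelope way to see that ceiling is Amdahl’s law. A minimal sketch, assuming a workload that is 90% parallelizable (a number picked purely for illustration):

    ```python
    # Amdahl's law: overall speedup from n cores when only a fraction p of
    # the work parallelizes; the serial remainder (and, in practice, the
    # data movement described above) caps the payoff.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 4, 8, 64):
        print(f"{n:>2} cores -> {amdahl_speedup(0.9, n):.2f}x")
    # 2 -> 1.82x, 4 -> 3.08x, 8 -> 4.71x, 64 -> 8.77x: nowhere near n-fold.
    ```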

  25. Aaron Luchko says

    The criticism of Kurzweil is definitely valid.

    But as to the possibility of a technological singularity:

    There’s nothing magical about human brains that couldn’t be replicated by a sufficiently advanced computer and AI.

    On the day we design an AI with an intelligence greater than a human’s, and the ability to use that intelligence to improve its own structure, how do we not have a singularity?

  26. says

    Alex Besogonov #17,

    No, infinitely powerful computing is not possible in the sense of existential computing, done in an arena such as the universe, multiverse, or megaverse, etc. But it could be possible at bizarre levels where inexistence and existence are no longer different, from where the laws of the universe arise.

    By the way, Raymond Kurzweil does not mean a singularity in the sense of an infinity, but rather a point after which changes are very hard to predict from our vantage. Ray does talk about the problem that we will eventually lose the ability to make computers denser, at a theoretical maximum density called Computronium; as I said, infinitely powerful computing is not possible.

  27. says

    I agree with Eric that Eliezer Yudkowsky’s stuff is well worth a look. He certainly deals with all the objections to taking strong AI seriously that are likely to come into the head of someone who isn’t an expert in the area.

  28. says

    PZM makes good points about Kurzweil’s sloppiness.

    But the basic idea of singularity is older than Kurzweil. It is the unavoidable riddle of AI:

    If a human can (eventually) invent a machine smarter than himself, then can that machine also invent a machine smarter than itself? And being smarter, will it do it more quickly?

    If the answer is yes, we have a divergent series. Imagine the real-world outcome.

    John

  29. J.D. says

    I think you are downplaying nanotech a bit. I agree that it’s overhyped in terms of media play. However, that is more a function of us not being very good at it yet. I posit that nanotech is the next big revolutionary technology, one that will have severe impacts on economic and social structure. Gray goo is not relevant to that. I don’t claim it’s all that near; I think the big breakthroughs in nano manufacturing may be beyond our lifetime, but then again maybe not.

    I also don’t fathom the pooh-poohing of AI in this thread. If you really don’t think there is anything metaphysical about being human, that we are fundamentally very complex stimulus/response mechanisms, then there is no fundamental barrier to AI being equivalently “alive”.

  30. says

    I agree the graph is bogus, but the concept of the Singularity is not, at least conceptually. It is just the obvious extension of the existing evolutionary scheme. The history of life shows many points in time when organisms evolved better ways to evolve, resulting in various “explosions”. Likewise, over time technology has incorporated changes that increase the rate of technology development. Can that trend continue? Who knows! One can easily imagine new innovations in augmented intelligence allowing that kind of process to continue to accelerate. Not so easy is to see the step after that. Perhaps the process will slow, perhaps it will accelerate even faster, perhaps the inability to cope with technological change will result in devastation and regression. Trying to predict the long-term effects is a fool’s game, since there are too many variables. For a very interesting and enjoyable introduction to the concept, try reading the story in which the term was coined, “Marooned in Realtime” by Vernor Vinge. Vinge wisely put forth the concept without presuming to predict what the Singularity would actually be like. “Marooned in Realtime” works on multiple levels, both as SciFi and as a murder mystery. It is one of my favorite books.

  31. JackC says

    I am having a real problem with these graphs mechanically. Is it normal to have a logarithmic scale between equally spaced steps? That just looks wrong to me – but it has been a long time. When I think of a logarithmic scale, I imagine the first step being say 10^0–10^1, then an equally spaced jump to 10^10, and then 10^100, and so on.

    I am seeing log divisions between each even-stepped division. That just doesn’t feel right.

    But then, the entire thing doesn’t feel right. Gut reaction says the concept is ridiculous, and I would bet that if it were even possible to plot this stuff “correctly” the graph would be all over the map – and perhaps flattening out dramatically.

    Am I missing something? The brain is getting old.

    I DO like his synthesizers though.

    JC

  32. says

    If a human can (eventually) invent a machine smarter than himself, then can that machine also invent a machine smarter than itself? And being smarter, will it do it more quickly?
    If the answer is yes, we have a divergent series. Imagine the real-world outcome.

    Not if the increments of improvement fall away (and there is good physical reason to assume they will). A dizzyingly rapid increase perhaps, but certainly a bounded one.
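    To make the bounded case concrete (a toy model with invented numbers, not anyone’s actual forecast): suppose each machine improves on its designer by a factor that shrinks geometrically,

    \[ c_{k+1} = c_k \left(1 + 2^{-k}\right) \quad\Longrightarrow\quad c_\infty = c_0 \prod_{k=1}^{\infty}\left(1 + 2^{-k}\right) \;\le\; c_0\, e^{\sum_{k\ge 1} 2^{-k}} = c_0\, e. \]

    Every generation is strictly smarter than the last, yet the whole sequence stays bounded, so recursive self-improvement alone doesn’t guarantee divergence.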

  33. Chris says

    I think the limits of computing aren’t the problem at all when it comes to AI – I have confidence that we will get there eventually in terms of processor power and storage space.

    The one thing that in my opinion could be ultimately game changing in almost all areas of computing would be actual, real, usable quantum computers.

  34. Chris says

    A lot of singularity talk is silly, but when you boil it down I think it really comes to three ideas:

    1 – Immortality is possible
    2 – Human level Artificial Intelligence is possible
    3 – We are close to achieving these things

    I feel certain that 1 and 2 are true, whereas 3 is much more up for dispute.

    Immortality is a matter of curing diseases and ensuring the body can maintain itself indefinitely: no easy task, but certainly not an impossibility. The human brain is a machine, a natural creation endowed with no supernatural attributes – knowing this, why couldn’t a system that is similarly creative, complex and intelligent be created on computers?

    Whether these will happen in our lifetimes is something nobody can know for sure, but they are different from such science fiction as faster than light travel in that we know for sure that intelligent machines (our brains) are possible.

  35. Eric says

    @40:

    A dizzyingly rapid increase perhaps, but certainly a bound one.

    Has anyone predicted an unbounded increase? I don’t recall even reading Kurzweil saying that, though it’s been a while since I read The Singularity Is Near.

  36. Matt says

    >>>If you really don’t think there is anything metaphysical about being human, that we are fundamentally very complex stimulus/response mechanisms, then there is no fundamental barrier to AI being equivalently “alive”.

    This really is the $64,000 question.

    I like to think of myself as an atheist, and yet when I try to fathom AI, I don’t see how we could construct or grant this equivalence.

  37. JackC says

    This thread is just crying out for a mention of Douglas Adams, Deep Thought and the Electric Monk. Why has no one brought them up yet?

    JC

  38. says

    Wow — Amanda Palmer and a critique of Ray Kurzweil on Pharyngula all in the same day. I don’t know about your Singularity, but my Rapture of the Nerd just happened about thirty seconds ago.

    BRAIN GO FOOM.

  39. Joshua BA says

    About AI not being human-like:
    What is to keep us from making a shell for the AI that mimics a human body? Maybe you could even raise it as human, creating a series of slightly bigger/different bodies as it gets older to simulate the experience of growing. Would it then have human-like intelligence? (or maybe we could invent something like a holodeck so it could be raised in an environment where it is, at least in ability, no different in form than a human)

    Would someone born without the use of their limbs also not possess “human-like” intelligence (this is not an appeal to emotion. I am actually curious as to what the answer would be. If human-like intelligence comes from experiencing the world as an average human does then it seems the answer should be ‘no’)?

  40. says

    Eric: “Has anyone predicted an unbounded increase?” I was responding to the text I quoted, where John Atkeson @35 talked about a divergent (and implicitly monotone) sequence. That would have to mean unbounded growth.

  41. J.D. says

    I like to think of myself as an atheist, and yet when I try to fathom AI, I don’t see how we could construct or grant this equivalence.

    Then you are asserting something “magical” about human consciousness. This is a fundamental human prejudice which is a pillar for religious belief. Magical thinking is magical thinking; we must relieve ourselves of all such irrational human prejudice, IMO.

  42. Knockgoats says

    Eric@21,
    I followed your link to Yudkowsky’s page – and there’s the most basic flaw in the “singularity” concept, at about sentence 2:

    “Since the rise of Homo sapiens, human beings have been the smartest minds around. But very shortly – on a historical scale, that is – we can expect technology to break the upper bound on intelligence that has held for the last few tens of thousands of years.”

    Er, there hasn’t been any such “upper bound”. Human intelligence is social. We, collectively, are vastly more intelligent than we used to be, in the most useful senses of the term: we can, for example, casually solve problems no human from a century ago, let alone tens of thousands of years ago, could even have conceived – because knowledge is cumulative. Yes, having a piece of software that can reason in a human-like fashion (but faster) will make a difference, when it happens (having worked in AI myself I’m prepared to bet it won’t be in this century, but I’ll admit I could be wrong) – but it won’t make the kind of radical difference the singularitarians think it will.

  43. Joshua BA says

    Oops that should read: “Would someone born without the use of their limbs also not possess “human-like” intelligence (this is not an appeal to emotion. I am actually curious as to what the answer would be. If human-like intelligence comes from experiencing the world as an average human does then it seems the answer should be ‘such a person does not possess human-like intelligence‘)?

    Changed part in bold.

  44. Chris says

    @42 (also named Chris, different guy)

    I think that’s spot on – I definitely believe that human-level AI is possible on computers, and as far as immortality goes, I think it depends on how you define it. The main reason I disagree with Kurzweil is that #3 is just bogus in my opinion.

    He defines AI as one of the next big milestones, and thinks it will follow his trend and be established soon, when his definition of a milestone is kind of fuzzy in the first place.

  45. says

    Kobra @13: Depends on definitions.

    Consider, for a much simpler and better-understood case, a shock wave coming off a supersonic object (or expanding outward from an explosion). If you do the math using the Navier-Stokes equations for a compressible fluid, you find that there’s this singularity at the boundary between the undisturbed fluid and the fluid that has been affected by the object’s passing, and that there’s a sharp jump in velocity, temperature, and pressure across that singularity. That’s the shock wave, and it goes “bang” when it passes across your ear.

    Note that concept in there, though: it’s a singularity. There is no solution to the Navier-Stokes equations that works with any continuous gradient across that boundary. It’s one value on one side, and one value on the other, and in this infinitely-thin region something abrupt happens.

    This is a useful view of the world. Jet fighter aircraft are built using this view. It works just fine for predictive purposes. When a shock wave of a certain strength passes, the difference in velocity, et cetera across the boundary is just what the theory says it should be.
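    (For the record, the “sharp jump” has a compact textbook statement, the Rankine-Hugoniot relations for a normal shock, quoted here in standard notation with 1 and 2 denoting the states on either side and \(h\) the specific enthalpy:

    \[ \rho_1 u_1 = \rho_2 u_2, \qquad p_1 + \rho_1 u_1^2 = p_2 + \rho_2 u_2^2, \qquad h_1 + \tfrac{1}{2} u_1^2 = h_2 + \tfrac{1}{2} u_2^2. \]

    These hold across the idealized discontinuity, whatever happens inside it.)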

    But nature doesn’t work like that. If you actually look at a shock wave, it’s not infinitely thin. It can’t be; air is made of molecules, and they have to bump into each other to transfer velocity and energy along. And it’s a statistical process. So there’s this thin region at the “shock wave” boundary where the Navier-Stokes equations break down because the gas isn’t at a local thermodynamic quasi-equilibrium, and the shock wave is really a smooth but quick change.

    Kurzweil’s hypothetical singularity is like that. It would be absurd to assume that his model applies accurately down to the picosecond; at some time point the assumption of smoothness in the model gives way to the fact that it’s getting that smoothness by statistical averaging (just like the way we have a single fluid velocity in the Navier-Stokes equations), and when you’re looking at sufficiently small space/time boxes you don’t have enough particles for the statistics to be smooth any more, and the model breaks down at that level. But, if you’ve got a good model, it will still predict the big-picture behavior.

    Thus, it would be entirely plausible for a thing that’s a singularity in models that look at behavior over tens or hundreds of thousands of years to look like a gradual rise on a scale of mere centuries.

    This is, meanwhile, completely not to be confused with a Vingean singularity, which is simply a boundary past which current humans are incapable of understanding what’s going on because technology has advanced too far.

  46. Knockgoats says

    By the way, Raymond Kurzweil does not mean a singularity in the sense of an infinity, but rather a point after which changes are very hard to predict from our vantage. – Umair Rahat

    Oh. Tomorrow, you mean ;-)

  47. David Harper says

    It’s a log-log graph. Any mathematician worth her salt knows that if you have a really dubious dataset and you want to make it look good, you plot it on a log-log graph.

    And only an amateur would join the data points. Sheesh! A real pro just eyeballs the best-fit straight line, and who cares if one or two points don’t sit perfectly on the line. They just add to the authenticity.

    Hold on, there’s some guy called Edward Tufte on the phone. Says he wants to talk to Kurzweil.
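    In case anyone wants to try the trick at home, a minimal sketch (all numbers invented): pick one “event” per factor of ten of elapsed time, i.e., pure recency bias, and the points land on a beautifully straight line on log-log axes.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # One "milestone" per decade of elapsed time: 1e9, 1e8, ..., 10 years ago.
    years_ago = np.logspace(9, 1, 9)
    # Kurzweil-style y-axis: how long until the next "event" arrives.
    gap_to_next = years_ago - np.append(years_ago[1:], 1.0)

    plt.loglog(years_ago, gap_to_next, "o-")
    plt.gca().invert_xaxis()                     # present day on the right
    plt.xlabel("time before present (years)")
    plt.ylabel("time to next event (years)")
    plt.title("Recency-biased 'events' look linear on log-log axes")
    plt.show()
    ```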

  48. CJO says

    Would someone born without the use of their limbs also not possess “human-like” intelligence (this is not an appeal to emotion. I am actually curious as to what the answer would be. If human-like intelligence comes from experiencing the world as an average human does then it seems the answer should be ‘no’)?

    I simply cited embodiment and the social world as crucial factors – these are somewhat more general than “experiencing the world as an average human.” Inability to use one’s limbs should, in principle, have little bearing on them. However, I will also ask a question: would Helen Keller, or someone similarly disabled, have human-level intelligence in the absence of a heroic effort to ‘reach’ her early in childhood? All the processing power is there, but the inputs would be lacking.

    Your scenarios about simulating embodiment and sociality are interesting, but provided we were truly working with a sentient being of some sort, ethical issues loom large. Talk about footing the psychiatrist’s bill!

  49. Knockgoats says

    I also don’t fathom the pooh-poohing of AI in this thread. – J.D.

    What are you talking about? I don’t see any. What people are doubting is that we’ll have superhuman AIs in the next few decades. Looking at the rate of progress over the past half century, that doubt looks to me like a pretty good bet.

  50. Chris says

    Yeah, I think that the singularity is plausible if his general model is, but that model, in my opinion, uses too many things which are difficult to measure. Sure, the rate of processor speed increase does have a relationship with the rate of technology increase as a whole – however, the second concept is just fuzzy and ill-defined from a scientific perspective. How do you measure “technology” when it incorporates so many different things?

  51. cpsmith says

    Daniel Pinchbeck (sp?) tried out this line of reasoning in his book ‘2012’. He also said that nature did not make waste and added some vague comments about quantum and spirituality. I only got halfway through the book before I had to give up on it. And yet it seems to be selling well. I find this distressing. You would think more folks would pick up on the foolishness when an author makes such a glaring error. Alas.

  52. says

    Knockgoats #60,

    No, not really. Ray puts that point at 2045, when computers will be re-improving themselves with their higher intelligence. Also, “they” covers computer and human alike, since by then both will have merged.

  53. Dr. Pablito says

    My pointy-thingy flinger is better than your pointy-thingy flinger! I bet that next year, the pointy-thingy-flingy thing is gonna completely kick butt! My brother in law from two caves down is gonna build an awesome one to take to Burning Homo this year and then everyone is gonna get with the new paradigm.

  54. herr doktor bimler says

    Yes, we’ve coped with replicating machines that evolved in parallel with us. We haven’t ever had to cope with replicating machines that were intelligently designed. Evolution is stupid and blind and bad at searching design space for efficient designs.

    For multicellular eukaryotes, yes. But when it comes to designing efficient prokaryotes, my money is on 4 billion years of rapid trial-and-error.
    The whole rationale for the original nanotechnology enthusiasm rested on the claim that “cell biology is messy and inefficient and we can do it better with molecular-level production lines rather than random movement of molecules”. Control freaks all the way down.

  55. Anon says

    @#35–

    A different take – Susan Blackmore on memes and “temes”, her “third replicator”.

  56. Paul Claessen says

    If you read Ray Kurzweil’s books and Hans Moravec’s, what I find is that it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two, because these are smart people; they’re not stupid.

    — Douglas R. Hofstadter, in an interview with American Scientist: http://www.americanscientist.org/bookshelf/pub/douglas-r-hofstadter

  57. Matt says

    Well JD @52, there is magical thinking and there is magical thinking.

    A friend told me of an idea for a computer program that analyzed trends in popular music. After such analysis, the program could then ‘make’ original popular music. An AI of sorts. The whole idea made me wanna puke when I thought about label producers getting their hands on it.

    I’m such a music snob I proclaimed no such program could ever fool me. I’m probably wrong. Even as an agnostic, it bugs me to see things I believe are special about human beings reduced to a programmatic output.

  59. Joshua BA says

    @#62
    Sorry I misunderstood you. Just to be clear, if we put an AI into a box with appendages so it could physically interact with the world, sensory organs so it could experience the world, and communication methods so it could socialize (and put it into situations to give it opportunity to do so), would that be enough to get it to a human-ish mind?

    As for deaf/blind/mute people, I would argue that the way they process, interact with, and experience the world is different from the average human’s, and so their minds would be different to compensate. That doesn’t put them on a different level of intelligence in the slightest; their minds are just different, at least in some ways, from those of humans without those differences.

  60. J.D. says

    What are you talking about? I don’t see any (pooh-poohing of AI). What people are doubting is that we’ll have superhuman AIs in the next few decades. Looking at the rate of progress over the past half century, that doubt looks to me like a pretty good bet.

    I was referring to stuff like this:

    just because we can simulate a human-like characteristic mostly means we developed an algorithm to APPROXIMATE a human characteristic, leaving behind all of the complexity.

    Perhaps you mean something like massively parallel neural-net architectures capable of approximating or even exceeding human-CNS-level processing speed.

    But “human-like” I doubt. Too much of what makes us “human-like” is a result of embodiment and embeddedness of that body in a social world from day one.

    But if you are a hardcore materialist, the “complexity” of being human is fundamentally an algorithm. YMMV.

  61. andy says

    My general impression of the Singularity is something along the lines of…

    1. Computers
    2. Nanotech
    3. MAGIC PIXIE DUST!!! LA LA LA LA LA LA LAAAAAA!!!

  62. CJO says

    JD, you quote me, but the idea I’m questioning is how “like humans” such intelligences would or could be, not the possibility of a sentient algorithm.

  63. Chris says

    @75

    “just because we can simulate a human-like characteristic mostly means we developed an algorithm to APPROXIMATE a human characteristic, leaving behind all of the complexity.”

    I also said in that post that I think it’s possible we could simulate an entire brain. I’m just saying that these days people jump on small advancements in AI and somehow think that real AI is just around the corner when in fact they just wrote a heuristic/fuzzy algorithm that simulates a human behavior.

  64. Eric says

    @53 Knockgoats:

    Yes, having a piece of software that can reason in a human-like fashion (but faster) will make a difference, when it happens (having worked in AI myself I’m prepared to bet it won’t be in this century, but I’ll admit I could be wrong) – but it won’t make the kind of radical difference the singularitarians think it will.

    Why not? This sentence didn’t seem to be supported by the rest of your comment. You noted that the increase we’ve seen in the past was due to workarounds for the limitations on processing power and algorithm improvement that humans face (such as pooling power by communicating through an extremely inefficient and inaccurate datalink [language] and recording past knowledge through a similarly inefficient and inaccurate method [writing]), but I see no link from that to “it won’t make a big difference if we intelligently improve our brain architecture.”

  65. Chris A says

    @78

    “I’m just saying that these days people jump on small advancements in AI and somehow think that real AI is just around the corner when in fact they just wrote a heuristic/fuzzy algorithm that simulates a human behavior.”

    But if we go with the Turing definition, which I think is plausible, simulating human behavior is recreating human behavior. One problem I think a lot of people have with the idea of AI is they think computers are smart right now.

    Computers are not smart right now – computers are ridiculously, excessively stupid right now. This is one of the most important lessons you learn when programming: computers right now do exactly what you tell them and simply follow a script. While the scripts (programs) have been getting more complex, the actual way computers follow them has only gotten slightly more complex. Trying to get a computer to program itself is simply not worth it right now in normal cases.

    The idea is that once computers are much more powerful, the method of operation of a computer will become more complex. This can be thought of as an extension of the road from writing assembly, to using compilers, to using interpreters. Each step requires more computer resources and allows us to interact with a higher level than the basic von Neumann architecture of following an instruction pointer.
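    To see that ladder in action (a CPython-specific illustration, not anything from the thread): even a high-level one-liner is ultimately executed as a dumb instruction stream, which the standard dis module will happily print.

    ```python
    import dis

    def total(xs):
        return sum(xs)      # the "high level" view

    # The bytecode below is what the CPython virtual machine actually steps
    # through, opcode by opcode: its own version of the instruction pointer.
    dis.dis(total)
    ```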

  66. Brownian says

    So, yeah, Ray gets a few Brownie Points in my book for that.

    I’ll be the arbiter of who gets those, Lurkbot, thank you very much.

  67. Eric says

    @80 Chris A:

    While the scripts (programs) have been getting more complex, the actual way computers follow them has only gotten slightly more complex. Trying to get a computer to program itself is simply not worth it right now in normal cases.

    But once it is, it’ll do it better than you. Just like your compiler writes assembly better than you, and your ALU and FPU do math better than you… For how stupid you say computers are, they can beat you at a lot of tasks.

  68. says

    Hooray!

    Kurzweil is just neo-positivist nonsense, and it’s always embarrassing to hear skeptics raving about him. Thanks for tackling him head on.

  69. Matt says

    >>>Magical thinking is magical thinking; we must relieve ourselves of all such irrational human prejudice, IMO.

    JD, at the risk of turning this political, and I swear I don’t, I would merely ask you to ponder when that’s been tried before in history and how well it’s turned out.

    To be irrational, at least occasionally, is to be human. This is one obstacle to ‘equivalent’ AI, if far from the most important. Furthermore, if as you say, human complexity is just an algorithm, wouldn’t it follow that human life is the same? If the algorithm for the human brain can be solved, can’t the algorithm for all the various human brain algorithms, colliding together, be solved? Given time and sufficient RAM, of course.

  70. another says

    Maybe this information age will have as dramatic and as important an effect on humanity as the invention of writing, but even if it does, don’t expect a nerd rapture to come of it. Just more cool stuff, and a bigger, shinier, fancier playground for humanity to gambol about in.

    “Just” more cool stuff, sir? JUST!?!

    A never ending stream of ever cooler nerdgasm-inducing tech *is* the nerd rapture.

  71. says

    Just submitted this article to BoingBoing. They have featured stories on both PZ and Kurzweil many times already. Hope my summary is OK:

    PZ Myers expands an article where he criticizes Ray Kurzweil’s futurism and the Singularity concept from a biologist’s point of view, after reading Kurzweil’s book “The Singularity Is Near”. Basically, Kurzweil doesn’t get evolution and cherry-picks historical events to fit his graphs and projections.

  72. says

    Kurzweil tosses a bunch of things into a graph, shows a curve that goes upward, and gets all misty-eyed and spiritual over our Bold Future. Some places it’s OK, when he’s actually looking at something measurable, like processor speed over time.

    In addition to what Daniel Martin said at #31, I’ll add that “computing power” isn’t purely a matter of clock speed, unless you’re talking about otherwise identical architectures. But, say, when Intel went from the Pentium III to the Pentium 4 to the Core2Duo, they made a ton of changes. Cache size(s), bus widths, pipeline lengths, and “bubble control” during multi-threading all contribute to overall performance. Plus there’s just the matter of what job you’re doing: Photoshop and a video game will run differently depending on the number of floating-point vs. integer units (and that’s assuming you got both pieces of software to compile on different architectures and that both compiles were equally optimized). Yeah, it’s a big mess, and Kurzweil is just totally wrong. “Flops” is the closest thing we have to a standard, and that’s just for floating-point.

    (I’m sure some nerd can nitpick this post)

    And for those who want to cite Moore’s Law >:|
    http://arstechnica.com/hardware/news/2008/09/moore.ars

  73. says

    I’d like to point out that we already tried having one of the smartest beings ever to walk the face of the Earth that didn’t have tentacles build a thinking machine. His name was John von Neumann. The idea that human design is anything but a second-rate hack compared to evolutionary selection is pure prejudice. For example, some of the fastest Fourier transform implementations today are generated and tuned in place on specific processors by automated search (FFTW). When humans look at them, we have no idea how they work.

    This is similar to my current frustration with folks trying to apply engineering tools in biology. The tools are *synthetic*! Just because you would like to draw little boxes around pieces of your flow chart doesn’t mean they correspond to anything real!

  74. llewelly says

    If you’re going to use basic biology as milestones in the countdown to singularity, we can find similar taxonomic divisions in the cow lineage, so they were tracking along with us primates all through the first few billion years of this chart. Were they on course to the Singularity? Are they still?

    Where do you think we get dairy cows that can provide 150 gallons of milk a day?

  75. Otto says

    Hey, futurists are fun! My first encounter with the breed was at my first job as an electronics engineer at the German branch of ITT during the early ’60s. Our resident futurist would tell us with the utmost certainty things like “By 1985 we will have accurate machine translation”.

    Has anybody tried Babelfish lately? A simple example, English to German:

    go play with yourself! >> gehen Spiel mit selbst!

    Pretty much rubbish.

    Kurzweil is in the same league: amusing jesters. Lighten up, PZ!

  76. craig says

    We will never be able to preserve human consciousness. The best we can hope for is to replicate it.

    In the same way that when using a Star Trek-style transporter you would die, only to be replaced by a new creature that has your memories, the same thing would happen when your mind is machine-stored. You would die, the end. Then there’s a machine that acts like you, and even thinks it’s you.

    This is NOT because the machine would “have no soul” and you do or some bullshit like that, it is inevitable because there IS no such thing as a soul. You are a thinking machine, and when the continuity ends, you end.

    So even if our AI skills advance much faster than it seems they’re doing so far, poor Ray Kurzweil is still gonna die.

  77. Tulse says

    Even if it is possible, it seems to me that there is no reason to assume the singularity would be a good thing for humans. Vinge’s definition of singularity seems the most honest to me — once we’ve got machines that can out-think us, and make even smarter machines in a feedback loop, all bets are off. There’s no guarantee we humans get physical immortality and all the other goodies out of that — heck, there’s no guarantee the hyperintelligent machines will even bother to keep us as pets, rather than just wipe us out altogether.

    Kurzweil’s abundant, exuberant optimism is, ironically, very narrow-minded and conventional — his problem is that he’s not thinking broadly enough. When you do, the future singularity doesn’t necessarily look so appealing.

  78. Tulse says

    In the same way that when using a Star Trek-style transporter you would die, only to be replaced by a new creature that has your memories, the same thing would happen when your mind is machine-stored. You would die, the end. Then there’s a machine that acts like you, even thinks it’s you.

    And the difference between that machine and “you” is?

    By these criteria, you “die” a little each time the cells in your body are replaced. Physical continuity is a lousy criterion for personal identity.

  79. J.D. says

    JD, at the risk of turning this political, and I swear I don’t mean to, I would merely ask you to ponder when that’s been tried before in history and how well it’s turned out.

    I can’t say I’ve encountered any account of history devoid of magical thinking, certainly not our present age.

    To be irrational, at least occasionally, is to be human. This is one obstacle to ‘equivalent’ AI, if far from the most important.

    I don’t see why. No reason to think an AI wouldn’t itself be able to demonstrate irrationality, especially if closely modeled on human behavioral processes.

    Furthermore, if, as you say, human complexity is just an algorithm, wouldn’t it follow that human life is the same?

    Well, yes, that’s kind of my point. If you don’t believe there is anything “magical” about human consciousness, then fundamentally you and I are very complex stimulus-response machines. Some people really don’t like that idea. I see nothing wrong with it; I don’t feel somehow diminished to think that this is the case. I’m still a damned interesting outcome of an energy gradient, emergent after millions of years of natural processes.

    If the algorithm for the human brain can be solved, can’t the algorithm for all the various human brain algorithms, colliding together, be solved? Given time and sufficient RAM, of course.

    If we can solve human consciousness, then certainly that solves all human thought. However, the emergent properties of society and of interaction amongst those consciousnesses are likely computationally intractable. Not to say that statistical models couldn’t be made better; hell, we already have those. I don’t really subscribe to quantum consciousness, which would say it’s actually impossible based on the fundamental uncertainty principle; I think the stuff posited on that is bunk.

  80. Ultima Thule says

    andrew: “There’ve basically been no revolutionary technologies adopted in the last 50 years, except the internet.” Uhm… medicine has evolved a lot in the last 10 years… imagine in 10 more.

    About the singularity: it’s true for silicon-based chips. But if we find out how to make quantum computers… the sky is the limit.

    I do not like to think about singularities, because it’s a deterministic theory. And “the Big G likes to play his dice”.
    About this whole subject I prefer to think about what we can actually do in the next century. I am a supporter of Michio Kaku (his book “Physics of the Impossible” is amazing).

    Immortality is, I am afraid, a very distant objective yet to conquer. The genetic material we all carry is stubborn enough not to want to replicate more than x times…

    About AI… please, at this day and age a fly has more understanding of its environment than the most advanced AI. But there is very cool research going on around it :D

    I agree with PZ about the “little machines” (bacteria) already around. But humans like to imitate nature. Maybe nanotech is heading for a mix of bacteria plus machine (armies of bacteria with a 6-atom-sized antenna :P).

    Couldn’t read all the threads, but I hope one day there is a machine to download the entire page to the brain – we would be Homo WWW :p

  81. says

    Well I’ll be damned. PZ Myers has a mystical side where he believes that human sentience has some kind of transcendental quality that will evade technological understanding and duplication.

    I’ve always claimed I’m more of a materialist than any Darwin worshipper. Pray tell, what exactly is the supernatural component of ourselves that will serve to halt the accelerating pace of reverse-engineering living things at some point short of what the laws of physics allow?

    The next major inflection point is, as some here are aware, nanotechnology. When we can design & program bacteria on an engineering workstation to build anything within the realm of the physically possible, it will be a leap greater than fire, metallurgy, agriculture, and written language all combined.

    The beauty of this is that we’ve been handed the technology on a silver platter. We have little to invent other than the tools needed to reverse-engineer what’s already here. The rate of progression in those tools is indeed exponential. Look at the history of DNA sequencers. Just a few short years ago it was a years-long, billion-dollar effort to sequence a human genome. Now we’re already on the verge of doing it for a thousand bucks in a week. Look at the work Craig Venter is doing. He’s already constructed an artificial minimal genome, spliced it together from mail-order DNA snippets, and gotten it working via nuclear transplant into an anucleate bacterial shell. Meanwhile, he’s used shotgun sequencing to catalogue millions of genes from all sorts of extremophiles in anticipation of the day when he can start mixing and matching them in artificial genomes. I don’t think he’s very far from that day. Within our lifetimes is not much of a stretch. Within the lifetime of a mouse would be a stretch.

    As always, I encourage everyone to read the original, seminal tome on nanotechnology, “Engines of Creation” by K. Eric Drexler, written 23 years ago (which is when I first read it), and note that both the timeframe and the milestones on the way to true nanotechnology are right on track. He predicted 30–50 years; it looks like it’s going to be on the near side of that if progress toward that first protein-based replicator proceeds apace. A good portion of the book was devoted to exploring the limits of technology. What we have today is so far removed from those limits that they seem like nothing but woolgathering. But they’re real, and reaching them doesn’t require any big new discoveries, just chunking away at reverse-engineering the molecular machinery of simple living things until we know enough to fully harness them.

    You can find Engines discussed on Wickedpedia

    http://en.wikipedia.org/wiki/Engines_of_Creation

    along with links to the original version, an up to date version, and related work.

  82. Boosterz says

    To borrow a term from computer programming, I’d say the human mind is essentially “object-oriented”. Your brain contains an “instance” of you. If it were possible to make an exact replica of your brain, down to every last impulse firing between every neuron, all you would have is a new instance of that wetware program. While this would be pretty cool, it in no way gives you any kind of immortality.
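
    (The analogy in code, a minimal Python sketch; the class and its “memories” are obviously hypothetical:)

    class Mind:
        def __init__(self, memories):
            self.memories = list(memories)

    original = Mind(["first day of school", "a trip to the coast"])
    replica = Mind(original.memories)       # copies the state, not the object

    print(original.memories == replica.memories)  # True: identical contents
    print(original is replica)                    # False: two distinct instances
    replica.memories.append("shaking hands with Mickey")
    print(len(original.memories), len(replica.memories))  # 2 3: they diverge

    Equal state at the moment of copying, two objects ever after; that is the whole argument in four prints.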

  83. Miguel says

    Plenty (most?) of futurists (and this includes transhumanists) are quite skeptical of Kurzweil’s claims. In fact, don’t let him fool you: he didn’t coin the term Singularity, either.

    There are actually three major Singularity schools.

    It’s well-explained in this quite short article: http://www.singinst.org/blog/2007/09/30/three-major-singularity-schools/

    Anyway, I enjoyed your post. You really hold nothing sacred, and that’s a wonderful thing!

  84. craig says

    “And the difference between that machine and “you” is?
    By these criteria, you “die” a little each time the cells in your body are replaced. Physical continuity is a lousy criterion for personal identity.”

    It may indeed be a lousy criterion… it wouldn’t matter to the people interacting with the new you, it wouldn’t matter to the new you either… and it wouldn’t even matter to the old, original you – because you’d be dead.

    If a new physical entity is created with your memories and thought patterns, YOU will not feel or experience what it feels… whether you die before or during (if it’s a destructive process) or whether you’re still alive.

    If we duplicate you while you’re still alive and send the new you to Disneyland, the current you will NOT experience shaking hands with Mickey.

    A new you might be a consolation to your family, and it might be a consolation to your ego to think that you’ll live forever, but you will still die.

    There’s nothing magical to be transferred.

  85. says

    “something we aren’t seeing.”

    that’s coz it’s all being done in CIA bunkers!

    and they can see you through the telephone! even if it’s hung up!

  86. says

    Remember this, AI fans:

    Suppose we do find a way to duplicate the functions of a human brain within a computer — massively parallel processing on the cheap, adaptive algorithms, a level of consciousness more complex than a simple garbage collector. You still have nothing more than a brain in a jar, and a tabula rasa at that; how are you going to interface it to the outside to teach it to be human, especially when right now we don’t even have a clear understanding of instinct?

  87. says

    Well I’ll be damned. PZ Myers has a mystical side where he believes that human sentience has some kind of transcendental quality that will evade technological understanding and duplication.

    Typical DaveScot incomprehension and ignorance. I believe no such thing.

  88. Andrew Hay says

    It always bugs me when people lump together the Three Major Singularity Schools (putting it in not-very-technical terms):

    1: Accelerating change: Technology increases at an exponential rate, and at some arbitrary point (greater-than-human intelligence, for example), when tech is increasing fast enough, the singularity has been reached.
    Advocates: Ray Kurzweil.

    2: Event Horizon: When technology becomes great enough to enhance human intelligence, we will no longer be able to easily predict the future; the future will become much weirder than we can imagine.
    Advocates: Vernor Vinge.

    3: Intelligence Explosion: Once technology enhances intelligence, that closes the loop. For all of human history it was intelligence -> technology, with intelligence pretty much constant. Now it’s technology -> intelligence; the loop is closed and intelligence goes ‘FOOM’.
    Advocates: Eliezer Yudkowsky.

    Google “three schools singularity” to get a better idea of the distinctions between these three. I agree with Myers that the first one is silly.

  89. Knockgoats says

    You noted that the increase we’ve seen in the past was due to workarounds for the limitations on processing power and algorithm improvement that humans face (such as pooling power by communicating through an extremely inefficient and inaccurate datalink [language] and recording past knowledge through a similarly inefficient and inaccurate method [writing]), but I see no link from that to “it won’t make a big difference if we intelligently improve our brain architecture.” – Eric

    We already intelligently improve the architecture of the physical basis of our cognition – which is, and has been for tens of thousands of years at least, more than the brain. From simple tricks like using tokens to count with, to writing, the abacus, printing, slide rules and digital computers, that’s what we’ve been doing. I didn’t use the word “workarounds”, because that is not what social interaction and external information storage and manipulation are. On the contrary, they are central to what makes humans as organised collectives so much more capable than humans individually. It is because we are different, and argue with each other, that we can detect and correct error: see philosophy, democratic politics, and above all, science. Also, I said a “radical” difference rather than a “big” difference – my point being that this will not be something fundamentally new, but another step on the road of social interaction and partially externalised cognition.

    I could be wrong here, but the sentence I quoted strongly suggests to me that Yudkowsky has never even thought about the issue from this perspective (I know Kurzweil hasn’t, having read The Singularity is Near).

  90. QrazyQat says

    Old book, but let me mention (probably again) Max Dublin’s Futurehype: The Tyranny of Prophecy, which goes after the general silliness at the heart of virtually all futurism.

    And my favorite example of technology that might help people remember that what we see now isn’t necessarily all that revolutionary (i.e., are mobile phones really that revolutionary? are smaller mobile phones really all that revolutionary compared to the clunky old ones?) is the seed drill. Either imported from the mysterious East or thought up independently by Jethro Tull, it allowed people to go from needing 50–75% of their crop for next year’s planting to needing less than 20%. An instant, surefire increase in the usable harvest. Now that’s progress!
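
    (Back-of-the-envelope on those figures, taking them at face value; illustrative only:)

    # Usable share of the harvest once seed for next year is set aside.
    for reserved in (0.75, 0.50, 0.20):
        print(f"reserve {reserved:.0%} for seed -> keep {1 - reserved:.0%}")
    # reserve 75% -> keep 25%; reserve 50% -> keep 50%; reserve 20% -> keep 80%

    So the usable share goes from 25–50% of the crop to 80%, anywhere from a 60% gain to more than a tripling.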

  91. JJ says

    Ah, the singularity. Let’s quote Mitch Kapor, founder of Lotus Development:

    “It’s intelligent design for the IQ 140 people. This proposition that we’re heading to this point at which everything is going to be just unimaginably different – it’s fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can’t obscure that fact for me, no matter what numbers he marshals in favor of it. He’s very good at having a lot of curves that point up to the right.”

    By the way, in the same article you can read about the wonderful hedge fund founded by Kurzweil. In these difficult times, it’s a great tip for investing your money.

    More seriously: if you want to read an interesting and serious lambasting of Kurzweil by actual experts (Rodney Brooks, Jaron Lanier, John Baez), please check this discussion at Edge from 9 years ago!

  92. llewelly says

    Shortly after I first encountered transhumanism, and the accompanying notion of the singularity (this would be in 2002 or so), I encountered a parody idea … transpenism.

  93. Ryan Cunningham says

    I couldn’t agree more. This guy is Ayn Rand (nonsense philosophy that ignores all contrary evidence) + Steve Jobs (ridiculous hype) + Deepak Chopra (loves hearing himself talk for long periods of time without saying anything).

  94. says

    Knockgoats @53,

    This is exactly what I was talking about on the previous thread. Sure, human intelligence is enhanced by our social dimension. Now, if only computers could somehow mimic that massive distributed parallel processing by connecting to one another in some kind of vast network. If a data-sharing protocol faster and less-subjective than verbal/written language could be devised, it just might be possible!

  95. Andrew says

    30,000 BC Cave art
    14,000 BC Origin of agriculture
    6,000 BC Metalworking (copper)
    2000 BC Writing, wheel, astronomy, monotheism
    1 AD Geometry, philosophy, iron widely adopted, multi-national empires
    1000 AD Dark Ages: Steel, compass, hops added to beer, Denmark unified
    1500 AD Renaissance: Printing press, voyages of discovery, whiskey, harpsichords, the letter J
    1757 AD Age of Enlightenment: Steam engine, vaccination, scientific method, early industrial machinery, modern chemistry
    1885 AD Railroads, evolution, transatlantic telegraph, electric light, automobiles, telephone, radio, automatic weapons
    1949 AD Widespread electrification, nuclear weapons, jet aircraft, rockets, television, antibiotics
    1981 AD Home computers, early internet, PCR, birth control pill, Rubik’s cube, disco
    1997 AD Human genome, cloning, nanotubes, widespread internet, Tamagotchi, Windows 98, Pentium II, Napster
    2005 AD Windows Vista, Intel Core 2, Mars Reconnaissance Orbiter, Chinese astronaut, YouTube, Lolcats, Facebook
    2009 AD Global cellphone penetration >50%, First black US president, Windows 7 Beta, Porn 2.0 leads to complete human reproductive failure
    2011 AD Intel Core 7 processor, Windows Apocalypse Edition, cure for cancer, space elevator
    2012 AD Revolutionary technological advances completely transform society on a weekly basis, singularity occurs on Dec 21

  96. says

    I agree with what you’ve written here, PZ, but for one thing in the following sentence:

    Probably the most radical evolutionary changes would be found in our immune systems as we adapted to new diets and pathogens, but people are still people, and we can find cultures living a neolithic life style and an information age lifestyle, and they can still communicate and even interbreed.

    Though we human beings tend to privilege ourselves by using people (and person) to refer to our own species exclusively, the word can also refer to non-human species. One presumes that even if there were this speciation event from Homo sapiens sapiens, the individuals of the purported new species would still be people.

  97. Qwerty says

    This was all predicted in the musical “A Chorus Line” when Edward Kleban wrote the song “One” as in the line “One, singular sensation!”

    Okay, just kidding.

  98. John says

    I think PZ should stick to biology and scientific skepticism and other things he knows about instead of venturing off into economics and this futurism stuff and everything else that others are way more knowledgeable about.

    That’s just my opinion, though.

  99. Ben says

    Jeebus H Krist people, lighten up. He’s a f@cking sci-fi writer.

    The ‘singularity’ is a plot device like the 3 Laws of Robotics.

    End of story.

  100. Steven Sullivan says

    Kurzweil? The keyboard inventor who thinks he’s going to beat death?

    Every generation needs its groovy, far-out visionaries spouting trippy tales of the future, I guess. Remember when Bucky Fuller and Tim Leary and the Whole Earth Catalog and Consciousness III and that guy who talked to dolphins were gonna change *everything*, man?

    While I’m not quite old enough to have been a bona fide hippie, techno-geek culture still reminds me of hippie culture, with worse social skills but the same attraction to dubious quasi-science. So you can have your Kurzweils, I’ll stick with my Pinkers and Dawkinses, thanks.

    (and speaking of the ’60s, if we’re gonna talk about the sci-fi roots of a vision of the future that embraces genetic manipulation as a routine facet of pop culture, Fred Pohl wrote the book on it… actually the short story… back in ’66: ‘Day Million’)

  101. Cliff Hendroval says

    You’ll get me to believe in Kurzweil’s singularity when you show me my Jetsons-style flying car.

    What a beautiful world this will be
    What a glorious time to be free…

  102. tony says

    Why is it that people who are otherwise sane, rational, and critical become ga-ga when faced with the singularity concept?

    From the side of the futurists, it’s as if they’ve got there already, and their minds have already vacated their slow meat brains for the accelerating hardware cascade that is beyond the singularity!

    For the non-futurists, it’s as if they’re talking about religion – and the commentary devolves to ‘show me’ and ‘not in my lifetime’ and other inanities.

    Personally I’m somewhere in the middle. I have no doubt that technology will progress, and that we will have many ‘discontinuities’ that make it extremely difficult (impossible) to accurately predict society and technology from this side. I think AI is a hard problem, and I don’t think it will be solved anytime ‘soon’, but I think it will eventually be solved. My thought is that it will be extremely expensive – in many ways.

    My own bet is that we reach computational ‘human’ equivalence, and beyond, within a reasonable time frame, but still find ourselves unable to get anything that thinks. We’ll have super-Google™ and the semantic net: faster and smarter search, and faster and smarter heuristic reasoning – giving us the semblance of ‘near-AI’. Our PDAs will really act like assistants (but not like friends)!

    I expect us to have life extension (courtesy of biology) much sooner than we get real AI – simply because we have the opportunity to see what works on real organisms. Thanks, PZ, for helping me live forever*!

    * forever being however long I find life interesting!

  103. farinosa says

    Kurzweil is a crackpot. I formed that opinion the first time I read about him, some years back in Wired magazine. He was all giddy about nanotechnology and little molecu-bots dancing around in our bloodstream doing disease-fighter detail à la a Doom video game or some such nonsense. Has he never heard of the immune response? (Dumbass!)

    According to another Wired article last year, he’s now all hot and bothered about potentially missing the “singularity”. Oh noes! So he’s gone off the deep end with the life-extension cultists. It must be some sort of machine envy, to think that we’d be so much better off without biological bodies.

    Get over it, people: there is no mind/body duality! This is the latest delusional faction of people who don’t understand that basic fact. They’re just as delusional as the god-strokers and New Age woo-mongers.

  104. Michael Lidman says

    UPDATE: Via email, Ray Kurzweil responds:

    It isn’t valid to extend a log-log plot. A progression is valid by showing exponential growth along a linear time axis, so a graph with a linear x (time) axis and a log y axis can be validly extended (provided of course that one has analyzed the paradigm being measured and shown that it will not saturate to an asymptote). So that is why I analyzed extensively the limits of matter and energy to support computation and communication, as well as the specific technologies that could support these densities including analysis of the heat and energy issues. And an exponential trend (a straight line on a plot with a linear time axis and a log y axis) or a double exponential (an exponential on a plot with linear time axis and a log y axis) does not reach a mathematical singularity but it does reach fantastic levels eventually.

    So the point of the log-log plot is simply to show that a phenomenon has in fact accelerated in the past. It is not valid to extend the line. For one thing the log-log plot cannot go into the future because that is the nature of the log time axis. If one wanted to extend this trend, one should plot it on a linear time (x) axis showing exponential progression of the paradigm shift rate. I did that in another chart where I show the adoption times (for mass use) of communication technologies such as television, telephone up to cell phones, etc.
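
    The point about the log time axis is easy to check numerically (the event gaps below are made-up values, purely to show the behavior of the axis):

    import numpy as np

    # "Time before present" on a log axis: the present is t -> 0, whose
    # log diverges, and the future (t < 0) has no logarithm at all, so
    # the chart literally cannot be extended rightward past "now".
    years_before_present = np.array([1e9, 1e6, 1e3, 10.0, 1.0, 0.0])
    with np.errstate(divide="ignore"):
        print(np.log10(years_before_present))
    # [ 9.  6.  3.  1.  0. -inf]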

  105. amphiox says

    Say you could design and build a computer with parallel processing power equal to a human brain’s. Then you build a robot body to house this computer, as close a model of the human body as you could make it, with the same range and acuity of senses. Then you “gestate” your creation over a decade or so, letting it gradually accumulate experience of the world, even grow as a human might grow, and exposing it to social interactions with human beings, just as a human child would have. Then you see if in the end you have produced something equivalent to a human being in behavior and capability.

    It would be a fascinating experiment, but its chief value would be what it tells us about ourselves, via simulation. And it would be a one-off, singular thing that won’t be replicated (well, maybe a couple of times, to verify the experiment).

    If we’re talking about making human-level AI ubiquitous at a level that might create a singularity-type scenario, I don’t think it’ll happen that way. That would probably take commercialization, and this process wouldn’t fly commercially.

    Because if that is what it takes to make a human-level AI, with human-equivalent capability, what would be the point? We can already do it. It takes two individuals and just the bare minimum of expertise. We’ve already produced maybe some 100 billion models, about 6 billion of which are still extant.

    The whole point of AI is to make something with capabilities that humans don’t have, that can do useful things we can’t do, or can’t do as well. No doubt eventually these constructs will be as complex as the human brain and equal in raw computing power, but their capabilities will not be equivalent. The first generation of them will in all likelihood be idiot savants, specifically designed to be stupendously superhuman at a specific task, and hopeless at everything else.

  106. Tulse says

    If we duplicate you while you’re still alive and send the new you to Disneyland, the current you will NOT experience shaking hands with Mickey.
    A new you might be a consolation to your family, and it might be a consolation to your ego to think that you’ll live forever, but you will still die.
    There’s nothing magical to be transferred.

    That’s right, there’s nothing magical, because there’s nothing magical about personal identity — it’s just the continuation of psychological states, not physical continuity. (One of the best thinkers in the domain of personal identity is Derek Parfit, who advances just this view.) There is no special “you-ness” separate from your psychological states, and thus it is not the case that personal identity is some fixed absolute that is attached to your birth body, and that can’t be copied, graded, or even split. To think otherwise is to suggest that personal identity is essentially like a soul.

    Yes, if you make copies of me, each of those copies will start off with the same psychological history that I had, and as time goes on, each will diverge from that psychological history and become less “me”. But at the time of copying, each is arguably equally me. There is not some other mystical thing that is me — I am just my psychological states (including their history). So, if it were possible to make multiple copies, what would happen is that those multiple copies would diverge from my states, and become less “me”. Of course this sounds like nonsense if you think that there is something to personal identity beyond my psychological states, but I can’t imagine what that would be that doesn’t involve mysticism.

    We have a hard time thinking in this way because we don’t have to confront this kind of situation in actual practice. However, we also used to think of the notion of “mother” as being a clear, unitary concept because we never faced situations where, say, one woman donates her eggs to another woman to carry a fetus to term for a third woman. This kind of situation splits up our traditional notion of “mother”, and likewise, if we had replicators and transporters and downloadable, copiable minds, our notions of personal identity would also change.

  107. tony says

    Tulse

    I’ve always had a soft spot for the concept of ‘me’ as a collection of states rather than the hardware those states are running on… something like the Heechee stored minds (another Fred Pohl original, from 1977!). I remember thinking that would be cool – so long as I could have a virtual copy of Waikiki and Farrah Fawcett in there with me!

  108. JoAnne says

    Even if the data were clean, which they obviously aren’t, JackC comes closest to explaining why this graph is a problem: the unexamined assumption that the x-axis should sit at y = 10^0 = 1 year.

    He’s basing the zero point on the base unit, in this case one year, which is simply an artifact of the unit of measurement.

    The number of years until the next event going to one means… what? Why is this more significant than the number of seconds? Fortnights? Centuries? It isn’t. By choosing this unit, he has in essence decided that one event per year is singularity time.

    He’s basically used the ‘select an arbitrary position for the x-axis’ trick, but hidden it by using a logarithmic graph. On a linear graph, you’d see not a straight line but an asymptote.

    You’d see there’s no crisis point where we cross the line, because the line isn’t really there.
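
    Here’s the unit trick in a few lines (the event gaps are hypothetical, purely to show the shift):

    import numpy as np

    # Changing the unit just shifts every point on the log axis by a
    # constant, which moves where the trend crosses "1 unit between
    # events" -- so the "singularity date" depends on the unit chosen.
    gaps_in_years = np.array([1e6, 1e4, 1e2, 1.0])
    fortnights_per_year = 365.25 / 14
    print(np.log10(gaps_in_years))                        # hits 0 at one year
    print(np.log10(gaps_in_years * fortnights_per_year))  # same data, crosses later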

  109. Jones says

    In a sense, some of the other posters are correct: we hit a singularity at the dawn of the industrial revolution. The pace of change since then has been dictated more by our ability to handle that change as fallible and frightened humans than by our ability to conceive and invent.

  110. NelC says

    Tulse said most of what I was going to say. If a reasonable duplicate of me can be created (which is a big ‘if’ I’ll grant) then that will be me, by any test that you can bring to bear. He will love the same people I do, he will hate the same as I do, he will find the same things funny or annoying that I do. He will know the number of my Swiss bank account and all my passwords. If the process leaves a dead me behind, then the duplicate will probably owe the same money that I do, and be guilty of the same crimes that I am. (In such a world, just try using death as an excuse to get out of debts and criminal sentences. The law-makers will close that escape route very firmly, I’m sure.)

    That the new me will not be the same as the old me by philosophical hair-splitting is an interesting point, or not interesting at all, depending on your temperament. But it would be irrelevant to both of us, one being dead, and the other demonstrably me. Why would I worry about it?

    Is it the break in continuity of consciousness I should be worried about? That breaks every day when I sleep. It’s been broken once in my life when I was anesthetised for an operation. One more break won’t make a lot of difference.

    Is it the silent scream of my genes being snuffed out if I am copied into a shiny carbon-fibre and metal body that should worry me? I am more than my genes.

    Is it the worry that during the process some little part of me may get copied wrong, or left out? I suffer greater losses of memory every day. In five minutes’ time I won’t remember very much about this moment, the immediacy of the experience of tapping the keys, the sound of the rain falling outside, the creation of the words being typed; all of it will be gone, except for a very vague, fuzzy little seed which I might use to recreate the memory using bits of other memories. A day or two, and I’ll vaguely remember this comment. A week might pass and I might forget it altogether. This moment that was so bright and clear, gone.

    This me isn’t the me of five minutes ago, and the me that will awake in eight hours or so will be even less like the me now. Why should I care if one day I might awake in a new body?

  111. HP says

    131 comments, and no one has picked up on PZ’s cow singularity and its implications.

    The thing is, I can easily imagine a cow singularity within my lifetime (assuming I adopt a healthier lifestyle and medical care improves at the same rate). I expect that within a hundred years, it will be entirely possible to slice a piece of tender beef filet off of a substrate of connective tissues on a conveyor belt. That really doesn’t seem too terribly far-fetched. From a human perspective, that situation represents the cow singularity.

    But from a cow’s perspective, such a situation is trivial, and has nothing to do with being a cow. It’s likely a cow wouldn’t notice that people were slicing filets off of a cow cell culture, nor would the cow recognize any kinship with the flesh thus obtained.

    In the event all the beef we needed were obtained from cow cell cultures, I suspect cows would go back to being big, shaggy, forest-dwelling ungulates, and would forget about us, to the degree the environment allows.

    If, in some fantastic world that violates everything we know about history, Kurzweil’s human singularity were to occur, I doubt that it would include us, or that we would recognize it as having anything to do with humanity. If the singularity occurs, the primary beneficiaries will be them, and we will revert to being savannah-dwelling bipedal social primates, same as always.

  112. John says

    There’s nothing silly about the concept of the singularity, only Kurzweil’s absurd depiction of it. It’s a simple fact that technological progress is increasing exponentially. Consider this, for example: three hundred years ago a person could go her entire life without noticing any tangible technological progress. Two hundred years ago, one was bound to notice one or two important innovations in the course of a lifetime. A hundred years ago, one would have noticed quite a few. Fifty years ago, progress had become a normal part of people’s lives, but it was only over the course of a lifetime that one would see revolutionary tech progress. After that, revolutionary technological advancement started occurring with increasing regularity. These days we have about one technological revolution per decade. (The most recent being cellphones; the one before that, the internet; before that, computers themselves. And that’s looking only at digital technology; never mind progress in medicine, space exploration, etc.)

    Looking at how things have been going so far, it’s fully plausible that the interval between revolutionary technological advancements will shrink to five years over the next few decades, to two or three years over the following fifty, to one year in seventy, to nine months within a century, to six months within a century and a half, to a couple of months within two centuries, etc., until revolutionary advancements start happening on a weekly basis. I’m not saying this will occur (although I personally believe it will); I’m just saying that at its core, the singularity is perfectly plausible. It’s just that Kurzweil goes a little (okay, a lot, a WHOLE lot) overboard with his conception of it. Incidentally, this comic I happened to be reading in another tab while reading this (what an odd coincidence) gives a humorous double perspective on Kurzweil: http://dresdencodak.com/cartoons/dc_034.htm
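
    (For what it’s worth, the arithmetic behind that kind of claim is just a geometric series; the 20%-per-step shrink rate below is an assumption for illustration:)

    # If the gap between "revolutions" shrinks by a fixed fraction each
    # step, the gaps sum to a finite horizon: infinitely many "events"
    # inside a finite span, which is the singularity reading.
    gap, total = 10.0, 0.0      # start at one revolution per decade
    for _ in range(200):
        total += gap
        gap *= 0.8
    print(round(total, 1))      # approaches 10 / (1 - 0.8) = 50.0 years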

  113. Max says

    I’m afraid that Kurzweil’s details are his downfall. If you set aside the fudged biology and the lumping of events, the message is still the same: change happens at an exponentially faster rate. It took the vast majority of life’s history for multicellular creatures to evolve, but a little less time after that for intelligent life to evolve. For more than a million years, human technology consisted of stone axes, and then within the last 45,000 years all of our culture appeared. The 20th century and what we’ve seen of the 21st saw incredibly more change than any comparable period in history. And so on…

  114. HP says

    Yes, but John, if you had been alive 2000–3000 years ago, you would have noticed technological progress at an astounding rate: everything from writing systems to republican (small-r) government within an incredibly short period of time. Then, if you were lucky (?) enough to live a thousand years, you would have seen everything go to hell, and you would have died in a world more primitive than the one you were born into.

    The Antikythera mechanism was for all intents and purposes a mechanical computer — it disappeared and was not replicated for 2000 years. Progress is not guaranteed.

    There’s no reason to believe that knowledge is cumulative. In my lifetime, I’ve seen humanity go from being spacefarers to being incredibly earthbound. There are no guarantees, ever.

  115. says

    P.Z.: Have you heard of Boreas.net? That is how the U of M connects to Internet2. It started out as a 10 Gbps Madison-to-Chicago network about 5 years ago. Now it is 20 10-Gbps lambdas connecting WI, MN, IL and IA. And we’re building the Northern Tier Network from Chicago through Milwaukee, Madison, the Twin Cities and west through North Dakota, Montana, Wyoming and eventually to Seattle, Washington. The bandwidth is growing at an exponential rate, but the costs are growing at an arithmetic rate. Therefore, the cost-per-bit-per-second keeps going down. When the 100 Gbps Ethernet standard is ratified, and then implemented by network equipment manufacturers, all 20 lambdas will see a tenfold increase in bandwidth. Commercial network providers can’t compete with declining prices and increasing traffic. That is what is meant by the phrase “disruptive technology”. And at some point the cost-per-bit will be so low and the bandwidth so high that the cost-per-bit will effectively be infinitely close to zero.
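
    (The cost-per-bit point, as a toy model; every number here is made up for illustration:)

    # Capacity doubling (geometric) vs. cost rising by a fixed increment
    # (arithmetic) drives the cost per unit of bandwidth toward zero.
    capacity_gbps, cost = 10.0, 100.0
    for year in range(1, 11):
        capacity_gbps *= 2
        cost += 20
        print(year, round(cost / capacity_gbps, 4), "cost units per Gbps")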

  116. Anton Mates says

    If we duplicate you while you’re still alive and send the new you to Disneyland, the current you will NOT experience shaking hands with Mickey.

    The current me would not experience that under any circumstances. If I head off to Disneyland in a day or two, the person who ends up shaking hands with Mickey will share many of my memories, beliefs, and personality traits, but most of what’s in my consciousness at this particular moment will have been irretrievably lost. That person will know almost nothing of how I’m feeling, what I’m perceiving, what trains of thought are running through my head. Hell, the only reason he’ll even remember my geographic location is that I’m always in this room at this time of night.

    Future me and future clone-me are equidistant from current me.

  117. Shawn S. says

    This is one of those sacred cows of some professed skeptics, myself included. It is nice to see it undermined and questioned. I have even revised my opinion of the singularity and chucked it out with soul and free will.

    Thanks, PZ. Nice to see skeptics can question everything.

  118. says

    It seems that the people from The Skeptics’ Guide to the Universe want this man on their show, praising his insight into technology.

  119. meh1963 says

    The Singularity is one of SF’s greatest themes, or tropes, or whatever you want to call it.

    IMHO, the best novels about it are Ken MacLeod’s Fall Revolution books, a series of four related novels in which the singularity occurs sort of magically. That said, his exposition of possible consequences is remarkable – his political perspective is unique and provides a very different view from the usual dystopian/utopian divide in such fiction. MacLeod picks up where Gibson leaves off, in the messy details of the real world or real-world analogues, without giving in to noir conventions.

  120. Anton Mates says

    Yes, we’ve coped with replicating machines that evolved in parallel with us. We haven’t ever had to cope with replicating machines that were intelligently designed. Evolution is stupid and blind and bad at searching design space for efficient designs. Humans are much better, and most likely would not have too much trouble coming up with a design our bodies can’t cope with.

    But evolution is far more intelligent than humans when it comes to finding robust designs. Human machines tend to be spectacularly good at accomplishing one goal in one environment, and lousy at coping in any other situation. Naturally-evolved machines come from lineages that have survived through, literally, every environment under the sun, and they have the versatility to show for it.

    I would expect human-designed replicators to be really good at gooifying the first thousand-or-so humans they meet. Then they encounter a temperature band they don’t like, or a particular wavelength of sunlight which breaks them down, or humans with an allele for a certain protein which can jam their machinery, or bacteria which produce a certain enzyme that dissolves them, or a combination of their own waste products which becomes toxic at a certain concentration, and suddenly they’re facing constraints and limiting factors and countermeasures just like everything else in the universe.

    Synthetic replicators may become permanent and ineradicable elements of our biosphere, but I find it hard to imagine them replacing it anytime soon.

  121. says

    Tulse and friends do appear to have a basic misunderstanding of the universe. No matter how faithful, how detailed the reproduction, it can never be the original. All these characters are showing is the need to have words mean what the Tulsites need them to mean.

    A copy of you is not another you. A copy of you is another person entirely. One based on you, but, once created, a different person altogether. One who is having different experiences than you can have. That is how reality works. The idea that you can be duplicated, and that this copy would be you, is magical thinking of a low and YEC sort.

    That’s right, Tulse, I called you an IDiot.

    For uploading to work we would have to be non-physical beings embedded in a physical medium. We would then have to develop the technology to transfer ourselves from one physical medium to another. In short, by developing the ability to upload us into a computer we would, of necessity, prove the existence of the soul and the potential existence of life after death.

    Really, what the Tulsites are doing is denying reality and engaging in wishful thinking. Uploading will not come about until we know how mind and personality work, and how to transfer them from one vessel to another.

    As for Tulse et al.; they are winds that stir nothing.

  122. Badger3k says

    Scientific American had a critique of Kurzweil a few months back (haven’t read it, but heard the podcast). I do think the idea that a neural network can recreate a human being is a bit laughable – we are more than just the connections; we are the biology, the chemicals, the hormones, and the rest of the squishy bits. We think with emotions, and I’m not sure the effects of hormones on our bodies and brains can be reproduced, especially if we don’t know all the interactions ourselves. A computer may be able to simulate something close, perhaps enough to fool a human (hell, I’ve been caught up by video-game characters who seem pretty real, so it isn’t too hard), but it won’t be the real thing. Perhaps if we make organic computers out of nervous tissue, with appropriate blood etc., then maybe we can come close to humanity or a “human” machine.

    Kurzweil’s idea of a computer that duplicates human beings in the far (or not-so-far) future and gives us immortality is a really, really stupid idea. Let’s ignore us for a moment and have a computer create, complete, a medieval peasant in France. Not only would we have to know everything about the person, we would have to know everything about everything else around them: every interaction, action, reaction, thought, word, and deed. If you get one thing wrong – say the “new” peasant thinks a different thought once – and this changes one thing slightly, the peasant isn’t a duplicate but a close copy. Of course, how will we even know? How will a computer in the future know everything about me? Will it run simulations first, and try to figure out the real me? It’s a total fantasy (and I hope I am not mixing people up here; I think this is one of the things he has predicted). Total crock.
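
    (The divergence worry has teeth: in any chaotic dynamics, a tiny error compounds. A stock illustration with the logistic map, standing in for “one thought slightly wrong”:)

    # Two runs of the same chaotic rule (logistic map, r = 3.9) that
    # differ by one part in a billion stop agreeing within ~50 steps.
    x, y = 0.5, 0.5 + 1e-9
    for _ in range(60):
        x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
    print(abs(x - y))   # order-1 disagreement from a 1e-9 difference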

    As for technology growing: the sad fact is that the beneficiaries of the technology are usually few and far between. We can build a house with a T1 line (or better, assuming there is better), but unless you also pay for the wires to the server, and the wires elsewhere, eventually the signal will go through lower-capacity wires. It’s like HD TV: not true HD at all. I also have a lot of students who do not have a computer at home, who don’t really know much about the internet or how to operate a computer. There are places in the US where getting online requires a satellite dish. This idea that we will all be awash in high tech is laughable, and that’s just us, one of the wealthier countries in the world.

  123. Anonymous Coward says

    To those of you who think that you’d die if you attempted to achieve immortality through the emulated-brain method: you could do it gradually. Cut out a small piece of brain to be replaced, small enough that although it’d be a bother that it isn’t there, it doesn’t in any sense make you feel that you’re dying. Analyse its connections, initialise its replacement, and connect it with nano-electrodes or whatever. Wait until things normalize, and when everything is working correctly, proceed to the next bit. And in the process we might find out what the ultimate question was.

    P.S. Yes, Kurzweil is a fruitcake. And I hate the word singularity; it’s a buzzword that implies way more than can be justified. But that doesn’t mean that nothing wonderful is going to happen. Lots of wonderful things have already happened, and unless there just was a council of tech folks and scientists where everyone decided to hang their metaphorical labcoats on the willows, we’re in for more.

  124. says

    @Tulse
    There is no special “you-ness” separate from your psychological states, and thus it is not the case that personal identity is some fixed absolute that is attached to your birth body, and that can’t be copied, graded, or even split. To think otherwise is to suggest that personal identity is essentially like a soul.

    You have this ABSOLUTELY backward!!!
    You seem to identify the “you-ness” with only the psychological states.
    It IS supposing that the “you-ness” can be disentangled from the complete grand total of body, psychological states, memories, life history, etc. that would give personal identity the “essence” of a soul.

  125. says

    To save you all time, here’s a summary of my point: the Singularity is the point after which we can’t make good predictions any more. RK then makes predictions after this point, and, surprise surprise, they’re not very good. This doesn’t mean the concept isn’t important or that we shouldn’t consider the possible outcomes of developing AI. What I keep coming back to are economic issues. We’re still seeing fallout from the last major technological event, the rise of the internet.
    ……
    As someone who spends a lot of his free time taking a half-kidding, half-serious look at AI, I gotta say I feel like a lot of people are throwing the baby out with the bathwater. Mind uploading is getting savaged (and, mostly, for good reason), but that doesn’t undermine the more basic and broader point: we can’t predict anything past the advent of AI (or more specifically, quasi-human-level AI). For the first time ever we’ll have a non-human intelligence to contend with, and we don’t know what will happen. RK’s views are, yes, a bit of nerd rapture. Others believe it’ll be a dystopian future.

    AI could very well dramatically change the economic and, by extension, political landscape of the entire world. And it could do it in the span of a few years – from the time the first strong AI program hits a lab until it hits a price point of about $30k, the point at which it would undoubtedly rip through the American job market… Yes, up will still be up, down will still be down, etc. But in many respects the world will be completely changed. And really, when you think about it, the AI doesn’t have to be all that strong…

    Already we’re seeing accounting companies advertise the virtues of humans over a program. That’s an odd meta-ad; you don’t see car companies positioning themselves against horse ownership. Think too about the phonebook ads from a few years ago – when was the last time you went to a phonebook instead of Google? Even the phonebook is now yellowpages.com.

    And from the last big shift, the rise of the web, we’re still seeing effects emerge. The New York Times is poised to collapse (the paper is the only profitable part of its publishing empire and, if memory serves, its debts mean potentially losing its headquarters). This in turn is going to have rippling effects: if it fails, all of its printers are also likely to go under. The death of the video store and the record store are both regarded as overdue, and personally the idea of going to a grocery store is almost laughable to me, as is the idea of waiting in line to speak to an agent at an airline counter when I could use the kiosk.

    There are massive inefficiencies that exist even now; when we add AI to the mix this will become even more pronounced.

    PS: Complimenting PZ in these threads is a little like wearing the shirt of the band whose concert you’re going to see. Don’t be that guy, Gutter.

  126. Peter Ashby says

    Yes, we’ve coped with replicating machines that evolved in parallel with us. We haven’t ever had to cope with replicating machines that were intelligently designed. Evolution is stupid and blind and bad at searching design space for efficient designs. Humans are much better, and most likely would not have too much trouble coming up with a design our bodies can’t cope with.

    All our bodies? Even nanotech has to have an accessway, has to be designed to utilise some ‘handholds’, and down at that level we have variation. Not even the Black Death or the 1918 ’flu epidemic could take down even close to 50% of the total population. Yes, in the Black Death whole villages died, villages filled with closely related people; other villages lost relatively few. London did not become self-sustaining with respect to population until the 19th century; it required constant immigration from the countryside. Yet it grew and prospered.

    Also, if we can figure out biological pathogens in containment labs, hostile grey goo will be easy, since its design will be logical, not historically contingent. It will simply be an arms race.

    The real problem with increases in computer processing power is: what do you do with it? Has there been a revolution in that? No. Computers are just better and better at processing data. Instead of brute processor power, what we need is much better algorithms and new things to do with a computer, beyond simply making them more mobile. An iPod is not a major leap forward; it’s just a better Sony Walkman (remember those?) or Palm Pilot.

    As for AI, where are the properly working face recognition systems? They work only under ideal conditions, whereas we humans can do it on the fly in multiple contexts and lighting environments. This is not a processing or digital-camera problem; we need better algorithms. That is real creativity, not ever-faster crunchers of numbers. Pass me that abacus, please.

  127. Anton Mates says

    Alan,

    Tulse and friends do appear to have a basic misunderstanding of the universe. No matter how faithful, how detailed the reproduction, it can never be the original.

    This claim appears meaningless to me. One ingredient in a “basic understanding of the universe” is the principle of indistinguishability of similar particles. If you give me a ground-state helium atom, and I assemble a second ground-state helium atom from its component particles and drop both atoms in a box, you have no empirical way to determine which is the “original” and which the “reproduction.” Classical physics permitted you to distinguish between them via continuity in spacetime; quantum mechanics does not.

    So if me and copy-me are initially identical down to the subatomic level, on what grounds do you claim that one of us is the original and one is not? What essential property is either of us missing…other than the “soul” or some other undetectable element?

    A copy of you is not another you. A copy of you is another person entirely. One based on you, but once created a different person all together. One who is having different experiences than you can have.

    Again, I will be having different experiences forty-eight hours from now than I am currently having. Future me will have different experiences, thoughts, and reactions than current me. So what?

    For uploading to work we would have to be non-physical beings embedded in a physical medium.

    No; we merely have to be physical processes running on a physical medium.

    I can transfer Firefox from one computer to another, even if the computers have different hardware; it remains the same program, and, functionally, does all the same things. That doesn’t mean Firefox has a soul.
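
    (The same analogy in runnable form; the “session” dict is hypothetical, and real minds are of course vastly harder to serialize:)

    import pickle

    # A program's state can be captured and rebuilt on different hardware;
    # nothing immaterial has to travel for it to be the same session.
    state = {"open_tabs": ["pharyngula"], "history": ["kurzweil graph"]}
    blob = pickle.dumps(state)       # capture the state
    resumed = pickle.loads(blob)     # reconstruct it elsewhere
    print(resumed == state)          # True: functionally the same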

  128. Anton Mates says

    Kevembuangga,

    You have this ABSOLUTELY backward!!!
    You seem to identify the “you-ness” with only the psychological states.
    It IS supposing that the “you-ness” can be disentangled from the complete grand total of body, psychological states, memories, life history, etc. that would give personal identity the “essence” of a soul.

    I don’t see why it can’t. To take the elements you listed:

    Psychological states: “You-ness” needs those. (But, really, only some of them, and only at a fairly crude level.)

    Memories: Part of psychological states, so already covered.

    Life history: Irrelevant. Whether or not “you-ness” is defined by this, you can’t get rid of it. Even if you’re a clone or an uploaded software consciousness or what have you, your existence and nature are still causally dependent on your former life as an ordinary person. Your life history is intact.

    Body: “You-ness” doesn’t need this. You can grow from a child to an adult, or have your arms and legs amputated, or wait a few years and have almost all of your body mass replaced with new material. None of these radical physical changes threaten your personal identity, either subjectively or in the eyes of other people.

    The thing is, “you-ness” is a comparatively low-res and simple property. The vast bulk of the information associated with your physical body, and even with your brain, is irrelevant to it. “You” are not affected in the slightest by the spin state of an electron in your left big toe, and “you” barely notice the death of a single neuron. Its simplicity is what makes it plausible that it could be mirrored in a very different physical system.

    It’s kind of like color. You can easily observe the color of a table, and then paint a chair to match. This isn’t because color is a special immaterial “essence” that can be moved from the table to the chair, but because it’s a very crude property; it doesn’t depend on any of the details of structure and composition that make the table a table and the chair a chair.

  129. Liberal Atheist says

    I would be very surprised if the AI singularity were to occur only decades from now. Still, I don’t see why a superhuman AI would be impossible to create, or why it would not have the ability to self-improve, and to do so at an exponential rate.

  130. Tulse says

    Alan Kellogg:

    A copy of you is not another you. A copy of you is another person entirely. One based on you, but, once created, a different person altogether. One who is having different experiences than you can have.

    You’re simply asserting this without responding to the arguments made. It is of course the case that once created the copy will have different experiences from “me”. As Anton says, 48 hours from now that “me” will be different from the current “me”. You seem to suggest that there is something other than continuity of psychological states that composes personal identity. Do you have an argument for that position?

    The idea that you can be duplicated, and that this copy would be you, is magical thinking of a low and YEC sort.

    I’m not the one suggesting that there is something “special” that carries personal identity, that can’t be duplicated and transferred.

    For uploading to work we would have to be non-physical beings embedded in a physical medium. We would then have to develop the technology to transfer us from one physical medium to another.

    To be completely clear, I have never suggested that copying/transferring psychological states would be easy, or even possible, from a technological perspective. I suspect very strongly that it is not possible, at least in any foreseeable future, because of how complex the brain is, and how difficult it would be to measure/record the relevant physical aspects of a particular person’s brain. My point was simply the philosophical one that, if possible, the notions of personal identity we have will need to be clarified.

    In short, by developing the ability to upload us into a computer we would, of necessity, prove the existence of the soul

    Ummmm…..huh?

    Kevembuangga:

    You seem to identify the “you-ness” with only the psychological states.

    Right so far.

    It IS to suppose that the “you-ness” can be desintricated from the complete grand total of body, psychological states, memories, life history, etc… which would give personal identity the “essence” of a soul.

    Memories (which is how I remember my life history) are psychological states. And of course my body is important in terms of impacting on my psychological states, but I presume that if I lost a leg I would still be “me”. Indeed, I presume that if I lost all my limbs I would still be “me”. Heck, if my body were completely destroyed and my brain transferred to an android, I would still be “me”. Don’t you agree? So why is the body somehow “special” in defining the “essence of a soul”? What the hell does that even mean?

  131. Knockgoats says

    This is exactly what I was talking about on the previous thread. Sure, human intelligence is enhanced by our social dimension. Now, if only computers could somehow mimic that massive distributed parallel processing by connecting to one another in some kind of vast network. If a data-sharing protocol faster and less-subjective than verbal/written language could be devised, it just might be possible! – Stephen Couchman

    You were? Where? I’ve been through all your comments on Futurists make me cranky and I don’t see anything relevant. Presumably your point here is that the internet (or future versions thereof) is the basis of a society of superhuman AIs, but it’s certainly not sufficient. The existence of a network of social relationships between individuals – combining cooperation and competition – and of complex, robust institutional systems is what gives collective human intelligence its problem-solving and creative power. I’m not saying either that real AI is impossible, or that it won’t make a difference, but if and when it does appear, that difference is not going to take the form of a singularity in Kurzweil’s sense. It will appear, if it does, in a society where cognitive prostheses (attached to or possibly embedded in the body) and quasi-telepathic cooperative problem solving integrating both humans and non-self-aware machines are already commonplace. It will be another node in the network (I speak in the singular for convenience, and initially there might indeed only be one), with certain specialised abilities, but not something that can magically design a better version of itself – because it will have been designed (or the algorithms producing it will have been designed) by a network of cognitively-augmented humans that exceeds its own stand-alone problem-solving power. Achieving its useful integration into that network will be a work of years or even decades.

    Meanwhile, there are a few minor problems to solve, in order to ensure that our descendants are still around and in a position to design it.

  132. Trismos says

    Goodness… I could almost envision you sitting there pouting after your self-indulgent rant about Kurzweil, and then reading all the posts telling you to get off your high horse. And then of course you had to defend yourself with this long-winded bit. Get back to what you’re good at, slapping down creationists and… whatever. Let the dreamers dream. Kurzweil has given a lot to society.

  133. Knockgoats says

    JJ@113,
    Thanks for the link. So far I’ve only read Jaron Lanier’s essay, and while I disagree with much of it (I’m less sceptical of the “cybernetic totalist” viewpoint than he is), there’s certainly much to ponder on and smile at. I particularly liked this:

    “What is a long term future scenario like in which hardware keeps getting better and software remains mediocre? The great thing about crummy software is the amount of employment it generates. If Moore’s Law is upheld for another twenty or thirty years, there will not only be a vast amount of computation going on Planet Earth, but also the maintenance of that computation will consume the efforts of almost every living person. We’re talking about a planet of helpdesks.”

  134. Knockgoats says

    AI could very well dramatically change the economic and, by extension, political landscape of the entire world. And it could do it in the span of a few years- from the time the first strong AI program hits a lab til it hits a price point of about 30k- the point at which it would undoubtedly rip through the American job market. – TheMissedCall

    Didn’t you get the memo? Cheaper foreign labour already did that.

  135. Alan Kellogg says

    Anton Mates, #150

    That is so stupid it makes my hair hurt. The universe is not going to change because you come up with a ton of bogus argumentation.

    No matter how faithful the reproduction, how comprehensive the duplication, the difficulty of distinguishing the duplicate from the source, the copy can never be the original. Being able to identify the replication has nothing to do with it, it is and shall always remain a copy. Your reasoning is creationist, your arguments too. It’s magical thinking, and lame magical thinking at that.

  136. Alan Kellogg says

    Tulse, #154

    I’m irrational, you’re insane. I can engage in a conversation with another because we can agree on the basics. Your understanding of the basics is so far out in left field third base is lost in the distance. You show such a poor grasp of reality I wouldn’t be surprised to learn you have no grasp of “in” and “out”. I have to ask, do you even understand the concept of “precision grip”? Really, what is so hard about “original” and “copy”?

    Not that I expect an answer. You have shown yourself so ready to duck the question, I’m starting to think you’re a Christian Fundamentalist who took a correspondence course in pre-Aristotelian logic and sophistry.

  137. Knockgoats says

    Alan Kellogg,
    No, the magical thinking is undoubtedly yours. Specifically, it is essentialist: the belief that things have intrinsic, unalterable and uncopyable “essences” that make them uniquely themselves. Suppose your brain could be removed and replaced by silicon chips, bit by bit (Ha!), in such a way that you retained psychological continuity throughout, while the bits could be frozen, stored, then reassembled and revived once the whole brain was available, and implanted in a robot. (Yes, I know this isn’t possible now, but can you be sure it never could be? If so, how?) Which would then be the “real you”?

  138. Torben says

    The point of the singularity is that an AI would be able to rewrite its own source code, thus exponentially increasing its cognitive abilities.

  139. Tulse says

    No matter how faithful the reproduction, how comprehensive the duplication, the difficulty of distinguishing the duplicate from the source, the copy can never be the original.

    That assumes that the relevant features of personal identity are tied to the physical original. This certainly isn’t the case for other notions of identity (e.g., does it matter if the Word document on your hard drive gets moved from the drive sectors on which it originated?), and various folks have argued that what we value with regards to personal identity does not track with physical origin. In the face of those arguments, you have not offered any counter or engaged in the debate, but simply repeated the claim that “the copy can never be the original”, which completely misses the point. Contrary to your ramblings, it is you who seems to think that there is something magical and unique and immaterial about personal identity, some “soul” that exists separate from our psychological states.

    All that “I” am is my current collection of psychological states. If such could be reproduced (which is a huge if, and may well be impossible), then “I” would be reproduced. It’s that simple.
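    To make the file analogy concrete, here’s a minimal sketch in Python (the filenames are hypothetical, and this is only an illustration, not a claim about minds): nothing in the bytes themselves marks one file as “the original”.

        import hashlib, shutil

        # assuming a file named original.docx exists, make a byte-for-byte copy
        shutil.copy("original.docx", "copy.docx")

        def digest(path):
            # hash the file's content, ignoring where on the disk it happens to live
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        # prints True: by content the two are indistinguishable; "original" is a
        # fact about history, not anything measurable in the files themselves
        print(digest("original.docx") == digest("copy.docx"))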

  140. Morgan says

    Your reasoning is creationist, your arguments too. It’s magical thinking, and lame magical thinking at that.

    This is a bizarre accusation. What is “creationist” or “magical” about what Tulse or Anton are saying?

    I am not my body. At any two given moments in time, my body is made up of different things, arranged differently. I remain myself despite the replacement of my matter with new material as I eat, metabolise and excrete, because what I and people around me call me is not bound to that matter. It is a pattern, the same way that a mountain range can keep its name despite continual erosion, or a nation remain a nation despite changing demographics. The contention is that the things that I and the people around me consider relevant in calling me myself are amenable to being copied into an artificial substrate.

    I think perhaps you’re simply misunderstanding either Tulse’s arguments or the basic issue being argued. No one is saying that if you upload Ray Kurzweil and his body then dies, Ray Kurzweil has not died. The point is that he has died, but has also survived, because once you uploaded him you had two Ray Kurzweils. Uploading does not mean that no one called Ray Kurzweil ever has to die. It means that one person who has a perfectly justified claim to be Ray Kurzweil can survive while another dies.

    It seems to me from your responses that you imagine Tulse or others believe that at the point of upload, a mysterious “Ray-Kurzweil-ness” will leave Ray’s meat body and migrate across to his virtual one, like a soul. If I may make the modest assumption (reasonable based on what I’ve seen him state here) that Tulse’s thinking on this is similar to mine, then this is a total misapprehension.

    I don’t like to get into psychoanalyzing people over the internet, but granting the assumption that Kurzweil is partially or mostly driven by a fear of his own death, I don’t imagine he thinks his immortal soul can be released from a mortal body and freed to inhabit an immortal world of information. I think he’s just looking at the process above and noting that one of the two Rays will have a continuous subjective experience of life from birth through upload and into a machine existence, and that’s the one he’s choosing to consider his future self before the fact.

  141. says

    All these characters are showing is the need to have words mean what the Tulsites need them to mean.

    New principle: when you start applying religious affectations to someone in order to discredit their argument (like applying the -ite suffix to a commenter’s alias and using your new cult name to refer to anyone who agrees with them), you have lost the debate. Honestly, it’s the most transparent kind of linguistic deception to attempt to create a subconscious link in the reader’s mind between your opponent and the irrationality of religion. Since you have to resort to this tactic instead of rational debate, the two most likely conclusions are that you either have no rational response or you are an asshole. In either case, no one wants to hear from you. Hush now, child.

  142. TheLady says

    Thank you Morgan (#167) for the intelligent summary of the two positions; it’s really helping me clarify my own thoughts on the subject, which are somewhere along the lines of “I am my body *and/or* the dynamic pattern of my mind-states”.

    It seems to me that my sense of self is a mind state, which itself is a product of other mind states (e.g. memories) – but some of those are induced by events outside my brain, mostly very basic ones that have to do with survival, like getting cranky when you’re hungry, having PMS, or feeling in love.

    Assuming that the massive complexity of the brain can be replicated in some digital form, it will serve as a substitute for the original, as it were, meat interface; so there’s no reason _in principle_ that some of the exo-cranial complexity shouldn’t be able to be replicated as well. And no, the copy won’t be the same as the original, but then the _original_ isn’t the same as the original, either. It changes mass as it grows, or has different stomach contents on different days, or gets a tattoo.

    I think the polarisation is in some ways artificial. All this “you’re a crypto-cartesian!” “no, *you’re* a crypto-dualist!” stuff comes from reading too much Dennett, uncritically. There’s also the whole problem of not really understanding death very well, what causes it, and what exactly happens to consciousness at the boundary between alive/not alive. Which is probably the reason it scares hyper-rationalists like Kurzweil, I guess. *shrug*

    In other news, I just ate a date with no stone. Never had that happen before. Theories?

  143. NelC says

    Alan, be assured that I have no illusions that I will live long enough to be copied. It looks like a hard problem to me, one that won’t be solved in my lifetime, or for generations after, if at all. So wishful thinking doesn’t come into it, as far as I’m concerned.

    As to your other argument, what is so special about being the original? In this digital age, where creative works are produced all the time that have no originals, only copies, each of which is functionally identical to an original, why do you make such a fetish of the state of being the original Alan?

    Does it matter to you that the words you’re reading on your monitor aren’t the ones that I’ve typed, but ones that have been copied from machine to machine, translated from electrical impulses to bursts of light, back to electricity, preserved as magnetic domains on a server, then translated back to electrical impulses, etc., to make their way to your monitor? Would you prefer to read them over my shoulder on my monitor? Would that be somehow a more valid experience?

  144. says

    I’m reminded of the hammer question. If I have a hammer, and I replace the head, then I replace the handle, is it still the same hammer? If not, how can you justify claiming to be yourself, when you are a modified version of a previous you?

  145. shaman sun says

    Despite Kurzweil’s sloppiness, I don’t think it is grounds to toss out the singularity altogether. I’ve read “The Age of Spiritual Machines” and found it interesting. He was definitely off about some predictions, and as PZ Myers has mentioned, his graphing is sloppily patched together. But PZ hasn’t addressed why the singularity should be rejected altogether. What’s wrong with the three claims: accelerating change, event horizon, and intelligence explosion?

    Kurzweil isn’t the only one writing about this.

    From my limited knowledge of evolution, we are evolving at a faster rate than before. Perhaps not linearly, but Kurzweil also claims that evolution is non-linear, but exponential. One primary example is the fact that dinosaurs took a much longer time to evolve into their full mass than mammals did. Biological evolution does seem to be happening faster, and isn’t this the crux of what the singularity is talking about? The other alleged fallacy is its connection with technological evolution as being a part of evolution as a whole. I think it’s quite possible, but what are the refutations? The counter-evidence? We need to look at more than Kurzweil’s work, because there are good arguments out there for the singularity.

  146. Morgan says

    It seems to me that my sense of self is a mind state, which itself is a product of other mind states (e.g. memories) – but some of those are induced by events outside my brain, mostly very basic ones that have to do with survival, like getting cranky when you’re hungry, having PMS, or feeling in love.

    Let me strongly recommend the fascinating book “Creation: Life and how to make it”, by Steve Grand, which is one of the major formative influences on my thinking in this area.

    Grand was the programmer of a series of computer games based around artificial life, and the book distils his insights into the questions of identity, mind/body, life and thought from that experience. Among the insights:

    – When you consider closely, everything around us is not really a thing in itself but a pattern built out of other things, which can persist despite a shifting makeup of components. About the only things which might be considered “real” in and of it/themselves are the fundamental forces and energies in the universe. Even elementary particles are not really “things” but the equivalent of standing waves in the electromagnetic (or whatever) field.

    – As an extension of this, a “thing” at the higher, abstracted levels will be just as legitimately a “thing” whatever the nature of the parts it’s made out of, so long as their behaviour is indistinguishable at the level of abstraction where the “thing” exists. If you program a simulated neuron on a computer, then write a program which combines many of those neurons into a network patterned like a human (or whatever) brain, you haven’t made a simulated brain. You’ve made a real brain out of simulated parts. Its brain-hood is just as real despite the unreal nature of its components.

    – Minds need bodies – at least, by far the easiest way to make minds like ours is to start out by giving them bodies. The neural networks that were the “minds” of his creatures were proportionally far simpler than real animals’ brains, and a really surprising amount of work went into giving them drives and hormones and reflexes and emotional states, which were what really drove most of their interesting behaviour. He had to give them a virtual environment and embody them at a point within it, with limited perceptions and powers, with needs that they had to work to satisfy. This lesson may be more relevant to creating AI from scratch (point being: just having enough computation connected together doesn’t do the trick), but certainly it makes sense that an uploaded consciousness would probably need a simulated (or real, but made of simulated parts) body-image to think at all, and certainly to think at all like a human would (as never feeling hungry, tired, depressed, drunk, horny or excited would surely lead to some fascinating new psychological problems).
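    (To make the “simulated neuron” point concrete, here’s a minimal sketch in Python – a generic logistic unit, far cruder than anything in Grand’s actual code, with arbitrary illustrative weights. The point is that the “neuron” is nothing but arithmetic, whatever substrate runs it.)

        import math

        def neuron(inputs, weights, bias):
            # weighted sum of inputs passed through a squashing function
            z = sum(i * w for i, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-z))   # logistic activation

        # a tiny two-layer "network": the output of one unit feeds the next;
        # whether the unit is silicon or meat makes no difference to the sum
        h = neuron([0.5, 0.9], [1.2, -0.7], 0.1)
        print(neuron([h], [2.0], -1.0))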

  147. Tulse says

    when you start applying religious affectations to someone in order to discredit their argument (like applying the -ite suffix to a commenter’s alias and using your new cult name to refer to anyone who agrees with them), you have lost the debate.

    But I like having acolytes! Now all I need is minions, and I’m all set. I just have to practice my insane laugh…

    “Kellogg said I was mad, MAD I say…well I’ll show him, I’ll show them ALL!!!!!”

    (Damn…was that out loud?)

  148. TheLady says

    Quoth shaman sun, #173:

    we are evolving at a faster rate than before. Perhaps not linearly, but Kurzweil also claims that evolution is non-linear, but exponential. One primary example is the fact that dinosaurs took a much longer time to evolve into their full mass than mammals did. Biological evolution does seem to be happening faster, and isn’t this the crux of what the singularity is talking about?

    There are three major implicit assumptions underpinning this argument, each one in its way a gem of evolution-mangling:

    1. Evolution is a process that works towards a pre-defined end-state – which, of course, it doesn’t; it’s completely blind. The massively varied group of reptilian megafauna we think of as dinosaurs evolved and evolved and evolved and evolved and then went extinct. The length of time it took them to go from a notional “appearance” on the evolutionary scene to a (still notional, since the consensus these days is that they in fact survive, as birds) point of extinction is not how long it took them to evolve – it represents the length of time that they were the most successful and dominant group on the planet. Which, incidentally, they managed to do for a heck of a lot longer than mammals have so far been around – and that is a measure of their evolutionary success, not of being late bloomers or anything like that. Survival is the only measure of success, never exoticism, size or subjective measures of complexity and interest.

    2. Dinosaurs were all big as houses – just look at that Titanoboa thing! – a lot of ancestral animals were very large, both ones we think of as dinosaurs and ones we are more comfortable thinking of in terms of contemporary clades such as birds or mammals – the mammoth, the giant sloth, etc. For the dinosaurs this was only possible during times when the earth was way hotter than it is now, because they’re cold-blooded reptiles and so wouldn’t have been able to sustain their metabolism on those scales when it got colder. In short, not all “dinosaurs” made an appearance in Jurassic Park, and it’s erroneous to forget that there were scores and hundreds and thousands of species of so-called “dinosaurs”, with the exact same dialectic of speciating, evolving and going extinct as the species that are alive on Earth today. Why this should tell you anything about a so-called pace of evolution, and how the hell you’d derive such a claim, is a mystery to me.

    Based on the above two points, what you describe as Kurzweil’s claim that evolution is somehow speeding up is already nothing short of a big whopper. However.

    3. The current state of play of species in the world is the final stage, the end-game of evolution – evolution has no end game. It is a mindless, random process of incremental change. Imagine an infinite egg timer, with sand pouring for ever from one half to the next at a rate of only a few grains at a time – not smoothly or continuously, but in fits and starts. That is evolution, and it will continue for ever in its meaningless flow. Where you are at any given time – whether you take as your point of view our time or the time of the dinosaurs – is never any kind of conclusion, and can carry with it no more ethical implication than a snapshot of Trafalgar Square taken by a speed camera at a random time on a random day. The numbers of buses, tourists and pigeons (in ascending order of annoyingness) in the background of that random snapshot tell you nothing about how Trafalgar Square is “meant” to be. In that sense talking about the dinosaurs’ “full mass” or mammalian “full mass” is meaningless. All the species currently alive on Earth just evolved and evolved and evolved and evolved and one day will go extinct or speciate into new species or converge with other species and disappear from the future fossil record. We have absolutely no way of telling whether that will be tomorrow or half a billion years into the future.

    Sorry if this is teaching anyone to suck eggs. I was compelled – the spirit of science proselytising was upon me and I just had to testify… :/

  149. Badger3k says

    Have to go to work soon, so not a lot of time, but the whole “the person I am now is different than the person I will be in the future” argument should be taken to its logical conclusion. There is no “you”, so no “you” could be copied. It’s a meaningless term when used in that sense. The argument fails logically. That said, Word docs do not have identity. When I send a copy of the file over the internet, or store it on my USB drive, it is just that – a copy. It is not the original collection of data stored on my computer, it is a copy, stored in different locations, etc. If we reproduced my laptop completely down to the molecular level, including aging, then perhaps we could say we have made a complete duplicate, but it is still a copy. 1+1 =/= 1.

    Frederick Pohl’s “Saga of Cuckoo” stories illustrate this concept of copying – the original person steps into a machine, he is copied, and that information is sent light-years away, where he is reproduced. He still feels he is the original, even though he knows differently. Same way with The Lords of the Diamond series (Chalker, IIRC) – the protagonist is copied and put into other bodies, still feels the same, but the original is still present. The fact that they are the same in some ways does not mean they are the same.

    A copy of a painting that passes for the original is called a forgery, not the original painting. Do paintings have “souls”? No – just a recognition of what “original” means. If Ray thinks he will live forever by uploading his consciousness into a computer, he is sadly mistaken – he will die while a copy will live on. Imagine his surprise when he finds out that he is stuck in his original body while another “he” lives on. It all depends on your perspective (which is really the point of the sci-fi references – whether you are you depends a lot on your situation in such instances).

    Hope that made some kind of sense – have to go.

  150. Nick says

    Gah. All this talk of strong/weak AI and no-one has mentioned Roger Penrose and The Emperor’s New Mind! Essential reading – if only to expand your understanding of quantum mechanics, if you’re not already a physicist, and expose any strong-AI believer to a coherent, well-structured weak-AI viewpoint.

    Which is correct, of course, is an open question. I certainly wouldn’t like to say either way (though as I am studying AI, I do have a good idea of what is currently possible, and from where I’m standing, the singularity is a *long* way off.)

  151. says

    1980s- “the idea that everyone will be using personal computers is a nerd sci-fi fantasy- only the nerds will ever spend their days clicking away at a computer”

    1990s- “the idea that everyone will be using mobile telecommunications and electronic messages is a nerd fantasy- only those geeky cyberpunk kids will ever be carrying around mobile computers or communicating in cyberspace like their geeky sci-fi books”

    2000s- “the idea that everyone will put nanotechnology in their bodies or use direct computer-brain interfaces so they can visit or upload into virtual worlds and augment/change themselves is a nerd fantasy- maybe some of those Transhumanist freaks will do it- but not the rest of us!”

    this sci-fi cyberpunk nerd will be more than thrilled to ONCE AGAIN be there to say “I told you so” when as before- the masses adopt the new information technologies which will ultimately liberate their consciousness from all physical limitations [except for those they desire]

    oh- and on copies:

    the person you believe you are NOW did not exist in ANY WAY 2 months ago- every cell in your brain and every atom in those cells was ejected into the blood stream and shat out of your guts over the last 8 weeks- while the fatty proteins and salts in the food you ate made their way to your brain and were very loosely/inexactly built into a neuron with dendrites and synaptic connections that sort-of mostly traced the circuit pathways of the cell it replaced-

    everything we are- our souls- our memories- our selves- are sloppy copies of previous brain states that are now dead and gone- the being that you were 10 years ago is DEAD- overwritten by YOU NOW- you have some of his memories but only a few- and only very generally copied with an increasing number of errors and deviations which makes all your memories ultimately fiction loosely based on the events that happened to that person-

    SO- if you claim that a copy of you is not you- whether identical or abstracted- then you are not you now either!

    it is unavoidable- either you are:

    1- a self-aware pattern in matter that is freely copyable and abstracted as any other information

    2- you are a few week old infant personality that has been grafted with a poor copy of a person whose body and memories you have murdered and parasitically taken over from within

    the cold hard fact of our brain’s 2 month process of replacing itself entirely atom for atom/ cell for cell TOTALLY DESTROYS any argument about a copy not being the original person- in order to hold the view that your body/brain are special- you must admit that you are a copy now and will die the next time you defecate

  152. Eric says

    @109 Knockgoats:

    From simple tricks like using tokens to count with, to writing, the abacus, printing, slide rules and digital computers, that’s what we’ve been doing.

    How many of those tricks and tools actually increased our ability to come up with new tricks or tools? The answer is not very many. Agriculture, writing, the industrial revolution, computers… the list is pretty short of inventions that aided or allowed more invention. Sure, an abacus or a slide rule helps us think, and may cascade somewhat, but it’s not going to cause an intellectual reaction of critical mass. Being able to modify your own source code (neurons, if you happen to be human) very easily could.

    I could be wrong here, but the sentence I quoted strongly suggests to me that Yudkowsky has never even thought about the issue from this perspective.

    He’s thought about it quite a bit (small sampling – much more available on that site or his homepage).

    @143 Anton Mates:

    But evolution is far more intelligent than humans when it comes to finding robust designs. Human machines tend to be spectacularly good at accomplishing one goal in one environment, and lousy at coping in any other situation. Naturally-evolved machines come from lineages that have survived through, literally, every environment under the sun, and they have the versatility to show for it.

    As for evolution being more intelligent than humans, you really think you couldn’t come up with a better design for any organism if you had four billion years to do it? Evolution is blind and dumb. And organisms are not as hardy as you make them out to be – put a polar bear in a rainforest, and it’ll die. Put a frog in Antarctica and it will die. Put almost any organism in space, or in a different food source, or a different atmosphere, and it will die. Yet humans have been able to design a machine that allows these fragile organisms to go to the moon and back, and it didn’t take anywhere close to 4 billion years.

    And I’m confused why you think that, even though you were able to predict many of the failure modes of a human-designed nanomachine in a few seconds of a blog comment, you don’t think that the designers of it would think of fixing those things.

    @149 Peter:

    All our bodies? even nanotech has to have an accessway, has to be designed to utilise some ‘hand holds’, but down at that level we have variation.

    What’s the percent variation in genetic code between humans? Also, we’re not talking about biological weapons that attack your cells. We’re talking about nanomachines that process matter.

    Alan Kellogg:

    No matter how faithful the reproduction, how comprehensive the duplication, the difficulty of distinguishing the duplicate from the source, the copy can never be the original.

    Actually, Quantum Mechanics says exactly the opposite. If the copy is identical, then it’s literally impossible to distinguish an “original” from a “copy” at all. It screws up the math if you try. There are no XML tags on quarks that say “original” or “copy”.

  153. dead santa says

    Interesting discussion about that old mind/body conundrum.

    I have to say that I am on the side of the meat puppets. It seems to me that our “selves” are simply illusions generated by our physical minds as side-effects of our sensory processing. Since there is really nothing that exists to copy, true duplication becomes impossible.

    Destroy the wrong part of your brain, and you no longer can create new memories. Suffer a closed-head injury, and you become a reduced version of your former self. The brain is not a computer, it’s just another organ in a messy organic body. You’re going to die. You will not pass Go.

    Eh, what do I know?

  154. J.D. says

    For the dinosaurs this was only possible during times when the earth was way hotter than it is now, because they’re cold-blooded reptiles and so wouldn’t have been able to sustain their metabolism on those scales when it got colder. – TheLady

    Sorry, going to have to call you out on this one. There is mounting evidence to suggest that many species of dinosaurs, especially the theropods from which birds are most likely descended, were in fact homeotherms (warm-blooded). Birds of course are themselves homeotherms as well. And in fact, very large animals like the big sauropods, even if not homeothermic, do not need a hot environment in which to survive, as they can exhibit the phenomenon of gigantothermy, AKA inertial homeothermy. Due to their very large mass and volume-to-surface-area ratio they are able to hold heat internally without needing to generate it. There are modern examples of this (great white sharks, large sea turtles, etc.).

    Sorry if this is teaching anyone to suck eggs. I was compelled – the spirit of science proselytising was upon me and I just had to testify… :/

    Please make sure you have the science right if such a spirit consumes you to testify before the universe again.

  155. swight says

    @1:

    Kobra, the word Singularity is used as an analogy, not a literal truth, for advancing technology. It is the point where we can no longer predict future change, because an intelligence that is not human and is vastly superior to humans is calling the shots. Just as the laws of physics (as we understand them now) break down at a real singularity, the rules humans set for future change break down once we are not the intelligence calling the future shots.

    I’m not sure if Kurzweil’s Law of Accelerating Returns is correct. But I do think we will achieve AI and a super-intelligence along with it. AI seems to be coming together quite nicely even right now; it’s only a matter of time.
    Predicting the future beyond that does seem to be less certain.

  156. says

    @177 Badger3k:
    If Ray thinks he will live forever by uploading his consciousness into a computer, he is sadly mistaken – he will die while a copy will live on. Imagine his surprise when he finds out that he is stuck in his original body while another “he” lives on.

    I doubt very much that Ray would be surprised in that situation. Actually, it is inaccurate to say that “Ray” would feel anything in particular, since there is no longer a single “Ray” to feel it. One version of Ray would face the knowledge that he is inescapably dying. Another would find itself in a computer system with a reasonable expectation of living indefinitely long. The knowledge of the second Ray’s existence and survival would probably comfort the first Ray, since the first Ray’s mental state (still facing death) would more closely approximate the mental state of the past, single Ray who decided to copy his mind (assuming it could be done nondestructively) with full knowledge that the dying Ray would still exist.

    The problem is, we simply do not have adequate language to talk about a forked personality. Once there are multiple instances (whether or not you wish to designate one as “the original”), it is inaccurate to speak of any of them as though they are the only one, or to refer to them collectively the same way you would refer to a modern, single consciousness.

    I wonder how much of the current confusion is rooted in muddy and inadequate language.

    On a side note, has anyone read the Warstrider series? One of the main characters, by being in virtual contact with a vast alien intellect at the time his body is destroyed, finds his mind able to persist in the computer simulation he was occupying. The first person to live this way, he learns to make copies of himself and dispatch them on dangerous tasks. The copy, of course, is surprised to find itself in the expendable probe upon being created, but upon realizing it’s just a copy, it accepts its duty. I suppose it would depend on the individual’s psychology, if they would be comfortable with their own impending nonexistence as long as they knew a copy forked from the same original would survive.

  157. Brian X says

    Once there are multiple instances (whether or not you wish to designate one as “the original”), it is inaccurate to speak of any of them as though they are the only one, or to refer to them collectively the same way you would refer to a modern, single consciousness.

    I disagree that it’s complex. If you fork a single consciousness you have two distinct individuals — there is an original, but the copies are separate people who happen to share a hitherto-common consciousness. That no more makes them the same person than identical twins are the same person because they share the same DNA.
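    (The “fork” here is literal computing vocabulary. A toy sketch in Python using POSIX os.fork() – so Unix-like systems only, and purely illustrative – where one process becomes two that share everything up to the split and then diverge:)

        import os

        history = "everything up to the fork"
        pid = os.fork()                  # one process becomes two copies of this state
        if pid == 0:
            history += " + the child's new experiences"
        else:
            history += " + the parent's new experiences"
        # each process prints its own pid and its own diverging state; nothing in
        # the copied memory itself marks either one as the privileged "original"
        print(os.getpid(), history)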

  158. Morgan says

    Brian X @185:

    Yes, they’re two different people, but in a way they’re both the same person as the pre-fork entity. Fork A couldn’t claim damages if someone injures Fork B, because they’re different people. But what of friends, family etc. of Pre-fork Z? Do parents suddenly have a twofold child, or is one fork a new orphan who remembers a past whose inhabitants don’t recognize him?

  159. says

    I’m not saying they’re the same mind, but our current ways of speaking and thinking about identity are inadequate for their situation. They can both lay claim to the identity of the original. If the original dies, shouldn’t the copy have all the same rights he did? Access to the bank account? Property rights? Unless he is formally and legally declared to be a new person (which has its own pitfalls), why wouldn’t we just accept that they are “Ray”? If they are agreed to be “Ray” after that death, why weren’t they “Ray” before it?

    This situation just doesn’t come up in our everyday lives; we’re not used to thinking or speaking about it, and we don’t have old and well established linguistic and legal structures to handle it.

  160. Peter Ashby says

    Also, we’re not talking about biological weapons that attack your cells. We’re talking about nanomachines that process matter.

    Oh, right, and biological systems don’t fight against that sort of entropy all the fucking time? I just love it when people demonstrate their ignorance of basic biology. There are acids and alkalis and hydroxyl radicals and superoxide in your body right now, all being dealt with and used and produced by your cells. Care to detail exactly how this ‘matter processing’ will work? Detergents? Nature got there first. Heat and cold are basically the only weapons, and where are these nanotechs going to get enough energy to do enough damage? Perhaps you need to buff up on your thermodynamics while you revise your biophysics and biochemistry.

  161. Anton Mates says

    Alan,

    No matter how faithful the reproduction, how comprehensive the duplication, the difficulty of distinguishing the duplicate from the source, the copy can never be the original. Being able to identify the replication has nothing to do with it, it is and shall always remain a copy.

    You recognize that the properties of “original” and “exact copy” are undetectable and empirically meaningless…yet you swear up and down that one object must be the original and the other must be the copy.

    Just as two wafers can weigh, look, taste, smell the same, yet one of them’s just a baked good and the other is part of Christ. Hm.

    Your reasoning is creationist, your arguments too. It’s magical thinking, and lame magical thinking at that.

    Personally, I think it’s a rather good example of “magical thinking” to assert the existence of significant yet undetectable properties–properties that don’t depend on an object’s physical structure or spacetime location in any way.

  162. Anton Mates says

    Badger3k,

    Have to go to work soon, so not a lot of time, but the whole “the person I am now is different than the person I will be in the future” argument should be taken to its logical conclusion. There is no “you”, so no “you” could be copied.

    That’s not really the logical conclusion, because we do nonetheless consider now-you and future-you to both be “you.” The logical conclusion is that there can be a definable “you;” it’s just a large and vague category that embraces lots of possible people.

    That said, Word docs do not have identity. When I send a copy of the file over the internet, or store it on my USB drive, it is just that – a copy. It is not the original collection of data stored on my computer, it is a copy, stored in different locations, etc.

    What about when you drag the file into a different folder? Or open it, add an extra sentence to it, then delete that sentence and save? (I believe that would have the effect of copying it back and forth between the hard drive and the RAM.) At what point do the differences in the location and configuration of the data make it a different file?

    1+1 =/= 1.

    Sure, but which of the two “1”s on the left is the same as the “1” on the right in that equation? Make an exact copy and destroy one of the pair: 1+1-1=1. Did the original “1” survive or not?

    Same way with The Lords of the Diamond series (Chalker, IIRC) – the protagonist is copied and put into other bodies, still feels the same, but the original is still present. The fact that they are the same in some ways does not mean they are the same.

    What about when they’re the same in every way we can possibly determine? What’s left to be different?

    A copy of a painting that passes for the original is called a forgery, not the original painting. Do paintings have “souls”? No – just a recognition of what “original” means.

    This is because no forgery of a painting has ever actually been indistinguishable from the original; forgeries just aren’t that detailed. As NelC points out, if you buy a piece of digital art, it isn’t considered a forgery; people usually aren’t jonesing to buy the original hard drive on which the file was first located.
    (It’s also because people do tend to assign “souls” to great works of art, along with certain “rights.” In some ways, emotionally and conceptually, we treat artworks like people; many of us would wince if somebody verbally and physically disrespected the Mona Lisa, even if no actual damage was done to the painting.)

  163. says

    Kurzweil is an uncritical diet supplement enthusiast, to the degree that he takes hundreds of pills and a couple of enemas daily. Poor fellow.

  164. Anton Mates says

    Eric,

    As for evolution being more intelligent than humans, you really think you couldn’t come up with a better design for any organism if you had four billion years to do it?

    a) We don’t have four billion years to do it, unless you’re arguing that grey goo may become a problem in the distant future, after four billion years of posthuman R&D…which I can’t really argue with.

    And b), yes, I do think that. We use genetic algorithms right now precisely because they can come up with better designs, in equal time, than direct human design. In fact, I’d bet that any synthetic replicators we do create will be largely designed by evolutionary methods.
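    (For anyone who hasn’t seen one, here’s how little machinery a genetic algorithm needs – a toy sketch in Python with made-up parameters, evolving random bit-strings toward an arbitrary target by selection, crossover and mutation:)

        import random

        TARGET = [1] * 20                     # arbitrary goal: the all-ones bit-string
        POP, GENS, MUT = 30, 100, 0.05        # population size, generations, mutation rate

        def fitness(ind):
            # count the bits that match the target
            return sum(a == b for a, b in zip(ind, TARGET))

        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
        for gen in range(GENS):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == len(TARGET):
                break                         # a perfect individual has evolved
            parents = pop[:POP // 2]          # truncation selection: keep the fitter half
            children = []
            while len(parents) + len(children) < POP:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TARGET))    # one-point crossover
                child = [bit ^ (random.random() < MUT)    # flip each bit with
                         for bit in a[:cut] + b[cut:]]    # probability MUT
                children.append(child)
            pop = parents + children
        print(gen, fitness(pop[0]))           # generations used, best fitness found

    Selection plus blind variation is the whole trick; nothing in the loop understands the target.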

    And organisms are not as hardy as you make them out to be – put a polar bear in a rainforest, and it’ll die. Put a frog in Antarctica and it will die.

    Multicellular eukaryotes (WATER BEARS!!! excepted) are not optimized for versatility. Pseudomonas bacteria can survive just fine in both Antarctic soil and rainforests, to my knowledge; hell, they survive in soap residue and in stratospheric clouds.

    Put almost any organism in space, or in a different food source, or a different atmosphere, and it will die. Yet humans have been able to design a machine that allows these fragile organisms to go to the moon and back, and it didn’t take anywhere close to 4 billion years.

    A machine which depends for its raw material and energy on a massive amount of previous labor performed by those fragile organisms in a much friendlier environment.

    And I’m confused why you think that, even though you were able to predict many of the failure modes of a human-designed nanomachine in a few seconds of a blog comment, you don’t think that the designers of it would think of fixing those things.

    Fixing a problem is much harder than predicting it, and there are plenty of failure modes I didn’t and couldn’t predict.

  165. Knockgoats says

    AI seems to be coming together quite nicely even right now; it’s only a matter of time. – swight

    Probably so, but how much time? Pioneers of the field were predicting in the 1950s that computers would be playing world-champion-standard chess in 10 years (it took 50, and was achieved almost entirely by use of special-purpose hardware); that accurate machine translation would be available in 30 years (nowhere near even now); and human-level intelligence in 50 (even more distant). On the other hand, one of my D.Phil. supervisors, Aaron Sloman, suggested in the 1970s that the latter might take 500 years. Really, we’ve no idea – but we can be pretty certain that just building a computer with the raw computing power to simulate a human brain at the neuronal level won’t do it, as Kurzweil believes it will. And, as I’ve argued above, if and when a “superhuman” computer is built, it will not cause a singularity – in any sense.

  166. Knockgoats says

    The point of the singularity is that an AI would be able to rewrite its own source code, thus exponentially increasing its cognitive abilities.

    What is your argument for the claim that being able to write its own source code would enable it to exponentially increase its cognitive abilities?

  167. Knockgoats says

    All this talk of strong/weak AI and no-one has mentioned Roger Penrose and The Emperor’s New Mind! Essential reading – if only to expand your understanding of quantum mechanics, if you’re not already a physicist, and expose any strong-AI believer to a coherent, well-structured weak-AI viewpoint. – Nick

    Sorry, no. Penrose is a brilliant mathematical physicist, but he knows fuck-all about either AI or philosophy of mind, and was too arrogant to try and learn. His “quantum gravity and microtubules” theory of consciousness is just loopy. Basically, he just can’t stand the thought that a machine could be cleverer than him, just as Ray Kurzweil can’t stand the thought of a world without Ray Kurzweil.

  168. Knockgoats says

    1980s- “the idea that everyone will be using personal computers is a nerd sci-fi fantasy- only the nerds will ever spend their days clicking away at a computer” – some stupid nym I’m not going to try and reproduce

    You evidently weren’t around in the 1980s, or else your memory’s not up to much. PCs were already spreading rapidly, mostly being used to play games, and anyone with half a brain could see that every kid would have one before long.

  169. CJO says

    Pioneers of the field were predicting in the 1950s that computers would be playing world-champion standard chess in 10 years (it took 50, and was achieved almost entirely by use of special-purpose hardware)

    Deep Blue, yes. But programs like Fritz and Rybka will run on standard-issue hardware now, and they’re rated in the top 10 in the world.

  170. Knockgoats says

    @109 Knockgoats:

    “From simple tricks like using tokens to count with, to writing, the abacus, printing, slide rules and digital computers, that’s what we’ve been doing.”

    How many of those tricks and tools actually increased our ability to come up with new tricks or tools? The answer is not very many. Agriculture, writing, the industrial revolution, computers… the list is pretty short of inventions that aided or allowed more invention. Sure, an abacus or a slide rule helps us think, and may cascade somewhat, but it’s not going to cause an intellectual reaction of critical mass. Being able to modify your own source code (neurons, if you happen to be human) very easily could. – Eric

    Neither agriculture, writing, the industrial revolution nor computers is “an invention”: all involved multiple inventions, developed in complex historical processes. The inventions that (directly) aided or allowed more invention are primarily those in information technology (broadly interpreted to include everything from cave-painting to computers – anything that improves the capability to store, transmit or manipulate information); and those in information-related institutional systems – language, pedagogy, schools, libraries, formal debate and assemblies, museums, universities, journals, learned societies, blogs…

    Just to list a few of the advances in information technology that come to mind: cave painting, tallies, writing, numerals, measuring sticks and ropes, papyrus, syllabaries, standardised weights and measures, alphabets, geometric diagrams, clear glass vessels, paper, books, “Arabic” (really Hindu) numerals (place notation and zero), compass, clocks, maps, perspective drawing, spectacles, telescope, pencils, negative numbers, imaginary numbers, notation for trigonometry, and for differential and integral calculus, microscope, barometer, tables of logarithms, Napier’s bones, mechanical calculators, visual telegraph and semaphore systems, electric telegraph (it’s arguable that the real revolutionary age of information technology was the ’60s – the 1860s that is, when London became connected to New York and Delhi by telegraph), Hollerith cards, statistical formalisms, electromechanical calculators, scientific instruments of all kinds, telephone, radio, television, programmable computers, assemblers, compilers, operating systems, transistors, integrated circuits…
    Every one of these, and many, many more, increased individual and/or collective human cognitive capabilities, and so made subsequent invention easier. Some of them – such as written language – are known to affect brain development quite profoundly.

    I’ll follow your links to Yudkowsky and get back to you on that – I’m quite prepared to find he has thought about these things, but the first link to his stuff suggested otherwise. My central point remains that human capacity is not limited by the capabilities of the individual human brain – as he explicitly assumes it is – and hasn’t been, far back into prehistory.

  171. Knockgoats says

    To “improve your own capabilities” is an instrumental goal, and if a smarter intelligence than my own is focused on that goal, I should expect to be surprised. The mind may find ways to produce larger jumps in capability than I can visualize myself. Where higher creativity than mine is at work and looking for shorter shortcuts, the discontinuities that I imagine may be dwarfed by the discontinuities that it can imagine.

    And remember how little progress it takes – just a hundred years of human time, with everyone still human – to turn things that would once have been “unimaginable” into heated debates about feasibility. So if you build a mind smarter than you…

    – Yudkowsky, from http://www.overcomingbias.com/2008/11/recursion-magic.html, linked to by Eric@180.

    Still looks to me like he doesn’t get the simple fact that human problem-solving capability is not limited to that of the individual brain or mind. I’ll take a more leisurely look at his homepage when I’ve time.

  172. Escuerd says

    I think we should mention that QM (or rather the no-cloning theorem) also prohibits making a copy of a quantum state without destroying the original. But the interchangeability of particles still answers people who think that there is a fundamental difference between the “copy” and “original”.
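    (For anyone who wants the details: the standard no-cloning proof is a short linearity argument. A sketch, in the usual notation:)

        Suppose a unitary $U$ could clone arbitrary states:
        \[ U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle \quad \text{for all } |\psi\rangle. \]
        Take $|\psi\rangle = (|a\rangle + |b\rangle)/\sqrt{2}$ with $\langle a|b\rangle = 0$. Linearity gives
        \[ U(|\psi\rangle \otimes |0\rangle) = (|a\rangle|a\rangle + |b\rangle|b\rangle)/\sqrt{2}, \]
        while cloning would require
        \[ |\psi\rangle \otimes |\psi\rangle = (|a\rangle|a\rangle + |a\rangle|b\rangle + |b\rangle|a\rangle + |b\rangle|b\rangle)/2. \]
        These differ, so no such $U$ exists.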

    I think that the QM stuff is moot, though. I don’t believe that we would have to copy someone’s entire quantum state to effectively copy their personality.

    Alan Kellogg is really hitting below the belt with the bizarre “ID” comment. I find that a lot of people share his “essentialist” view, but that such people usually haven’t put much thought into what it means to be “the same person”.

    Lurkbot @29: It almost sounds like you’re hitting the other side below the belt. Or is Searle really associated with the Discovery Institute? I’ve never been particularly impressed with the guy’s arguments (e.g. the Chinese room), but it’d be surprising if he were working for a group like the Discovery Institute.

  173. jo5ef says

    Right on PZ. Kurzweil may be a good inventor but his singularity stuff is as wacky as most religions IMO

  174. Anton Mates says

    Escuerd,

    Or is Searle really associated with the Discovery Institute? I’ve never been particularly impressed with the guy’s arguments (e.g. the Chinese room), but it’d be surprising if he were working for a group like the Discovery Institute.

    It certainly would, since Searle is a nontheist who accepts evolution and believes that “consciousness and other mental phenomena are biological phenomena; they are created by biological processes and are specific to certain sorts of biological organisms.”

    Searle may share an opposition to Strong AI with Dembski, Gilder et al., but he employs very different arguments and doesn’t see eye-to-eye with the DI on much of anything.

  175. says

    I am completely baffled by Kurzweil’s popularity, and in particular the respect he gets in some circles . . .

    The criticisms that have been given of Kurzweil’s “singularity” concept are worthwhile. But, just to respond to this point here, and in a spirit of fairness, let me say that Kurzweil is not unworthy of respect even if he’s badly wrong.

    Whatever the problems may be with “Singularity” (and, as some have commented here, the idea is not unthinkable – he just seems to have done a sadly weak and incautious job of defending it), Kurzweil looms large as a technology visionary. He has done outstanding work in several distinct fields of technology development (creating the first workable text-to-speech synthesizer for the blind, while basically inventing OCR as a preliminary step in the course of that project; he also created innovative music synthesizers and a bunch of other stuff), and most of his work centers around the general idea of the human/machine interface. In his later years he has drifted out of hard technology into more airy-fairy theorizing, but he paid his dues.

    As for his more outré musings about singularities and transhumanism, it’s an open question whether he’s a total whackaloon, a misguided enthusiast, or a technology prophet too far ahead of his time. Is he Kary Mullis, weirdo nutcase with a notable past; Linus Pauling, certified genius with an unfortunate late-life idée fixe; or Buckminster Fuller, true visionary who sometimes let the nuts-and-bolts stuff slide? Take your pick – but I think a case can be made that he’s at least worth taking seriously, even if he’s wrong.

    (Final note: I once attended a lecture by Pauling, before which the host introduced him by recounting all his many achievements, and then alluded indirectly to the Vitamin C controversy by saying that “Linus Pauling’s hunches are better than most scientists’ data”. I thought that was a decidedly backhanded compliment, but there’s something in it, nonetheless. Perhaps Kurzweil is the same.)

  176. Escuerd says

    Anton Mates @202,

    Yeah, I actually just watched a video of him giving a lecture to a bunch of Google employees, and I remember that he used evolution (specifically a kind of adaptivism) to argue against free will being illusory. I didn’t think it was very compelling, but it seems unlikely that anyone who places much importance on natural selection would work for the DI.

  177. Badger3k says

    Hope this works:

    “Badger3k,

    Have to go to work soon, so not a lot of time, but the whole “the person I am now is different than the person I will be in the future” argument should be taken to its logical conclusion. There is no “you”, so no “you” could be copied.

    That’s not really the logical conclusion, because we do nonetheless consider now-you and future-you to both be “you.” The logical conclusion is that there can be a definable “you;” it’s just a large and vague category that embraces lots of possible people.”

    Actually, if you understand the concept of anatta, and take the chain of thought of “we are not exactly the same as we were a minute ago, in the sense of atoms moving, molecules being created or destroyed, even neurons changing as you read and understand this” to its extreme, there is no real “you”, so the question is meaningless. We can consider that “you” or “I” exist in certain legal and practical senses, but not in any real, unchanging state. If you (or whoever) want to argue about the changes, then the very idea of a “you” that is copied or duplicated makes no sense. How can nothing – a non-existent thing – be duplicated? For the argument, it’s better not to make that one. Just my opinion on that.

    ” That said, Word docs do not have identity. When I send a copy of the file over the internet, or store it on my USB drive, it is just that – a copy. It is not the original collection of data stored on my computer, it is a copy, stored in different locations, etc.

    What about when you drag the file into a different folder? Or open it, add an extra sentence to it, then delete that sentence and save? (I believe that would have the effect of copying it back and forth between the hard drive and the RAM.) At what point do the differences in the location and configuration of the data make it a different file?”

    It’s no longer the same file. It’s changed. It’s different as soon as something changes it, even if we don’t see the change. It all depends on the level at which you choose to stop looking. Ultimately, though, it is different. We choose to think of it as being the same because that is the way we see things – we do not consider small changes to be meaningful for identity.

    ” 1+1 =/= 1.

    Sure, but which of the two “1”s on the left is the same as the “1” on the right, in that equation? Make an exact copy and destroy one of the pair: 1+1-1=1. Did the original “1” survive or not?”

    Which of the two “1”s is the same? WTF? I guess I wasn’t clear – I meant, if you take one thing and make a copy, you have two things, not one. You have the original and a copy, not two originals. If we were not keeping track of which was destroyed, there would be a 50% chance of destroying the original and letting the copy survive.

    ” Same way with The Lords of the Diamond series (Chalker, IIRC) – the protagonist is copied and put into other bodies, still feels the same, but the original is still present. The fact that they are the same in some ways does not mean they are the same.

    What about when they’re the same in every way we can possibly determine? What’s left to be different?”

    I assume that by copying perfectly you include all genetic material, including aging and the like. Unless you can produce a computer made of neurons, hormones, etc etc, it’s not the same as the original. I’m sorry, unless you can tell me how a computer can be affected by adrenaline, for example, you can only simulate the general effects. Biological interactions are part of who we are – I seriously doubt that computer simulations can approach reality. However, I suspect that you are not going to that level, and stick to a superficial duplication. I have a bad knee, and the way I walk and react due to that is who I am. Can a computer simulate that? Basically, to be a perfect duplicate you need to duplicate me, completely, at a cellular (and really, molecular) level.

    ” A copy of a painting that passes for the original is called a forgery, not the original painting. Do paintings have “souls”? No, just a recognition of what “original” means.

    This is because no forgery of a painting has ever actually been indistinguishable from the original; forgeries just aren’t that detailed. As NelC points out, if you buy a piece of digital art, it isn’t considered a forgery; people usually aren’t jonesing to buy the original hard drive on which the file was first located.
    (It’s also because people do tend to assign “souls” to great works of art, along with certain “rights.” In some ways, emotionally and conceptually, we treat artworks like people; many of us would wince if somebody verbally and physically disrespected the Mona Lisa, even if no actual damage was done to the painting.)”

    Digital art isn’t as concrete a work as other art. It has no real “permanence” in the same sense as The Thinker or the Mona Lisa. It is a far more transitory and changeable medium. It’s apples and oranges. Besides, forgeries are designed with the intention of fraudulently passing as the original. About the worst you can do with digital art is copy it and attribute it to someone other than the creator, something that most people do not like and consider criminal (in behavior if not in law).

    Our technology and the digital age have made us a people of copies. Our language still reflects this (“I’ve got my copy of the book”, or “I’ll send a copy through email”). While we do shorten it to “I’ll send the file”, I don’t know anyone who thinks they are sending the original – you still retain possession of your copy. Mass printing (and production) has changed the way we think about a lot of things, but there is still great interest in original works. I have a signed page from an old Badger comic, the original used to create the book (I forget the term, like a storyboard-type thing). It is more valuable for being unique than the comic (even though it may be signed as well, I forget). People still want one-of-a-kind things, and place more value on them than on digital media. It’s our culture. Copies are not as valuable as the original, whatever it is. Items of transience have less value than more solid things – a first-printing manuscript of some best seller is worth more than the hard drive it was stored on, precisely for that reason. The more there are of something, the less worth we tend to give it.

    As for me, call art whatever you want – I am only concerned for the historical value. Call Mona a Beyotch…so what. Write on “her” with a magic marker…defacing history.

  178. Jason says

    It’s this kind of ridiculous cornucopian thinking that is causing the world to be blind to (and not prepare for) global peak oil (which came and passed back in July ’08).

  179. jo5ef says

    OK, I’ve now read the whole thread (phew). There are two different arguments going on here; I want to comment on each one:
    The singularity:
    Evolution is probably not speeding up; however, I have argued elsewhere that the rate/volume of information exchange/flow is – first by the development of new evolutionary “tricks” like sexual reproduction, and more recently by the appearance of new ways of exchanging information: spoken language, printing, radio, telephony, computers, the internet. Personally I doubt this is teleological; it’s more a consequence of the laws governing information itself, which we have yet to fully understand. I don’t think anyone has any way to currently predict the consequences, but I doubt that 2045 or thereabouts will be as momentous as RK is hoping.

    The second argument, about whether we could be copied, is interesting. I’ve got a thought experiment: suppose in the future scientists created an exact alternate you and told you that it had all your memories, feelings, etc. (like the alternate RK in the scenarios outlined above), but was not susceptible to disease, etc., and could live forever. There was one catch: only one of you could live in the world, so you had to make the choice – to allow them to kill either it or you. How many people would accept death willingly?

  180. Escuerd says

    Badger3k @ 205:

    Actually, if you understand the concept of anatta, and take the chain of thought of “we are not exactly the same as we were a minute ago, in the sense of atoms moving, molecules being created or destroyed, even neurons changing as you read and understand this” to its extreme, there is no real “you”, so the question is meaningless. We can consider that “you” or “I” exist in certain legal and practical senses, but not in any real, unchanging state. If you (or whoever) want to argue about the changes, then the very idea of a “you” that is copied or duplicated makes no sense. How can nothing – a non-existent thing – be duplicated? For the sake of the argument, it’s better not to make that one. Just my opinion on that.

    Yeah, the self isn’t actually a physical thing, but it’s a convenient concept, just like the concept of a computer file. If you could make a perfect copy of someone, then which is the “original” and which is the “copy” would not have any relevance to that concept.

    They’d share the same past and different presents/futures. The copy’s sense of continuity with his/her past self would be no more an illusion than the original’s.

    I assume that by copying perfectly you include all genetic material, including aging and the like. Unless you can produce a computer made of neurons, hormones, etc etc, it’s not the same as the original.

    Why do we need to be talking about a computer? Why not a flesh-and-blood copy? (Though I don’t see any reason some other kind of computer couldn’t simulate any of the relevant physical states/processes, given enough time and storage space.)

    I’m sorry, unless you can tell me how a computer can be affected by adrenaline, for example, you can only simulate the general effects. Biological interactions are part of who we are – I seriously doubt that computer simulations can approach reality.

    Even in principle given unlimited time and storage space? This seems like more of a technical limitation than one that exists in principle.

    However, I suspect that you are not going to that level, and stick to a superficial duplication.

    I can’t speak for anyone else, but I am going down to that level. Actually, I will go out on a limb and say that when other people talk about perfect copies, they are probably going down to that level too.

    The whole point is to imagine duplicating someone down to the greatest detail that’s possible in principle (i.e. not ruled out by the no-cloning theorem).

  181. Escuerd says

    jo5ef:

    I’ve got a thought experiment: suppose in the future scientists created an exact alternate you and told you that it had all your memories, feelings, etc. (like the alternate RK in the scenarios outlined above), but was not susceptible to disease, etc., and could live forever. There was one catch: only one of you could live in the world, so you had to make the choice – to allow them to kill either it or you. How many people would accept death willingly?

    This is one that could go either way for me. If you asked me before whether the original or the improved copy should be allowed to live, I’d probably go with the copy.

    If you asked the mortal original after the copying which should be allowed to live, though, it’s a little harder to decide. Our desire to live is just arational instinct at some level, and following that would lead me to say that, once we’ve diverged, whichever version of me you asked would want to preserve itself at the expense of the other.

    On the other hand, I’m less bothered by the idea of death than I used to be, and if I knew that something that was essentially me would live as long as it wanted to, I’d be even less bothered. Some character in a Greg Egan book went through such a situation and joked that he/she/whatever (I don’t remember) didn’t consider it as much like death as like amnesia.

  182. jo5ef says

    Thanks Escuerd, I reckon most of us would find it hard to let go. I must say the whole discussion re copying has been fascinating and has made me think a lot about the concept of the conscious self. It may only be an illusion, but if you let go of that, what else do you have?

  183. says

    I’ve always found the futurists to be really fascinating. They appear to desperately want to believe (based on past results) that there is an infinite potential up-space for technology and science. If you try to point out that, for most purposes, Newton got physics right, they start sounding like YECs “yeah but science has been wrong so many times before!” Faster-than-light travel seems to be the holy cracker; it’s practically assumed that eventually we’ll figure out a way of bypassing relativity and everything’ll be great just like on Star Trek.

    Might as well wait for jebus to come.

  184. Lurkbot says

    Escuerd @ 200 & Anton Mates @ 202:

    I know I’m really late to this discussion, but I tried to find a listing of Are We Spiritual Machines in some library catalog or Amazon or Barnes & Noble that gave chapter headings, but no luck. I know Searle isn’t a Discovery Institute “fellow” or anything like that, but he did agree to write a chapter in a book published by the DI. I guess his zeal to attack the concept of AI overcame any scruples about the venue.

    I can remember the first time I read that “Chinese Room” idiocy of his: it was in Doug Hofstadter’s column in Scientific American. I had to keep checking the cover to see if it was the April issue, on the theory it might be another April Fool hoax column like Martin Gardner’s in…1973(?) I still think it’s the most imbecilic argument I’ve ever seen written down in black and white, and 27 years more repetition hasn’t made it any more intelligent.

    As long as I’m here, there seem to be two arguments going on. If you could nondestructively form an exact copy of yourself, or as an easier alternative, “simply” create a simulation of your psychological states on a computer, so that the new “you” would have all your memories and an illusion of continuity, then there would be two of you. Assuming that the new you is immortal, that’s nice for him, but I never understood what good that was supposed to do for the original you: you’re still trapped in the same body and are going to die.

    The Star Trek transporter type scenario is a little different, though. An exact copy of you, created by a destructive scan of the original you, with all the psychological states you had at the moment of deconstruction – I believe that would be perceived as simply moving from one place to another. As long as there are not two instantiations of you at the same time, there would be no creation of two separate consciousnesses. We perceive ourselves as being the same person after coming to from unconsciousness; in fact, momentary blackouts are probably a lot more common than most people realize.

    All “we” really are is an imperfect set of memories, a heavily edited five- or ten-second movie clip of what just happened, and an illusion of continuity. All of these have to be reconstructed after an episode of unconsciousness, but they reliably are, most of the time. I think they could be maintained through a destructive scan and copying in some other place or on some other medium.

    TREKKIE ALERT: It’s really amusing, the convolutions the writers of Star Trek, at least TNG, had to put themselves through to explain why their technology didn’t work as advertised. The only difference I can see between a transporter, a replicator, and a holodeck is the source of the pattern: respectively, a destructive scan of an existing object, a stored pattern of a once-existing object, or a calculated pattern of an imaginary object. They had to think up reasons why “holodeck matter can’t leave the holodeck” or they would quite simply be gods.

    If you got killed on an away mission, they could simply reconstitute you from the pattern buffer. Well, one “you” would have been killed, but another “you” with all your memories up to the time of transport is better than nothing. They were always “editing” disease organisms or weapons out of incoming transportees; what’s to keep them from “editing” your body periodically so it’s always the same age? And if they could create anything they wanted on the holodeck, or create as much as they want of anything, how does that not make them gods? (The good kind, not the nasty YHWH kind.)

    That’s what happens when you don’t have the courage of your convictions. But then the stories would have been kind of boring. Come to think of it, in a perfect world like that, we’d all die of boredom!

  185. says

    When we can design & program bacteria on an engineering workstation to build anything within the realm of the physically possible it will be a leap greater than fire, metallurgy, agriculture, and written language all combined.

    “When” not “if”??

    There are constraints; always constraints. And what if those constraints turn out to be huge? What if the constraints on energy available to nanomachines mean that they need to “eat” or something like that? What if the constraints on nanomachines turn out to be very similar to the constraints on bacteria? Because those nanomachines will have to live in the same universe as bacteria. Sure, we might be able to “intelligently design” our own bacteria2.0 that are 100 times more efficient than real bacteria but, so what? That doesn’t mean we’re gonna transform Jupiter’s mass into a wad of solid computronium.

    Basically, the problem of nanomachines is close to the problem of creating life. I won’t go so far as to posit that “if bacteria-like things could survive on Jupiter, they’d already be there” but – something close to it.

  186. Anton Mates says

    Badger3k,

    Actually, if you understand the concept of anatta, and take the chain of thought of “we are not exactly the same as we were a minute ago, in the sense of atoms moving, molecules being created or destroyed, even neurons changing as you read and understand this” to its extreme, there is no real “you”, so the question is meaningless. We can consider that “you” or “I” exist in certain legal and practical senses, but not in any real, unchanging state.

    I understand what you mean here, but you are arbitrarily ruling out a different and very common conception of “you” and “I,” which is the conception I find more significant to this discussion. And I don’t agree that your definition of personal identity is any more “real” than the typical and sloppier one; “you” and “I” are just words, and they mean whatever the people using them want them to mean.

    Most people consider me to remain myself under some changes but not others. If I am moved two feet to the left, or have my arm amputated, or go to sleep and wake up again, or take a course on Basque cooking or just go about living my life for five years, “I” still exist. On the other hand, if I am shot through the head, decapitated, dismembered, burned and have my ashes buried and the land above plowed and strewn with salt, “I” no longer exist.

    What I and others are arguing is that the changes involved in “uploading” me to another physical form–whether a perfect quantum duplicate or a cloned organic body or a computer or whatever–can be the sort of changes which preserve identity in the same sloppy sense as the nonlethal changes above. That they fail to preserve identity in the infinitesimally-detailed sense you mention is true, but not really relevant to fears about whether uploading = death.

    It’s no longer the same file. It’s changed. It’s different as soon as something changes it, even if we don’t see the change. It all depends on what level you want to stop. Ultimately, though, it is different. We choose to think of it as being the same because that is the way we see things – we do not consider small changes to be meaningful for identity.

    Well, precisely. We choose to think of it as being the same under small changes. We understand that if we defined identity as dependent on every last associated electron, it would no longer be the same; but we don’t define the identity of the file that way.

    I guess I wasn’t clear – I meant, if you take one thing, make a copy, you have two things, not one. You have the original and a copy, not two originals.

    Right, you’ve said that, but you have yet to justify it. If they are currently indistinguishable, or were indistinguishable at any point in the past, how do you propose to demonstrate that one is the original? If you can’t demonstrate it, even in principle, what does it mean to claim that one is the original?

    I assume that by copying perfectly you include all genetic material, including aging and the like.

    In this scenario, yes. I do think that even an imperfect copy can preserve identity, but for those who don’t, I’m asking whether a perfect one can do so, and if not, why not.

    Biological interactions are part of who we are – I seriously doubt that computer simulations can approach reality.

    Well, this is a thought experiment. I don’t see any theoretical problem with a sufficiently kickass computer modeling an entire human down to the subatomic level. It doesn’t even seem terribly implausible in reality, given (say) a few hundred more millennia of technological development.

    I have a bad knee, and the way I walk and react due to that is who I am.

    Again, this simply doesn’t match our normal understanding of human identity. If you were scheduled for knee surgery, would you really be filled with despair, preparing for your own death and replacement by person-like-you-but-with-better-knees?

    (I’m not judging you if you would, but I will say I think you’d be rather unusual.)

    Digital art isn’t as concrete a work as other art. It has no real “permanence” in the same sense as The Thinker or the Mona Lisa. It is a far more transitory and changeable medium.

    I’m not really sure what this means. A digital file on a high-quality DVD, properly stored, could outlast the Mona Lisa.

    People still want one-of-a-kind things, and place more value on them than on digital media.

    I don’t disagree. But such things can be “one of a kind” precisely because it’s impossible to copy them exactly. Otherwise, “one of a kind” would be meaningless.

  187. says

    in a perfect world like that, we’d all die of boredom!

    Nonsense. You’d only die of boredom if you’re a boring person.

    Personally, I could keep myself busy indefinitely – indeed, I’m pretty sure that, the longer I lived, the more crazy stuff I’d want to do to keep myself busy.

    First, I’d read the internet. Including the pr0n. Then I’d make a copy of stonehenge out of meringue. And then…

  188. Anton Mates says

    jo5ef,

    suppose in the future scientists created an exact alternate you and told you that it had all your memories, feelings, etc. (like the alternate RK in the scenarios outlined above), but was not susceptible to disease, etc., and could live forever. There was one catch: only one of you could live in the world, so you had to make the choice – to allow them to kill either it or you. How many people would accept death willingly?

    I would. But then, if I had no friends or relatives who missed me, I wouldn’t mind dying even if there wasn’t a clone to replace me. (I’m not depressed or anything; death just doesn’t trouble me much, as long as the dying bit’s not painful or scary.)

  189. Anton Mates says

    Escuerd, on quantum copying:

    I think we should mention that QM (or rather the no-cloning theorem) also prohibits making a copy of a quantum state without destroying the original.

    True. However, AFAIK, nothing prevents us from creating (in Thought Experiment Land) an arbitrarily large number of low-res copies without actually measuring every detail of the original quantum state and thus destroying the original. Then it’s likely that at least one of them will replicate the unknown original quantum state exactly…even though we’ll never know which one.

    More importantly for the issue of identity and indistinguishability, QM also prevents us from measuring every aspect of the quantum states of copy and original anyway. So we could create a near-perfect copy which is still empirically indistinguishable from the original.

  190. Anton Mates says

    Nonsense. You’d only die of boredom if you’re a boring person.

    And even if you’re a boring person, surely in Future Techno-Utopia you have the option of redesigning your brain so you don’t get bored?

    First, I’d excise my sense of boredom. Then I’d make a copy of stonehenge out of meringue. Then I’d make a copy of stonehenge out of meringue. Then I’d make a copy of stonehenge out of meringue. Then….

    [Earth implodes deliciously five years later as the Meringuo-Architectural Singularity is attained]

  191. says

    Actually, it looks like technological improvement is stagnating in many ways. Instead of a glib Jetsons-like future, our planes, jets, cars, bombs, even computers have seen some improvement (especially the latter), but most of them work in basically the same way they did decades ago. We certainly aren’t getting better at human space exploration (the Russians are still using their old late-’60s-vintage Soyuz with little change). (And consider quasi-novelties like electric cars, around since what, the early 20th century?)

    Sadly, in some ways our advanced societies are in decline. We aren’t forgetting knowledge and can’t go back in that sense, but a high standard of living seems harder and harder for average people, in the US at least, as “financialization” of the economy and class-economics regression (though perhaps entering a more hopeful phase) bring most of the wealth to an increasingly corrupt and estranged aristocracy.

    As for the very interesting personal identity discussion, per e.g. #128, #167, etc., there is an interesting problem about “you” persisting throughout time (i.e., the preservation of personal identity even during life, not just perhaps after). I’m not sure what to think (even having a reason for laws to be life-friendly, for example, wouldn’t answer these questions). But in any case, if the processes of “you” are slightly different from time to time, then there are two problematic issues:

    1. For what really cogent logical reason should you care more about what happens to the body that will be the continuance of yours five years from now than about someone else’s? Don’t let the crude semantics seem to prove anything; I mean, think about what is really there in the future versus what is really going on in you now. If there isn’t a global self of some kind constituting your identity, your big problem is not whether something can survive your body’s destruction – it’s whether “you” can survive until next week.

    2. It makes sense to consider your mind as “what happens” and not the “machine” it runs on. That is the actual “rational” position, however ironic that may seem. If so, there is indeed no reason that process can’t “run” somewhere else. I have no argument here for a specific somewhere else (“Platonic computer”?) but the principle of the thing looks sound. And whether an “exact copy” can be made isn’t relevant, because you aren’t the same all the time anyway, as noted! There can thus be no specific pattern etc. that is precisely “you” to have that be the deciding factor.

    BTW, Kevin Drum is a terrific blogger, albeit showing some “dorky innocent naive liberal” weaknesses; I comment at his new digs at Mother Jones. His old outfit, Washington Monthly, is still terrific; Steven Benen does a good job – a cogent writer and thinker, like Drum. The commenters at WaMo are very sharp; amazing to see so many well-thought-out, well-written and zingy pieces from just the freaking commenters. (Compare to the riff-raff on the other side like Instapundit, etc.)

  192. Douglas McClean says

    @35:

    “If a human can (eventually) invent a machine smarter than himself, then can that machine also invent a machine smarter than itself? And being smarter, will it do it more quickly?

    If the answer is yes, we have a divergent series. Imagine the real-world outcome.”

    Says who? Not all monotonically non-decreasing or even monotonically increasing series are divergent. Remember Zeno? What if the answer is a (in my opinion more likely) “yes, but with diminishing returns”?
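    To make the “diminishing returns” case concrete (a toy illustration with invented numbers, not a claim about real machines): let each machine beat its predecessor by half the previous increment. The sequence of “smartness” values then increases strictly forever, yet stays bounded:

```latex
% Hypothetical smartness sequence I_n: strictly increasing, never divergent.
I_{n+1} = I_n + 2^{-(n+1)}
\quad\Longrightarrow\quad
I_n = I_0 + \sum_{k=1}^{n} 2^{-k} = I_0 + 1 - 2^{-n} < I_0 + 1
```

    Every machine in the chain is strictly smarter than the one before it, yet none ever reaches I_0 + 1 – which is Zeno’s point.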

  193. John K Clark says

    The reason Kurzweil did not treat things like the invention of the wheel and writing as separate events is the nature of logarithmic plots. If you wanted those events treated separately, you’d need a chart the size of a football field for the human eye to distinguish such closely spaced points. And the reason each species back to the Cambrian is not awarded equivalent significance is that he’s only interested in the most powerful information-processing machines that existed in various eras and how their capacity has increased over time. He wasn’t interested in a new species of nematodes, and neither am I.
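    To put rough numbers on that (a back-of-the-envelope sketch; the dates are approximate and the axis range is invented for illustration):

```python
import math

# On a log10 "years before present" axis running from 10^2 to 10^10,
# how far apart do two events land, as a fraction of the axis width?
def axis_fraction(t1_years, t2_years, lo=1e2, hi=1e10):
    span = math.log10(hi) - math.log10(lo)
    return abs(math.log10(t1_years) - math.log10(t2_years)) / span

# Writing (~5,500 years ago) vs. the wheel (~5,000 years ago):
print(axis_fraction(5500, 5000))  # ~0.005, i.e. half a percent of the chart
```

    At that separation the two dots sit essentially on top of each other on any chart that fits on a page.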

    John K Clark

  194. says

    I think that a serious error in comments on this thread is the concept of the self or mind as a thing, a noun.

    Self/mind is a process, a verb.

    Self/mind is housed within the brains, nervous systems & bodies of humans. It is not a part of those organs: it is an emergent property of the action of those organs.

    Self/mind is the output of the biological machine interacting with itself & the external world. Each individual machine has a broad range of variable behaviors. Even if the machines were identical, there is no reason to believe that the outputs of the machines will be so.

    My thought is that the transfer of self/mind would be impossible because there is no thing to be transferred. Even if you were able to recreate a body to be absolutely identical (whatever that means since the body rapidly & constantly changes its components), there is no thing to put into the recreation.

  195. says

    Jaycubed, couldn’t you have the process happen somewhere else, as I and others have suggested? It doesn’t have to be a thing to be somewhere else, indeed it’s better for “survival” if mind/self isn’t a “thing” but a process as you suggest.

  196. John Morales says

    Jaycubed, if I may quibble, a noun is a descriptor, and it can describe a process. A verb is not a process, but a predicate applied to nouns.

    I do agree with the thrust of your comment, however.

  197. Anton Mates says

    Jaycubed,

    Self/mind is a process, a verb.

    Self/mind is housed within the brains, nervous systems & bodies of humans. It is not a part of those organs: it is an emergent property of the action of those organs.
    ……
    My thought is that the transfer of self/mind would be impossible because there is no thing to be transferred.

    But you can transfer processes, properties and patterns just fine. Think of wave mechanics. A wave is nothing more than a set of properties and processes which is continuously transferred from one part of a medium to another. The wave can travel from place to place, retaining its distinctive properties, even though the material embodying the wave at a given moment doesn’t go anywhere.
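    In code, for concreteness (a toy traveling pulse; the Gaussian shape is arbitrary):

```python
import math

# A traveling pulse y(x, t) = f(x - c*t): the pattern moves, the medium doesn't.
c = 1.0

def f(u):
    return math.exp(-u * u)   # a Gaussian bump, shape chosen arbitrarily

def y(x, t):
    return f(x - c * t)

# A fixed point of the medium (x = 2) just rises and falls as the bump passes:
print([round(y(2.0, t), 3) for t in (0, 2, 4)])   # [0.018, 1.0, 0.018]
# ...while the peak itself travels: it sits at x = c*t, e.g. x = 4 at t = 4.
print(round(y(4.0, 4.0), 3))                      # 1.0
```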

    Or think of a hurricane or other weather system, retaining its identity as it travels over hundreds of miles, from one set of air molecules to another.

    Or think of a human body over a period of years, continuously replacing its mass while retaining its form and behavior patterns.

    Yes, self is a verb–but a replacement brain could self just like the old one, if set up properly.

  198. Sven DiMilo says

    As with all emergent properties, self/mind in the brain results from the organization of its component parts (for the brain, neurons and glial cells). It follows that to emulate a human self/mind in a model (computer, I guess, of some kind), you’d have to match the functional organization of the brain’s neurons and support structures. That means accounting for all (? or some critical subset) of the brain’s synapses, negative and positive, the particular post-synaptic integration properties of all (subset?) neurons, their response to a variety of neuromodulating chemicals, mediated in part by spatial variation in receptor density, PLUS effects of hormones, sensory input (conscious and autonomic)…
    I am far, far less optimistic (?) than Anton about our ability to do anything like that any time soon.
    Yes, I have been stimulating cannabinoid receptors this evening.

  199. Anton Mates says

    I am far, far less optimistic (?) than Anton about our ability to do anything like that any time soon.

    I said nothing about “soon”; I said it didn’t seem terribly implausible that we could attain that ability after a few hundred thousand years of technological development. For the foreseeable future, it’s just a thought experiment.

  200. Triple McSlice says

    Ray Kurzweil indulges in some very pseudo-scientific wishful thinking in his interpretation of technological trends; thus, anything that might ever be written about the broad category of phenomena that the term “singularity” represents is, beyond question, a moonbat crock of shit. Right. A dialogue:

    “I say, have you gotten wind of this Lamarck fellow?”
    “Ha! The fool! To think that a giraffe, having obtained an elongated neck during the course of its lifetime, would bear similarly long-necked offspring!”
    “Ridiculous, I agree. It sounds to me like a very literal, quite supercilious superposition of Hegelianism onto the natural order.”
    “No doubt. How silly! Silly Hegelians, trying to do science! Ha! Ha!”
    “Well, I trust we have thoroughly eviscerated evolution.”
    “Quite. Evolution, in any iteration, is and shall always remain a fallacy.”
    “Thanks be to God.”
    “Yes. Thanks be to God.”

  201. Triple McSlice says

    I think the analogy is appropriate. Lamarckian evolution had no discernible mechanism. It was based entirely on a hazy, almost comically intuitive sense of how evolution — which as early as the late 1700s seemed more or less evident to many educated people, cf. Schelling and Hegel — might work.

    Similarly, Kurzweil looks broadly at evident general trends — positive feedback cycles in history, diminishing intervals between technological innovations — and intuits a mystical “law” that states that recursive technological advancement ad infinitum will occur *regardless* of the mechanism. Obviously this is silly, for the same reasons that Lamarckian evolution, in retrospect, strikes us as silly.

    Of course, people like Darwin and Mendel would eventually flesh out the mechanism that holds evolutionary theory together. Yudkowsky, Vinge, et al. are doing something similar in delineating conditions — many of which, though counter-intuitive, are rational and even plausible — that conceivably might lead to the divergent series of technological progress and cognitive enhancement that Kurzweil fuzzily anticipates.

    Based on what I’ve read, the frequency of straw men and ad hominem attacks in their entire body of rhetoric falls way short of what I’ve read in and following this article alone. Take that for what it’s worth.

  202. nothing's sacred says

    Self/mind is a process, a verb.

    First, “self”, “mind”, and “process” are all nouns. Second, anyone who is computationally literate knows that process states can be captured, stored, replicated, and executed.

    if you were able to recreate a body to be absolutely identical (whatever that means since the body rapidly & constantly changes its components)

    So it’s impossible to back up a computer because its state is always changing? In fact, we capture a snapshot — the state at one moment in time. And for humans, we could possibly cool the body to the point where there are effectively no changes occurring. There may be reasons why replicating a human body isn’t feasible, but they aren’t the reasons you’re offering, which are uninformed and poorly thought out.
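    (A toy sketch of “captured, stored, replicated, and executed” for a trivial process – illustrative only, nothing remotely like a brain:)

```python
import pickle

class Counter:
    """A trivial 'process': some state plus a rule for advancing it."""
    def __init__(self):
        self.n = 0

    def step(self):
        self.n += 1

proc = Counter()
proc.step()
proc.step()                       # the state evolves: n == 2

snapshot = pickle.dumps(proc)     # capture the state at one moment
replica = pickle.loads(snapshot)  # replicate it and resume elsewhere

proc.step()                       # the two diverge after the snapshot
print(proc.n, replica.n)          # 3 2
```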

  203. nothing's sacred says

    Of course, people like Darwin and Mendel would eventually flesh out the mechanism that holds evolutionary theory together. Yudkowsky, Vinge, et al are doing something similar in delineating conditions — many of which, though counter-intuitive, are rational and even plausible — that conceivably might lead to a divergent series of technological progress and cognitive enhancement that Kurzweil fuzzily anticipates.

    Odd that you call two quite dissimilar things similar. Darwin offered an explanatory mechanism for an observed phenomenon; his work was mostly to explain past events. He did not wave his hands around about what “conceivably might” happen in the future. You aren’t doing Yudkowsky any favors by offering an argument that is so transparently bad.

  204. nothing's sacred says

    Not all monotonically non-decreasing or even monotonically increasing series are divergent.

    There isn’t even any reason to think the series would be monotonic. Notably, natural evolution does not result in ever-increasing speed or efficiency or any other monotonicity. And the notion that we can embed into robots a directive, faithfully replicated from generation to generation, that would result in a steady drive toward some goal when we ourselves don’t know the path to that goal is sheer fantasy.

  205. nothing's sacred says

    suppose in the future scientists created an exact alternate you and told you that it had all your memories, feelings, etc. (like the alternate RK in the scenarios outlined above), but was not susceptible to disease, etc., and could live forever. There was one catch: only one of you could live in the world, so you had to make the choice – to allow them to kill either it or you. How many people would accept death willingly?

    Like many “thought experiments”, yours is rigged with prior biases and thus leads to a foregone conclusion.

    Why do they only offer me the choice and ignore the desires of the clone? More realistically, having created such a clone they wouldn’t consult the inferior copy for its worthless opinions.

  206. nothing's sacred says

    There’s a lot of very naive thinking about ontology here. Before tackling questions about personal identity and copying of same, people ought to at least recognize that their ontological notions are problematic. For instance, are there laps, smiles, voices, or words? If so, what are they made of? If I make up a word and you repeat it, have you copied it? Which is the original? If you smile for 5 seconds, is it all the same smile, or a bunch of different smiles? Is the answer different depending on whether you hold your face rigid or not? How do we recognize a person’s voice when it is ever-changing? What’s the justification for calling it “their voice” at all? When you stand, you don’t have a lap but when you sit you do — where did it come from? What is it made of? Is your lap the same lap each time you sit, or are they different laps? If a lap is just a concept, how can someone sit on it?

    Many of these questions can be more easily answered if one allows patterns into one’s ontology.

  207. nothing's sacred says

    I’m sorry, unless you can tell me how a computer can be affected by adrenaline, for example, you can only simulate the general effects. Biological interactions are part of who we are – I seriously doubt that computer simulations can approach reality.

    David Chalmers has pointed out that there’s a class of entities — which he calls “organizationally invariant” — for which a simulation is the real thing. For instance, a simulation of a hurricane is not a hurricane, but a simulation of a chessplayer is a chessplayer. More generally, the mind can be viewed as organizationally invariant — a simulation of someone thinking about how to build a robot is thinking about how to build a robot. Now consider the effects of adrenaline on a mind — whatever changes it makes to the mind’s activity, the same changes can be simulated, meaning that the same changes occur in the activity of the simulation — without actually applying adrenaline to it.
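    (A sketch of the organizational-invariance point in code, assuming the third-party python-chess package – the program below doesn’t depict chess-playing, it actually plays, however weakly:)

```python
import random
import chess  # third-party package: pip install chess

board = chess.Board()
while not board.is_game_over():
    # Any legal move at random: a terrible player, but a player nonetheless.
    board.push(random.choice(list(board.legal_moves)))
print(board.result())  # e.g. "1/2-1/2"
```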

  208. says

    But you can transfer processes, properties and patterns just fine.

    I believe you are talking about extremely simple processes. As complexity is introduced, the quality of such transfers is impacted.

    Think of wave mechanics. A wave is nothing more than a set of properties and processes which is continuously transferred from one part of a medium to another. The wave can travel from place to place, retaining its distinctive properties, even though the material embodying the wave at a given moment doesn’t go anywhere.

    This is not true; the material does move, typically in small circles.

    Or think of a hurricane or other weather system, retaining its identity as it travels over hundreds of miles, from one set of air molecules to another.

    A hurricane constantly changes & evolves. It is not the same from moment to moment. We humans give it a name and consider it to be the same thing, but again, a hurricane is a process (a verb) and not a thing.

    Or think of a human body over a period of years, continuously replacing its mass while retaining its form and behavior patterns.

    I am not the same being I was twenty years ago, despite a sense of continuity as a person. I am different as well as the same.

    Yes, self is a verb–but a replacement brain could self just like the old one, if set up properly.

    It is impossible to set up properly as there is no state of “proper” that exists. The process (mind/self) is constantly changing. The structure (brain/body) that supports the process is constantly changing.

    And Nothing sacred;

    A computer is nothing like a brain in most regards. Shoot a bullet through a computer & see what happens. Either nothing or total failure. A bullet in a brain can provide a vast range of effects, many novel & unrelated to programming.

  209. nothing's sacred says

    Jaycubed: you’re a fool and an ignoramus. Try at least to spell my handle correctly.

  210. says

    When you have no argument to support your position, all one can do is call others names, right, “nothing sacred”?

    And I chose to call you “nothing sacred”: it was not an error, it was a description of your comments.

  211. Anonymous says

    Idiocy. The amount of information produced doubles every year. We in fact produced more unique information in 2008 than in the entire preceding record of human history. This leads to two possible scenarios: 1. Our information-generating ability slows down drastically for some reason, be it bandwidth or energy or the ability to store the information. 2. The exponential growth continues until it becomes functionally infinite (hence the term “Singularity”). Of course Kurzweil believes in scenario 2. I disagree with him, of course; however, the author of this blog didn’t really refute any of his arguments, as it can be PROVEN that information generation has increased exponentially. The question is, what happens when that stops?
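    (Note that the “more in 2008 than all of preceding history” factoid is just arithmetic, given the doubling assumption, not independent evidence for it: if output in year n is 2^n units, then)

```latex
% Under yearly doubling, each year's output exceeds the sum of all previous years:
2^n \;>\; \sum_{k=0}^{n-1} 2^k \;=\; 2^n - 1
```

    So every year under that assumption outproduces all prior years combined; 2008 isn’t special, it’s what the premise guarantees.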

  212. Anton Mates says

    Jaycubed,

    I believe you are talking about extremely simple processes. As complexity is introduced, the quality of such transfers is impacted.

    Sure, but the quality need not be degraded so far that the transfer becomes impossible. A complex sound wave can make it across a room just fine; it can make it across the planet if we provide a better transmission mechanism than air.

    This is not true, the material does move, typically in small circles.

    Well yeah, but since circles leave you back where you started, the material doesn’t go anywhere in the end. Only the wave travels, and leaves that material behind.

    A hurricane constantly changes & evolves. It is not the same from moment to moment. We humans give it a name and consider it to be the same thing, but again, a hurricane is a process (a verb) and not a thing.

    But we’re talking about things from the point of view of us humans. It’s fine if you want to think of identity in such a way that it is necessarily lost after a split-second, but the people who are objecting to (the hypothetical procedure of) uploading aren’t doing that. They’re thinking of identity, as people do, in such a way that it’s preserved over decades of ordinary human life.

    I am not the same being I was twenty years ago, despite a sense of continuity as a person. I am different as well as the same.

    Okay. So, of the ways in which you are the same, are there any significant ones that would be lost if your brain patterns were uploaded to a super-advanced computer?

    A computer is nothing like a brain in most regards. Shoot a bullet through a computer & see what happens. Either nothing or total failure. A bullet in a brain can provide a vast range of effects, many novel & unrelated to programming.

    This is rather like saying “a human is nothing like a fish in most regards, because they respond very differently to being permanently submerged in water.”

  213. says

    I was running out the door the other day when I made my last post, so it was truncated. Let me expand a little.

    First, let’s look at the difference between a brain/body & a computer.

    A computer is a fixed form. All of its components & connections are pre-planned & permanently formed into the structure. This structure is made exclusively of what are called ionic compounds(1). It doesn’t change its form or structure, even on the atomic level, as time passes. Even connections that are newly made (firmware) are strictly limited by the pre-formed structure, in other words, a switch must exist before it can be turned on or off. No novel structure can be created. There is a direct correlation between input & output. No input = No output.

    A brain is a very different type of object made of a very different type of material. Its form is not fixed, it is mutable. It changes over time. It changes in response to stimuli in ways that can easily be novel. It changes itself AS it performs its functions. It is made almost exclusively from molecular compounds.(2) The brain/body changes from moment to moment on atomic, molecular, cellular and higher structural scales. New connections are constantly being created & destroyed. These new connections are novel.

    The switching processes that take place in a computer and a brain/body are quite different also. A computer switch has two possible states, on or off.(3) Organic switches behave in a completely different fashion. There is a range of possible responses and an indirect correlation between input & output. They regularly switch on & off without any external trigger. When you look at a finer scale, you see that the real activity is chemical & actually takes place outside of the neuron/switch. Output regularly occurs with no input.
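    (For the flavor of the difference, here is the standard textbook toy – a leaky integrate-and-fire neuron, still a cartoon rather than a real neuron: graded inputs accumulate and leak away, and the unit fires only past a threshold, so output is only indirectly related to any single input:)

```python
# Leaky integrate-and-fire toy neuron: graded input, leak, threshold.
def lif(inputs, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for t, i in enumerate(inputs):
        v = v * leak + i        # leaky integration of graded input
        if v >= threshold:
            spikes.append(t)    # fire...
            v = 0.0             # ...and reset
    return spikes

# Identical sub-threshold drips still produce occasional spikes:
print(lif([0.3] * 20))  # [3, 7, 11, 15, 19]
```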

    About 120 years ago, people invented high-velocity firearms to use as weapons, and modern battlefield medical techniques. This, oddly, led to the first real look at how the brain functions. A large slug of lead fired into a skull was fatal. A smaller chunk at higher speeds often caused non-fatal injuries. When a sufficient population of such injuries was created, insight into the internal functions of the brain became possible. One of the most important findings was that the brain was able to compensate for damage, with other structures co-opted to fill the needed functions. Damage a tiny part of a computer & it can cause total failure. Nothing fish-out-of-water or human-in-water about it.

    You talk about “brain patterns” being transferred. What is a “brain pattern”? It sounds to me as if you are naming some nebulous quality when you really have no idea just what it is you’re talking about.

    Mind/self is an emergent property of the brain/body. It arises within the structure. Data can come from outside the organism or within the organism.

    A computer is programmed. All of its “thoughts” are created outside of the structure and placed within the structure. All “data” comes from outside the structure.

    “But we’re talking about things from the point of view of us humans.”

    Sorry, I thought we were talking about the actual real world things, not how people categorize them. You know, those things that exist whether I believe in them or not.

    “A complex sound wave can make it across a room just fine; it can make it across the planet if we provide a better transmission mechanism than air.”

    What do you mean by “just fine”? Any sound wave is immediately distorted upon transmission: first in the transduction process, then by self-interference, then by the variable qualities of the transmission medium, then by interference with external structures. Just because you can recognize the sound doesn’t mean it is “fine”. (Whatever a nebulous concept like “fine” means.)

    While I regularly use cybernetic metaphors to describe some aspects of mind/self & brain/body; I don’t forget that they are only metaphors, not actual descriptions.

    (1) Some insulating materials used might be molecular compounds.

    (2) All Organic materials are molecular compounds.

    (3) Just to be anal, I am talking about digital computers rather than analog devices called computers.

  214. Anton Mates says

    Posted by: Jaycubed,

    First, let’s look at the difference between a brain/body & a computer.

    The following is not terribly relevant, since we’re talking about arbitrarily advanced computers in the hypothetical future that could be of just about any design–including, if you like, a lump of biological tissue or even an exact replica of your original body and brain. No one’s planning to upload themselves into a 2009 desktop PC. (On the other hand, I would argue that any Turing machine can simulate a brain, given enough time and memory to do the calculations. But that’s a slightly different topic.)

    That said, concerning modern computers….

    A computer is a fixed form. All of its components & connections are pre-planned & permanently formed into the structure. This structure is made exclusively of what are called ionic compounds(1). It doesn’t change its form or structure, even on the atomic level, as time passes.

    That’s not true at all. A computer’s very function–like that of any physical tool–depends on changing its structure. The current in each wire, the charge on each capacitor and transistor, the polarity of each magnetic region on the hard disk…all of these change on a regular basis. And at an atomic scale, each of these “digital” events is a messy, analog, irreversible process that never occurs exactly the same way twice. We human users don’t care about that–computers are designed precisely so that we don’t have to care–but it’s still true.

    On a longer scale, semiconductors age. Metals corrode. Plastic degrades. Dust accumulates, moving parts wear out. Components are replaced and upgraded by the user. Using your quark-level definition of identity, a computer is no more fixed than a human brain.

    Even connections that are newly made (firmware) are strictly limited by the pre-formed structure, in other words, a switch must exist before it can be turned on or off. No novel structure can be created.

    Unless new memory components are added or connected to the computer. In any case, the existing memory provides enough opportunities for novel structure that it need never take the same configuration twice in the lifetime of the machine.

    Certainly a brain can remodel itself in many ways that a modern computer can’t, and those ways are important…to us humans.

    There is a direct correlation between input & output. No input = No output.

    That’s not true either. A computer can run programs indefinitely using only the data it’s already stored–or, for that matter, its own clock cycle.

    The switching processes that take place in a computer and a brain/body are quite different also. A computer switch has two possible states, on or off.

    Again, not true at the atomic level – and not necessarily true at the macroscopic level, since any number of switches can be stacked into a multistate switch. And it would be trivially easy to connect up some analog switches to a computer, if it’s terribly important to you – a malfunctioning digital switch will do fine.

    One of the most important findings was that the brain was able to compensate for damage, with other structures co-opted to fill the needed functions. Damage a tiny part of a computer & it can cause total failure.

    Ditto for the brain. Block one small artery…

    Conversely, modern computers can and do work around some forms of damage, such as bad disk sectors.

    You talk about “brain patterns” being transferred. What is a “brain pattern”? It sounds to me as if you are naming some nebulous quality when you really have no idea just what it is you’re talking about.

    Mind/self is an emergent property of the brain/body. It arises within the structure. Data can come from outside the organism or within the organism.

    You consider “brain patterns” nebulous and ill-defined, but are happy to discuss “mind/self?”

    A computer is programmed. All of its “thoughts” are created outside of the structure and placed within the structure. All “data” comes from outside the structure.

    This makes no sense. A computer can use its own clock signal as data, or the time it takes to run a particular program, or anything like that. And the output from previously-run programs can be stored and used as data for future runs. Heck, it can write and execute its own code. Its data processing is no more dependent on external input than a human brain.
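    (A trivial sketch of that last point – toy code, nothing deep:)

```python
# A program can write code at runtime, run it, and feed the result back
# to itself as data for the next step, with no external input involved.
src = "result = sum(range(10))"  # code generated by the program itself
ns = {}
exec(src, ns)                    # ...and executed by the program itself
print(ns["result"])              # 45, now available as input to the next run
```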

    “But we’re talking about things from the point of view of us humans.”

    Sorry, I thought we were talking about the actual real world things, not how people categorize them. You know, those things that exist whether I believe in them or not.

    The real world doesn’t have a point of view. Only we sentient beings living in it can do so. Only we can judge whether two objects or systems are equivalent.

    And yes, we were talking about how people categorize things. Read up the thread–this is a discussion of whether uploading or duplication with destruction of the original is equivalent to death. “Death” isn’t something the real world cares about, it’s something we care about, and something we define.

    The same is true, by the way, for the “mutability” in which, as you recognize, human brains far excel modern computers.

    Just because you can recognize the sound doesn’t mean it is “fine”. (Whatever a nebulous concept like “fine” means.)

    That’s precisely what it means. The sound is transmitted with sufficiently high quality that I recognize it to be the same sound that was originally emitted. In other words, it’s fine.

    While I regularly use cybernetic metaphors to describe some aspects of mind/self & brain/body; I don’t forget that they are only metaphors, not actual descriptions.

    Cybernetic descriptions are equally applicable to biological and to technological systems. In all cases it’s a simplification, but a useful one–from the limited perspective of us measly humans.

  215. says

    “The following is not terribly relevant, since we’re talking about arbitrarily advanced computers in the hypothetical future that could be of just about any design–including, if you like, a lump of biological tissue or even an exact replica of your original body and brain.”

    What you are saying here is that you are talking about fantasy rather than anything with any basis in reality. Your hypothetical computer is just as real as a Warp Drive, a time machine or physical teleportation. A Turing machine is a fantasy; it doesn’t exist.

    “That’s not true at all. A computer’s very function–like that of any physical tool–depends on changing its structure. The current in each wire, the charge on each capacitor and transistor, the polarity of each magnetic region on the hard disk…all of these change on a regular basis.”

    Huh? So according to you a hammer “depends on changing its structure” to perform its function. Changes in current, charge or magnetic polarity do not change the structure; they are hosted within the structure.

    “And at an atomic scale(1), each of these “digital” events is a messy, analog, irreversible process that never occurs exactly the same way twice.(2)”

    (1) It depends on the scale at which you are looking. When you look at Planck-level phenomena (rather than the “quark-level” phenomena you state), there is constant flux. But I never mentioned a level lower than the atomic scale, where the atomic components of a computer, ionic solids, are extremely stable. It is this rigid stability that makes it possible for a computer to function. Move just a few atoms around and the computer can fail completely. (2) Wrong. The “digital events” occur exactly the same way each time, or the computer makes an error in its calculations.

    “On a longer scale, semiconductors age. Metals corrode. Plastic degrades. Dust accumulates, moving parts wear out. Components are replaced and upgraded by the user.”

    So what you are saying is that a computer is unable to repair itself or compensate for damage like a brain can. You are also pointing out that any repairs/upgrades must be performed from outside the computer.

    “You consider “brain patterns” nebulous and ill-defined, but are happy to discuss “mind/self?” “

    Demonstrate that “brain patterns” exist as meaningful phenomena. It is easy to find patterns. Whether they are really present is something else. When I discuss “mind/self” I am talking about terms that are loosely defined but understandable to any human. I do not claim to know what mind/self is (beyond it emerging from the actions of its physical “host”), merely that there is a real-world phenomenon that we all call mind/self. “Brain patterns” is a simplistic description of a complex phenomenon. Mind/self is a name for a shared phenomenon.

    “A computer can use its own clock signal as data, or the time it takes to run a particular program, or anything like that. And the output from previously-run programs can be stored and used as data for future runs. Heck, it can write and execute its own code. Its data processing is no more dependent on external input than a human brain.”

    A computer does nothing until an outside agency designs & creates its structure and designs & creates its programming. Yes, a computer reiterates (reuses old data), but it uses it only in the way it was programmed. Yes, computers have internal regulation (clocks), but it’s programmed from outside. Yes, some computers can write & execute their own code, but the format & type of code are rigid & limited (made up of a limited set of possible instructions pre-programmed into the machine).

    “‘Death’ isn’t something the real world cares about, it’s something we care about, and something we define.”

    And I thought death was a real-world phenomenon, you know, the cessation of life functions in a living organism.

    “Cybernetic descriptions are equally applicable to biological and to technological systems. In all cases it’s a simplification, but a useful one–from the limited perspective of we measly humans.”

    No, in many cases cybernetic descriptions are neither applicable nor useful because they are wrong. They can easily describe phenomena incorrectly.

    (Just because you can recognize the sound doesn’t mean it is “fine”. Whatever a nebulous concept like “fine” means.)
    “That’s precisely what it means. The sound is transmitted with sufficiently high quality that I recognize it to be the same sound that was originally emitted. In other words, it’s fine.”

    As I pointed out before, the sound is measurably different. In fact, two people standing next to each other will hear different sounds not only for the several different reasons I noted above, but also because their transducing devices are different (differing ear shape affects the reception of sound, filtering some frequencies).

    “The real world doesn’t have a point of view.”

    True.

    “Only we sentient beings living in it can do so. Only we can judge whether two objects or systems are equivalent.”

    What nonsense. You replace objective reality with subjective reality. What one thinks or “judges” has NO effect on objective reality (as compared to what one does).

    “Read up the thread–this is a discussion of whether uploading or duplication with destruction of the original is equivalent to death.”

    And my response remains that uploading or duplication are not viable acts, at present or at any time in the foreseeable future.

  216. Anton Mates says

    Jaycubed,

    What you are saying here is that you are talking about fantasy rather than anything with any basis in reality.

    Yes. This is a thought experiment.

    Huh? So according to you a hammer “depends on changing its structure” to perform its function.

    Even a hammer must alter the location, orientation, and velocity of its component particles in order to hit something. You can leave these out of your definition of “structure” if you want, but they’re a necessary part of the “real world” description of the hammer.

    Changes in current, charge or magnetic polarity do not change the structure; they are hosted within the structure.

    How does it not change the structure to have a bunch of electrons hanging out here instead of there, or to have a bunch of atoms oriented this way instead of that way?

    But I never mentioned a level lower than the atomic scale, where the atomic components of a computer, ionic solids, are extremely stable.

    Untrue, given the aging effects I mentioned, but why didn’t you mention a lower level? Weren’t we talking about the “real world?” It doesn’t stop with atoms.

    (2)Wrong. The “digital events” occur exactly the same way each time or the computer makes an error in its calculations.

    This is a severe misunderstanding of digital technology. We don’t have the power to make a “digital event” in a computer occur exactly the same way each time. What we do instead is to design the system so that slight variations in one event do not significantly affect another. The computer avoids errors in its calculations because those calculations are insensitive to the subtleties of its internal state–not because that internal state is perfectly controllable on (even) an atomic scale.
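
    (A minimal sketch, in Python, of this noise-margin idea: every physical “1” is a slightly different voltage, but the readout only asks which side of a threshold it falls on. The threshold and noise figures are invented for illustration.)

        import random

        THRESHOLD = 1.5  # volts; above this, the bit reads as logical 1

        def read_bit(nominal_volts):
            measured = nominal_volts + random.gauss(0, 0.1)  # analog messiness
            return 1 if measured > THRESHOLD else 0

        # A thousand physically distinct events, one and the same logical result:
        print(all(read_bit(3.0) == 1 for _ in range(1000)))  # True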

    So what you are saying is that a computer is unable to repair itself or compensate for damage like a brain can. You are also pointing out that any repairs/upgrades must be performed from outside the computer.

    Yes. Neither a computer nor a brain is a fixed system, but the sorts of changes which occur in the brain are often ones which, from our point of view, are very helpful.

    “Brain patterns” is a simplistic description of a complex phenomenon. Mind/self is a name for a shared phenomenon.

    A distinction without a difference, it seems to me.

    A computer does nothing until an outside agency designs & creates its structure and designs & creates its programming.

    When’s the last time a brain did anything before an outside agency grew it into existence? Does every human brain freely choose to enjoy the taste of sugar or fear the dark?

    And I thought death was a real-world phenomenon, you know, the cessation of life functions in a living organism.

    Nope. Is a person in a persistent vegetative state with atrophied brain still alive? How about a person whose body is largely decomposed but whose scattered tissues still show life functions—for instance, in donated organs, or in cell cultures? How about a comatose person without severe brain damage?
    The real world will give you lots of facts to make the decision, but you’ve got to decide which ones are relevant.

    What nonsense. You replace objective reality with subjective reality. What one thinks or “judges” has NO effect on objective reality (as compared to what one does).

    This is wrong, since thinking and judging are physical actions and necessarily impact objective reality, but so what? This isn’t a conversation objective reality is having with itself.

    “Read up the thread–this is a discussion of whether uploading or duplication with destruction of the original is equivalent to death.”
    And my response remains that uploading or duplication are not viable acts, at present or at any time in the foreseeable future.

    Okay, but then your response remains irrelevant.

  217. Jaycubed says

    A running out the door response to a few issues:

    “Even a hammer must alter the location, orientation, and velocity of its component particles in order to hit something. You can leave these out of your definition of “structure” if you want, but they’re a necessary part of the “real world” description of the hammer.”

    The hammer alters nothing. An external agent, the user, moves the hammer. You, the user, are not part of the structure of the hammer in any way. A hammer remains a hammer whether it is in use or not.

    “Nope. Is a person in a persistent vegetative state with atrophied brain still alive?”

    Yes.

    “How about a person whose body is largely decomposed but whose scattered tissues still show life functions—for instance, in donated organs, or in cell cultures?”

    No, the person is dead.

    “How about a comatose person without severe brain damage?”

    Alive.

    When’s the last time a brain did anything before an outside agency grew it into existence?

    What outside agency are you referring to? Also, please note that whenever I have used the word “brain” it has been as a composite term, “brain/body”, so if your outside agency refers to other parts of the organism then you need to reread my comments. And brains constantly engage in novel behavior; even your poor arguments are novel.

    Does every human brain freely choose to enjoy the taste of sugar or fear the dark?

    No, not every human being enjoys the taste of sugar nor is afraid of the dark.

    Gotta run & go grocery shopping. I’ll continue to dispose of your objections later.

  218. Anton Mates says

    Jaycubed,

    The hammer alters nothing. An external agent, the user, moves the hammer. You, the user, are not part of the structure of the hammer in any way. A hammer remains a hammer whether it is in use or not.

    Now you’ve altered your original distinction from “fixed vs. mutable” to “fixed or mutable only under outside influence vs. mutable under its own power.” In either case, of course, both brains and computers end up on the same side.

    Nope. Is a person in a persistent vegetative state with atrophied brain still alive?

    Yes.

    How about a person whose body is largely decomposed but whose scattered tissues still show life functions—for instance, in donated organs, or in cell cultures?

    No, the person is dead.

    How about a comatose person without severe brain damage?

    Alive.

    Great. Now, what objectively valid, disregarding-all-human-values standard did you use to arrive at the above conclusions? What argument would you use to demonstrate that, say, the statement that “brain-atrophied people in persistently vegetative states are dead” is empirically false?

    What outside agency are you referring to? Also, please note that whenever I have used the word “brain” it has been as a composite term, “brain/body”, so if your outside agency refers to other parts of the organism then you need to reread my comments.

    Actually, I was thinking more of the organism’s parent, who laid the egg or gestated the embryo which developed into its body–including the brain.

    And brains constantly engage in novel behavior; even your poor arguments are novel.

    So do computers. You’ve argued that the novel behavior of computers is “constrained;” the same of course is true of brains. A modern computer is certainly more constrained than a brain in its behavior, but the difference is quantitative, not qualitative.

    Does every human brain freely choose to enjoy the taste of sugar or fear the dark?

    No, not every human being enjoys the taste of sugar nor is afraid of the dark.

    Same question, then–do the exceptions freely choose not to enjoy the one or fear the other, independent of outside input?

    I’ll continue to dispose of your objections later.

    I look forward to it.

  219. Jaycubed says

    “Now you’ve altered your original distinction from “fixed vs. mutable” to “fixed or mutable only under outside influence vs. mutable under its own power.” In either case, of course, both brains and computers end up on the same side.”

    Not at all, the object (hammer) is a fixed structure whether or not it is being manipulated. It doesn’t change its form or structure in any relevant or perceivable way (unless you’re talking about the minute transient deflections of atoms on the surface of the head of the hammer when it hits something).

    “Great. Now, what objectively valid, disregarding-all-human-values standard did you use to arrive at the above conclusions? What argument would you use to demonstrate that, say, the statement that “brain-atrophied people in persistently vegetative states are dead” is empirically false?”

    I use the standard definitions of death & organism.

    Death 1 a: a permanent cessation of all vital functions : the end of life

    Organism 1 : a complex structure of interdependent and subordinate elements whose relations and properties are largely determined by their function in the whole 2 : an individual constituted to carry on the activities of life by means of organs separate in function but mutually dependent : a living being (M-W)

    The argument I would use is that the person/organism is alive because it performs the fundamental functions of a living organism (Even if the body is artificially assisted by machine). Consciousness is not a fundamental function of a living organism. It is an auxiliary, albeit important, function for an extremely limited class of living organisms. Brain death is not death of the organism. If you take a person who is “kept alive” by machines and “pull the plug” then the person dies.

    “And brains constantly engage in novel behavior; even your poor arguments are novel.”

    “So do computers.”

    You need to provide some evidence of this. If a computer did engage in novel behavior, it would have little function as a computer. It would be untrustworthy.

    “Changes in current, charge or magnetic polarity do not change the structure; they are hosted within the structure.”

    “How does it not change the structure to have a bunch of electrons hanging out here instead of there, or to have a bunch of atoms oriented this way instead of that way?”

    In exactly the same way that the structure of a water pipe isn’t changed whether water is filling it or not.
    A light switch is a simple digital device: it has two states, on & off; its structure is no different whether there is electricity at the input or not. The structure doesn’t change when you flick the switch. The structure incorporates the two possible output states in its design.

    “What outside agency are you referring to?”

    “Actually, I was thinking more of the organism’s parent, who laid the egg or gestated the embryo which developed into its body–including the brain.”

    The parents provide the raw material and initial instruction set to the organism. In some animals there is behavior performed by the parents/relatives of the organism to protect or nourish the organism.
    However, the organism develops on a unique path of its own, buffeted by a vast array of external & internal stimuli. The parents, or any active (i.e. goal-oriented) agent, have little if any predictable influence on the specific pathway along which the brain/body develops.

    How does this correspond to the intentional design & construction of a device such as a computer?

    In one, the “outside agency” provides an initial “seed” which grows on unique developmental pathways into a unique organism. In the other, the “outside agency” designs a specific fixed device to perform specific functions and those forms are repeatedly duplicated directly by the outside agency. Any failure to duplicate exactly is a failure of the device.

    “You’ve argued that the novel behavior of computers is “constrained;” the same of course is true of brains. A modern computer is certainly more constrained than a brain in its behavior, but the difference is quantitative, not qualitative.”

    Not at all. You’re not even comparing apples & oranges here, you’re comparing apples & volcanoes. Until you provide an example of novel functional behavior from a computer, I will contend that the difference is qualitative.

    The physical constraints on a computer are part of its design. It is designed for consistency & accuracy.

    The behavioral constraints on a brain/mind are unknown. But it has developed not for consistency & accuracy, but for survival.

  220. Jaycubed says

    And:

    “The switching processes that take place in a computer and a brain/body are quite different also. A computer switch has two possible states, on or off.”

    “This is a severe misunderstanding of digital technology. We don’t have the power to make a “digital event” in a computer occur exactly the same way each time.”

    A digital event, by definition, has only two possible states. On or Off. If a threshold is exceeded, then the state is switched. A digital event in a computer is exactly the same each time. Even if there is some variation in the threshold quantity or the current, there is no variation between “different” on states & “different” off states. (different in this case referring to when the events occur)

    “Again, not true at the atomic level–and not necessarily true at the macroscopic level, since any number of switches can be stacked into a multistate switch.”

    And each individual switch remains a strictly digital device.

    “The computer avoids errors in its calculations because those calculations are insensitive to the subtleties of its internal state–not because that internal state is perfectly controllable on (even) an atomic scale.”

    No, the relevant internal states are strictly digital, on or off. The “subtleties” are predefined ranges (charge or magnetic potential) that are designed into the structure to produce the digital events. It doesn’t matter how far below the threshold an event is or how far above the threshold an event is (unless you try to add so much current or flux that the structure is destroyed). A digital event is digital.
    Also, it is not any “perfection” in structure I was talking about, but stability at atomic scales.

    “‘Brain patterns’ is a simplistic description of a complex phenomenon. Mind/self is a name for a shared phenomenon.”

    “A distinction without a difference, it seems to me.”

    You are still unable or unwilling to define what you mean by “brain patterns”, much less to show that they exist and are relevant. I see no need to reiterate the entirety of human philosophy regarding the nature of mind/self just to demonstrate that there is agreement as to the existence of mind/self even if there is disagreement as to its nature.

    “But I never mentioned a level lower than the atomic scale, where the atomic components of a computer, ionic solids, are extremely stable.”

    “Untrue, given the aging effects I mentioned(1), but why didn’t you mention a lower level? Weren’t we talking about the “real world?” It doesn’t stop with atoms(2).”

    (1) So you deny that ionic solids, like the silicon compounds comprising the structure of a computer chip, are extremely stable. Sounds like someone needs to take high school chemistry.
    (2) Back to scale again: the quantum “fuzziness” at Planck scales has virtually no effect on the microscopic level of atomic structure. (The word virtually is chosen to echo the “virtual” behavior of space & the “virtual particles” found at Planck scales. It is not impossible for a Planck-scale event to manifest itself at the scale of atomic structures, but it is an extremely statistically insignificant possibility.) It is irrelevant.

  221. Jaycubed says

    Another problem is that the idea of instantaneously “scanning” a brain/body to make an identical copy is a physical impossibility. I am not talking about quantum states. I am talking about a 3-Dimensional macroscopic body which is in constant flux (the brain/body).

    Special relativity rears its head here to point out that it takes time for energy or information to move through space. Therefore the configuration of the brain/body has changed before you are done “scanning”, however your hypothetical “scanning” process works (Unless you’re resorting to Magic!).

    Your scan is in error even before it is completed because what you are “scanning” has changed and continues to change.

    It could be theoretically possible to “scan” a modern computer because of the stability of the structure in macroscopic time & space.

  222. Jaycubed says

    And:

    “What one thinks or “judges” has NO effect on objective reality (as compared to what one does).”

    “This is wrong, since thinking and judging are physical actions and necessarily impact objective reality, but so what?”

    You are partially correct here: thinking & judging are physical actions that affect the internal arrangement of the brain/body. But external reality is unaffected by the categories, judgements or beliefs of anyone.

    For example, the behaviors of the Earth, Sun, Moon, stars & planets are completely unaffected by whether you believe in a Mosaic, Ptolemaic, Keplerian, Copernican, Newtonian, Einsteinian or Flat-Earth/No-Gravity universe.

    What you think is completely irrelevant to external reality. Only what you do has effect.

  223. Anton Mates says

    Jaycubed,

    Not at all, the object (hammer) is a fixed structure whether or not it is being manipulated. It doesn’t change its form or structure in any relevant or perceivable way

    But it changes the positions and velocities of its component atoms in an extremely relevant and perceivable way. Again, you can rule those properties outside “structure”, if you want, but it’s hardly a fixed system in a physical sense.

    And, again, a computer changes its structure by any standard. The fans and the disk drive, for instance, contain vital macroscopic-scale moving parts.

    I use the standard definitions of death & organism.

    Death 1 a: a permanent cessation of all vital functions : the end of life

    Which isn’t particularly useful unless you define “life.” And many people consider human life to be characterized by consciousness, or at least the capacity for same. This is pretty important to arguments over, say, euthanasia or abortion.

    The argument I would use is that the person/organism is alive because it performs the fundamental functions of a living organism (Even if the body is artificially assisted by machine). Consciousness is not a fundamental function of a living organism.

    Is Henrietta Lacks alive, then? Her cells, singly or in colonies, continue to perform as living organisms.

    If someone’s heart and lungs are transplanted into another person, is the donor still actually alive? How about the recipient? For both of them, there exists a living organism which contains some of their tissues.

    You need to provide some evidence of this. If a computer did engage in novel behavior, it would have little function as a computer. It would be untrustworthy.

    If a computer could not engage in novel behavior, computer-assisted numerical research would be a dead field. Every time a new simulation or model is run, and a result is obtained which was not previously observed, that computer has behaved in a novel manner. Its trustworthiness depends on it not behaving completely unpredictably; its utility as a research device depends on it not behaving completely predictably, either.

    (Here of course I’m talking about predictability by the human user. Computers may be completely predictable to a sufficiently intelligent and well-informed entity, as may human beings, but we’re not that entity.)
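
    (A sketch of what “novel but not untrustworthy” looks like in practice, using the logistic map, a standard textbook example of deterministic chaos; the parameters here are arbitrary.)

        # Deterministic and perfectly repeatable, yet not knowable in
        # advance by inspection: you have to run it to find out.
        x = 0.123
        for _ in range(100):
            x = 3.99 * x * (1 - x)  # chaotic regime of the logistic map
        print(x)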

    In exactly the same way that the structure of a water pipe isn’t changed whether water is filling it or not.

    But your water-delivery system isn’t just the pipe. A pipe by itself doesn’t give you any water, and a computer with no electricity doesn’t compute. The internal currents, charges and magnetization patterns are vital components of the computer.

    A light switch is a simple digital device: it has two states, on & off; its structure is no different whether there is electricity at the input or not. The structure doesn’t change when you flick the switch.

    Of course it does. You’ve flicked the switch; wires now touch which didn’t touch before. You’ve created a new configuration of conductors which permits electricity to flow. The switch would be detectably different in structure even if your power was dead.

    However, the organism develops on a unique path of its own, buffeted by a vast array of external & internal stimuli. The parents, or any active (i.e. goal-oriented) agent, have little if any predictable influence on the specific pathway along which the brain/body develops.

    How does this correspond to the intentional design & construction of a device such as a computer?

    What does this have to do with anything? If the IDers (or theists in general) are right, and we’re intentionally designed and constructed by a superintelligent being who could predict most or all of our behavior, would that in itself make us more computer-like? Would our behavior then be different in any empirically-detectable way?

  224. Anton Mates says

    A digital event, by definition, has only two possible states. On or Off. If a threshold is exceeded, then the state is switched. A digital event in a computer is exactly the same each time. Even if there is some variation in the threshold quantity or the current, there is no variation between “different” on states & “different” off states. (different in this case referring to when the events occur)

    Which is to say, you choose to label one collection of states “on,” and another collection “off,” and you don’t care which particular state from each collection the system is currently in. A digital event has two possible states “by definition” because you (and I, and other human operators of this system) have chosen to define the states that way.
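
    (A small Python sketch of this point: the same physical voltage can be read as one bit or as two, depending entirely on how we choose to carve up the range. All levels are invented for illustration; multi-level flash cells work on a similar principle.)

        def as_one_bit(volts):
            return 0 if volts < 1.5 else 1      # two labeled ranges

        def as_two_bits(volts):
            return min(int(volts // 0.75), 3)   # four labeled ranges

        v = 1.9
        print(as_one_bit(v), as_two_bits(v))    # same voltage, two different readings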

    You are still unable or unwilling to define what you mean by “brain patterns”, much less to show that they exist and are relevant. I see no need to reiterate the entirety of human philosophy regarding the nature of mind/self just to demonstrate that there is agreement as to the existence of mind/self even if there is disagreement as to its nature.

    There is also, at least in current neurology, agreement that mind/self is causally dependent upon the brain. Whatever the properties of the brain are which determine the properties of the mind, those are the “brain patterns” I’m talking about.

    (1) So you deny that ionic solids, like the silicon compounds comprising the structure of a computer chip, are extremely stable. Sounds like someone needs to take high school chemistry.

    Look up “semiconductor aging;” it’s a significant issue for oscillator chips, apparently, like the ones in every computer. And you’re still forgetting that a computer is not made entirely of silicon, doped or otherwise. All those little bits of metal and plastic are actually quite important. You refuse, reasonably, to consider a brain apart from the body which houses it; why do you consider computer chips apart from the rest of the computer?

    (2) Back to scale again: the quantum “fuzziness” at Planck scales has virtually no effect on the microscopic level of atomic structure.

    Even if this were true, so what? If you’re interested in “objective reality”, why privilege features of atomic structure over Planck-scale features? They’re equally part of reality.

    (The word virtually is chosen to echo the “virtual” behavior of space & the “virtual particles” found at Planck scales. It is not impossible for a Planck-scale event to manifest itself at the scale of atomic structures, but it is an extremely statistically insignificant possibility.)

    Well, without the cumulative effect of those virtual particles, there wouldn’t be any atomic structures, nor any atoms to have them. All forces are mediated by virtual particles, after all.

  225. Anton Mates says

    Special relativity rears its head here to point out that it takes time for energy or information to move through space. Therefore the configuration of the brain/body has changed before you are done “scanning”, however your hypothetical “scanning” process works (Unless you’re resorting to Magic!).

    Relativistic effects are a complication, but not (in principle) an insurmountable problem. Your scan would record different parts of the body at slightly different times, but you could in principle use that information to compute the state of the entire body at a single moment.

    You’re basically recording a time-like or light-like slice of local spacetime, and so far as I know, that’s just as good for reconstructing the entire local continuum as a space-like slice would be. Well, unless singularities are involved, but a person containing a singularity has bigger problems.
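
    (A toy Python sketch of the “compute one moment from a staggered scan” idea, under very strong assumptions: the parts evolve independently with known dynamics. Here each part is just a phase advancing at a known rate; every number is invented.)

        import math

        # (phase measured, time of measurement, known angular rate) per part
        scan = [(0.10, 0.001, 2.0),
                (1.30, 0.002, 1.5),
                (2.70, 0.003, 0.5)]

        # Rewind each measurement to a common instant t = 0 using the known dynamics.
        synced = [(phase - rate * t) % (2 * math.pi) for phase, t, rate in scan]
        print(synced)  # the whole system's state at a single moment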

    It could be theoretically possible to “scan” a modern computer because of the stability of the structure in macroscopic time & space.

    No more or less easily than an organic body, I think. The lightspeed limit will still prevent you from directly measuring the charge on each capacitor and the current in each wire and the orientation of each magnetic domain at a single moment. Even if you don’t care about anything but macroscopic “digital states,” you risk scanning part of a component just before a particular digital event and the rest of it just afterwards–or at two different points during an event, since on the timescales we’re talking about here, digital events are not instantaneous.

    You are partially correct here: thinking & judging are physical actions that affect the internal arrangement of the brain/body. But external reality is unaffected by the categories, judgements or beliefs of anyone.

    Categories, judgments and beliefs are also physical actions of the brain/body, so this still isn’t quite right. What you mean, of course, is that a particular type of possible relationship between beliefs and external reality doesn’t hold.

    Switching your belief from a Copernican to a Ptolemaic universe doesn’t cause the sun, moon and other planets to start orbiting the Earth in concentric circles, but it does cause their orbits to perturb ever so slightly as your brain/body changes state (and therefore slightly alters its distribution of mass.)

  226. Robot Fucker says

    Why do you people hate on the idea of the Singularity and all the hot robot sex I’ll be having?

  227. Truth says

    Maybe there are points at which Kurzweil is overly optimistic. Probably because he wants to see those things in his lifetime, BUT – to people who say that the Singularity is impossible and all that…you know, there were people who once said that flying was impossible and communicating by telephone would be impossible, so think about it!