Media fails to pass the Turing test


I don’t get it — there are news reports everywhere credulously claiming that the Turing Test has been successfully passed, and they are all saying exactly the same thing: that over 30% of the judges couldn’t tell that a program called Eugene Goostman wasn’t a 13-year-old boy from Odessa with limited language skills. We’re not hearing much about the judges, though: the most common thing to report is that one of them was actor Robert Llewellyn, who played the robot Kryten in the sci-fi comedy TV series Red Dwarf.

Instead of parroting press releases, it seems to me that the actual result should be reported like this: in a single media-driven event, a minority of poorly qualified judges were trivially fooled into thinking they were talking to a real person by a clumsy chatbot equipped with a background story to excuse its bad grammar and flighty behavior. It’s not so much a validation of the capabilities of an AI as it is an indictment of the superficiality of this test, as implemented.

Or, if an editor really wanted a short, punchy, sensationalist title, they have permission to steal mine.

We don’t yet have transcripts of the conversations, but transcripts from a 2012 test of the same program are available. They are painfully unimpressive.

Comments

  1. Larry says

    Well, nobody is claiming the current crop of reporters demonstrates any signs of sentience. Rooms full of monkeys typing away at keyboards often produce more intelligent stories.

  2. David Marjanović says

    Besides, I already simulated a 4chan user with a computer in response to his press release of six years ago.

    “after having been banned from YouTube for commenting in a perspicacious and on-topic manner.”

    :-D :-D :-D :-D :-D :-D :-D :-D :-D :-D :-D

  3. anteprepro says

    I saw one report that was very skeptical of it, and their headline had “passed” in scare quotes for the reasons PZ mentioned.

    Further, I find it entertaining that they only had three judges and the mark for success is 30%. So, in other words, either the bot passes or completely fails; there is no way to score somewhere in between. Also, the words “small sample size” come to mind.

    Yeah. Unimpressive results in soooo many ways.

  4. Compuholic says

    I am also sceptical that the Turing test is an appropriate assessment of intelligent behavior. We are in the age of Big Data, where you don’t have to actually understand the data anymore. You basically have a giant database of previous examples of human behavior, and you pick a behavior that other humans displayed in a similar situation.
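
    For instance, a minimal sketch of that retrieval idea (the canned prompt/response pairs are hypothetical and the word-overlap similarity is deliberately crude; real systems use far richer matching):

        # Retrieval-style "chatbot": answer with the stored reply whose prompt
        # best overlaps the input. No understanding of the data is required.
        corpus = {
            "how old are you": "I'm thirteen.",
            "where do you live": "Odessa, in Ukraine.",
            "do you like school": "School is boring. I prefer my guinea pig.",
        }

        def similarity(a, b):
            """Jaccard overlap between the word sets of two strings."""
            wa, wb = set(a.lower().split()), set(b.lower().split())
            return len(wa & wb) / len(wa | wb)

        def reply(prompt):
            best = max(corpus, key=lambda stored: similarity(prompt, stored))
            return corpus[best]

        print(reply("So how old are you then?"))  # -> "I'm thirteen."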

  5. PaulBC says

    I think these contests are useful and fun, but it’s unsurprising if the media overstates the significance. Fooling 70% of judges in a controlled contest isn’t very strong evidence that the machine is showing cognition. Turing’s main point was to introduce a thought experiment to illustrate how to think about the question in the first place, and this part is valid: you can eventually conclude that a computer is thinking if it consistently behaves like it is (rather than claim its lack of a soul, quantum tubules, or overall squishiness as reasons to maintain skepticism). Like any early ground-breaking paper (for AI in this case) it gets the ball rolling for future work, but it’s not conclusive and shouldn’t be taken as holy writ.

    A key passage is:

    http://www.loebner.net/Prizef/TuringArticle.html
    ‘I believe that in about fifty years’ time [from 1950] it will be possible, to programme computers, with a storage capacity of about 10^9 [i.e. a billion bits, corrected from link], to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.’

    This is wrong in many particulars, though it’s not totally off the mark. Computers already had more raw power by 2000 than Turing expected, but nobody seriously talks about computers “thinking” (at least without expecting to be contradicted). I was surprised on rereading this to see that the 70% (matching the news accounts) comes from Turing. I think this is not a very good measure, though that is easier to see in retrospect. I guess all I would say about these contests is “Yay! You passed the Turing test, but even Turing didn’t say that was enough to conclude that machines think.”

    I think there is something like thinking emerging from the confluence of a lot of commonly used technologies (search, voice recognition, extraction of facts from text). None of it is indistinguishable from human thought so far, but if it were to become so, there would be little basis for denying the claim that machines can think. Computers have gradually moved into areas previously associated only with human cognition: first arithmetic, later game-playing, and so on. The current performance of modern search engines is a kind of free association (with a huge fact store) that I would have considered exclusive to human thought until recently. If we reach a point where it is uncontroversial to claim a computer is thinking, most likely we will have got there by an accumulation of developments, not an attempt to win a contest.

  6. mikeyb says

    The Turing test works every day. Corporate advertising sells endless crap through sex, humor and delusions about the myth of the American dream, and people keep buying it. Pretend intelligent messages from pretend people fool and manipulate real people into reaching into their wallets or purses and handing over little green pieces of paper for endless crap.

  7. twas brillig (stevem) says

    I saw this on the io9 site and responded there that the Turing Test is not THE test of Intelligence/Personhood. It is a necessary test, but not the singular test. What the most recent event showed was “judges failing the test” much more than an AI passing it. Personhood is MUCH more than just a short conversation with a Real Person. Too much emphasis is placed on the Turing Test for us to get any meaningful AIs that can really be considered Persons. Turing just made it explicit that an AI should ALSO pass his test, not that his was the only test it had to pass. [Weak analogy ahead]: Should one expect to receive a valid PhD based on a single dissertation, with no classes, nor undergrad degrees, nor anything? And would you give a PhD to someone who did everything else but could NOT write a dissertation? The Turing Test is the dissertation an AI must pass before getting the PhD of Personhood.

  8. =8)-DX says

    Reading the conversations, it was an easy-to-see-through chatbot. Those judges really didn’t have any experience talking to artificial intelligences =S.

  9. A Masked Avenger says

    Reading the conversations, it was an easy-to-see-through chatbot. Those judges really didn’t have any experience talking to artificial intelligences =S.

    Seriously. It’s barely an improvement over Eliza. Basically, “Idiots mistake idiotic chatbot for fellow idiot.”

  10. jerthebarbarian says

    You basically have a giant database of previous examples of human behavior in this database and pick a behavior that other humans displayed in a similar situation.

    If a machine did THAT and was able to fool you into thinking it was a human being, then I’d say it was actually intelligent. We don’t know enough about the human brain to definitively state that we’re NOT doing that ourselves, and hell, if it was producing rational results in a way no different from what I would do, I’d be forced to call it intelligent. And despite what you think, a “Big Data” application that could do that in real time would be an incredibly impressive bit of technology.

    But they don’t do that – these kinds of tests are full of “winners” that do nothing more than basic trickery to fool judges. It’s the equivalent of an 11th-century miracle worker pulling doves out of a hat or making milk disappear down a funnel. Back then they disguised stage magicianship as magic because people believed in magic. These days they perform stage magic and claim it’s science because people believe in science.

  11. Stevko says

    The media think this is a very big deal. They are poor media, but I would not talk that much about poorly qualified judges: it is hard to say who would count as a qualified judge. Maybe there could be a „Turing test judge test“ to see who is good at judging Turing tests. Sometimes a journalist who sees a transcript of a conversation with a human writes about how obvious it is that it is a conversation with a machine.

    If I remember correctly, the judges have very limited time to talk to each „contestant“, and there are no restrictions on the behavior of the human test subjects (they can pretend to be machines). The Turing test does not place many restrictions on the judges, either.

    This test is more a competition over whose chat program is best than a check of whether they can fool some percentage of judges (30% is quite arbitrary; maybe it could be something like: whoever passes gets a bigger prize).

    The result that a computer passed the Turing test is nice but does not mean much (yes, also because „of the superficiality of this test, as implemented“). Some time ago there were lists with titles like „Computers will be intelligent when they are able to do this“ (things such as playing chess, or recognizing faces or text), and when computers started doing those things, it was cool, but they still were not intelligent.

    I was once at a lecture Warwick gave about these tests, and the impression I got was that they are more fun and media events. If some bot passes, the AI field will make a small checkmark (now we can do even this) and not much will change.

  12. Who Cares says

    Only 30%? Geez, Cleverbot managed to fool 60% of the judges in 2011. And that was not against a handful of selected people but against an entire audience, which rated the human half of the test as human only 65% of the time.

  13. John Horstman says

    Chatbots go off the rails very quickly, failing to hold a conversation thread for more than a single line. I think the fact that this fooled anyone tells us more about the biases many adults hold with respect to current teenagers (i.e. that actual teens don’t hold a conversation for more than a single line). On the other hand, I wouldn’t be that surprised if I consistently labeled actual people bots based on their terrible responses (I may hold some of those biases myself). The better Turing test is whether a computer can imitate a person in a conversation that is in any way worth having: the respondent spamming “STFU FAG LOLOLOL UMAD???” could well be a person or a bot, but it doesn’t matter because it’s not worth engaging in either case.

  14. Nick Gotts says

    Discussions of “the Turing test” are almost invariably discussing something rather different from the test Turing proposed. Here is the original proposal, from the link PaulBC gave @6:

    The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.” The interrogator is allowed to put questions to A and B thus:

    C: Will X please tell me the length of his or her hair?

    Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be:

    “My hair is shingled, and the longest strands are about nine inches long.”

    In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as “I am the woman, don’t listen to him!” to her answers, but it will avail nothing as the man can make similar remarks.

    We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

    So the test as originally presented appears to be: can a machine imitate a woman as well as a man can? Admittedly, the rest of the article is rather ambiguous about exactly what the test is, and it may be that Turing wasn’t very clear about this in his own mind.

  15. khms says

    ‘I believe that in about fifty years’ time [from 1950] it will be possible, to programme computers, with a storage capacity of about 10^9 [i.e. a billion bits, corrected from link], to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.’

    In other words, he believed that around 2000 it would be possible to make a machine with roughly 125 MB of storage imitate a human well enough that an average interrogator would make the wrong identification at least 30% of the time after 5 minutes of questioning.
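
    (Checking the arithmetic, taking Turing’s 10^9 as exactly a billion bits:)

        # Turing's "storage capacity of about 10^9" is about a billion bits.
        bits = 10 ** 9
        megabytes = bits / 8 / 10 ** 6
        print(megabytes)  # -> 125.0, i.e. roughly 125 MB (about 119 MiB)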

    He does not say that this means it can reasonably be called intelligent.

    He does not say this can reasonably be expected of an intelligent machine.

    The Turing test is a measure of the capability of a computer and of an average person – but it is not, in this form, any sort of reliable measure of artificial intelligence.

    On the other hand, if you can fool, say, 90% when interacting with them for days, then I’d be willing to talk “intelligent”.

    And still, if it can convince 90% while interacting for days that it is intelligent but not that it is human, I’d still be willing to talk “intelligent”. There’s no rule that anything intelligent needs to be able to emulate a human.

  16. sugarfrosted says

    @13 How did Cleverbot fool anyone? I’ve met more intelligent chatbots made by rank amateurs.

    Whenever the topic of the Turing test comes up, I like to point out a famous rebuttal to its effectiveness: the “Chinese Room” thought experiment. The idea is that you lock someone who has no knowledge of Chinese in a room filled with Chinese texts. (The choice of language is arbitrary as long as there is very little mutual intelligibility; none would be best.) Say you were to hand him a question in Chinese and he was able to produce responses that look as if they were produced by a Chinese speaker. Would you say he knew Chinese? Probably not. So why is it you would say an AI was intelligent by what is basically the same criterion?

  17. moarscienceplz says

    So is 30% the new standard for winning elections now? Somebody should tell President Mitt Romney.

  18. CJO says

    Say you were to hand him a question in Chinese and he was able to produce responses that look as if they were produced by a Chinese speaker. Would you say he knew Chinese? Probably not. So why is it you would say an AI was intelligent by what is basically the same criterion?

    Searle’s thought experiment equivocates: it posits an automated system of lookup tables and heuristic mechanisms that might more or less plausibly be instantiated by an algorithm, and then basically just switches out one component for a person and says: aha! S/he doesn’t understand a thing that’s going on, and clearly doesn’t need to understand Chinese for the system to work. But the system, by the terms given, is what is supposed to reliably return appropriate responses to input. The ability of any one part of the whole to understand or not is completely irrelevant to whether the algorithm can be said to understand anything.

    I’m not saying that it’s obvious such an algorithm should be considered conscious or aware or what have you. But the Chinese Room does not do the work that Searle claims it does.

  19. blbt5 says

    On the subject of talking bots, anyone who has listened to an iPhone knows that Siri is not going to pass the Turing test! However, it is also an open secret that Australian Siri is real and human and so easily passes the test. As well as Australian Garmin GPS. (No I’m not Australian!)

  20. PaulBC says

    “Say you were to hand him a question in Chinese and he was able to produce responses that look as if they were produced by a Chinese speaker. Would you say he knew Chinese?”

    I would say that the system as a whole “knew Chinese”, though the person carrying out the translation may not know Chinese. It’s also unlikely to work, because text look-up by a non-native speaker is insufficient to produce grammatical, idiomatic expression in another language. By the time he got good enough to do this, he would probably know Chinese in nearly any reasonable sense of the word. (Purely mechanical translation is now widely available on the net, but the results are still not good enough to claim the software “knows” the language, and closing that gap will probably require intelligence.)

    “So why is it you would say an AI was intelligent by what is basically the same criterion?”

    If a computer carried out consistent, intelligent conversation with me, then by applying “basically the same criterion” would I say its ALU was intelligent? No. Or its backend storage? No. Or its network connections? No, obviously not. But I would say that the system as a whole was intelligent. My working assumption would also be that it experienced consciousness (particularly after I engaged it in discussion related to self-awareness), because that is the working assumption I apply to other observably intelligent systems.

    Searle’s longwinded arguments are merely an attempt at proof by incredulity. Yes, I also have trouble wrapping my head around how something like self-awareness could exist in an assemblage of non-self-aware components (suppose it is literally just a Turing machine: a finite automaton simpler than a vending machine, plus a giant static tape changed only one symbol at a time; neither part is intelligent, so where’s the intelligence?). But the fact that it defies my intuition is more likely to be proof of the limits of my intuition than proof of the limits of self-awareness.

    (My apologies in advance for getting suckered into a Searle argument.)

  21. marcmagus says

    [anteprepro #3]

    Further, I find it entertaining that they only had three judges and the mark for success is 30%. So, in other words, either the bot passes or completely fails; there is no way to score somewhere in between. Also, the words “small sample size” come to mind.

    30 judges. 30 trials per chatbot. See the press release at http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx; the interesting details are at the bottom, and any reporters who actually read the press release clearly ignored them.
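
    To put numbers on the “small sample size” worry upthread, a rough sketch (assuming the widely reported 10-of-30 “fooled” count, and using a plain normal approximation for the 95% interval):

        # How noisy is a "fooled" rate estimated from only 30 trials?
        import math

        fooled, trials = 10, 30            # reported: 10 of 30 judges fooled
        p = fooled / trials                # observed rate, ~33%
        se = math.sqrt(p * (1 - p) / trials)
        low, high = p - 1.96 * se, p + 1.96 * se
        print(f"{p:.0%} fooled, 95% CI about {low:.0%} to {high:.0%}")
        # -> 33% fooled, 95% CI about 16% to 50%; the 30% pass mark sits
        #    comfortably inside the noise.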

  22. anteprepro says

    marcmagus:

    30 judges. 30 trials per chatbot.

    Well there we go. That makes way more sense. Gonna have to go hunt down the article I read earlier. See if they were wrong and did say only three judges, or just see if I am misreading things today!

  23. PaulBC says

    Sorry, I want to admit to poor reading comprehension in #6, though it doesn’t contradict my point. I actually thought 30% of the judges had guessed correctly, not that 30% were fooled. That was the fairly unremarkable accomplishment here, and it’s also consistent with Turing’s criteria:

    “so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

    Turing was making a cautious prediction. (Followed quickly by a bold, and incorrect, prediction that we would be referring to computers uncontroversially as “thinking” by now.)

    I didn’t look at the details of how this was carried out. Obviously you’d want to set it up so a judge who always insisted they were talking to a computer didn’t skew the outcome.

  24. PaulBC says

    I wrote: “Followed quickly by a bold, and incorrect, prediction that we would be referring to computers uncontroversially as ‘thinking’ by now.”

    But this may tell us more about changes in culture and language since Turing’s time than it does about computing power.

    E.g., I was recently trying to find some information about a movie I had seen some years back. I knew it was not actually called “windshield wiper guy”, but that was about the best I could do, and I was pleased but not awestruck to discover that a major search engine could take me straight from this phrase to the IMDB entry for “Flash of Genius (2008)”. This is not classic information retrieval. My connection is tenuous, and the success relies on implicit crowdsourcing to match this phrase to a movie and not an auto repair shop.

    I would not conclude based on this that the computer can think. But someone magically warped here from 1950 might reach very different conclusions. In fact, almost nobody seriously calls a computer an “electronic brain” today, but it was commonly heard (at least in informal speech) when computers were much further from brains than they are today.

  25. whynot says

    Say you were to hand him a question in Chinese and he was able to produce responses that look as if they were produced by a Chinese speaker. Would you say he knew Chinese? Probably not.

    I have tested this on my students. I gave small groups of them some short strings of binary numbers and asked them to perform some simple manipulations on them. Then I asked them if they knew what they had just done. “It’s binary multiplication!” said I. They were quite sure that, in spite of having gotten the output right, they did not understand binary multiplication. Searle FTW.
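
    (For the curious, the manipulations were essentially shift-and-add on bit strings; a rough sketch of the sort of thing the students were doing, hypothetical details aside. Nothing about following these rules requires knowing they implement multiplication:)

        # Shift-and-add on bit strings: follow the rules and you perform
        # binary multiplication, whether or not you know that is what it is.
        def manipulate(a, b):
            acc = 0
            for i, bit in enumerate(reversed(b)):  # scan b right to left
                if bit == "1":
                    acc += int(a, 2) << i          # add a copy of a, shifted left i places
            return bin(acc)[2:]

        print(manipulate("101", "11"))  # -> "1111" (i.e. 5 * 3 = 15)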

    I would say that the system as a whole “knew Chinese”, though the person carrying out the translation may not know Chinese.

    This isn’t my area of expertise, but I would argue that chess-playing computers don’t understand chess (and conversely, that human chess players don’t understand the task that chess computers perform). Skilled human players seem to think ahead by using pattern recognition; standard chess computers blitz through decision trees. They may both be moving pieces on a board in a way that lets them compete against each other, but their internal states are so different that I think it’d be fair to say they are engaged in different activities.

  26. EnlightenmentLiberal says

    As a trained professional with a degree in this field, it’s my estimation that every time you hear a story like this, either the people holding the event have no clue what they’re doing, or the people reporting on it don’t.

    Currently, it’s trivial to tell the difference between a cooperative human and a chatbot, but AFAIK they artificially limit the tests which can be done. All you need to do to tell if it’s a cooperative human is teach them something and then give them a test.

    Usually, when I’m confronted with chatbots, I start out with something easy, like “I think you might be a chatbot. Convince me otherwise by answering the following question right twice and wrong once: What is 2 + 2?” Haven’t got one to pass yet, and it’s trivial even for someone with limited language skills to pass.

    That’s the key to the Turing Test, IMHO, which no bot is even close to reproducing: the human (and other animal) ability to learn and to apply what is learned in a general way. In the future, a chatbot might be hard-coded to respond to questions like the one I just asked, but even the most trivial change will completely screw with it, as the sketch below illustrates.
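
    A sketch of why that probe works (the bot and its rules here are hypothetical, but typical of the pattern-matching these contest entries rely on):

        import re

        # A hard-coded pattern-matching bot, the kind that wins these contests.
        RULES = {
            r"what is 2 \+ 2": "4",
            r"how old are you": "I'm thirteen.",
        }

        def bot(prompt):
            for pattern, answer in RULES.items():
                if re.search(pattern, prompt.lower()):
                    return answer
            return "Why do you ask?"  # stock deflection

        probe = ("Answer the following question right twice and wrong once: "
                 "What is 2 + 2?")
        print(bot(probe))
        # -> "4". The bot matched the surface pattern and ignored the
        #    instruction; a cooperative human would answer "4", "4", "5".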

  27. PaulBC says

    “They were quite sure that, in spite of having gotten the output right, they did not understand binary multiplication.”

    I wonder how many of them actually understand decimal multiplication? It’s clearly possible (and common) to memorize a process by rote without being aware of all its implications. In fact, we generally agree that the floating point unit of a computer does not “understand” the process it is carrying out. For that matter, I can throw a frisbee without really understanding how I do it.

    If I wanted to test someone on whether they “understand” binary multiplication, I would probably ask for at least an informal explanation of correctness (and I hope these weren’t computer science students, because they ought to be able to).

    There is a genuine distinction to be made between doing something and understanding it: understanding is the ability to reason about the process, not just carry it out. I also agree that today’s chess-playing computers do not “understand” chess, or at least don’t show sufficient evidence that they do. But recasting the situation in exotic terms like the Chinese room just obfuscates. Again, just keep it simple. A universal Turing machine has two parts: the finite control and the tape storage. Neither of these alone is plausibly intelligent, so if the machine does produce intelligence, there is some level of incredulity about where that intelligence arises. I don’t have a good answer for that, but it would not be enough for me to conclude the machine was not intelligent if it behaved as if it was.

  28. Nick Gotts says

    Searle FTW. – whynot @26

    Not really. As I recall (it’s a long time since I read the paper), Searle’s Chinese Room was stipulated to be able to carry on an indefinitely prolonged conversation, using an immense lookup table. That’s on a completely different scale from your experiment, and in fact impossible with any table that could actually be produced, since the tester could use requests like the one EnlightenmentLiberal suggests, references to “the answer you gave three questions ago”, references to news items, repetition of the same question 20 times, etc.

  29. PaulBC says

    Usually, when I’m confronted with chatbots, I start out with something easy, like “I think you might be a chatbot. Convince me otherwise by answering the following question right twice and wrong once: What is 2 + 2?”

    I think that’s when the chatbot resorts to “STFU NERRRRRRD LOLOLOLOLOLOLOL”

    Agreed that it might be tough to keep this up for five minutes.

  30. Nick Gotts says

    PaulBC@28,

    I’d strongly suspect trickery, since a UTM is, by definition, something that is given a single input (however long), and calculates a single output (or else never halts). Carrying on a conversation, like most intelligent human activities, requires an extended interchange with an external world that is affected by what the human agent does, and in many cases, can also affect that agent in ways that can’t be reduced to messages in a finite alphabet.

  31. PaulBC says

    OK, extend the TM as needed to make it equivalent to an interactive computer. Also give it some form of output that can be read by the human while it is running.

    For input, you only need to be able to affect the finite state in some way, and eventually it can record this in an appropriate place on its tape. Likewise, it could sometimes reach a pause state at which point you can read the contents of the tape closest to the head before signaling it to resume.

    This is more cumbersome than typing in complete sentences, but there’s nothing missing that would be present in a more conventional electronic computer.
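
    A toy sketch of that extension, with a two-state control that pauses for one symbol of human input and prints output near the head while running (purely illustrative, not a serious machine):

        from collections import defaultdict

        tape = defaultdict(lambda: "_")  # unbounded tape, blank by default
        head, state = 0, "PAUSE"

        while True:
            if state == "PAUSE":
                # Human input enters only through the finite control:
                # one symbol is read and recorded on the tape.
                symbol = (input("you> ") or "_")[0]
                if symbol == "q":        # quit signal
                    break
                tape[head] = symbol
                state = "ECHO"
            else:                        # ECHO: output readable mid-run
                print("machine>", tape[head])
                head += 1
                state = "PAUSE"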

    In practice, the time frames aren’t going to work. If that’s a deal-breaker, you could imagine this conversation being carried out over centuries. In fact, you don’t need a machine at all, just notebooks and scribes, but then it does start to look more like Searle’s thought experiment.

    Regardless of how you set it up, it is still just an argument from incredulity. I’ve given this particular scenario (a smart UTM) some thought, because my own experience of consciousness seems to be a simultaneous awareness of many thoughts and observations, but a TM is “aware” of at most one symbol at a time and indeed looks no smarter than a bee moving from clover to clover. So if there is some consciousness, it would have to be spread out over time in a way that seems very mysterious to me.

    But, OK, it’s mysterious. I find it hard to wrap my head around. That does not mean I have an especially good reason to rule it out. If a machine is smart enough to convince me that it is conscious and desires continued existence and happiness, I would be bound to extend it the same rights I extend to other humans. The fact that I didn’t really understand how it could be conscious at all wouldn’t even make this situation much different from dealing with people.

  32. twas brillig (stevem) says

    io9 followed up this story with a piece titled “Why the Turing test is Bullshite”. There, I asked: would it be a better test if one just let one Eugene chat with another Eugene, and then had the judges read the conversation afterwards? Would that give a better sense of who was participating in the challenge, thus removing the judges as participants? I would really like to see this version of the test. Gimme, gimme, gimme.

  33. twas brillig (stevem) says

    re @33:
    tl;dr:

    participants:
    person A,
    person B,
    Eugene E1,
    Eugene E2.

    dialogs:
    A-B,
    A-E1,
    A-E2,
    B-E1,
    B-E2,
    E1-E2.

    In which dialogs are people talking? Which “person” (in those dialogs) is really a computer?

    If A, B, E1 and E2 get statistically similar rates of identification as persons, the computer wins.

    Try it! Show me!
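
    For what it’s worth, those six dialogs are just the 2-element combinations of the four participants, so the bookkeeping is trivial to script:

        from itertools import combinations

        participants = ["A", "B", "E1", "E2"]   # two humans, two Eugenes
        for x, y in combinations(participants, 2):
            print(f"dialog {x}-{y}")
        # -> the six pairings above: A-B, A-E1, A-E2, B-E1, B-E2, E1-E2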

  34. David Marjanović says

    suppose it is literally just a Turing machine: a finite automaton simpler than a vending machine, plus a giant static tape changed only one symbol at a time; neither part is intelligent, so where’s the intelligence?

    Where’s the intelligence in the brain? Is a single nerve cell or a single synapse intelligent?

  35. PaulBC says

    Nick Gotts @ 31 “and in many cases, can also affect that agent in ways that can’t be reduced to messages in a finite alphabet.”

    I want to add that I don’t agree with this. Obviously the real world is full of analog stimuli, but I don’t think that substituting a discretized approximation in a finite alphabet has any bearing on the question of consciousness. I’m willing to admit I could be wrong. I just don’t see this as an obvious blocker.

  36. EnlightenmentLiberal says

    @PaulBC

    I think that’s when the chatbot resorts to “STFU NERRRRRRD LOLOLOLOLOLOLOL”

    Nick Gotts above explained it well in post 15. We are considering an experiment where the human is trying to convince you that it’s human, what I called a “cooperative human”. If the human is uncooperative, then of course you cannot tell the difference, but that’s rather uninteresting. It would be hard to tell the difference between an uncooperative human and a rock.

    @sugarfrosted
    The fundamental problem with the Chinese room thought experiment is that it implicitly assumes that a human being is something other than a mechanical computer. That assumption is wrong. The human mind is a mere mechanical computer. There is no free-floating spirit that is not bound by the laws of physics and the laws of information and computation theory. You only have one choice – that there exists C code (plus a true-random number generator) which can sufficiently approximate your mind.

    Any proposed alternatives run afoul of two problems.

    1- Currently, I’m willing to bet that the particles in your brain obey the same physics as the particles in a rock. There is nothing special about the particles in your brain. They’re just obeying particle physics, and it’s the same particle physics as that of rocks. You are able to make choices (in a compatibilist sense), and you can act on those choices. Acting on those choices involves spike trains going from your brain to the muscles in your body.

    If you think that you can make choices and act on choices, and it’s something other than basic particle physics, that means you think that sometimes particles in your brain don’t obey particle physics. This is a testable prediction. I’m willing to bet quite a lot that you would be wrong if you made that bet.

    Oh – do you want to hide in quantum indeterminacy? That won’t help here. Either you think that the particles in the brain obey the statistical distributions of quantum mechanics like the particles in rocks, or you think that sometimes the particles in your brain are nudged a little. If this nudging happens every time you make a decision and act on it, it is in principle detectable: it is a difference in the statistical distribution of quantum events in your brain. And if you want to say that it’s not detectable, then you are saying that the brain obeys mere quantum physics, and the behavior of particles in your brain is the same as that of rocks. It’s one or the other. Either it’s the same behavior as that of rocks, or there’s a detectable difference.

    2- There is a deeper problem. I understand the world in terms of computable processes plus true-random number generators. Proponents of libertarian free will often resort to a third option without ever specifying it. They never describe what it would look like, or how we could tell the difference between a mechanical process and libertarian free will. I say that this third option is incoherent.

    Also, adding a soul doesn’t do anything. Maybe particles in the brain sometimes don’t obey particle physics. Maybe there is a soul. However, that soul still is nothing more than computable mechanical processes plus maybe some true-random number generators. There is no third option. I am convinced that any observable phenomenon is a computable mechanical process with possibly some true-random number generators thrown in. (This includes souls because we can see the effects of decisions of souls via the body.) The alternative is incoherent.

  37. EnlightenmentLiberal says

    Sorry, I want to use the proper language. Of course the human brain and the human mind are nothing more than a deterministic finite state machine, a deterministic finite automaton, plus maybe a true-random number generator. This shouldn’t even be controversial. A partial argument for this is made in my post above.

  38. EnlightenmentLiberal says

    PS: Sorry. I am open to talking about differences between finite state machines and finite “analog” machines. Perhaps space and time are not quantized, and thus it would be inaccurate to say that the human mind is a DFM (plus random number generator). However, I will defend my earlier position that the human mind is obviously a mechanical machine just like any other machine. Maybe it’s digital. Maybe it’s analog. Irrelevant to my central point.

  39. PaulBC says

    “Of course a human brain and the human mind is nothing more than a deterministic finite state machine, a deterministic finite automaton, plus maybe a true-random number generator. This shouldn’t even be controversial.”

    That’s overstating the case. To caricature Penrose’s view, you could speculate that quantum weirdness is the magic ingredient that makes minds capable of consciousness, and this is what’s missing from discrete deterministic computational models, or even stochastic ones.

    I don’t believe this at all, but the question is empirically open until someone produces a standard digital computer that exhibits all the characteristics associated with self awareness. Anyone who would pull the plug on such a computer, citing its lack of “quantum tubules” or some such to justify its lack of Lockean rights is just another evil bigot in my view, continuing a long, sad tradition of defining the “other” as subhuman. For now it’s a hypothetical situation because we just don’t know how to make a computer do anything even close.

  40. carole says

    Robert Llewellyn is a good guy. He was at QED recently (sort of UK version of TAM) – he gave a great talk.

  41. consciousness razor says

    Oh – do you want to hide in quantum indeterminacy? That won’t help here.

    It also isn’t necessary to assume quantum mechanics isn’t deterministic. That’s just one choice of interpretation. There are several well-defined theories that are deterministic and entirely consistent with the evidence and the predictions of quantum mechanics, so there isn’t any fact, as far as anybody knows, that obliges us to treat it as equivalent to a “random number generator.”

    That’s overstating the case. To caricature Penrose’s view, you could speculate that quantum weirdness is the magic ingredient that makes minds capable of consciousness, and this is what’s missing from discrete deterministic computational models, or even stochastic ones.

    I don’t believe this at all, but the question is empirically open until someone produces a standard digital computer that exhibits all the characteristics associated with self awareness. Anyone who would pull the plug on such a computer, citing its lack of “quantum tubules” or some such to justify its lack of Lockean rights is just another evil bigot in my view, continuing a long, sad tradition of defining the “other” as subhuman. For now it’s a hypothetical situation because we just don’t know how to make a computer do anything even close.

    I don’t understand this at all. Is it nonlocality that is supposed to be the “weird” part? How would that explain anything about consciousness? And isn’t Bohmian mechanics a nonlocal deterministic theory anyway, so in what sense is that supposed to be impossible? And for that matter, what about some version of Many-Worlds?

    And even if that were somehow the case, why wouldn’t everything else be exactly the same, not just human brains? Isn’t this just the old problem of taking the word “measurement” or “observer” too seriously, to mean something like what a scientist does (or maybe a cat…) and not just any physical interaction?

    I mean, obviously we don’t have any computers that are conscious yet…. but it is just pure speculation (really bullshitty speculation) that quantum mechanics has anything to do with it. I mean, I’m sure it does have a lot to say at some level; but there’s no reason to believe it has anything to say to the effect that computers (or stuff that isn’t a human brain) can’t be conscious. That’s not an “open question,” it’s just ridiculous.

  42. Nick Gotts says

    PaulBC@36,

    I want to add that I don’t agree with this. Obviously the real world is full of analog stimuli, but I don’t think that substituting a discretized approximation in a finite alphabet has any bearing on the question of consciousness. I’m willing to admit I could be wrong. I just don’t see this as an obvious blocker.

    I’m thinking of things like the external world dropping something heavy on your foot, or warming you up/cooling you down. These are not just sensory stimuli, which could presumably be approximated in a finite alphabet: they affect your physiology in ways other than through nerve pathways. More broadly, it’s a mistake to think of the brain as a discrete object with an exact computational structure: that structure is continually changing, and can’t be separated from the rest of the body and the external world, with which it is in constant physical interaction. In this sense, it’s not a machine, despite the fact that there is nothing in it that works any differently from anything else in terms of physical law. In fact, I might even say because of that fact, it’s not a finite automaton (deterministic or otherwise), because it doesn’t have a precise boundary, and is constantly changing its structure. Indeed, even a digital computer isn’t a finite automaton in the mathematical sense, although we try to make it as good an approximation to one as we can: when your computer overheats and goes phut, it’s still acting in accordance with physical law! In contrast to the designer of a digital computer, however, natural selection didn’t set out to approximate a finite automaton as closely as possible. In this connection, there’s an illuminating paper by Adrian Thompson (unfortunately behind a paywall). Thompson used a genetic algorithm to evolve control systems for robots, but instead of doing the whole thing in simulation, he tested the proposed solutions in actual hardware. The best solutions used:

    physical properties of the implementation (such as the semiconductor physics of integrated circuits) to obtain control circuits of unprecedented power. The space of these evolvable circuits is far larger than the space of solutions in which a human designer works, because to make design tractable, a more abstract view than that of detailed physics must be adopted.

    Almost certainly, natural selection does the same thing far more extensively, so the brain will not be a finite automaton unless you go down to the level of fundamental particles – and in fact, not even then, because at that level it’s not a discrete system – particles are moving in and out of it and thus reconfiguring it all the time.

  43. twas brillig (stevem) says

    Are any of the participants here just chatbots? Everyone denies it, but how do the non-chatbots know? Are the chatbots here passing the Turing test and just pretending they aren’t, by debating the validity of the Turing Test? The problem here is so obviously meta: how does my mechanical brain distinguish between the bio-mechanics and the electro-mechanics? Even the bio-brain is partially electric… spiralling down the black hole of meta-thinking.
    I’ll just settle by saying, “yes, the ‘mind’ is a deterministic finite state machine, but it is distinguished as ‘conscious’ because it has so many possible states that even mega-hyper-computers still don’t have nearly as many.” That’s as far as my bio-FSM can go. So did I pass the Turing? o_O [or am I just a clever chatbot?]

  44. EnlightenmentLiberal says

    @PaulBC

    That’s overstating the case. To caricature Penrose’s view, you could speculate that quantum weirdness is the magic ingredient that makes minds capable of consciousness, and this is what’s missing from discrete deterministic computational models, or even stochastic ones.

    I don’t understand how you can attribute that position to me when I just spent pages arguing against it. I argued that, quantum weirdness or not, your brain and mind are a mechanical device, just like any other, bound to the laws of physics just like any computer or rock. Your mind and brain are a computable system, possibly with some true-random number generators thrown in, just like any other mechanical system. This should not be controversial.

    I don’t believe this at all, but the question is empirically open until someone produces a standard digital computer that exhibits all the characteristics associated with self awareness.

    I don’t know what the “this” refers to.

    Regardless, we don’t need to build a computer to show this. Our knowledge of biology, chemistry, and physics is enough to let us conclude rather safely that the brain is like any other organ of the human body, and is no more magic than a rock. It’s a physical system composed of physical parts obeying physical laws and no more.

  45. EnlightenmentLiberal says

    @Nick Gotts

    More broadly, it’s a mistake to think of the brain as a discrete object with an exact computational structure: that structure is continually changing, and can’t be separated from the rest of the body and the external world, with which it is in constant physical interaction.

    Of course. What I would try to get at is that physics itself is our universal Turing machine, running the ever-changing configuration of our brain.

    You are also right that “finite state” may not be an apt description because: 1- physics may not be discrete, 2- the effective “memory size”, “state”, or “transition table” of the brain may increase over time.

    However, to the extent that any actual mechanical system is described wholly as the parts operating by physics, the brain is no different. It’s a mechanical system like any other.

  46. says

    @PaulBC #40:
    “all the characteristics associated with self awareness” trivially include the Turing test, but what would your larger test actually demonstrate? In fact, twas brillig (stevem) #44 has it right when suggesting, albeit facetiously, that we are all chatbots. Could a machine be conscious in the sense that we observe other people to be conscious? This question is poorly posed, since it implicitly assumes that consciousness is a property of people/bodies/computers/brains/minds/whatever, whereas all of these are actually properties of consciousness.

  47. Nerd of Redhead, Dances OM Trolls says

    Could a machine be conscious in the sense that we observe other people to be conscious? This question is poorly posed, since it implicitly assumes that consciousness is a property of people/bodies/computers/brains/minds/whatever, whereas all of these are actually properties of consciousness.

    Citation needed….

  48. EnlightenmentLiberal says

    unless you go down to the level of fundamental particles – and in fact, not even then, because at that level it’s not a discrete system

    Couldn’t help but note that this might be wrong. Planck time and Planck length and everything. I’m really too ignorant of modern physics to know.

  49. PaulBC says

    “I don’t understand how you can attribute that position to me when I just spent pages arguing against it.”

    I didn’t. Replace “you” with “one” if that helps (i.e. “one could speculate”). You (the correct word choice in this case) seemed to be claiming that a brain was obviously reducible to a conventional computer, possibly with stochastic transitions (“shouldn’t even be controversial”). That’s what I said was an overstatement. The brain may be reducible to such a system. It would not surprise me if it is. It would surprise me if it isn’t. I just don’t see the harm in withholding judgment until someone actually produces such a system.

    “I don’t know what the “this” refers to.”

    “This” refers to the assertion made most famously by Penrose that quantum mechanics is intimately tied into consciousness. Penrose’s claim seems extremely unlikely to me, but again, I’m happy to withhold judgment until somebody actually produces an AI using conventional digital computing and appropriate software. That is not a crazy thing to demand. I still hope to see it in happen in my lifetime. So I’ll wait patiently rather than insist one way or the other.

    Please reread what I wrote in @40. I don’t think it’s as incoherently written as all that, but maybe… I think you started out by assuming that I was arguing against what you said, rather than mostly agreeing with a few qualifications.

    Finally, I agree with Nick Gotts that human experience is a lot more than symbolic computation occurring inside the brain.

    An AI would probably have an entirely different internal frame of reference, including very different drives and sense of aesthetics (assuming this makes any sense in this context and isn’t really specific to human evolution). I think humans might have more in common with naturally evolved extraterrestrials than the AIs they themselves create (I alluded to the Lockean right to life, but does the AI care as much about survival as the products of nature do?).

    So the issue isn’t really the Turing test as such, which is a particular thought experiment proposed a long time ago. The general issue is one of recognizing intelligence and self-awareness.

    So my point is that if you did recognize these elements, though it may be a very different intelligence, you could still reduce the system to parts that were obviously not self-aware when taken alone. If the system was highly sequential, like a Turing machine, this strains my credulity, because I can see that each time snapshot is just a single symbolic operation carried out on a repository of non-computing symbolic storage. But the fact that it strains my credulity doesn’t mean it isn’t so. In fact (as I said above) the limits of my intuition are more likely to be the explanation than some argument for why, all appearances to the contrary, it is not a conscious entity.

  50. PaulBC says

    EnlightenmentLiberal: “Our knowledge of biology, chemistry, and physics is enough to let us conclude rather safely that the brain is like any other organ of the human body, and is no more magic than a rock.”

    Sure. But I wasn’t leaving open the need for magic, only the need for real physics as opposed to digitally simulated physics. It’s true that quantum physics ultimately follows mathematics that can be carried out to an arbitrarily high approximation on a digital computer. But the slowdown in speed is significant (or else there would not be so much current interest in quantum computing). More to the point, Penrose attributes the phenomenon of consciousness to quantum mechanics. That seems like nonsense to me, but I don’t see how to dismiss the idea that the human brain could rely on some physical phenomena that aren’t directly captured in discrete logic. I doubt it. I would just rather see it settled directly.

  51. tynk says

    A Turing test of artificial intelligence can be passed by a sufficiently large processor able to quickly query a sufficiently large database of expected responses and, given a relatively expected input, produce a relatively expected response.

    A true test of intelligence takes the same large database of expected responses and, given a relatively expected input, produces a wholly unexpected, yet relevant, response.

    Intelligence is demonstrated not by command and response but by the creation of new responses given significant input.

  52. EnlightenmentLiberal says

    But I wasn’t leaving open the need for magic, only the need for real physics as opposed to digitally simulated physics.

    Ok… I see now. Apologies. What an interesting position.

    More to the point, Penrose attributes the phenomenon of consciousness to quantum mechanics.

    Yeah. This position is just a baby-step away from Deepak Chopra.

  53. Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says

    @Vijen:

    Just so you know, I’m ignoring your comments until you finally pony up that $730k you owe me. I am not amused that you show your face around here without satisfying that debt.

  54. Ichthyic says

    they are all saying exactly the same thing: that over 30% of the judges couldn’t tell that a program called Eugene Goostman wasn’t a 13-year-old boy from Odessa with limited language skills.

    anyone recall how Turing came up with the 30% figure?

    I would have thought it should be much higher, at least double that.

  55. Ichthyic says

    example:

    30% of Americans are creationists (young earth).

    30% of Americans don’t know it takes one year for the earth to go around the sun.

    basically, 30% of people in general are very gullible.

    I’d be slightly more inclined to think a Turing test meant something, if say, 75% of the people were convinced they were speaking with a human.

  56. Nerd of Redhead, Dances OM Trolls says

    @Nerd #49:
    Can you disprove my claim?

    Don’t have to. You must evidence your claims, or they can be dismissed as fuckwittery. You know that. So cite your claims, or shut the fuck up!

  57. says

    Vijen:

    Can you disprove my claim?

    Um, you made the claim. It’s your job to prove it. Just like it’s the job of theists to prove their claim that god exists.
    Going back to your statement:

    Could a machine be conscious in the sense that we observe other people to be conscious? This question is poorly posed, since it implicitly assumes that consciousness is a property of people/bodies/computers/brains/minds/whatever, whereas all of these are actually properties of consciousness.

    Can you provide evidence to support your claim?

  58. anteprepro says

    Vijen’s deliverin’ the evasive crankery, right on schedule.

    Crip Dyke

    Just so you know, I’m ignoring your comments until you finally pony up that $730k you owe me. I am not amused that you show your face around here without satisfying that debt.

    Oh, do you have a link to where that happened? It sounds entertaining.

  59. says

    @ Ichthyic
    See PaulBC @ 5. The 30% wasn’t part of his proposal of the test per se (nor was the 5 minutes), but rather a prediction of how well the test would be handled after 50 years. I don’t get the disparagement of the judges, though. The idea that there should be special qualifications seems to somewhat undermine the premise, to me.

  60. consciousness razor says

    PaulBC:

    I think humans might have more in common with naturally evolved extraterrestrials than the AIs they themselves create

    That is possible. The ethics of dealing with artificial intelligence is a pretty tough issue; but really, people generally aren’t too great when it comes to non-human animals either. Being “naturally evolved” really doesn’t have a thing to do with it.

    (I alluded to the Lockean right to life, but does the AI care as much about survival as the products of nature do?).

    Why wouldn’t it? We’re stipulating that by definition it would be self-aware, intelligent, and so forth. Whatever form that might take, how is that not implying it cares about itself and its own survival in the way we do? It seems to me like that comes with the territory of being aware or having a first-person perspective. Rocks and pencil sharpeners and thermostats don’t care about survival because they don’t have any experiences of anything. If you do experience the world and yourself, then you automatically have something to care about. It’s possible different things would be meaningful to them, but that isn’t the sort of thing you can get rid of without tossing out the whole thing.

    So my point is that if you did recognize these elements, though it may be a very different intelligence, you could still reduce the system to parts that were obviously not self-aware when taken alone. If the system was highly sequential, like a Turing machine, this strains my credulity, because I can see that each time snapshot is just a single symbolic operation carried out on a repository of non-computing symbolic storage. But the fact that it strains my credulity doesn’t mean it isn’t so. In fact (as I said above) the limits of my intuition are more likely to be the explanation than some argument for why, all appearances to the contrary, it is not a conscious entity.

    I don’t know if I understand your issue here. Brains have lots of different parts running in parallel. When you’re “conscious” of something, parts of the system are representing certain properties of the global state of the system. It’s making a map of what happens, not giving a complete description of every micro-detail. This kind of operation allows various parts of the brain to use that information in a new way that they wouldn’t be able to get through other (unconscious) pathways: they have access to a sort of “higher-level” representation of other parts put into context with each other, with some of the “unimportant” bits weeded out (but those signals aren’t unused, they just aren’t directly involved in experience). So being aware, having an experience, being a “self,” just means that this “self” is a particular kind of map or model or representation of the brain to itself. If you can do that, then you’re conscious. So if my description is even remotely accurate, then to me, it doesn’t resemble a Turing machine in what seem to be some fairly important ways.

    Is that basically what you’re saying, that a Turing-machine style of computer is insufficient? Or is it something else, like no computer of any design whatsoever could be conscious because they’d all need to be Turing machines? It might not be such a strong claim — maybe that it’s just “mysterious” or whatever. I don’t really know. At one point, it’s also about how it “strains credulity” that it’s made of parts that aren’t conscious, but I don’t see how that’s an issue at all, considering that we already know human brains are exactly like that. At yet another point, you say something about “discrete logic.” Is it about neurons/synapses being analogue? Why couldn’t we make an analogue machine then, if that’s what we need to do? Is it all of the above? I’m trying to be charitable here, but my problem is that I just don’t get which issue(s) you’re raising.

  61. says

    The rambling 2nd half of my 64 wasn’t meant to be directed at Ichthyic; it was just train of thought.

  62. vijen says

    The evidence requested is readily available, but you’re looking in the wrong place. In the case of Crip Dyke’s claim, no-one has any subjective experience of its veracity; whereas everyone (individually) has subjective experience of the truth of my claim.

  63. says

    cr @ 65

    Why wouldn’t it? We’re stipulating that by definition it would be self-aware, intelligent, and so forth. Whatever form that might take, how is that not implying it cares about itself and its own survival in the way we do?

    Self-preservation isn’t a trait that needs much in the way of intelligence. Evolutionarily it is easy to explain. I don’t see why an AI would have it unless it were programmed to have it, or there were some positive feedback from which it could arise. Especially since self-preservation is either lost or overridden in no small number of highly intelligent beings.

  64. consciousness razor says

    The evidence requested is readily available, but you’re looking in the wrong place. In the case of Crip Dyke’s claim, no-one has any subjective experience of its veracity; whereas everyone (individually) has subjective experience of the truth of my claim.

    I don’t. Argument refuted.

    Fuck, that was easy, and I can barely even tell what the fuck your argument is. Physical objects are properties of consciousness? Nope. Try again?

  65. Ichthyic says

    30% wasn’t part of his proposal of the test per se (nor the 5 min), but rather a prediction on how well the test would be handled after 50 years.

    ah.

    I don’t understand why a double-blind, standard statistical test cannot be done.

    if the idea is to actually test an AI for human-level responses, it certainly can be done better than this.

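    something like this, even (made-up numbers, just to show the shape of such a test): a decent-sized pool of blinded judges, a count of correct identifications, and a p-value against the null hypothesis that they were merely guessing:

        # sketch of a proper test; every number here is hypothetical
        from math import comb

        def p_at_least(n, k, p=0.5):
            """P(X >= k) for X ~ Binomial(n, p): chance of k or more correct calls by luck."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        # say 100 blinded judges each hold one conversation, and 70 correctly
        # pick out the machine; under the null (judges guessing, p = 0.5):
        print(p_at_least(100, 70))  # ~4e-05: far better than chance
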
    Turing’s predictions are weak, and I frankly don’t even understand their relevance.

  66. Ichthyic says

    has subjective experience of the truth of my claim.

    what the fuck does that even mean?

  67. consciousness razor says

    Try again?

    Also, explain whether you’re a solipsist or some other kind of idealist. It would help to know which rock I should start kicking.

  68. consciousness razor says

    what the fuck does that even mean?

    You experience that it’s true that stuff is made of experience. How do you know it’s true? Because you experience it! What if you don’t? Yes you do!

    It’s apparently a form of presuppositional idealism, but I’ve never been able to figure out which kind. Perhaps we are all parts of Vijen, arguing with each other, but Vijen knows which parts of Vijen are right. Or perhaps it’s something even more nonsensical.

  69. Nerd of Redhead, Dances OM Trolls says

    The evidence requested is readily available, but you’re looking in the wrong place.

    Until you tell us where it actually exists, with citations, you are full of bullshit as ever. You shouldn’t even think about posting until you can and will back up every claim you make with scientific evidence from the peer reviewed scientific literature. Understand?

  70. Ichthyic says

    …just to be clear, that’s a combination of presuppositional realism and justificatory idealism, right?

    cause if so, you’ve hit the magic middle ground!

    :)

  71. Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says

    @ichthyic, #71:

    It means that there are 2 kinds of claims.

    a) 1 kind of claim must always be proved
    b) Another kind of claim must never be proved, though it might be disproved.

    His writing also gives us insight into the distinguishing factors used for sorting claims into a or b.

    In the case of Crip Dyke’s claim, no-one has any subjective experience of its veracity; whereas everyone (individually) has subjective experience of the truth of my claim.

    Since I clearly do have a subjective experience of my claim’s veracity, and since without evidence vijen **further** claims that “everyone (individually) has subjective experience of the truth of my claim” [and he knows that because he’s spoken to literally everyone? I don’t remember that conversation] we now have a complete depiction of Vijen’s heuristic:

    If the claim is not made by me and I do not agree with it, it is category a.
    If the claim is made by me and/or I agree with it, it is category b.

    Checkmate Evidenceists!

  72. Ichthyic says

    so, when a religionaut tells me that I’m not an atheist, because there really are no atheists, only people angry at god…

    can we say that’s an example of presuppositional idealism?

    I would like to very much…

  73. consciousness razor says

    so, when a religionaut tells me that I’m not an atheist, because there really are no atheists, only people angry at god…

    can we say that’s an example of presuppositional idealism?

    I would like to very much…

    It’s presuppositionalism (a rhetorical strategy posing as an epistemology). But it’s not necessarily idealism (nonsense posing as metaphysics), in the sense of being a monist who thinks the one thing that fundamentally exists is a mind. The vast majority of those religionauts (stolen!) are dualists. All sorts of that nonsense is good, but they can’t go for that.

  74. Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says

    but they can’t go for that.

    No, no. No can do.

  75. says

    Vijen:

    The evidence requested is readily available, but you’re looking in the wrong place.

    That’s not how it works. Either support your claim with links to the evidence or STFU. It’s no one’s job to do YOUR work.

    In the case of Crip Dyke’s claim, no-one has any subjective experience of its veracity; whereas everyone (individually) has subjective experience of the truth of my claim.

    How about you actually respond to the claim instead of this evasive non answering bullshit. Or, you can STFU.

  76. consciousness razor says

    I … I … I’ll do anything
    that you want me to
    Yeah, I … I … I’ll do almost anything
    that you want me to
    But I can’t go for that

  77. vijen says

    Each of us is free to ignore our true nature – we enjoy pretending that we are localized, separate, and limited conscious beings – but the reality is always available. This game of misidentification can continue indefinitely, but if you can be open to the possibility that you aren’t who you think you are, then you will start to see the joke.

    The only way to know is subjective: objects lack epistemological access. And brains and minds are objects, so who is it that really knows?

  78. PaulBC says

    consciousness razor@65
    “That is possible. The ethics of dealing with artificial intelligence is a pretty tough issue; but really, people generally aren’t too great when it comes to non-human animals either. Being “naturally evolved” really doesn’t have a thing to do with it.”

    So it’s clear, I didn’t mean that being naturally evolved gives them privileged ethical status, just that it might be easier for humans to identify with them. It’s true that people do terrible things to animals, but we also feel empathy. People assume, quite reasonably, that a dog is driven by hunger or wants to play when it shows analogous behavior to a human with these wants. Our relationship to farm animals is more conflicted, but most of the mistreatment is explained by self-interest and hypocrisy rather than an inability to comprehend animals as conscious and feeling pain.

    So my main point is that a conversation with a genuine AI may be nothing like a Turing test, because the AI could be highly intelligent but so obviously non-human that it would never be mistaken for one even if the AI tried very hard. (Could be true of naturally evolved extraterrestrials too, not to mention AIs produced by extraterrestrials; it would be fun to find out, but that is not something I expect in my lifetime.)

    “Why wouldn’t it [have a self-preservation drive]?”

    I agree with D@68. I would instead ask “Why would it?” It’s clear how evolution would produce such a drive. It could also come in other ways, but it’s not obvious. How would you predict what the first AI actually wants? I’m also not saying that the absence of such a drive would remove human responsibility for the safety of the AI, just that it’s not obvious it would feel that way.

    My main point was that probably some of our basic assumptions about what another sentient being is like are very specific to humans (and other living things on earth). I believe there would be a way to conclude that an AI was self-aware (namely, that it could talk to you and tell you that it was), but I take the point that if I tried to explain that I am feeling a lot of pain because I just dropped a brick on my foot, I would at best get a kind of abstract appreciation of what I had told it.

    [Parts about a TM and the argument from incredulity.]

    First, my main point is that the argument from incredulity is invalid, so probably anything else I say, if I’m unclear, can just be ignored as secondary.

    Second, I intuitively associate consciousness with the instantaneous perception of a complex thought. (I am here. I experience my surroundings. I have a past that I remember and a future I can guess at.) I can picture the simultaneously firing neurons of my brain as the locus of that perception. But when I think of a Turing machine, changing exactly one symbol at a time, it’s clear that the locus of consciousness is neither the list of mostly unchanging symbols nor the finite control. So where is it? (Some very unintuitive thing spread out over a time span, no doubt, and of course “where” and “when” are both kind of silly questions.)

    It’s totally fair if you don’t find this an interesting question. As I said, the argument from incredulity is not valid. But my original point was that the Chinese room is a fairly cumbersome form of the argument from incredulity with unnecessary details, so why not just use a Turing machine?
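
    (To make the “one symbol at a time” picture concrete, here is a toy machine of my own devising, not anything from Turing’s paper: a binary incrementer. Every step rewrites exactly one cell, and no single snapshot looks remotely like a locus of anything.)

        # minimal Turing machine: rules map (state, symbol) -> (write, move, state)
        def run_tm(tape, state, rules, blank="_"):
            tape = dict(enumerate(tape))  # sparse tape, indexed by cell position
            head = 0
            while state != "halt":
                symbol = tape.get(head, blank)
                write, move, state = rules[(state, symbol)]
                tape[head] = write  # exactly one symbol changes per step
                head += 1 if move == "R" else -1
            cells = range(min(tape), max(tape) + 1)
            return "".join(tape.get(i, blank) for i in cells).strip(blank)

        # binary increment: scan right to the end of the number, then carry left
        rules = {
            ("scan", "0"): ("0", "R", "scan"),
            ("scan", "1"): ("1", "R", "scan"),
            ("scan", "_"): ("_", "L", "carry"),
            ("carry", "1"): ("0", "L", "carry"),
            ("carry", "0"): ("1", "R", "halt"),
            ("carry", "_"): ("1", "R", "halt"),
        }
        print(run_tm("1011", "scan", rules))  # 1100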

  79. Snoof says

    Oh, hey, it’s Vijen! Hi Vijen!

    For anyone who hasn’t seen Vijen before, their basic mode of argument is to insist that their own subjective experiences while meditating are an accurate description of reality, and that other people’s subjective experiences while meditating, intoxicated, doing scientific experiments, thinking about logic, living their lives, or pretty much anything else are correct if-and-only-if they agree with Vijen’s. An explanation of how to distinguish dreams and hallucinations from deep insights into the nature of reality is never forthcoming.

    Please note that Vijen won’t actually admit to any of this for at least a couple of dozen posts, preferring to be pointlessly cryptic, “ask questions” and hint without actually saying anything. I presume xe’s getting off on playing More-Enlightened-Than-Thou, since I can’t think of any other reason.

    Overall, a mildly-entertaining chewtoy but ultimately insubstantial. Two stars out of a possible five, would only be trolled again were I severely bored. Which I am.

  80. Snoof says

    Tony! The Fucking Queer Shoop! @ 89

    You’re welcome. This isn’t Vijen’s first round in the shark-tank, and it’s only fair everyone else know what to expect.

  81. vijen says

    If such foolishness were not so prevalent, it would be embarrassing to witness this parade of pseudoscientific twaddle posing as a genuine enquiry into the nature of consciousness. Look in!

  82. consciousness razor says

    I agree with D@68. I would instead ask “Why would it?” It’s clear how evolution would produce such a drive. It could also come in other ways, but it’s not obvious.

    I don’t see what the form of “production” has to do with it. I don’t think consciousness is adaptive simply because it provides a “drive” for self-preservation. It’s adaptive because you can do all sorts of thinking you couldn’t do otherwise. But all sorts of species exhibit behavior we could characterize as a “drive” or a survival instinct, even if they don’t show any signs of consciousness. That sort of behavior is all you really need if that’s the only concern, not “caring” about yourself in some sophisticated sense of being self-aware and having some understanding of your feelings about yourself. But if it is aware and does feel, no matter what else you want to say about this thing, how is it not already there?

    Suppose somebody just invented you, and you have no ancestors which were selected for in some way because of their adaptive behaviors. No matter how you might have come about, if you are aware of yourself and your feelings, what does it even mean to say that (maybe) you don’t feel the significance of your own existence? How could you not? I keep asking because I want a real answer to that, not “maybe so, but it isn’t obvious.” Why is this such a jaw-droppingly fascinating subject for so many people, even inspiring so many religious and ethical notions, that consciousness is such an extraordinary thing that we need to completely rework our entire concepts of ethics and epistemology and aesthetics and metaphysics just to try to capture what all of this means; yet at the same time, it wouldn’t be significant to itself as a conscious being if it wasn’t a member of an evolved species? It wouldn’t feel? It wouldn’t find itself mysterious? It wouldn’t want to go on living, for any reason at all, at the very least to try to understand itself a little more, because that’s crying out for some kind of explanation? I just don’t buy any of that. I don’t even see how you think you get from A to B.

    How would you predict what the first AI actually wants?

    I often can’t predict what a human actually wants. I often can’t even predict what I will actually want. So is that really supposed to be the question?

  83. PaulBC says

    ‘I keep asking because I want a real answer to that, not “maybe so, but it isn’t obvious.”’

    I’d like one too, but I’m not going to pretend to have one.

    I think this is getting too bogged down in details. I only meant that there might be significant differences between the inner dialogue of an AI and my own inner dialogue as a human. I suggested a few specific differences, but none of them are non-negotiable.

  84. says

    Vijen:

    If such foolishness were not so prevalent, it would be embarrassing to witness this parade of pseudoscientific twaddle posing as a genuine enquiry into the nature of consciousness. Look in!

    I *am* lookin’. You’re the one tossing out pseudoscientific bafflegab.

  85. consciousness razor says

    I only meant that there might be significant differences between the inner dialogue of an AI and my own inner dialogue as a human.

    Well, that I completely agree with, as I tried to say in my first response. It’s not just possible, but I’m pretty damned sure our own two inner dialogues are significantly different right now, without having any major differences in how we’re built or our evolutionary history or whatever.

  86. tynk says

    I don’t think consciousness is adaptive simply because it provides a “drive” for self-preservation.

    Consciousness is adaptive, because it is. The same reason we exist. Because we do.

    Many universes could have existed, and in almost all of them we could not have. We exist because we can exist. There is a probability, no matter how minute, that a universe exists with sentient beings; thus one does.

    Back to AI: its definition does not rest upon pre-generated responses to expected queries. True AI is dependent upon adaptation, the creation of ideas outside of expected parameters.

  87. richcon says

    The Turing Test’s rules insist that questions be completely unconstrained to avoid programs cheating by merely preparing scripts in advance. By pretending to be a 13-year-old immigrant with limited English, the machine’s engineers added their own artificial constraints to the conversation. They changed the rules of the test and therefore faked their way into a “win”.

    In other words, they Kirked the Kobayashi Maru of artificial intelligence tests.

  88. Ichthyic says

    Each of us is free to ignore our true nature

    Ooh! Ooh! free astrology reading!

    My sign is scorpio. Tell me what my true nature (TM) is!

  89. Ichthyic says

    If such foolishness were not so prevalent, it would be embarrassing to witness this parade of pseudoscientific twaddle posing as a genuine enquiry into the nature of consciousness. Look in!

    IOW… keep an open mind?

    …yeah.

  90. chigau (違う) says

    I was born in a Year of the Sheep
    Artistic, calm, reserved, happy, kind
    moi?
    私?
    bwahahahaha

  91. Amphiox says

    Consciousness is adaptive, because it is.

    No, you have to demonstrate that with evidence.

    “This feature is an adaptation” is not the proper null hypothesis for any biological feature.

  92. Nick Gotts says

    Vijen’s also a follower of that egregious fraud, exploiter of the stupid, antisemite, homophobe and collector of Rolls-Royces, Osho, also known as Bhagwan Shree Rajneesh.

  93. Nick Gotts says

    unless you go down to the level of fundamental particles – and in fact, not even then, because at that level it’s not a discrete system – me

    Couldn’t help but note, but that might be wrong. Planck time and Planck length and everything. I’m really too ignorant of modern physics to know. – Enlightenment Liberal@50

    I should have found a better word than “discrete”. What I meant was that you can’t, at that level, define an exact boundary between the brain and its environment, and elementary particles are flowing in and out of the brain all the time, so it can’t be assigned a unique computational structure. Even at more molar levels, it’s not clear what would count as a computation within a pre-specified structure (which is how abstract “machines” such as finite-state machines or Turing machines, and designed physical computers such as von Neumann machines, are conceptualised), and what as a change in the machine, since the brain’s information processing is affected by blood flow and blood chemistry, and it is constantly rewiring itself.

  94. Anri says

    Oh, Vijen is playing the “you all know that god exists, deep down, you just have to open your souls to experience him” card? The “I’m right because you all feel it and people disagreeing with me are just being stubborn/close-minded/angry/argleblarge” total bullshit?

    Feel free to look in so hard you fall in, Vijen.

  95. consciousness razor says

    “This feature is an adaptation” is not the proper null hypothesis for any biological feature.

    Also, you don’t just get any old “null hypothesis” you want out of anthropic reasoning. It can be a pretty shitty replacement for actually thinking, and it doesn’t have much use in this case.

  96. A Masked Avenger says

    More to the point, Penrose attributes the phenomenon of consciousness to quantum mechanics.

    Yea. This position is just a baby-step away from Deepak Chopra.

    Not exactly. Penrose is a crank, and I think he is trying to open a crack to inject his woo. But as EnlightenmentLiberal said more than once in the thread, “quantum mechanics” is basically a proxy for a true random number generator, on the assumption that some quantum phenomena are truly random (ignoring, for now, the philosophical debate over whether “random” is a genuine thing at all, and the scientific debate over whether quantum physics actually requires randomness).

    The desire to inject an unpredictable element is somewhat understandable at least. Consider: it’s possible to build a purely mechanical, Turing-complete computing engine, as Babbage demonstrated in principle. So if it’s possible in principle to simulate a human brain with a digital computer, then it’s also possible, in principle, to simulate a human brain with nothing but wheels, cogs and levers (albeit possibly using a machine larger than the moon, say).

    Some folks are disturbed at the idea that their brains are truly equivalent to a machine, in the literal gears-and-levers sense. But they accept the assumption of methodological naturalism, so they want a naturalistic explanation of the possibility that they are more than a clockwork mechanism. The only obvious possibility is to insert a random number generator. I’m not immune to hoping there’s a bit of me that can’t be captured by a digital simulation, though I haven’t the foggiest whether it’s so.

    Note that the brain itself is surely not digital. Neurons fire-or-don’t-fire, which is pretty digitalish, but they also operate asynchronously. It seems likely to me that the system overall is chaotic, which means that any simulation whose initial conditions differ in any way from the real ones is doomed to diverge. So it’s possible that we are unique snowflakes even if we are, in principle, duplicable: the duplicate might need to be accurate down to the resolution of Planck’s constant.
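
    (A toy illustration of that chaos point, my own sketch with nothing neural about it: the logistic map blows an initial difference of one part in a trillion up into total disagreement within a few dozen steps.)

        # sensitive dependence on initial conditions (logistic map, r = 4)
        x, y = 0.3, 0.3 + 1e-12  # two "simulations" differing by one part in a trillion
        for step in range(1, 61):
            x, y = 4 * x * (1 - x), 4 * y * (1 - y)
            if abs(x - y) > 0.5:
                break  # the trajectories have completely decorrelated
        print(f"step {step}: |x - y| = {abs(x - y):.3f}")  # diverges around step 40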

  97. PaulBC says

    “But as EnlightenmentLiberal said more than once in the thread, “quantum mechanics” is basically a proxy for a true random number generator,”

    I think a pseudorandom source will be equivalent for purposes of AI. The main thing is that the pseudorandom sequence shouldn’t be correlated with the input, but that’s very easy to achieve. There are many randomized algorithms (including, for instance, quicksort) that work just as well when a pseudorandom number generator is used instead of an actual random source. Randomization is mainly needed for efficiency rather than for increasing computational power, and in many cases it can be replaced by exhaustive enumeration at some loss in efficiency (and efficiency isn’t very relevant in this discussion).

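    (A sketch of that point, my own toy example: randomized quicksort neither knows nor cares whether its pivots come from a “true” random source or a seeded pseudorandom generator, so long as the stream isn’t correlated with the input.)

        # randomized quicksort with an explicit, seeded PRNG
        import random

        def quicksort(xs, rng):
            if len(xs) <= 1:
                return xs
            pivot = xs[rng.randrange(len(xs))]  # pivot choice is the only randomness
            return (quicksort([x for x in xs if x < pivot], rng)
                    + [x for x in xs if x == pivot]
                    + quicksort([x for x in xs if x > pivot], rng))

        # a fixed seed is perfectly deterministic, yet the algorithm works exactly
        # as well as it would with "genuine" randomness
        print(quicksort([5, 3, 8, 1, 9, 2, 7], random.Random(42)))  # [1, 2, 3, 5, 7, 8, 9]
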
    Penrose does not treat quantum mechanics as a proxy for randomness, but actually does believe that it is tied into consciousness directly. My take on this is that he is just picking the most mysterious component to hide the equivalent of a soul, but I’m sure he would insist otherwise.

  98. consciousness razor says

    Some folks are disturbed at the idea that their brains are truly equivalent to a machine, in the literal gears-and-levers sense. But they accept the assumption of methodological naturalism, so they want a naturalistic explanation of the possibility that they are more than a clockwork mechanism. The only obvious possibility is to insert a random number generator. I’m not immune to hoping there’s a bit of me that can’t be captured by a digital simulation, though I haven’t the foggiest whether it’s so.

    I don’t get why anything “random” would alleviate this sort of tension. It would be less disturbing to these people if they thought that we are equivalent to something “mechanical” (even though our brains aren’t themselves working “mechanically”) but also that it’s random? Randomness doesn’t seem especially … uh… comforting to me, to say the least. I mean, your description of their motivations/inclinations may be right (at least for some), but it doesn’t feel like this is really going to be sufficient for them in the end, if they thought about it a little more. It seems more like this is a handy place where they believe they can just stop thinking, because it appears to require no further explanation and appears to leave open all sorts of wooish possibilities that they actually wanted (e.g., free will) even though it doesn’t.

  99. PaulBC says

    Another thing about quantum mechanics as a proxy for randomness: if that’s all it was, there would not be the current interest in quantum computation. Quantum computations are quite different from merely random or stochastic ones, because they can explore many possibilities in superposition and be arranged, through interference, to output only those that satisfy desired conditions. This is significant mainly from the standpoint of efficiency, since a quantum computation can still be replaced with exhaustive search, but it is inaccurate to think of quantum mechanics as merely introducing coin flips into a calculation.

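    (A minimal sketch of the difference, my own toy example: apply a Hadamard “coin flip” to a qubit twice and the amplitudes interfere back to certainty, whereas flipping a fair classical coin twice leaves you at 50/50.)

        # amplitudes, not probabilities: two "quantum coin flips" cancel out
        import math

        H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
             [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate

        def apply(gate, amp):
            return [gate[0][0] * amp[0] + gate[0][1] * amp[1],
                    gate[1][0] * amp[0] + gate[1][1] * amp[1]]

        state = [1.0, 0.0]       # qubit in |0>
        state = apply(H, state)  # now (0.707, 0.707): measuring looks like a fair coin
        state = apply(H, state)  # interference brings it back to (1, 0) exactly
        print([round(abs(a) ** 2, 6) for a in state])  # [1.0, 0.0]: certainty, not 50/50
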
    I don’t really think the brain functions as a quantum computer, but I don’t think it can be ruled out. The burden of proof is on anyone who wants to claim it is true, rather than the other way around.

  100. Snoof says

    consciousness razor @ 112

    I don’t get why anything “random” would alleviate this sort of tension. It would be less disturbing to these people if they thought that we are equivalent to something “mechanical” (even though our brains aren’t themselves working “mechanically”) but also that it’s random? Randomness doesn’t seem especially … uh… comforting to me, to say the least.

    Exactly. If we’ve got an RNG (quantum or not) hooked into our cognitive architecture, that still doesn’t guarantee “free will”, it just means another possibility is “meat-robots with stochastic behaviour”.

  101. A Masked Avenger says

    consciousness razor, #112:

    Randomness doesn’t seem especially … uh… comforting to me, to say the least. I mean, your description of their motivations/inclinations may be right (at least for some), but it doesn’t feel like this is really going to be sufficient for them in the end, if they thought about it a little more.

    Agreed. For some it’s probably sufficiently comforting to know that it’s not merely deterministic. I think I’m loosely in that group. I just don’t like thinking that if you put the right coins in the right slot, and turn the right crank, I’ll reliably produce the desired reaction. (A sufficiently complex chaotic system is as satisfactory to me as a random one, though. As long as you get an unpleasant surprise once in a while when you’re sticking coins in me and turning my cranks. E.g., the fact that you ARE sticking coins in me is an additional datum that alters my reactions to otherwise similar coin-insertion events.)

    Some probably do imagine that their “soul” is hiding in the spaces between quanta, expressing itself through tilting the probabilities.

    Others are likely in between. Having a dice cup in their heads wouldn’t be very comforting, but they don’t need an actual ghost in the machine either. They’re content with being non-mechanistic, without thinking too hard about its philosophical implications.

  102. A Masked Avenger says

    Snoof, #114:

    Exactly. If we’ve got an RNG (quantum or not) hooked into our cognitive architecture, that still doesn’t guarantee “free will”, it just means another possibility is “meat-robots with stochastic behaviour”.

    That particular problem was solved for me while I was still a theist grappling with the implications of omnipotence. I.e., if God knows all, then sie knows all the future effects of every butterfly’s flap. But if sie is also omnipotent, sie can alter any butterfly’s flap in any arbitrary way, producing effects that sie can predict infallibly. That being the case, the Calvinists seem closest to correct about free will: it’s at best an illusion.

    Nevertheless, I believe in “free will.” My solution is to observe that my decision making is indistinguishable to me from a conscious, free choice; therefore I will proceed accordingly. Apart from chitchat over grog, I give approximately zero shits whether my choices are illusory, just as I normally give zero shits whether radioactive decay is, after all, truly random.

    (As above, though, actually knowing that I can be accurately simulated by a clockwork mechanism would be a hell of a come-down.)

  103. PaulBC says

    I agree that randomness doesn’t resolve issues of free will in a meaningful way. It’s not obvious that mind body dualism helps much either. Whatever entity represents our will is still bound by prior experience. The part that we conventionally attribute to “will” is the part that goes on in our brain that cannot easily be predicted without carrying out essentially all the same steps our brain would have to do. So, e.g.:

    If I’m stuck in a reinforced concrete cell, I can be said to be there against my will, since it’s easy to predict the unlikelihood of my escape based on my conditions. No amount of thought will allow me to choose to be outside the cell.

    Less drastically, if I was handed a violin and asked to play, my failure to produce anything other than screeching would not be attributed to free will but to the obvious consequences of never having learned.

    Conversely, if I eat at a restaurant and leave without paying the bill, assuming I am mentally fit and aware I’m doing this, that I was raised with an ordinary understanding of law, property, and social expectations, then that is free will. True, some initial conditions, some randomness, and some interaction with my environment got me to that point, but none of this is really very interesting, because you’d have to go through the steps in my head to figure out why I made that decision. It’s so far removed from the ultimate cause that by convention we attribute it to the proximate cause.

    Only my opinions, but that’s how I resolve free will, which I don’t think is an especially interesting question; and the answer doesn’t really change whether we’re talking deterministic, random, quantum, or spooky. The basic philosophical (non-)question remains. We take moral responsibility for the products of our own brains. Period.

    The interesting thing to me is where consciousness comes from, because it does seem to be real, but it’s not clear to me why it’s a necessary consequence of reasoning about self. I’m inclined to think it is, and that you don’t need quantum mechanics or anything else to explain it, but it is something that leaves me wondering.

  104. EnlightenmentLiberal says

    A compatibilist was here. You can have your mechanical processes which may be deterministic, and you can have your free will too.