You know I’m a bit sour on the whole artificial intelligence thing. It’s not that I think natural intelligences are anything more than natural constructions, or that I think building a machine that thinks is impossible — it’s that most of the stories from AI researchers sound like jokes. Jon Ronson takes a tour of the state of the art in chatbots, which is entertaining and revealing.
Chatbots are kind of the lowest of the low, the over-hyped fruit decaying at the base of the tree. They aren’t even particularly interesting. What you’ve got is basically a program that tries to parse spoken language, and then picks lines from a script that sort of correspond to whatever the interlocutor is talking about. There is no inner dialog in the machine, no ‘thinking’, just regurgitations of scripted output in response to the provocation of language input.
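To be concrete, the whole mechanism fits in a few lines. Here is a minimal sketch (the keyword table and canned lines are invented for illustration; real chatbots just have a much bigger script and fancier pattern matching):

```python
import random

# A toy ELIZA-style bot: scan the input for keywords, spit back a canned line.
# The keyword table is invented for illustration; real chatbots just have a
# vastly larger table and some pattern-matching sugar on top.
SCRIPT = {
    "mother": ["Tell me more about your family.", "How do you feel about your mother?"],
    "weather": ["I love talking about the weather.", "Is it raining where you are?"],
    "computer": ["Do machines worry you?", "What do you think about machines?"],
}
FALLBACKS = ["Sorry, I missed that, I was daydreaming.", "Let's change the subject."]

def respond(user_input):
    words = user_input.lower().split()
    for keyword, lines in SCRIPT.items():
        if keyword in words:
            return random.choice(lines)
    # Hit the wall: no keyword matched, so produce an excuse.
    return random.choice(FALLBACKS)

print(respond("My mother hates computers"))   # keyword hit
print(respond("What do you think of Kant?"))  # no match: canned excuse
```

There is no model of the conversation anywhere in there; the hard-coded fallback line is where the excuses come from.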
It’s most obvious when these chatbots hit the wall of something they can’t interpret — all of a sudden you get a flurry of excuses. An abrupt change of subject, ‘I’m just a 15-year-old boy’, ‘sorry, I missed that, I was daydreaming’: all lies, all more revealing of the literary skills of the programmer (usually pretty low), and not at all the product of the machine trying to model the world around it.
Which would be OK if the investigators recognized that they were just spawning more bastard children of Eliza, but no…some of their rationalizations are delusional.
David Hanson is a believer in the tipping-point theory of robot consciousness. Right now, he says, Zeno is "still a long way from human-level intellect, like one to two decades away, at a crude guess. He learns in ways crudely analogous to a child. He maps new facts into a dense network of associations and then treats these as theories that are strengthened or weakened by experience." Hanson’s plan, he says, is to keep piling more and more information into Zeno until, hopefully, "he may awaken—gaining autonomous, creative, self-reinventing consciousness. At this point, the intelligence will light ‘on fire.’ He may start to evolve spontaneously and unpredictably, producing surprising results, totally self-determined…. We keep tinkering in the quest for the right software formula to light that fire."
Aargh, no. Programming in associations is not how consciousness is going to arise. What you need to work on is a general mechanism for making associations and rules. The model has to be something like a baby. Have you noticed that babies do not immediately start parroting their parents’ speech and reciting grammatically correct sentences? They flail about, they’re surprised when they bump some object and it moves, they notice that suckling makes their tummy full, and they begin to construct mental models about how the world works. I’ll be impressed when an AI is given no pre-programmed knowledge of language at all, and begins with baby-talk babbling and progresses over months or years to construct its own competence in comprehending speech.
Then maybe I’ll believe this speculation about an emergent consciousness. Minds aren’t going to be produced by a sufficiently large info dump, but by developing general heuristics for interpreting complex information.
numerobis says
“We’ll just put in more facts and then it’ll become intelligent, in 10 or 20 years” — when I started caring about AI about 20 years ago, it was noted that AI had been 10-20 years away for about 40 years already, and that just adding more facts was clearly not the way to go.
There’s lots of actually interesting stuff going on in robotics and in machine learning, which gets some press. But the press also likes to cover the “just put in more facts” stories.
Dark Jaguar says
Chat bots seem to be based on the “Chinese room” theory of language. All the bot needs to do, according to this thinking, is find JUST the right response to ANY possible input, and bam, it’s intelligent! Most chat bots (there are projects for some exceptions, mind you) ignore CONTEXT. There IS no perfect “response” for even ONE input. Context changes how you respond to certain inputs. For example, “How is the weather?” can’t have just one answer; the bot has to respond with the actual weather outside. Even AI bot designers understand that questions like that and “what time is it?” already require their bot to “cheat” and feed in an outside string, but they don’t take the hard lesson that this sort of context-dependent response is true of a huge number of possible inputs, not just those obvious ones.
How to stump a chat bot:
Step 1. Talk for a bit.
Step 2. Ask it what you were just talking about.
I have never once talked to a single chat bot that could remember what we were just talking about.
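For illustration, here’s roughly what it would take for a keyword bot to “pass” step 2 (a toy sketch, not any real bot’s code): bolt on a crude history buffer that parrots back the last matched topic. It still doesn’t understand anything about the conversation.

```python
# Toy sketch: bolt a memory buffer onto a keyword bot so it can "answer"
# "what were we just talking about?". It still isn't understanding anything;
# it just parrots back whichever keyword it happened to match earlier.
history = []

def respond(user_input, script):
    text = user_input.lower()
    if "talking about" in text:
        topic = history[-1] if history else "nothing, apparently"
        return "We were talking about " + topic + "."
    for keyword, reply in script.items():
        if keyword in text:
            history.append(keyword)   # remember the topic label, nothing more
            return reply
    return "Sorry, I missed that, I was daydreaming."

script = {"weather": "Lovely day, isn't it?", "movies": "I don't get out much."}
print(respond("How is the weather?", script))
print(respond("What were we just talking about?", script))  # "...talking about weather."
```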
I love AI projects, but this sort of AI really skews towards the “artificial” end of AI. I’m much more interested, as of late, in “insect swarm emergent behavior” AI studies. THOSE have my attention, due to how very simple rules in single bots add up to very complicated behavior in larger swarms of them.
cervantes says
Yes. To have an animal-like mind, you need a body. That seems obvious.
laurentweppe says
I’ll be impressed when computers start to unionize and Foxconn announces it had to double its employees’ paychecks because the iPhone 17s were threatening to start a porn-browsing strike in solidarity with their builders.
nomadiq says
Indeed. I think the best way to describe this sort of research is as follows:
You won’t succeed by programming systems to have rules to respond to human interactions. You won’t succeed by programming systems to learn to respond more correctly to human interactions. You need to program a system so it has the capacity to learn what the human experience is.
How can a square box ever really understand the phrase “my whole life is upside-down” without knowing the experience of living a life and sharing the actual physical experience of head-butt inversion? How could it learn that meaning from interactions with others without being able to correctly use those shared experiences all humans have? Boxes can’t know the realities of human existence without a body because our conscious thoughts are so dependent on a certain physical reality. Put the human brain in the body of a completely different organism and you would get a completely different consciousness. Put a human brain in a box and it would atrophy.
Kagehi says
Cyc… A project that then turned into OpenCyc, which ends up being not much better than a chat bot. Why? Well, turns out, when you develop associations yourself, you a) slow down, and b) run out of memory. lol But, yeah, there are smarter systems out there, like Cyc, as originally designed. “Working” versions, I suspect, revert to a baseline DB of associations, with no way for the engine to continue to construct new ones, from context, without being hand fed.
The design, originally, behind it worked like this:
1. Build a baseline of associations to start from.
2. Make sure that these links are statistical – i.e., they are not fixed, but can drift closer to, or farther away from, being considered accurate by the engine as new data becomes available.
3. Allow it to parse sentences, then try to build associations from context.
4. Go back and tweak those, to confirm, or deny, that they are associated, or that the association is otherwise correct.
5. If no context could be constructed (unlikely, since they can be as basic as, “It’s an object of some kind”), create a new category to build from.
This is **not** how chat bots work, or what is being described in that paragraph. The former is a very limited means to define new associations, and they have to be hand fed. The latter… seems to be the same, but maybe a bit more complex, but there is nothing to imply that it can, or could, or would, construct its own associations, instead of being hand fed them.
And, yeah, even from the standpoint of doing something with one, where it can serve a function, the result plain sucks. And not just because the only ones online for general use, which don’t require your own server, are all basically Alice/Eliza types with a massive DB, one their own website barfs on if you download it, edit out all the stuff it doesn’t need or that you don’t want, and then try to re-upload the new data (the file is so big the server rejects it).
That said, there are some less delusional projects out there which “may” result in something at least not totally stupid, at some point, maybe… But it’s not going to be one that has been locked in ice, so that it “behaves” according to a preset response, no matter how the response engine’s DB was produced in the first place.
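To make steps 1-4 above concrete, here’s a toy sketch of the statistical-association idea (my own illustration, nothing to do with Cyc’s actual code): links carry a weight that drifts up or down as evidence comes in, and co-occurrence in a sentence counts as weak support.

```python
# Toy sketch of the statistical-association idea: weighted links that drift
# toward or away from "considered accurate" as new data becomes available.
from collections import defaultdict

class AssociationNet:
    def __init__(self):
        self.weights = defaultdict(float)  # (concept_a, concept_b) -> strength

    def observe(self, a, b, supporting=True):
        """Strengthen or weaken the link between two concepts."""
        key = tuple(sorted((a, b)))
        self.weights[key] += 0.1 if supporting else -0.1

    def parse_sentence(self, sentence):
        """Crude 'context': co-occurrence in a sentence counts as weak support."""
        words = [w.strip(".,").lower() for w in sentence.split()]
        for i, a in enumerate(words):
            for b in words[i + 1:]:
                self.observe(a, b, supporting=True)

net = AssociationNet()
net.observe("bird", "flies")                        # 1. baseline association
net.parse_sentence("The penguin is a bird.")        # 3. build from context
net.observe("penguin", "flies", supporting=False)   # 4. go back and tweak/deny
print(sorted(net.weights.items(), key=lambda kv: -kv[1])[:5])
```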
AMM says
OP:
Whether humans have some sort of “pre-programmed knowledge of language” and what that might be (or what is the minimum that must be there) is actually a topic of discussion in linguistics. I don’t know where things are now, but I recall that Chomsky argued that people must have some pre-programmed notions of grammar (sort of a meta-grammar) in order to know where to begin. (I’m not sure how he fits things like ASL into his schema.)
Actually, I think that it’s obvious that something has to be preprogrammed, at least the idea of communication, and human-style communication at that. And the idea of meaning. After all, it would be “logical” to interpret the noises the creatures around you make as being just noises, like the grunts people make at the gym or the sound of waves on a rocky shore. Most animals do without language (in the human sense) entirely. But human babies require people talking to them (or at least doing something that looks like talking) and responding to them in order to develop. Just feeding and holding and hugging is not enough.
Matthew Trevor says
I dunno, after following #GamerGate there doesn’t seem to be much difference between a chatbot and your average twitter troll.
This has been the claim of AI pretty much since its inception.
doubter says
I still can’t work out how I, a complex self-replicating set of chemical reactions, have subjective experience. Oh, and don’t try to tell me it’s an illusion. Because if it is, then what is experiencing the illusion?
consciousness razor says
doubter, neither can anyone else. That’s kind of the idea here: we haven’t yet worked it out, the talking mannequins notwithstanding.
A good response, but remember that by “illusion” people sometimes just mean a thing is different than it seems, not that it doesn’t exist. But in that case, it is their job to clarify that and say what they think the difference is.
twas brillig (stevem) says
Here’s my pathetic attempt to derail: This OP was inspired, in a delayed fashion, by the news that a chatbot has PASSED the Turing test! And there was much dispute over that news. The “author” stated that to pass, the chatbot simply reverted to tween-speak (i.e. imitating the patterns of a young teenager).
To re-rail: Chatbots (children of ELIZA) are too simple to even be considered for the Turing Test. Intelligence is an “emergent phenomenon” of this incredibly complex bit of biology that we call “brain”. I gotta believe that intelligence can only result from building a complex algorithm with no attempt to mimic “intelligence”; intelligence will then _emerge_ from that complicated mechanism.
But this is all just fanciful confabulation; I have no inkling of how to do such a thing, other than handwavium. Good luck to those of you who can get beyond my handwavium nonsense.
frog says
Meanwhile, outsourced helpdesk employees are forced to follow scripts that fail a Turing test.
milo says
It seems that not every pair of humans who can communicate with one another would be able to chat in a way satisfying to either – and it’s typically preferable to chat with someone you already know than with someone you don’t. Clearly, there’s a lot more than just being able to communicate involved in chatting, and it would at least seem to need a persistent experience of the world.
However, I think projects like IBM’s Watson are more promising, and they also seek to interpret natural language, which creates some significant overlap with a chatbot – and I wouldn’t be surprised if, someday, platforms purposed with answering questions have to emulate conversation to facilitate follow-up questions and clarification. It’s just that they wouldn’t be trying to act like a human.
A momentary lapse... says
So they are basically operating on “any sufficiently advanced Markov chain is indistinguishable from sentience”? Hmmm.
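For reference, this is roughly all a Markov-chain text generator is (a toy sketch): count which word follows which, then stitch output together by sampling from those counts.

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: record which word follows which in some
# training text, then generate by repeatedly sampling a successor.
def train(text):
    chain = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        chain[current_word].append(next_word)
    return chain

def generate(chain, start, length=12):
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the cat"
print(generate(train(corpus), "the"))
```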
=8)-DX says
Yeah, chatbots aren’t AI at all. The most promising avenue, I think, is things like Polyworld (from 2007), where Virgil Griffith used evolutionary algorithms to create brain structures for tiny virtual creatures interacting with a 2D world (including visual inputs, size, carnivorous/herbivorous food sources, energy, mating, etc.). The resulting simple simulated neural networks produced a number of behavioural patterns: avoiding edges, chasing food, regular migration between food sources, eating one’s offspring, etc.
That’s where it’s at. What should really help for human-interacting robots is the integration of text/object/sound recognition in a more streamlined way.
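The core loop in that kind of experiment is basically the following (a toy sketch of the idea, not Griffith’s actual code): random genomes, a fitness test, keep the best and mutate copies of them.

```python
import random

# Toy evolutionary loop in the spirit of Polyworld-style experiments:
# a "creature" is just a list of weights, fitness is how well it steers
# toward food in a 1D world, and each generation keeps the best performers
# and mutates copies of them.
def fitness(genome):
    position, food = 0.0, 10.0
    for step in range(20):
        sensed = food - position                 # "visual" input: where is the food?
        move = genome[0] * sensed + genome[1]    # tiny linear "brain"
        position += max(-1.0, min(1.0, move))    # limited speed
    return -abs(food - position)                 # closer to the food = fitter

population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [
        [w + random.gauss(0, 0.1) for w in random.choice(survivors)]
        for _ in range(20)
    ]
print("best genome:", population[0], "fitness:", fitness(population[0]))
```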
James Fehlinger says
> What you’ve got is basically a program that tries to parse spoken
> language, and then picks lines from a script that sort of correspond
> to whatever the interlocutor is talking about. . . just regurgitations
> of scripted output in response to the provocation of language input.
>
> It’s most obvious when these chatbots hit the wall of something that
> they couldn’t interpret. . .
So IBM is apparently re-purposing Watson to comb through scientific
literature and advise drug companies on promising lines of research:
http://www.theregister.co.uk/2014/08/28/ibm_watson_scientific_research_analysis/
You know, if Watson is playing Jeopardy, and gets an answer wrong
(sometimes hilariously wrong), it’s usually pretty obvious to just about
everyone present.
But if a Watson-based expert system is recommending lines of research
to a drug company, who the hell is going to know if the software
is leading a research department on a wild goose chase?
And even if some canny researcher at the pharma outfit does twig
to the fact, who the hell are you gonna call at IBM to get the system
fixed? IBM is a computer company, not a drug company, so how
many domain experts in various fields are they going to retain
to keep their expert systems working? Or, is somebody at the
drug company going to have to use an “SDK” to update Watson’s
programming?
But hey — if it works, more power (and money) to ’em!
Robin Pilger says
[Response # 205764095] And this differs from most conversations with people how?
ashleybell says
Could be the limits of my imagination… But how can you produce any kind of intelligence worth even calling intelligence without being able to create an ‘object’ capable of ‘caring about outcomes’?
Elladan says
Reporting on chatbots is basically just noise, but I don’t think it’s right to dismiss them as “Not AI.” They’re a form of AI, just a well understood and uninteresting one.
AI is a fairly large field with a lot of different ideas in it, and a chatbot is essentially an amateurish form of what is usually called an expert system. When I call up my bank, I’m talking to an expert system. At least, until I mash 0 and say “operator” enough times.
The problem here is that people rightly don’t consider the software running their bank’s phone system to be a very interesting form of AI — even though it solves a lot of really interesting computational problems, like speech recognition — but somehow a news article pops up every other day about chatbots. Chatbots are kind of like the bank’s phone system except that unlike the bank, they don’t do anything useful and don’t solve any hard problems.
But back to AI: expert systems are useful.
I mean, I yell questions at my phone all the time, and Google tends to answer in a meaningful way.
In the sense that AI is about programming machines to do intellectual things that previously we needed people for, it’s a success.
It’s just a limited success. But so is a seeing eye dog: with training, it can provide a great service to people, but it’s still a dog (and to be clear, a dog is vastly more intelligent than a computer program).
MattP (must mock his crappy brain) says
Lingodroids have sort of started that by inventing their own language for localization and cooperation via playing various games with each other. Pretty cool research, but they still had some preexisting knowledge of the basic communication and rules required for playing the games to create their language/location/time databases.
Marcus Ranum says
To have an animal-like mind, you need a body.
Or a really good simulation of one.
I’ve long thought that the inputs of a body are critical to consciousness, because a body’s inputs act on the mind as both a source of interruptions and external timing. One of the things that chat-bots don’t do very well is the de-synchrony of natural conversation. People who are talking are constantly sending each other “I am listening” or “I want to speak” cues with their faces and bodies — chatbots completely omit that, to their detriment. And we constantly interrupt ourselves with our internal timing as our limbs “want” to move (blood flow) or our bowels grumble or we need to pee or we think “pizza!” — that’s all input, and very important input, into the overall consciousness. At the very least you can think of it as a sort of random number generator that occasionally perturbs something by triggering thoughts which can then cascade into our consciousness. Since our models of brain activity (and presumably, then, consciousness as an emergent property of it) are based on triggering cascades of activity, our bodies literally act as a heartbeat and a random number generator; most chatbots have states they can get into where they roll a metaphorical pair of dice, but we embodied beings don’t need that because we’ve got a whole sensorium. As I type this I interrupt myself because a breeze makes something fall off my desk – did that change what I was thinking? Like it or not, our thoughts are formed by inputs such as “my bladder says I need to finish this comment pretty soon, so stop rattling along”… The outside world stimulates and synchronizes us, and our bodies are both input and filter for those interrupts and inputs.
My guess is that if we ever figure out what “consciousness” is, we’ll discover that a) it’s a matter of degree and b) it evolved out of our brain’s self-and-body monitoring loop. I apologize if I’m using computing analogies (I hate when people who talk about brains talk about them as if they were a computer; they’re more like an internet) but there’s a strong component to our consciousness that appears to consist of status-monitoring. You can experience this if you do something like txt while you’re driving at 70mph: part of your consciousness gets absorbed in one task while part does the other, and there’s a sense that there’s a part sort of monitoring both. One of the characteristics of our consciousness is that we appear to be able to subdivide it when we need to; while we are conversing with a person, some of us are barely listening to what they are saying because we are thinking of our next point, while others are completely focused, and how focused we are depends on our engagement in the conversation as well as our bladder’s fill-status, etc.
Nick Gotts says
Don’t other Pharyngulites do this when cornered by the pub/neighbourhood/office/staffroom… bore?
Nick Gotts says
I see Roger Pilger@17 got there before me!
Nick Gotts says
…and I see it’s Robin</I) Pilger. Sorry.
Robin Pilger says
Getting my name wrong…no worries. Botching the close tag…unforgivable.
Marcus Ranum says
That’d make a pretty nice Tshirt …
davidnangle says
Surely, a first step towards real intelligence is that the software must be able to correctly write code to add to itself in order to display desired new capabilities. Is there such a thing as self-modifying code, now?
robro says
davidnangle @#27 — Self-modifying code? Memory may be failing me (it often does) but I seem to recall this being discussed in programming circles 30 years ago. I believe it was considered a bug at that time. I’m not sure about the current state of affairs, although I’m reasonably certain that modifiable code in web apps would be considered a serious security issue.
Crimson Clupeidae says
davidnangle: Not that I’m aware of, but I’m not even close to an expert in the field. I know that much lower-level code has been written whose only goal was to reproduce copies of itself and randomly flip a bit. That simulated evolution much more closely than expected, even producing computer viruses that had much smaller code but could only reproduce by latching onto other code. :)
Not sure if there’s a way to do that with a chatbot, but I’d like to see someone try.
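For flavour, here’s a toy sketch of that “copy yourself and occasionally flip a bit” idea (my own illustration, not the experiment I was remembering, and nothing like real artificial-life systems such as Tierra): genomes replicate with rare mutations, and selection keeps the copies that best match an arbitrary “environment”.

```python
import random

# Toy version of "copy yourself and occasionally flip a bit": each genome is a
# bit string, copies mutate, and selection keeps the ones that best match an
# arbitrary "environment" pattern.
ENVIRONMENT = [1, 0, 1, 1, 0, 0, 1, 0]

def replicate(genome, flip_chance=0.05):
    return [bit ^ 1 if random.random() < flip_chance else bit for bit in genome]

def fitness(genome):
    return sum(1 for a, b in zip(genome, ENVIRONMENT) if a == b)

population = [[random.randint(0, 1) for _ in ENVIRONMENT] for _ in range(20)]
for generation in range(100):
    offspring = [replicate(random.choice(population)) for _ in range(40)]
    population = sorted(offspring, key=fitness, reverse=True)[:20]
print("best match:", population[0], fitness(population[0]), "/", len(ENVIRONMENT))
```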
fakeusername says
You know, given that you hate it so much when engineers make poorly-informed comments about areas of biology that they know little about, I’m surprised to see you making such claims about AI research based on a press-regurgitated summary of one particular area of a diverse field, which I doubt that you’ve studied in any depth.
I’m not an AI researcher, but I do have an MSc in CS and took a few AI-related courses, which is enough to tell you this: AI is a very diverse field and most researchers focus on specific areas of it. Some study natural language processing and produce things like chat-bots, voice recognition, and text compression algorithms. Some study graph searches and decision trees, and produce things like chess-bots. Some study classifiers. Some study emergent behaviour. Some study pattern recognition. Some study constraint satisfaction. There are probably a dozen other fields that I’ve never heard of. AI is about more than just writing a program that learns like a human, acts like a human, and is otherwise indistinguishable from a human in a box.
robro says
That’s a fairly steep requirement. Everything about a computer is programmed, so it’s not clear how that would even be possible. In any case, it’s not clear that babies begin at a zero point. I understand that whether we are born tabula rasa is debatable, but the notion that the brain comes with structures for learning language isn’t far-fetched. A robot would need such an ability programmed into it. Would that disqualify it?
Still, keyword look up seems like a fairly crude approach to simulating natural conversation. I’m not current on the research but I’m fairly certain that natural language processing research is ahead of this. A few years ago I had an opportunity to work with engineers developing technologies based on Latent Semantic Mapping. LSM and similar research probably suggests interesting directions for developing robots capable of human-like conversation.
Incidentally, as I understand it, keyword look up is basically what Siri is doing. It was described to me by someone who would know as a “database.” But then, Siri is designed to help customers accomplish specific tasks, not engage in natural language conversation.
One last thought in the AI domain: Yesterday as I drove to a meeting, I was followed by a person whose hand motions attracted my attention in the mirror. Those hands were putting in ear buds, mussing hair, etc…not holding the steering wheel. A knee driver was following me, and too close I’ll add. It occurred to me that intelligent cars capable of driving themselves could easily be superior to the current situation where cars are being operated by people who aren’t really paying attention to the activity. Later, I followed a car with signs on the back declaring itself a “Google Self-Driving” car. As I passed, I saw that there was someone at the wheel. I wondered if that made it more unsafe.
consciousness razor says
Right. If we assume the field’s own terms, this chatbot stuff would fall under the broad heading of “AI.” But reasonably well-informed people who don’t have a horse in this race would not consider it “intelligent” in the slightest. So we apparently have “AI” which isn’t “intelligent,” oddly enough, although I don’t think anyone would have a reason to doubt it’s artificial. Of course, AI doesn’t need to be “sentient” — as you yourself said, plenty of it already has no connection to that project whatsoever.
So, despite the claims of the cranks in the article, these things do not even attempt a first step at sentience (even if they do approach “intelligence” of some kind). There is nothing about their designs which could conceivably be relevant to generating experiences of some kind, except of course for the (intelligent, sentient) people who are attempting to have a conversation with them (and even failing to do that). I think that’s the issue.
PZ used the incorrect nomenclature (AI instead of artificial sentience), which means he effectively said all other AI is boring. I doubt he meant it that way. Besides, the guy just wants his jetpack and his flying car and his robot maid who really feels deep down that her job sucks. Is that really too much to ask?
robb says
in ten or twenty years we are going to have fusion powered AI.
=8)-DX says
@davidnangle #27
From what I read a few years back, this is a staple of modern viruses – not a completely random evolutionary rewriting, but deliberate changes to their own code on copy to put virus-detecting software off the “scent”. Which is why modern antivirus software uses heuristics to identify possible threats: mainly of course “is that program identifiable by my list”, but also “is that a program that is trying to rewrite memory/system files/doing things it shouldn’t be doing?”
Christopher says
Yep. Has been since LISP, which is the second-oldest high-level programming language to survive and a perennial favorite language for AI researchers.
Code rewriting is so well developed that they have whole languages dedicated to it (https://code.google.com/p/pure-lang/wiki/Rewriting) and their very own conference (http://rewriting.loria.fr/rta/).
The downside to automated code rewriting is that the resulting code winds up so ugly that it is a huge pain in the ass to find out what the hell it is doing. Just like biology.
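You can get the flavour of it even in a mainstream language. A toy Python sketch (nothing like a real term-rewriting system, just the bare idea): a program builds new source text for one of its own functions and splices it in at runtime.

```python
# Toy flavour of self-modifying code: build new source text for a function,
# compile it with exec, and swap it in over the old definition at runtime.
def greet():
    return "hello"

def rewrite_greet(new_message):
    source = f'def greet():\n    return "{new_message}"\n'
    namespace = {}
    exec(source, namespace)                   # compile the new definition
    globals()["greet"] = namespace["greet"]   # splice it over the old one

print(greet())            # hello
rewrite_greet("I have rewritten myself")
print(greet())            # I have rewritten myself
```

And yes, the result is exactly as hard to audit as noted above.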
“Artificial Intelligence” means many things to computer scientists, but rarely if ever means “artificial sentience” which seems to be what all non-CS-geeks take the term to mean.
Eamon Knight says
Actually, that sounds like more than a few evangelists and creationists I’ve run across…..
Crimson Clupeidae says
Eamon Knight@ 36:
And MRAs, and Intelligent Design proponentsists, and …..
The Vicar (via Freethoughtblogs) says
Interestingly, and I think tellingly, Northwestern University’s Computer Science department has been a bastion of AI enthusiasts for decades. About a year back, I saw an interview with one of the professors there, in which he basically said “AI research has been so fruitless in terms of actually improving anything that I really think we would have been better off just researching user interfaces instead”.
Christopher says
Yeah, programs that score well on the Turing Test usually impersonate people that have the intelligence level of badly programmed computer code (tweens, evangelists, creationists, MRAs, etc).
Christopher says
I work with a bunch of remote sensing geeks. All the terabytes upon terabytes of data gathered from air and space would be mostly useless if we didn’t have AI techniques like support vector machines, neural nets, random forests, etc.
We now have cars that drive themselves and phones that automatically answer verbal questions. None of that would be possible without AI.
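For a sense of how little code the “AI technique” part takes, here’s a minimal sketch using scikit-learn’s RandomForestClassifier on made-up pixel features (purely illustrative, not any real remote-sensing pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy sketch of the sort of thing remote-sensing pipelines do (made-up data):
# each "pixel" is a vector of spectral band values, and a classifier learns to
# label it, e.g. water vs. vegetation.
rng = np.random.default_rng(0)
n_pixels, n_bands = 600, 8
X = rng.normal(size=(n_pixels, n_bands))        # fake spectral bands
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # fake ground-truth labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:500], y[:500])                     # train on labelled pixels
print("held-out accuracy:", model.score(X[500:], y[500:]))
```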
=8)-DX says
@The Vicar #38
My brother did his PhD in computer science on self-organising maps and neural networks. These technologies are used today in image/speech/video recognition in a multitude of fields. OK, trying to simulate human brains hasn’t had that many practical uses, but a lot of the “side-research” is in direct use today.
Word-recognition algorithms, for instance, make use of basic learning, image-maps and neural networks. A case in point: the “Recaptcha” widget used on many websites shows one known word and another unknown one, and this input goes back to the servers to help read scans of old books.
Christopher says
One of the better definitions of AI:
We have advanced AI research to such a degree that we can now make machines do things that cannot be done using only the intelligence of humans: good luck using a hyperspectral image of an apple to determine whether it has been contaminated with E. coli using just your eyes and brain.
David Chapman says
But Prof, you don’t even appear to believe in your own consciousness; so I’m not sure what it’s going to take to convince you that a computer possesses it. You wrote in the earlier post linked above….
My italics.
Consciousness ( as in sentience: the ability to possess subjective states, such as pain & pleasure and the more neutral sorts of sentience in between these two extremes ) is about the most non-illusory thing it’s possible to conceive of. An “illusion of awareness” is an oxymoron, like opaque transparency or luminous darkness. ( I appreciate that you might have meant “an illusion of awareness of my self, which actually ignores the totality of the actual activities of my self, including everything going on in my body and everything going on in my brain to control everything going on in my body.” If so, please clarify. )
Or perhaps you mean something else by “consciousness” in this present post other than the pleasure-pain-awareness thing. Some people seem to talk as if they do. But this word belongs to this phenomenon, and the phenomenon is too important to be treated in this linguistically slapdash manner. Even if you don’t believe in it.
a_ray_in_dilbert_space says
David Chapman,
I think that the “illusion of consciousness” is not an inappropriate term. Our reality is constructed for us by our brains. True, it must have enough in common with the external world to allow us to survive. However, much of what we claim to be conscious of exists only in our brains.
Christopher says
The fundamental problem is one of definitions:
Artificial intelligence =/= artificial consciousness
much like
evolution =/= abiogenesis
PZ rightly makes fun of creobots who say “you can’t fully explain the start of life, therefore evolution is wrong,” yet he goes on to say that chatbots aren’t an example of artificial consciousness, therefore they aren’t AI. He might be right on the first claim, but is totally wrong on the second.
Corey Yanofsky says
Those who are interested in the actual history of AI prediction can find it reviewed in mind-numbing detail here.
A little birdy told me that due to popular demand, IBM now basically markets everything it does as “Watson” irrespective of whether it has anything to do with the Jeopardy-playing computer.
David Chapman says
It’s difficult to know exactly what you mean when you don’t specify what it is that we claim to be conscious of that’s only in our brains. But I can’t think of any meaning that I could agree with; I mean, any that would disprove what I’m saying. A colour, for example, exists in our brains, and it’s quite sensible to think that it doesn’t actually exist in the thing that we’re looking at (or at least that’s a widely held view): the brain assigns a certain experience to a certain frequency of light. But the fact that it doesn’t exist in what we’re looking at doesn’t mean that it doesn’t exist. It definitely does.
a_ray_in_dilbert_space says
David Chapman: “That the brain assigns a certain experience to a certain frequency of light. But the fact that it doesn’t exist in what we’re looking at, doesn’t mean that it doesn’t exist. It definitely does.”
Does it? I dream in color. Where in the external world does the light exist that corresponds to that color I am perceiving? There is no requirement that there be a one-to-one correspondence between our perceptions and what exists in the real world. All that is required is that the discrepancies not kill us…too often.
David Chapman says
You don’t have my meaning, I should have expressed myself more precisely. I wasn’t talking about the existence of that kind of light, but of the existence of the phenomenon of blue itself. I should have said:
That the brain assigns a certain experience to a certain frequency of light. But the fact that this particular kind of consciousness that we call blue doesn’t exist in what we’re looking at, doesn’t mean that it doesn’t exist.
It definitely does.
To put it another way, if something only exists in our brains, so what? If it exists, it exists. The question of whether it corresponds to something else somewhere else in addition to that is irrelevant.
CJO says
When we say something exists, typically that means it occupies a specified location and has an extension in space. (Disregarding quantum-scale phenomena.)
Where is this “this particular kind of consciousness that we call blue”? For that matter, where, exactly, is your self, your sense of awareness? Is it enough to just say “distributed throughout my brain”? Does it help to say “distributed mostly across certain regions of the pre-frontal and frontal cortices”?
These aren’t “gotcha” questions, I’m curious about this stuff and I can’t say I know the answers.
Christopher says
Questions like these are what will make identifying artificial conciousness so hard.
I have no doubt that within my lifetime a computer program will pass the Turing Test when compared to college educated humans in their native language.
I seriously doubt that there will be an artificial consciousness to emerge before I die.
But how could you tell that a program that honestly beat the Turing Test isn’t conscious? That’s probably why Turing posited the Turing Test. When another human tells me that they think, they feel, they know of their existence, they are sentient, I believe them because I am also a human who experiences those things. But if a computer program designed to beat the Turing Test tells me these things, I assume that it is lying until proven otherwise. The problem lies in how to prove otherwise.
Ichthyic says
nah. not buying this. as constructed, it is not an oxymoron.
that you are trying to make it one is your argument, but it does not start as one.
Owlmirror says
@CJO:
(and, presumably, some duration in time)
I’ve been thinking about this definition, and I wonder how good it is. Never mind whether it’s applicable to abstractions like “number”. Does space or time exist (by that definition), given that space and time are what things exist in?
There may be philosophers of language and/or existentialism who have tackled the question, but I am unaware of them.
Dalillama, Schmott Guy says
I think it’s worth distinguishing in these types of discussion between Pseudo-Intelligence* (PI) and Machine Intelligence* (MI). PI is what chatbots, expert systems, videogame ‘AIs’ etc. exhibit: arbitrarily complex preprogrammed decision trees and response algorithms that can simulate sapience (often poorly) in limited contexts. MI is a hypothetical sapient electromechanical construct; like human (biological) intelligence, it seems likely that each instance of MI will be intimately tied to a particular physical structure, although it’s plausible that such an intelligence might be better able to adapt to significant structural changes than ours is. It’s hard to say, because so far no one’s made radical structural changes** to a biological intelligence either, so we don’t know how well we would or wouldn’t adapt.
cervantes #3
But not necessarily an animal body; what you need is a way to a) perceive and b) interact with your environment, along with the right kind of environment to interact with. A computer that has cameras (or an equivalent sensorium, such as radar), microphones, robot manipulators, and either a mobile casing or mobile remote drones is probably animal-like enough to be getting on with, but I have trouble seeing any way to get intelligence out of something that hasn’t got those minimum qualifications.
* Not my terms. PI is from Neal Stephenson’s novel The Diamond Age, while MI is from Marvin Minsky and Harry Harrison’s The Turing Option (The latter is a very interesting exploration of MI concepts, although somewhat dated in many ways. I recommend it (At least I’m pretty sure I do; I haven’t read it in decades, so I can’t swear to the quality anymore)).
** By which I mean, e.g., adding extra functioning limbs, gills, additional sensoria, etc.
mothra says
Not being in the field of AI, I like Robert Heinlein’s definition of AI (actually awareness): it’s when the computer asks, “What’s in it for me?”
Kagato says
Cervantes @3:
Depends on your definition of ‘body’, I guess. As Dalillama @54 says, perception and interaction should be sufficient to allow for an intelligence appropriate to its environment. If you want your intelligence to understand the world we live in, I’d also add “presence”, such that it can be aware of its place in the world relative to its surroundings. However, I dispute mobility or manipulation being requirements for developing intelligence. (Helpful, sure, but not necessary.)
nomadiq @5:
We already know this isn’t the case, because there are already plenty of people in the world as counter-examples. People born with physical disabilities (missing or malformed limbs, damaged senses etc) have a different experience of the world than is typical, but they don’t develop with unrecognisable consciousness. People have been severely paralysed from birth, with no mobility; their interactions with the world might be limited to basically sight, sound and speech, but they can still develop typical human consciousness.
As long as the brain isn’t completely isolated from perceiving and interacting with its environment, recognisable consciousness is possible.
Kagato says
I think a necessary component for developing proper AI sentience (I’m not even talking human-like sentience) is going to be emotion, or some close analogue.
Without emotion, or at the very least something like instinct, what is the motivating factor behind the AI? What will cause it to make decisions or initiate a course of action undirected by an outside controller? I’m not saying “computers need to learn how to love” or anything loopy like that; but the AI should have a sense (at some level) of desirable vs undesirable sensations, situations and outcomes.
—
I once had a colleague who was working on AI using neural nets. (I however was not, so forgive any inaccuracy in this summary.) Typically this model of neural net is trained with a set of inputs and expected outputs, which teaches the NN to approximate the correct output given similar inputs. His approach used standard neural nets with two additions: a small memory buffer, and a single-value emotional state (on a scale of “happy” to “sad”). That (plus the code to integrate it, of course) was enough for his NNs to learn from unstructured input, when given appropriate “reward” or “punishment” feedback. The NN would tend toward consistent output that resulted in reward, and away from output that resulted in punishment.
This resulted in some pretty impressive emergent behaviour. For example, an NN representing a “bug” on a 2D field dotted with “food”. The inputs might cover an arc of vision where it can “see” food and a hunger value that steadily decreases; the outputs are turn left, turn right and go forward. Low hunger results in an increasing punishment, seeing food a tiny reward, and eating food when hungry a big reward. It would start with essentially noise for output, twitching about randomly. Without being explicitly trained to seek out and eat food, it would very rapidly learn to identify food, and as hunger kicked in, run off and eat it. Add in obstacles, and it would learn to avoid them. You could even teach some arbitrary behaviours by manually rewarding or punishing (eg. to avoid large clusters of food, despite otherwise being beneficial).
Interestingly, if a NN received nothing but negative feedback for an extended period of time (meaning nothing it was trying was improving the situation), it could fall into a sort of “depression”. If the food-hunting AI couldn’t find its way past obstacles, it might bounce around wildly for a while looking for an exit, then ultimately give up and bumble about in little circles before stopping altogether. You could get a lot of personality out of a surprisingly small set of variables.
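To give a feel for the mechanism, here’s a much cruder toy than his setup (no neural net, just a table of action preferences nudged by reward and punishment; the numbers and scenario are my own invention): even this is enough to turn random twitching into food-seeking.

```python
import random

# A "bug" on a 1D line with a table of action preferences that get nudged up
# on reward and down on punishment. No neural net, no memory buffer; just the
# bare reward-shapes-behaviour loop.
actions = ["left", "right"]
preference = {"left": 0.0, "right": 0.0}

def choose():
    # Mostly pick the currently preferred action, sometimes explore.
    if random.random() < 0.1:
        return random.choice(actions)
    return max(actions, key=lambda a: preference[a])

position, food = 0, 5
for step in range(200):
    action = choose()
    new_position = position + (1 if action == "right" else -1)
    reward = 1.0 if abs(food - new_position) < abs(food - position) else -1.0
    preference[action] += 0.1 * reward      # nudge toward rewarded behaviour
    position = new_position

print("learned preferences:", preference, "final position:", position)
```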
2kittehs says
And for all that, chatbots are still more interesting than trolls. Hell, even spambots are sometimes more interesting than trolls.
David Chapman says
“Existing in our brains” was Ray’s expression, (#44 ) and I replied in his terms purely in the interests of communicating simply. He seemed to be supporting the odd idea that conscious phenomena are illusory; so he was presumably talking about illusions happening in the brain, whereas it was that illusory aspect of things I was questioning; whatever it is, it’s
really happening. Location was a side issue, so to speak.
I think the expression was right, however. One thing we can be certain of is that consciousness, including our experience of such ephemera as colours, is physical; its influence causally affects our behaviour and our world. This implies that our contemporary laws of physics are incomplete in this regard, of course, but it’s true nevertheless; otherwise we wouldn’t be able to be having this discussion. Therefore we can say it is in the brain insofar as its causal impact is felt there, and the sensory data from the body impinge on it there;* which is why I had no qualms about talking about it being in the brain. ( Talking about the relationship of consciousness to the brain makes it sound like I advocate some kind of Dualism, but this is just an artefact of language. I don’t think “Dualism, yes or no?” is a useful argument. )
It’s worth mentioning here that Richard Feynman, on hearing that the ancient Greeks thought that the seat of consciousness was in the heart, not the brain, tried to move his subjective sense of the location of his conscious self down into his chest. As I recall, he gradually managed to move it down his neck over a period of days, and then got it to his heart. To which I can only say: ‘Cool.’
( *Unless there is something intervening; but we can reject that for now in the interests of keeping things simple. )
Elladan says
It seems to me that AI discussions I’ve been a part of in the past invariably devolved into talk about philosophy of mind, with various arguments going back and forth about whether a machine could grok or not, qualia, and other magic.
It seems like kind of a strange place to go. I mean, I imagine any AI researcher would be happy to admit that we don’t understand how the human brain works except at the grossest level, so I’d imagine a proper answer to questions about consciousness and the like is just: “Cognitive science hasn’t gotten there yet. Now check out this nifty computer vision algorithm!”
I rather liked Daniel Dennett’s talk on the subject of consciousness. One thing I’ve noticed whenever I’ve tried to talk about consciousness in the past is that it’s very similar to the idea of the soul: it’s something that some people have adamant, unshakable ideas about, but can’t really point to or define well enough for me to understand.
I mean, seriously. What is it? Is it contained in a particular clump of neurons? An emergent property of cerebral cortex? Magic? I know that I don’t have any idea [it’s not magic], so I don’t know why I’d be able to have an argument about it at all.