What does it even mean to pass the mirror test?


The mirror test is a well-known indicator of some degree of self-awareness: surreptitiously mark an animal’s face, show it a mirror, and see whether it recognizes the reflected image as itself, as shown by whether it reaches up to touch or remove the mark. We see that behavior and infer that the animal has some knowledge of itself and can recognize that the mirror image is not another animal.

But now robots are being specifically programmed to pass the mirror test.

Ow. It makes my brain hurt.

So this is a computer that has no other indicators of consciousness or awareness or autonomous “thought” (whatever that means…my brain is hurting again), and is being coded to respond to a specific kind of visual input with a specific response…to literally pass the mirror test by rote. Does that really count as passing?

I think that all it actually accomplishes is to subvert the mirror test. The test has always been a proxy for a more sophisticated cognitive ability: the maintenance of a rich mental map of the world around us that includes an entity we call “self”, and I don’t think that training a visual processing system to identify a specific shape unique to the robot’s design counts.
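
To make concrete what I mean by “by rote”, here’s a crude sketch (purely illustrative, and nothing to do with the actual robot’s code) of a mark detector hard-wired to one mark color and to the patch of the image where the robot’s own face is known to appear in the mirror:

```python
# Hypothetical sketch of "passing the mirror test by rote": the detector is
# hard-wired to a known mark color and to the image region where the robot's
# own face is expected to appear in the mirror. Nothing here models a "self".
import numpy as np

# Hypothetical calibration: where the robot's face shows up in the mirror view.
FACE_REGION = (slice(100, 200), slice(150, 250))   # rows, cols
RED_THRESHOLD = 120                                 # arbitrary 8-bit cutoff

def mark_on_self(frame: np.ndarray) -> bool:
    """frame: H x W x 3 RGB image. True if a red-ish blob sits inside the
    pre-programmed 'this is me' region."""
    face = frame[FACE_REGION]
    red, green, blue = face[..., 0], face[..., 1], face[..., 2]
    mark_pixels = (red > RED_THRESHOLD) & (green < 80) & (blue < 80)
    return mark_pixels.sum() > 50   # "pass" if enough marked pixels are found

# Usage: a synthetic frame with a red dot painted into the calibrated region.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[130:140, 180:190, 0] = 200
print(mark_on_self(frame))   # True -- the "test" is passed entirely by rote
```

There’s no “self” anywhere in that; move the camera or change the marker color and the “self-recognition” evaporates.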

I’d also like to see what happens if two identical robots are made and put in the same room. To recognize “self” you also have to have a concept of “other”.

Comments

  1. says

    Does that really count as passing?

    Yes.

    It shows the mirror test doesn’t necessarily measure what people thought it did. Perhaps it measures “self-awareness,” but then, yes, by that measure robots have “self-awareness.” It seems to me that the problem is more that we don’t have a good definition of “self-awareness” to begin with. Perhaps it’s just a label for a purely subjective phenomenon. I think I have it, personally, but maybe I was just programmed by my parents jiggling me up and down while I saw myself in the mirror and realized “the kid in the mirror is also jiggling and screaming, just like me! oh…”

  2. says

    Well, passing the mirror test. But the thing we’re trying to measure is self-awareness. Is the robot self-aware? I don’t think so.

  3. says

    Doesn’t this really cut to the heart of the problem? If we don’t have a definition of “self-awareness” that we can reliably measure against, then I guess we can’t even tell if we’re measuring it (whatever it is) rather than something else. I hate to say it, but this does sound like one of those places where philosophers/epistemologists have a point.

    I guess it’s a problem similar to the whole “free will” debate. The mirror test is (sort of) measuring one attribute of “self awareness” that is clearly measurable. The claim is, of course, “there’s more to it than that.” To which the rejoinder has to be, “OK, but what?”

  4. Cuttlefish says

    In a sense, robots and computers may be more self-aware than we are. My computer’s “downloading–15 seconds remaining” is a more accurate report of its functioning than, say, introspectionist accounts of conscious thought. We have very little access to our processes of thinking (lacking sensory neurons in the brain, we cannot feel ourselves think in the same way that we feel ourselves run), only to a subset of the outcomes of those processes. We readily deny real influences on our behavior and thinking, and make up reasons that are not actually reasons.

    To paraphrase Glen @#2, it is as if we are the ones who have simply been programmed to say “I am self-aware”.

  5. Blondin says

    When someone invents a robot that goes on a quest for the blue fairy so it can become a ‘real’ boy, then we’ll have self-aware robots.

  6. says

    Cuttlefish writes:
    In a sense, robots and computers may be more self-aware than we are. My computer’s “downloading–15 seconds remaining” is a more accurate report of its functioning than, say, introspectionist accounts of conscious thought.

    Consider a maintenance feedback loop that has gotten so complicated that it’s unmanageable and, because it’s self-organized out of the totality of sensors available to it (regardless of their quality), unpredictable. According to one of my robot-loving buddies who plays in the DARPA challenge, the computer in the robot is pretty similar: a self-organizing maintenance loop that dispatches inputs and generates outputs. It’s possible that the qualitative difference is that the maintenance loop runs in parallel in a meat robot and in sequence in a silicon robot. The actual input devices (cameras, lidar, lasers, touch sensors, tire pressure sensors) sometimes offer simple values and other times report a rolled-up status that’s the result of their own complex loops (“hard drive: bad sector remapped, continue about your business”).

    What the mirror test seems to be measuring is that the visual systems of the device make a correct inference about the spatial location and reflectivity of surfaces.

    Obviously, a blindfolded person is not “self aware.”

  7. blf says

    Hang on, I thought it was the mirrors who were being tested. For being self-aware. Better known as magical. It is called The Mirror Test.

    So how do you test a mirror for being a Fairy Godmother or Wicked Witch or Grand Vizier or whatever magic model?

    On a slightly more serious note: Are there alleged self-awareness tests which use sense(s) other than vision?

  8. Enkidum says

    @Marcus #5:

    I don’t think the problem is that we don’t have a clear definition. Indeed, nothing in science has a clear definition before we know what it is, because we don’t know what it is. Imagine saying to Maxwell that he needed a clear definition of electrons, or to Darwin that he needed a clear definition of species. Speaking as a kind-of philosopher / mostly scientist, I think that one thing you can do to most effectively shut down scientific progress is to insist on definitions prior to discoveries. In these cases where science is moving into hitherto unexplored territory, being sloppy and vague is not a bug, it’s a feature.

  9. Kevin Anthoney says

    I’ve never been convinced that self-awareness is a meaningful concept. Perhaps if there was a test to show that something was aware but not self-aware I might change my mind.

    As far as I’m concerned, the mirror test simply shows that some animals are smart enough to figure out how mirrors work. Or, in the case of the robots, the programmers are smart enough to know how mirrors work.

  10. says

    Enkidum writes:
    I think that one thing you can do to most effectively shut down scientific progress is to insist on definitions prior to discoveries.

    Yeah, you’re right. I misspoke. We don’t need a definition; what’s really lacking is a theory. Once we have one, we can start to do tests. If there are enough testable theories, then we can say that a creature meeting enough tests of those theories gets the label.

    So if we look at the mirror test, we might say that a certain response to a mirror is generally present in self-aware creatures; combine that with a host of other tests, and then yeah, eventually we can say something appears to be self-aware. If we had a theory of self-awareness that consisted entirely of the mirror test, then blind creatures (and vampires, per hyperdeath!) couldn’t be self-aware.

  11. says

    Or, in the case of the robots, the programmers are smart enough to know how mirrors work.

    Might we say that the programmers “taught” the robot to know how mirrors work? Then we have to consider the question of whether experience with mirrors taught us how they work, too. We were presumably not born knowing mirrors, nor were other animals.

    The mirror test is pretty interesting; I tried it on my dogs (noseprints on the glass) and my horse (“ZOMG! AUGH!” runrunrunrun…) Has anyone tried it on babies? I’m sure Piaget would have if the test had been common back then. Anyone with a wee experimental subject handy? ;)

  12. janicot says

    Take this comment with a grain of salt — it’s from a complete layman.

    I’ve always thought the mirror test shows at least as much about our own biases as about the subjects.

    I wonder how a dog would react to a smell mirror (if we could figure out how to make one)? It seems to me that pretty much each creature — even different people — will have its own perception of its world based on its own tools for acquiring and processing the information presented to it.

  13. says

    I’m reminded of a genetic algorithm test I read about a few months ago. The selection criterion was set to favor chip configurations that broadcast a sinusoidal radio signal. They intended for the chip to calculate the sine wave and broadcast it. What they got, however, was a very simple chip that used a length of wire as an antenna to pick up an existing sine-wave signal in the lab environment and then relayed that sine wave to get a good selection score. They got a parasite instead of a producer. (A toy version of this kind of gameable fitness check is sketched at the end of this comment.)

    I’m kind of wondering about the mirror test with cats. Whenever my family got a new kitten, they’d stare down their reflection for a while. Then they’d ignore mirrors. I always wondered if they realized it was their reflection or if they just decided that if mirror cat leaves them alone, they’ll leave mirror cat alone.
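
    Here’s the toy version I mentioned above (entirely made up, just to illustrate the point): the fitness score only checks whether the output looks like the target sine, so a candidate that merely relays an ambient copy of the signal does just as well as one that actually generates it.

    ```python
    # Toy, hypothetical illustration of a gameable selection criterion: fitness
    # only measures correlation with the target sine, so a "parasite" that
    # relays a signal already present in the lab scores as well as a "producer".
    import numpy as np

    t = np.linspace(0, 1, 500)
    target = np.sin(2 * np.pi * 5 * t)     # the sine wave the chip should emit
    ambient = target.copy()                # a matching signal already in the lab

    def fitness(output):
        """Selection looks only at how well the output correlates with the target."""
        return abs(np.corrcoef(output, target)[0, 1])

    producer = np.sin(2 * np.pi * 5 * t)                      # computes the wave itself
    parasite = ambient + np.random.normal(0, 0.01, t.size)    # merely relays the lab signal

    print(round(fitness(producer), 3), round(fitness(parasite), 3))   # both ~1.0
    ```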

  14. pembroke529 says

    This sounds like the shenanigans that video graphics driver programmers would pull back in the day. They would optimize the graphics drivers to pass the existing video benchmarks, which didn’t necessarily indicate a superior video chip or software.

  15. jaybee says

    We see that behavior and infer that the animal has some knowledge of itself and can recognize that the mirror image is not another animal.

    My dogs wouldn’t be interested in a mark on their forehead because they are entirely uninterested when they get the chance to see themselves in a mirror. There is no indication that they think they are looking at another animal either. Either they recognize the mirror as an uninteresting reflection of themselves (self-aware?) or they are so confused by it that they just prefer not to study it.

  16. Amphiox says

    On a slightly more serious note: Are there alleged–self-awareness tests which use sense(s?) other than vision?

    Not a test per se, but what the mirror test does visually is somewhat analogous to the ability to recognize one’s own name.

    So the observation that dolphins in the wild appear to use unique identifiers for themselves that might be names was considered a possible piece of evidence that they are self-aware.

  17. Amphiox says

    I should add “and also uses one’s own name spontaneously to refer to oneself” to that. You’d need both the recognition and the self-use, I think, since we can train lots of animals to recognize/respond to a name we give them.

  18. The Lorax says

    Before we have artificial intelligence, we will have a sophisticated series of adaptable computer programs that mimic the behavior of artificial intelligence but are ultimately the result of algorithmic methods and rote code.

    Essentially, we will have artificial artificial intelligence.

    If something mimics human behavior by brute force, so much so that you cannot tell whether it is alive, is it alive?

  19. busterggi says

    Does the Mirror Test have any check built in to determine whether the test subject gives a damn about what it looks like?

    Seriously, does it? Because I don’t stop to check my appearance every time I see myself in a mirror, most of the time I pay no attention. Then again my socks match mostly due to luck.

  20. amstrad says

    Passing the mirror test is simply _evidence_ of self awareness. Nobody is going to declare an agent self aware based on one test.

  21. amstrad says

    Or to take it one step further, what if I create an agent that can effectively fake all possible tests of self awareness that you can come up with? What is the difference between that agent and a truly self-aware agent? (Hence the Turing Test.)

  22. unclefrogy says

    I understand that the “mirror test” tries to show whether the subject is self-aware, but does it really show self-awareness?
    It shows that the subject, as someone said above, knows how the mirror works, yes. Not passing may not show that the subject doesn’t know how a mirror works, but rather that the subject isn’t very interested in the mirror or in what it shows them.

    That a dog would not “notice” something on its body if the only cue was a visual one does not surprise me. They do not seem to care about that kind of stuff much anyway.
    uncle frogy

  23. bromion says

    I have a bone to pick with you, PZ. Whenever you come across a terrible news article misrepresenting biology, you criticize the article and show why it misses the point. When you read a terrible news article involving engineering, you seem to take it at face value and criticize the science. The BBC article provided virtually no information about the project, just a few quotes and some nonsense speculation about the mirror test. This has little to do with what’s going on in the project, which involves making use of mirrors in spatial reasoning and manipulation. As often happens, the news media takes the unimportant side story and runs with an aggrandized version of that, rather than a description of the actual work being done (which, no doubt, is too specific to be interesting to the general public).

  24. says

    Has anyone tried it on babies? I’m sure Piaget would have if the test had been common back then. Anyone with a wee experimental subject handy? ;)

    It’s one of those developmental milestones we test for. It’s on the Rossetti Infant Toddler Language Development Scale, somewhere between 12 and 18 months.

  25. grumpyoldfart says

    My guess is that the robot will have a white “face” and the spot will have a contrasting colour (bright red perhaps). The robot will “see” the spot, calculate its position and point to it.

    What about painting the whole face bright red? I wouldn’t be surprised if the robotic “self awareness” disappears – until it is reprogrammed.

    So not really self-aware.
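
    A rough sketch of that guess (purely hypothetical, not anything the real robot is described as doing): find the pixels that contrast with the expected white face, take their centroid, and point there.

    ```python
    # Hypothetical spot detector for a white-faced robot: anything that isn't
    # white counts as "the spot"; its centroid is where the robot points.
    import numpy as np

    WHITE_MIN = 180   # face pixels are expected to be bright in all channels

    def locate_spot(face_image: np.ndarray):
        """face_image: H x W x 3 RGB crop of the robot's face seen in the mirror.
        Returns (row, col) of the contrasting spot, or None if there isn't one."""
        non_white = np.any(face_image < WHITE_MIN, axis=-1)
        if not non_white.any():
            return None
        rows, cols = np.nonzero(non_white)
        return rows.mean(), cols.mean()   # centroid: where to point

    face = np.full((100, 100, 3), 255, dtype=np.uint8)   # all-white face
    face[40:45, 60:65] = (255, 0, 0)                     # a bright red spot
    print(locate_spot(face))                             # about (42.0, 62.0)

    face[:] = (255, 0, 0)                                # paint the whole face red
    print(locate_spot(face))                             # (49.5, 49.5): everything is "the spot"
    ```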

  26. infraredeyes says

    We don’t need a definition. It’s really the lack of a theory; then we can start to do tests.

    To expand on what Enkidum said about definitions, I suggest that there is a perfectly valid stage preceding theory in many sciences. Call it the “let’s try some stuff and see what happens” stage. After the fact, it’s usually possible to gin up some kind of a theory and claim that you were testing it, of course. Now, robotics may be past that stage; I don’t know enough about it to say. But flailing around trying to make enough sense out of something to even begin on a theory…that, in itself, is an aspect of science.

  27. Amphiox says

    Seriously, does it? Because I don’t stop to check my appearance every time I see myself in a mirror, most of the time I pay no attention.

    I do not think that ignoring the mirror/reflection constitutes a failure of the mirror test though. The test proceeds until consistent reactions are elicited. If the animal reacts to the reflection in the way it reacts to another animal, with say a threat of dominance display, then it fails. If it reacts in a manner that suggests it recognizes the reflection as representing itself, then it passes.

  28. Amphiox says

    But flailing around trying to make enough sense out of something to even begin on a theory…that, in itself, is an aspect of science.

    It’s basically what we did before we developed the scientific method.

    So perhaps it’s more accurately called “pre-science”.

    Or perhaps whether or not it should be considered science depends on later intention. If the intent is to flail about in order to gather preliminary data to formulate a hypothesis and begin the scientific method, then it is science. If the intent is to actually figure something out or get something done, then it isn’t science.

  29. Amphiox says

    If something mimics human behavior by brute force, so much so that you cannot tell whether it is alive, is it alive?

    Only if the human behaviors it mimics include self-replication! :)

    But it might be considered intelligent.

  30. Barkeron says

    In a sense, robots and computers may be more self-aware than we are. My computer’s “downloading–15 seconds remaining” is a more accurate report of its functioning than, say, introspectionist accounts of conscious thought.

    Erm, not exactly. Why do you think spinners and the like are becoming ever more common?

    Computers are on the same level as insects in terms of complexity of circuitry, if that. I wouldn’t be surprised if it turned out current “AI” is so crude that even insects have a superior sense of self.

  31. karpad says

    Cool, this seems to be a near-literal application of the Chinese Room thought experiment.

    But then, the Turing test hasn’t ever really concerned itself with true awareness, only the appearance thereof. Which makes sense, seeing as we can’t really prove if a human is self-aware or just autonomously acting like it.

  32. Amphiox says

    Computers are on the same level as insects in terms of complexity of circuitry, if that.

    The source of this little canard is typically people who know quite a bit about computers, but not a whole lot about insects, and it comes from severely underestimating what insects are actually capable of. Computers right now are only just starting to mimic parts of insect capability. If you’re talking about what insects can do as a whole, it’s not even close.

  33. Musca Domestica says

    busterggi

    Does the Mirror Test have any check built in to determine whether the test subject gives a damn about what it looks like?

    Seriously, does it? Because I don’t stop to check my appearance every time I see myself in a mirror, most of the time I pay no attention. Then again my socks match mostly due to luck.

    If you pass a mirror and see a smudge on your forehead, will you not stop and at least take a second look? (Unless you’re in the middle of a painting job, and aware of the normalness of having smudges.) I’m thinking that the recognition of something being off with yourself, and the “need” to fix it, are an important part of the test.

  34. frog says

    Musca Domestica: “If you pass a mirror and see a smudge on your forehead, will you not stop and at least take a second look?”

    –>Yes, but I’m a human. We care about this sort of thing. Does a cat or dog care what it looks like? Do bulldogs mope because they’re not as pretty as the Afghan hound down the street?

    Before we can decide if a test is an accurate measure of a thing, we need to be sure the test is relevantly designed. Animals may or may not have self-awareness, but if we don’t know their value system, we can’t accurately measure something about them by imposing a value system.

    And, frankly, I know a hell of a lot of humans who will wipe a smudge off their head, but I wouldn’t necessarily define them as “self-aware.” Many politicians, for example.

  35. frog says

    Amphiox: “If the animal reacts to the reflection in the way it reacts to another animal, with say a threat of dominance display, then it fails. If it reacts in a manner that suggests it recognizes the reflection as representing itself, then it passes.”

    –>This is interesting.

    My cat is hideously territorial. If another animal–including a human–comes into the house, the cat generally attacks it. He is an aggressive little fucker, as the scars on my shins can attest. (He’s calmed down with regard to me, but still. We think there may be something chemically wrong with him.)

    Yet he doesn’t attack his own reflection. In fact, he ignores his reflection. I can hold him up in front of the mirror and he’ll make eye-contact with mirror-me, but if I hold him close to the mirror, he usually turns his head rather than look at his reflection. Once or twice he’s looked at himself, but then turned away after a few seconds. He doesn’t bristle. He doesn’t get aggressive.

    Hmm. I’ll have to put a spot on his head and try this again.

    (And now I’m curious what he thinks of mirror-me. Does he think I’m in the mirror and he forgets who is holding him? Does he think there are two of me? Or does he understand how a mirror works?)

  36. ChasCPeterson says

    Passing the mirror test is simply _evidence_ of self awareness.

    yep.
    And more importantly, not passing the mirror test is not evidence of a lack of self awareness.

    It’s a fairly high bar:

    Animals that have passed the mirror test include [only]:

    Humans – Humans tend to fail the mirror test until they are about 18 months old, or what psychoanalysts call the “mirror stage”.
    All great apes:
    Bonobos
    Chimpanzees
    Orangutans
    Gorillas – Initially it was thought that gorillas did not pass the test, but there are now well-documented reports of gorillas (such as Koko) passing the test.
    Bottlenose dolphins
    Orcas
    Elephants
    European Magpies

    period.

  37. says

    Chas:

    And more importantly, not passing the mirror test is not evidemce of a lack of self awareness.

    The mirror test is a bad one for rats, given their lousy eyesight. They go much more by scent and whisker feel.

  38. Amphiox says

    Yes, but I’m a human. We care about this sort of thing. Does a cat or dog care what it looks like? Do bulldogs mope because they’re not as pretty as the Afghan hound down the street?

    I think it is not so much whether or not an animal cares what it looks like, but whether or not it cares that it looks different.

    So a positive reaction to seeing the smudge/spot/dot on itself in the mirror suggests that it has a concept of self that includes its normal appearance, without that smudge/spot/dot, and that it notices the change and is interested in it.

    (It is reasonable to assume that most animals ARE interested in novelty in their environment at least some of the time. The selective disadvantage of never being so interested seems pretty high).

    Basically, the mirror test is a high specificity/low sensitivity type screening test. A positive result suggests the possibility of self-awareness. But a negative result doesn’t actually rule it out.

    Even a lot of the species that pass the mirror test don’t pass it right away. Their first reaction to the mirror might be to ignore it, or even outright failure via threat display/etc. They often have to take time to learn how the mirror works before figuring out that the reflection is their own. (This is true of human infants too.)

    And unfortunately we haven’t figured out a more accurate gold standard test to proceed on to afterwards.
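
    To put rough, made-up numbers on that high-specificity/low-sensitivity point: even under generous assumptions, a pass shifts the odds far more than a fail does.

    ```python
    # Entirely hypothetical numbers, just to illustrate the asymmetry.
    sensitivity = 0.4    # say only 40% of genuinely self-aware animals ever pass
    specificity = 0.95   # say 95% of non-self-aware animals fail
    prior = 0.5          # start agnostic: 50/50 that a given species is self-aware

    p_pass = sensitivity * prior + (1 - specificity) * (1 - prior)
    p_aware_given_pass = sensitivity * prior / p_pass
    p_aware_given_fail = (1 - sensitivity) * prior / (1 - p_pass)

    print(round(p_aware_given_pass, 2))   # ~0.89: a pass is strong evidence
    print(round(p_aware_given_fail, 2))   # ~0.39: a fail barely moves the 50% prior
    ```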

  39. Amphiox says

    Animals that have passed the mirror test include [only]:

    Some cephalopods (I think) have been tested and so far have failed.

    We have to remember that when we celebrate cephalopod intelligence, we do so in comparison with the intelligence exhibited by other invertebrates. When compared against vertebrates, they only slot into the middle of the range. Maybe just slightly above the average fish and similar to the average amphibian and reptile, but definitely below the average bird or mammal.

  40. ethicsgradient says

    You might enjoy this (excerpt, or book):

    But then, so are the kinder, gentler motives. How would you design a robot to obey Asimov’s injunction never to allow a human being to come to harm through inaction? Michael Frayn’s 1965 novel The Tin Men is set in a robotics laboratory, and the engineers in the Ethics Wing, Macintosh, Goldwasser, and Sinson, are testing the altruism of their robots. They have taken a bit too literally the hypothetical dilemma in every moral philosophy textbook in which two people are in a lifeboat built for one and both will die unless one bails out. So they place each robot in a raft with another occupant, lower the raft into a tank, and observe what happens.

    [The] first attempt, Samaritan I, had pushed itself overboard with great alacrity, but it had gone overboard to save anything which happened to be next to it on the raft, from seven stone of lima beans to twelve stone of wet seaweed. After many weeks of stubborn argument Macintosh had conceded that the lack of discrimination was unsatisfactory, and he had abandoned Samaritan I and developed Samaritan II, which would sacrifice itself only for an organism at least as complicated as itself.

    The raft stopped, revolving slowly, a few inches above the water. “Drop it,” cried Macintosh.

    The raft hit the water with a sharp report. Sinson and Samaritan sat perfectly still. Gradually the raft settled in the water, until a thin tide began to wash over the top of it. At once Samaritan leaned forward and seized Sinson’s head. In four neat movements it measured the size of his skull, then paused, computing. Then, with a decisive click, it rolled sideways off the raft and sank without hesitation to the bottom of the tank.

    But as the Samaritan II robots came to behave like the moral agents in the philosophy books, it became less and less clear that they were really moral at all. Macintosh explained why he did not simply tie a rope around the self-sacrificing robot to make it easier to retrieve: “I don’t want it to know that it’s going to be saved. It would invalidate its decision to sacrifice itself…. So, every now and then I leave one of them in instead of fishing it out. To show the others I mean business. I’ve written off two this week.” Working out what it would take to program goodness into a robot shows not only how much machinery it takes to be good but how slippery the concept of goodness is to start with.

    http://www.washingtonpost.com/wp-srv/style/longterm/books/chap1/howthemindworks.htm

    But the fun really starts when they put 2 identical robots on the same raft …

  41. says

    In the article it seems less that the goal of the programming is just to pass the mirror test, and more that the goal is to create a robot that can recognize self vs. other, using the mirror test as a benchmark. I didn’t pick up that Nico was being programmed specifically to pass just the test.

    It does raise the question of what it even means to pass it, though. A being whose only function is to see and detect itself can pass, but a fully sentient yet blind one can’t.

    And I have to second Marcus@5 and Cuttlefish@6. How do we even determine what is ‘self-aware’? Clearly we know that we (as individuals) are, but from the outside the only ‘proof’ we have for the self-awareness of others comes from breakable benchmarks and assumptions like “I am self aware, and I am human; therefore all humans are self aware”.

    And then of course you have solipsists who could claim that only they are self-aware, and nobody else is. You can tell them “But I’m self aware!” and from their perspective they have no more reason to believe that you aren’t just reacting to ‘programming’ to say so than we have reason to believe that a robot isn’t just reacting to being programmed to pass the mirror test. (Bar the news article on it anyways…)

    The only way to actually convince such a person that you are indeed self aware would be to let them experience your actual thought processes. Unfortunately that’s not possible and we’re all stuck in our own minds, so our only real method is things like the mirror test.

    Which really, to me, makes the question of self-awareness seem silly. The goalposts can be moved back and forth by whoever wants to prove whatever. “I can’t get inside your own mind and experience your self-awareness, therefore you have none” vs. “It was programmed to pass the test and is therefore self aware” and everything in between.

    Personally with my own interest in AI, I’d put the ability to learn and adapt beyond programming as a better goal than self-awareness, but I digress.

  42. varys says

    You’re right, it does subvert the test. However, until someone comes up with a better test, there’s not much else to do.

  43. says

    Obviously there is a lot more to being a person than recognizing yourself in the mirror, but building a robot to do that is not a trivial technological accomplishment. Also, the robot didn’t pass the full mirror test. I would point out that there isn’t any single test that qualifies a robot as conscious. And when we really do figure out what is happening inside the head of, say, a newborn baby, we may find that it is not as impressive as we might think, and that to really be a person takes some additional development and possibly language ability.

  44. carlie says

    and more that the goal is to create a robot that can recognize self vs. other, using the mirror test as a benchmark.

    Qbo does that. It analyzes whether the nose flash pattern on the robot it sees matches its own (and is therefore self in a mirror) or is different, and therefore another robot. That’s still rote programming, though.
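
    To make concrete why that is still rote programming, here is a minimal sketch (purely illustrative, not Qbo’s actual code) of that kind of flash-pattern check: the “self/other” judgment reduces to comparing a bit string the robot just emitted with the one it observes.

    ```python
    # Hypothetical flash-pattern self/other check: each robot blinks its nose
    # LED in its own random pattern; a mirror echoes back exactly what was sent.
    import random

    def my_flash_pattern(length=8):
        """Generate a random on/off nose-LED sequence (and, on a real robot, emit it)."""
        return [random.randint(0, 1) for _ in range(length)]

    def classify_reflection(emitted, observed):
        """If the observed blinks match what we just emitted, the robot in view
        is 'me in a mirror'; otherwise it's another robot."""
        matches = sum(e == o for e, o in zip(emitted, observed))
        return "self (mirror)" if matches >= len(emitted) - 1 else "other robot"

    emitted = my_flash_pattern()
    print(classify_reflection(emitted, emitted))              # self (mirror)
    print(classify_reflection(emitted, my_flash_pattern()))   # almost always "other robot"
    ```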

  45. says

    I’d suggest that the notion of self-awareness is somewhat similar to the notion of a soul. We humans have enormously complex visual processing circuitry which, when confronted with a mirror, results in electrical patterns in the brain that we’re calling self-awareness.

    There’s nothing mystical about this. It’s just signal processing. Exactly the same stuff that the robot is doing. Perhaps the robot is less adept at it, or produces the correct output with less fidelity or whatever, but that’s irrelevant. Qualitatively there’s nothing magic going on in either system, and to ascribe it to some nebulous fluff seems less than sceptical at best.

  46. says

    As for dogs, given that my dog completely ignores its own reflection in the mirror and yet is extraordinarily interested in other dogs, my conclusion would be that it definitely recognises itself in the mirror.

  47. Christoph Burschka says

    The mirror test, and any other test for a specific behavior, sounds useful only for recognizing self-aware animals, not self-aware AI. The Turing test also doesn’t seem to be as useful as once thought: Chatterbots are now getting good enough to fool some people.

    Not sure at what point an AI can be confirmed self-aware, since any given behavior can be hard-coded. Maybe it is futile to set any criteria beyond “it’ll surprise us, and we’ll know it when we see it”.

  48. Amphiox says

    All species are unique, but humans are uniquer.

    If all species are unique, then the manner in which they are unique, is unique.

    And thus, the manner in which humans are unique, is unique.

    Can the manner in which humans are unique be said to possess more value, or more interest, than the manner in which other species are unique?

    Well, we are human, and we are entitled to be more interested about aspects of ourselves than of others, if we so choose.

    Should we so choose?

    That, we’ll have to choose.

  49. huntstoddard says

    I think dogs, cats, horses and other animals in that range of intellect just view mirror images as “features of the world,” at a similar level to how they regard imperfect reflections in water or glass. Within their concept of the world, they know that their actions will result in certain feedback from other objects, like, for instance, the piecemeal and warped reflection a dog has of itself in a shiny car hubcap. The perfect fidelity of a mirror reflection is just an extreme example. You can also speculate about what a cat thinks about TV images, or what a dog thinks about cars. Why doesn’t a dog think a car, which moves, makes noise, “sleeps,” is alive? For one reason or another it doesn’t, since it’s quite happy to urinate on it. I think animals just accept these things as features of their reality. I have little doubt that a dog would readily accept teleportation once it had experienced it a few times. After a few times, it would just go “Oh, my owner just reappeared in the living room. Maybe he’ll give me some food!” In many ways I think framing it as a “world concept” is very inaccurate. It’s more like the lack of conception. They just don’t think much about these things.

  50. says

    It shows the mirror test was incomplete.

    A robot should also be able to recognize that an identical robot seen behind a pane of glass is not itself.

  51. joed says

    This Sapolsky guy really has a lot to say about this very subject and the Uniqueness of Humans.
    He studies baboons and others, and seems to know what he is talking about. He mentions the Wellesley effect as being strictly human.

  52. says

    “Ceci n’est pas une pipe.” The concept of consciousness isn’t consciousness. Magritte’s painting of a pipe was useless for smoking tobacco, and all concepts of consciousness are useless for apprehending reality.

    The mirror test correlates with our expectations about awareness in second parties, but I’m still only guessing that you are aware. Of course, this guess provides the foundation of all society, but that doesn’t make it true.

    Many enlightened beings have related their experience of a vastly expanded identity; indeed, they report perceiving everyone else as equally enlightened. From their accounts, a universal and unitary consciousness in which “I” somehow partake, albeit incompletely, is conceivable as an alternative explanation for all the apparently separate conscious entities which we encounter.

    Unfortunately, to you and me, this is also just another conception of consciousness.

    PZ’s recent interest in the nature of consciousness encourages me to wonder if he might be becoming interested in meditation. I recommend Dynamic.

  53. says

    Does the Mirror Test have any check built in to determine whether the test subject gives a damn about what it looks like?

    In one of his popular books, Donald Griffin talks about his work with cottontop tamarins. He says that when he would introduce the mirror to them, they would do things like going into various poses in front of it, as if to see themselves from different angles. But he tried the classic mirror test (put them to sleep, mark their face [in this case, he did so by dyeing their distinctive white topknot green!] and see if they react when they wake up and see themselves in the mirror); however, they did nothing unusual at all. So should he conclude that they don’t recognize themselves? One day, a tamarin loafing about in a room with a mirror had a bit of banana clinging to his face next to his mouth. He remained unaware of the food for a while, then he looked into the mirror and immediately licked it off. This gave Griffin the idea to test what the monkeys do or don’t pay attention to. He anesthetized them and put a green mark on a clearly visible part of their arm — they ignored it utterly. So… no, they don’t react to seeing a mark in the mirror, but it could be that they just don’t care. I don’t know if Griffin has ever published this in a paper. Cottontop tamarins are not on the list of species that pass the classic mirror test, but maybe that just means that the criteria are too narrow.

  54. ChasCPeterson says

    Donald Griffin talks about his work with cottontop tamarins.

    That was Hauser‘s work, not Griffin’s. It was not all that straightforward as it turned out.
    And in an interesting article in which Carl Zimmer interviews several people about another controversial article claiming that rhesus monkeys pass, he points out:

    The Harvard primatologist Marc Hauser claimed in 1995 that the cotton-top tamarin could pass the mirror test, but that paper was one of several that Harvard now claims were tainted by Hauser’s misconduct.

  55. Jeffrey G Johnson says

    I would agree with your assessment if all they are doing is a minimal hack or kludge that appears to pass the mirror test as an external checklist item.

    But if instead what they are doing is programming in a general ability for the robot to actually understand its own shape and spatial arrangement, and to recognize itself as distinguishable from other shapes, I would think this is an important and worthwhile step toward implementing improved self-awareness in robots and other machines.

    I am a programmer, and there is a big difference between making a program appear to pass a test to the casual observer in a limited context as a checklist feature, and actually fully implementing a generalized ability that, by the way, also passes the test. The former case would be stupid, as you point out, but the latter case could have real significance for the artificial intelligence of machines.
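
    For contrast, here is one purely illustrative way (a sketch of my own, not anything the project has described) that a more generalized ability could work: instead of matching a hard-coded mark, the robot jitters a joint at random and asks whether the motion it sees is contingent on its own motor commands.

    ```python
    # Hypothetical contingency check: correlate random motor commands with the
    # motion observed in the camera. High correlation -> the thing in view moves
    # when I move, so classify it as (a reflection of) self.
    import random

    def contingency_score(commands, observed):
        """Pearson correlation between issued commands and observed motion."""
        n = len(commands)
        mc = sum(commands) / n
        mo = sum(observed) / n
        cov = sum((c - mc) * (o - mo) for c, o in zip(commands, observed))
        norm_c = sum((c - mc) ** 2 for c in commands) ** 0.5
        norm_o = sum((o - mo) ** 2 for o in observed) ** 0.5
        return cov / (norm_c * norm_o + 1e-9)

    # Simulated trial: the mirror image moves only when we command a move (plus
    # sensor noise); an unrelated robot moves on its own schedule.
    commands = [random.choice([0.0, 1.0]) for _ in range(100)]
    mirror_motion = [c + random.gauss(0, 0.1) for c in commands]
    other_motion = [random.choice([0.0, 1.0]) + random.gauss(0, 0.1) for _ in commands]

    for label, motion in [("mirror", mirror_motion), ("other robot", other_motion)]:
        score = contingency_score(commands, motion)
        print(label, "self" if score > 0.8 else "not self", round(score, 2))
    ```

    That still isn’t “self-awareness,” but it generalizes: it doesn’t care what the robot looks like, only whether the thing in view moves when the robot moves.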

  56. khms says

    If you pass a mirror and see a smudge on your forehead, will you not stop and at least take a second look?

    Odds are, I’ll not see that smudge. I seem to have trained myself to ignore mirrors most of the time. I have to make a conscious (ha!) decision to look.

    I wonder if that is related at all to the fact that in my world, people don’t have eye color – that is, I typically have not the slightest idea what that color is. I have enough trouble with hair color. All that stuff is so unimportant! Hmm.

  57. blf says

    As for dogs, given that my dog completely ignores its own reflection in the mirror and yet is extraordinarily interested in other dogs, my conclusion would be that it definitely recognises itself in the mirror.

    Or is simply failing to smell another dog.