The mirror test is a well-known indicator of some degree of self-awareness: surreptitiously mark an animal’s face, show it a mirror, and see whether it recognizes the reflection as itself by reaching up to touch or remove the mark. We see that behavior and infer that the animal has some knowledge of itself and can recognize that the mirror image is not another animal.
Ow. It makes my brain hurt.
So this is a computer that has no other indicators of consciousness or awareness or autonomous “thought” (whatever that means…my brain is hurting again), and is being coded to respond to a specific kind of visual input with a specific response…to literally pass the mirror test by rote. Does that really count as passing?
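To make the “by rote” point concrete, here’s a toy sketch (entirely hypothetical names and logic, not anyone’s actual robot code) of what that kind of programming amounts to: a detector trained on one specific shape, wired to one canned motor response.

```python
# A hypothetical sketch of passing the mirror test "by rote":
# a hard-coded mapping from one detected visual pattern to one
# scripted response. Nothing here models a "self" at all.

def detect_pattern(frame: str):
    """Stand-in for a vision model trained to spot a single specific
    shape, e.g. a marker on the robot's own chassis."""
    return "self_marker" if "marker" in frame else None

# The "mirror test" behavior is just another entry in a lookup table.
RESPONSES = {
    "self_marker": "reach_toward_mark",
}

def react(frame: str) -> str:
    pattern = detect_pattern(frame)
    return RESPONSES.get(pattern, "idle")

print(react("mirror view with marker on forehead"))  # reach_toward_mark
print(react("empty room"))                           # idle
```

The robot “recognizes itself” only in the sense that a thermostat “recognizes” cold: the stimulus–response pair is fixed in advance, with no mental map anywhere in the loop.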
I think that all it actually accomplishes is to subvert the mirror test. The test has always been a proxy for a more sophisticated cognitive ability: maintaining a mental map of the world around us that includes an entity we call “self”. Training a visual processing task to identify a specific shape unique to the robot design doesn’t count.
I’d also like to see what happens if two identical robots are built and put in the same room. To recognize “self”, you also need a concept of “other”.