Observation and inference


As a scientist who interacts a lot with the general public, I am often asked to explain phenomena that lay people have observed. I used to take those observations at face value and was often stumped at coming up with an explanation because of the inconsistent elements the observations seemed to contain. But I have found from experience that what people tell me they ‘saw’ is not purely raw observational data. When you go back and actually repeat the situation, the observations differ from what was originally reported, and many of the paradoxical elements go away.

This raises an important point. In investigating and explaining any phenomenon, we first have to check if what we saw was ‘real’.

The problem is that our brain’s first reaction is dominated by what Daniel Kahneman, in his excellent book Thinking, Fast and Slow (2011), calls ‘System 1’ thinking. When people ‘see’ something, their brain immediately kicks in and tries to make sense of what they saw by subtly shaping the data to fit a plausible narrative. It is this manipulated data that they are convinced they saw and which they then report later. Reporting it cements the distorted version even further into their memory, making them even more convinced of its truth. Magicians use this feature of our brains to fool us into thinking that we saw something more amazing than it really was.

For example, in response to my post on the Ponzo illusion, commenter Molly said that she had assumed that the moon’s larger size on the horizon was due to the atmosphere acting as a magnifying lens. In this case, she knew that the moon was not actually bigger but she thought that the image of the moon had actually been magnified before she saw it. Hence Molly looked for the cause outside her head and came up with a plausible explanation.

If she had reported to me that she had observed the image of the moon to be more magnified on the horizon than at its zenith and asked me the reason for it, and I had accepted her observation at face value (after all, why would she lie?), I likely would have come up with an explanation similar to hers. But the image of the moon was not actually magnified (as can be confirmed by a simple test); the effect arose inside her head, with her brain intervening to do some processing even before she became conscious of what she saw.

This interplay of outside stimuli and brain manipulation happens all the time, and it takes quite a lot of effort to distinguish between actual observations and subsequent inferences. Sometimes it may be impossible, if the manipulation is hardwired into the brain, as is the case with the Ponzo illusion. This is why two people’s recollections of a conversation or event that they both took part in can be quite different, and why eyewitness accounts of crimes are notoriously unreliable unless the witness immediately jots down notes of what they saw before their brain can massage the information too much in trying to make sense of it. This is also why journalists should take notes of events and interviews, to prevent later distortions from creeping in.

Once you are more aware of your brain’s ability to unconsciously manipulate inputs and memories in its effort to create coherence and to tie in with prior beliefs and knowledge, you become better able to take precautions against it, though I doubt that we are ever totally successful.

Comments

  1. Steve LaBonne says

    As briefly mentioned in the post, this is a huge problem for the criminal justice system (in the US at least), which is only just beginning to take on board what psychologists have known for years about the unreliability of human perception and memory. Most of the wrongful convictions that have been reversed after DNA testing of previously untested evidence were the result of relying on what turned out to be disastrously flawed eyewitness testimony. Many police departments also still refuse to change lineup procedures that have been experimentally demonstrated to be likely to create false identifications.

  2. Henry Gale says

System 1 thinking seems like the same mechanism that may be at play when dreaming.

    Hobson (I think) theorized that when sleeping the lower brain fires and images are produced. The higher brain takes these images and tries to stitch together a narrative.

    It is that stitching together of images that seems to be similar to what was described as System 1 thinking.

  3. Kevin says

Totally, and it’s amazing how easy it is to influence someone without their knowing that they have been influenced.

I think the lineup per se has less to do with this than people simply picking the person who most looks like the one who committed the crime. The problem comes when the officer administering the lineup knows which one is the suspect. If the officer gives the witness any positive feedback (a clap, words of encouragement, a smile), it will artificially increase their confidence in their ID. The problem comes later when they are on the stand and asked if they have been influenced in any way. Some people will say no, but they are just as influenced as the people who say yes, so they have a false confidence that is likely to be misconstrued as accuracy by the jury. The solution is obvious: the person talking to the witness should not be involved in the case.

Also, it’s easy to rationalize misses from the lineup. The first two witnesses failed to match the guy, so they must not have had a good view. The third matches the guy, although hesitantly; the police officer validates the hit in a minor way, and then at trial they have a witness who is confident that the suspect is the criminal because of the aforementioned effect. The result is even scarier: everyone is oblivious that an error even occurred.
