Although I have been sort-of following the news of the shooting of the teenager Trayvon Martin by George Zimmerman in Florida (who can avoid it?), I have not written anything about it so far. Part of the reason is that there does not seem to be much point in adding my voice to a case that so dominates the media and for which I have no information to contribute.
The second reason is that huge uproars suddenly erupt about something or other and just as quickly shift to something new. The media, especially cable news with its need to fill 24 hours of airtime, tend to be like dogs chasing squirrels, rushing from one thing to another, and the rest of us get dragged along by the leash. We had the Rush Limbaugh episode, the Joseph Kony boomlet, and now the Trayvon Martin case. Next week it will be something else that we are supposed to be tremendously upset about.
But the main reason that I did not comment is that past experience has taught me that one has to be very cautious about forming an opinion on cases like this based on the first reports that emerge. It is almost always the case that initial reports are highly incomplete, or are offered by people who may not be totally objective, or are even completely wrong. One has to be especially skeptical of initial accounts if they are released by authorities such as the police or governments that are themselves involved in the incident. But skepticism is warranted even when the incidents do not involve official authorities and even when there are a lot of eyewitnesses. In this case, the events took place in a dark and isolated area, compounding the problem.
But people quickly take the initial stories that emerge as the complete narrative and jump to conclusions as to who is right and who is wrong, and then take highly visible and vocal stands on what should be done. Once those positions have been publicly staked out and lines drawn and sides formed, it becomes hard for people to become more nuanced as more information inevitably emerges that challenges the straightforward early narrative.
It is because of this that I try not to form an early judgment, even though the temptation to do so is very strong. But why is that desire so powerful that almost all of us succumb to it so easily? Why do we tend to form strong opinions so quickly on the basis of so little information and yet are so certain about the rightness of our conclusions?
In his 2011 book Thinking, Fast and Slow, psychologist Daniel Kahneman (who won the 2002 Nobel Memorial prize for Economics) describes the processes by which we make judgments, assess risks, take chances, and the like. He says that it is convenient to think of our brains as having two ‘characters’ that he calls System 1 and System 2. He says that “System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control” while “System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.” (p. 20)
System 1 works at lightning fast speed in arriving at conclusions about practically anything we encounter. It has emerged because, evolutionarily, it is advantageous to be able to quickly judge situations and take action, especially when it might be dangerous. But the methods System 1 uses are not the ones that are likely to give us the best results, unless the situation is one in which we have deep and expert knowledge gained from prior experience in that area. The problem is that it works on a WYSIATI (What You See Is All There Is) basis, where it assumes that all the evidence that is readily at hand is all there is and is sufficient to make a judgment. It then combines that evidence with any other ‘evidence’ that it can recall from memory, along with any ideological biases or prejudices that spring to mind.
The catch is that this process gives much greater weight to things that are easily recalled, which means to events that are highly impactful and emotionally charged or receive the most publicity, even though they may not be at all representative of the full set of data that should be used. System 1 seems to think that if something can be easily recalled, then it must be representative of the general case, though that is often not true. In crimes that have racial overtones, for example, people’s judgment will be swayed by their memories of O. J. Simpson, Tawana Brawley, Rodney King, the Central Park jogger, and the like. (Those events sprang to my mind immediately in connection with the Trayvon Martin case even though there is no reason to think that the current case is anything like them.) System 1 then uses this highly biased data set to construct a story about what might have happened in the current situation. It then uses the plausibility of the story it has itself constructed to judge whether it is likely true or not.
System 2, on the other hand, is more analytical. It tries to deliberately construct alternative hypotheses as to what might have happened and actively seeks out new information and more relevant data to see if a better judgment can be arrived at by weighing the evidence more appropriately. The problem is that using System 2 requires a lot of work, but the brain is lazy and likes to conserve energy, and is quite willing to let the speedy System 1 take charge and run things as much as possible. System 2 does not get called into action unless forced to do so: either by willful effort, or because System 1 cannot quickly come up with a plausible story, or because System 1 is immediately challenged by contradictory evidence, or when people are forced by circumstances to confront all the evidence (such as when they serve on a committee or jury), or because the problem is one that System 1 is not equipped to handle. This is why having someone play a ‘devil’s advocate’ role in such situations is useful.
In the Trayvon Martin case, the initial facts were that a young unarmed black man walking in a residential neighborhood at night was shot dead by a Hispanic white neighborhood watch person who suspected that Martin did not belong there and had some criminal intent. Using just this data, it is easy to construct two stories that are plausible, depending on one’s ideological presuppositions. One is that of Zimmerman as a prejudiced person who was itching to be a crime-fighting Dirty Harry, shooting an innocent person merely because the latter fit his racial stereotype of a criminal. The other narrative is that Zimmerman, although mistaken in his judgment of Martin, had good reason to be suspicious of him and was not guilty of racial malice, and may even have been defending himself from attack.
Either of these narratives may or may not eventually turn out to be largely true but once people picked one early, it became the one to be defended even if they had no idea if it was true or not. People on both sides have now dug in their heels and refuse to moderate their early strong stands since that would involve a loss of face or admitting they might be wrong. Instead they highlight evidence that supports their case and try to poke holes in anything that does not agree with it. This is an example of the danger Sherlock Holmes warned about to Dr. Watson in A Scandal in Bohemia when he said, “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”
This is why in such situations, rather than quickly deciding guilt and innocence, the only call for action that makes sense is for a full, open, and transparent investigation of all the facts, which can then be weighed carefully. In other words, System 2 needs to be brought in. Carrying out the due processes of law may not satisfy our System 1 desire for quick answers, but we have got to get into the habit of doing so to prevent every tragedy of this sort from becoming a full-fledged circus with two competing sides seeking to ‘win’ their case in the media, because then truth becomes the ultimate loser.
Kahneman makes the following recommendation for how to do so (p. 417):
What can be done about biases? How can we improve judgments and decisions, both our own and those of the institutions that we serve and that serve us? The short answer is that little can be achieved without a considerable investment of effort. As I know from experience, System 1 is not readily educable.
The way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcement from System 2… Unfortunately, this sensible procedure is least likely to be applied when it is needed most. We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available, and cognitive illusions are generally more difficult to recognize than perceptual illusions. The voice of reason may be much fainter than the loud and clear voice of an erroneous intuition, and questioning your intuitions is unpleasant when you face the stress of a big decision.
Most hot-button political issues are cognitive minefields. We need to learn how to recognize them so that whenever we encounter one, we immediately recognize the need to slow down and engage System 2.