
Political Considerations Even Undermine Math Ability

Chris Mooney reports on a new study that I don’t find surprising in the least. The study, called Motivated Numeracy and Enlightened Self-Government, finds that our political beliefs can even undermine our ability to do basic math when we encounter data that does not fit those beliefs. It’s really quite a clever study, as Chris explains:

The study, by Yale law professor Dan Kahan and his colleagues, has an ingenious design. At the outset, 1,111 study participants were asked about their political views and also asked a series of questions designed to gauge their “numeracy,” that is, their mathematical reasoning ability. Participants were then asked to solve a fairly difficult problem that involved interpreting the results of a (fake) scientific study. But here was the trick: While the fake study data that they were supposed to assess remained the same, sometimes the study was described as measuring the effectiveness of a “new cream for treating skin rashes.” But in other cases, the study was described as involving the effectiveness of “a law banning private citizens from carrying concealed handguns in public.”

The result? Survey respondents performed wildly differently on what was in essence the same basic problem, simply depending upon whether they had been told that it involved guns or whether they had been told that it involved a new skin cream. What’s more, it turns out that highly numerate liberals and conservatives were even more—not less—susceptible to letting politics skew their reasoning than were those with less mathematical ability.

The math here is really pretty basic. In the skin cream version of the problem, there were, of course, two groups — those who used the skin cream and those who did not. In the group that used the skin cream, 223 had the rash get better and 75 had the rash get worse; in the group that did not use the skin cream, 107 had the rash get better and 21 had the rash get worse (the researchers actually swapped the labels around for different groups of participants). Participants were asked which of two conclusions the results supported: that those who used the skin cream were more likely to get better, or more likely to get worse. By having everyone answer that question, which did not involve a political issue on which they would already have formed an opinion, the researchers got a baseline for each participant’s ability to analyze this basic numeracy problem and answer it correctly. They then used the same numbers, but made the study about gun control.
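
To make the arithmetic concrete, here is a minimal sketch (in Python, using the counts quoted above) of the comparison the problem actually calls for. The trap is comparing raw counts; the correct move is comparing rates within each group.

    # Counts from the skin cream version of the problem, as given above.
    cream_better, cream_worse = 223, 75
    no_cream_better, no_cream_worse = 107, 21

    # Tempting but wrong: compare raw counts (223 > 107, so the cream "works").
    # Correct: compare the rate of improvement within each group.
    cream_rate = cream_better / (cream_better + cream_worse)              # ~74.8%
    no_cream_rate = no_cream_better / (no_cream_better + no_cream_worse) # ~83.6%

    print(f"Improved with cream:    {cream_rate:.1%}")
    print(f"Improved without cream: {no_cream_rate:.1%}")
    # The no-cream group improved at the higher rate, so these numbers
    # actually point toward the cream making things worse.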

Not surprisingly, Kahan’s study found that the more numerate you are, the more likely you are to get the answer to this “skin cream” problem right. Moreover, it found no substantial difference between highly numerate Democrats and highly numerate Republicans in this regard. The better members of both political groups were at math, the better they were at solving the skin cream problem.

But now take the same basic study design and data, and simply label it differently. Rather than reading about a skin cream study, half of Kahan’s research subjects were asked to determine the effectiveness of laws “banning private citizens from carrying concealed handguns in public.” Accordingly, these respondents were presented not with data about rashes and whether they got better or worse, but rather with data about cities that had or hadn’t passed concealed carry bans, and whether crime in these cities had or had not decreased…

So how did people fare on the handgun version of the problem? They performed quite differently than on the skin cream version, and strong political patterns emerged in the results—especially among people who are good at mathematical reasoning. Most strikingly, highly numerate liberal Democrats did almost perfectly when the right answer was that the concealed weapons ban does indeed work to decrease crime (version C of the experiment)—an outcome that favors their pro-gun-control predilections. But they did much worse when the correct answer was that crime increases in cities that enact the ban (version D of the experiment).

The opposite was true for highly numerate conservative Republicans: They did just great when the right answer was that the ban didn’t work (version D), but poorly when the right answer was that it did (version C).

This is a fascinating but not at all surprising result. When we have a firmly held belief about an issue, our ability to think rationally about it is quite often reduced — and the more passionately we hold that belief to be true, the less rational we are likely to be when evaluating evidence that might suggest that belief to be false. Confirmation bias, motivated reasoning, defensiveness and self-justification all undermine our ability to be objective in such situations. And the smarter we are, counter-intuitively, the worse we are likely to be at it.

Kevin Drum reacts to these results:

On the other hand, the effect size is pretty stunning. There’s a huge difference in the rate at which people did the math correctly depending on whether they liked the answer they got. I’d like to see some follow-ups with more subjects and different questions, but it sure looks as if we’d probably see the same dismal effect.

How big a deal is this? In one sense, it’s even worse than it looks. Aside from being able to tell that one number is bigger than another, this is literally about the easiest possible data analysis problem you can pose. If ideologues actively turn off their minds even for something this simple, there’s really no chance of changing their minds with anything even modestly more sophisticated. This is something that most of us pretty much knew already, but it’s a little chilling to see it so glaringly confirmed…

We believe what we want to believe, and neither facts nor evidence ever changes that much. Welcome to planet Earth.

And this is not limited to politics, of course. We will tend to do the same thing with our religious beliefs (or beliefs about religion) and even in our personal relationships. It also suggests, as Mooney notes, that it isn’t simply a matter of educating people or showing them the facts to make them think more rationally about an issue:

For study author Kahan, these results are a fairly strong refutation of what is called the “deficit model” in the field of science and technology studies—the idea that if people just had more knowledge, or more reasoning ability, then they would be better able to come to consensus with scientists and experts on issues like climate change, evolution, the safety of vaccines, and pretty much anything else involving science or data (for instance, whether concealed weapons bans work). Kahan’s data suggest the opposite—that political biases skew our reasoning abilities, and this problem seems to be worse for people with advanced capacities like scientific literacy and numeracy. “If the people who have the greatest capacities are the ones most prone to this, that’s reason to believe that the problem isn’t some kind of deficit in comprehension,” Kahan explained in an interview.

In short, ideology usually trumps rationality — and the smarter we think ourselves to be, the more likely we are to fall victim to it.

Comments

  1. raven says

    We will tend to do the same thing with our religious beliefs (or beliefs about religion) and even in our personal relationships. It also suggests, as Mooney notes, that it isn’t simply a matter of educating people or showing them the facts to make them think more rationally about an issue:

    Commitment bias.

    AKA Fundie xian Induced Cognitive Impairment

  2. Reginald Selkirk says

    And this is not limited to politics, of course. We will tend to do the same thing with our religious beliefs …

    No kidding. This is commonplace in the Creationist literature. I am currently reading & reviewing a book by a Creationist who actually had a faculty job in genetics.

  3. Trebuchet says

    Phyllis Schlafly’s spawn Andy has a degree in electrical engineering from an Ivy League university but somehow thinks that imaginary numbers are some sort of liberal plot. As is the theory of relativity.

  4. says

    Don’t fall into the trap of thinking that this is limited to fundie Christians. It isn’t. Atheists do it too, all the time. And it may be even harder for us to overcome it because we start with such a strong presumption that we’re being rational at all times.

  5. machintelligence says

    Kahan’s data suggest the opposite—that political biases skew our reasoning abilities, and this problem seems to be worse for people with advanced capacities like scientific literacy and numeracy.

    This analysis is confounding scientific literacy and numeracy. The two may be correlated but they are hardly identical.
    It could be the case that those with knowledge of the scientific method were the ones that got the right answer in both cases, while those who were only high in numeracy were the ones who ignored the unfavorable results.

  6. demonhauntedworld says

    Their math isn’t as straightforward as it seems. Use Fisher’s exact test (sketched below), and the result isn’t (quite) significant if you use a two-tailed test, but it is significant if you use a one-tailed test:
    http://graphpad.com/quickcalcs/contingency1.cfm

    Maybe they could show that your political beliefs would lead you to choose a one-tailed over a two-tailed test, depending on the situation.
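
    (A minimal sketch of the check described above, assuming SciPy is available; the contingency table is the one quoted in the post, and the p-values are computed rather than asserted here.)

        # Fisher's exact test on the skin cream table, one- and two-tailed.
        from scipy.stats import fisher_exact

        table = [[223, 75],   # used the skin cream:  better, worse
                 [107, 21]]   # did not use the cream: better, worse

        _, p_two = fisher_exact(table, alternative="two-sided")
        # One-tailed, in the observed direction (the cream group's odds of
        # improving are lower than the control group's):
        _, p_one = fisher_exact(table, alternative="less")

        print(f"two-tailed p = {p_two:.4f}")
        print(f"one-tailed p = {p_one:.4f}")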

  7. eigenperson says

    This is a terrible study.

    The skin cream case is presented as an experiment. It’s not explicitly stated that it is a randomized trial, but it is strongly implied. As such, you can conclude that if people in the treatment group are more likely to get better than those in the placebo group, then the treatment probably does something.

    The gun control case is presented as an observational study. Confounding factors are all over the place. If you make any firm conclusion based on that (fictional) study, you’re an idiot. But the experimenters forced people to draw a conclusion. If you force people to act like idiots, then you can’t be surprised when they seem to show some cognitive weaknesses — you’re FORCING them to do so.

  8. Artor says

    I’ll take this study as a data point to support the maxim that 76.3% of all statistics are made up on the spot.

  9. francesc says

    @1, @2, what the study says is that it works for both “sides”. According to this study, if a creationist came here saying “see? atheists may not be as rational as they think”, we would be witnessing, for the first time, a creationist statement that doesn’t misquote a study.
    Anyway, I can see “Psychology Department” and “Law School” in the credits, so I’m already skeptical of the study and therefore biased against its conclusions.
    The study also seems to show that, although both are biased, Republicans are more biased than Democrats. Of course, a possible explanation would be that Democrats are less convinced about gun control than Republicans are against it.
    Finally, as the authors point out, it seems to be a suspension of numerical skills. I hope that, once someone pointed out the faulty reasoning to them, they would notice it and accept the correct answer.

    P.S.: in the tests it’s assumed, but not made explicit, that the outcome was either an improvement or a deterioration of the condition. If we don’t assume that, the result of the test would be ambiguous. With 223/700/75 vs 107/870/21, the conclusion is that the cream makes people both more likely to get better and more likely to get worse at the same time, as sketched below.
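
    (A quick sketch of that ambiguity, using francesc’s hypothetical three-outcome split — the middle “no change” numbers are the commenter’s construction, not data from the study:)

        # Hypothetical better / no change / worse split from the comment above.
        cream    = {"better": 223, "no_change": 700, "worse": 75}
        no_cream = {"better": 107, "no_change": 870, "worse": 21}

        for outcome in ("better", "worse"):
            rate_cream = cream[outcome] / sum(cream.values())        # n = 998
            rate_none = no_cream[outcome] / sum(no_cream.values())   # n = 998
            print(f"{outcome}: {rate_cream:.1%} with cream vs {rate_none:.1%} without")
        # Both rates are higher in the cream group (22.3% vs 10.7% better,
        # 7.5% vs 2.1% worse), so neither offered conclusion is cleanly supported.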

  10. alanuk says

    Andy should try connecting a series resonant circuit across the mains and then getting hold of the terminals of the capacitor. He would find that imaginary numbers are real. He would actually be killed by an infinite, negative, imaginary, alternating-current voltage.

    I have a degree in Electrical Engineering too. (The impedance algebra behind this is sketched below.)
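
    (For the curious, the standard AC-circuit algebra behind this joke — textbook impedance math, nothing from the study. In the phasor notation EEs use, with j as the imaginary unit:)

        Z_L = j\omega L,    Z_C = \frac{1}{j\omega C} = -\frac{j}{\omega C}

        Z_{series} = R + j\left(\omega L - \frac{1}{\omega C}\right)

    At the resonant frequency \omega_0 = 1/\sqrt{LC} the imaginary parts cancel, leaving Z = R; in the idealized R = 0 case the current I = V/Z is unbounded. The large voltage that then appears across the capacitor, I \cdot Z_C, is “imaginary” in the algebra (90 degrees out of phase with the current) but entirely real at the terminals.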

  11. francesc says

    @7 Come on! It would be significant if the second group had been 107/20! I know, for a really scientific analysis you would need to increase the population, but it is enough for a psychological one (pun intended). You can say that the case supports one of the conclusions, although you can’t be sure, and publish the results noting that more research is needed.
    I suspect that the case has a far better significance than the study itself

  12. francesc says

    Imaginary numbers are a mathematical construction which fits some physical events and is thus useful for computing expected outcomes. I wouldn’t even say that real numbers are “real” in a physical sense.
    Degree in maths here

  13. raven says

    This seems to be a subspecies of commitment bias.

    But all commitments are not equal.

    1. The earth is flat.

    2. The earth is round.

    How much cognitive effort you need to put into defending your belief or position depends inversely on how closely it matches reality.

    The Invisible Sky Fairy that does nothing is deeply concerned with your sex life, versus:

    The Invisible Sky Fairy that does nothing is imaginary.

  14. says

    As an EE, Andy S. may not realize that ‘j’ stands for the imaginary unit, as opposed to the ‘i’ used by mathematicians [/snark].

    Also: Maybe it’s just a desire to say “But I’m not like that!”, but I have to agree with @8. While therapeutic trials ain’t necessarily simple things, we *know* that social issues are *always* fraught with complexity, confounders, and just general clutter. It may not be rational to outright reject the gun study, but it shouldn’t be accepted either without further work. And smart people are more likely to realize that.

  15. Moggie says

    I’m not entirely convinced that this study has shown that “political considerations undermine math ability”. An alternative possibility is that people can’t follow instructions. Participants were asked to choose which result was supported by the figures, but it’s possible that many of them instead answered the question not asked: after reading these figures, do you believe that concealed carry reduces crime, or increases crime?

    It would be interesting to see whether you would get the same result if you added wording to strongly emphasise that the participant should answer solely on the basis of the figures presented, regardless of any existing knowledge or gut feeling.

    Of course, we have a tendency to cherry-pick the studies which support our preconceptions. That’s a problem. But we also know that some experiments have methodological problems and are ideologically driven (*cough* Regnerus *cough*). So, when presented with unsupported figures which contradict other studies, it’s not unreasonable to think “this smells like bullshit to me”. Tell the participants “this may be bullshit, but tell us what it says anyway”, and see what happens.

  16. eric says

    @7: the math might not be straightforward, but if (statistically) people are choosing one method when the results appear to agree with their preconceptions and another when they don’t, that’s a serious problem. If the smart people are doing that more than the innumerate people, that’s a really big problem, because it means the experts that the government and politicians pay to provide policy advice are more likely to get things wrong than the schlubs they don’t pay.

    @8: No, you’re wrong for at least three reasons. First, people were asked to come to a conclusion about what the data they were given says. Numerate people should be able to perform that task without worrying about what confounding factors the data missed.
    Second, it doesn’t appear any of the participants made your complaint. None of the participants went to the study overseers and said “hey, I can’t answer this question because your data is crap and doesn’t tell me anything about confounding factors.”
    Third, if it had just been a problem of forcing a cognitive weakness on people, then the results would not correlate with political preconception. If you give 1,100 people unintelligible data and tell them to make a conclusion from it, you would expect the yes/no answers to be fairly randomly distributed. The results here show a strongly non-random distribution. So you can’t chalk the results up to mere GIGO.

  17. otrame says

    Ed is correct to warn that we ALL tend to do this. The things to take from this are 1) don’t be so smugly superior–you do it too and 2) the big advantage of science is that it is structured to get around that tendency (though it does not do so perfectly).

    My epigram, which I will one day put on a cross-stitch sampler: Sooner or later cognitive dissonance makes fools of us all.

  18. abb3w says

    @6, machintelligence

    This analysis is confounding scientific literacy and numeracy. The two may be correlated but they are hardly identical.

    Put simply… no. The freely available technical paper that Ed linked to (meaning people commenting have only [EXPLETIVE DELETED] excuses for ignorance on any details it gives) fairly clearly indicates the question is phrased as a math problem about probability. Similarly, the scale of numeracy seems to be the one developed in (doi:10.1002/bdm.1751); ignorance there is a hair more justified, as it’s in a subscription-required journal, but all the questions seem pretty clearly about understanding probability rather than science.

    I suppose you could argue the measure of Numeracy is specifically limited to Numeracy regarding probability rather than some more general mathematical aptitude, but that seems a pointless quibble.

    @7, demonhauntedworld:

    Their math isn’t as straightforward as it seems. Use Fisher’s exact test, and the result isn’t (quite) significant if you use a two-tailed test, but it is significant if you use a one-tailed test

    Whether the results reach the scientific “Dan you reactionary ass” level of obviousness is a separate question from the one that was actually asked: whether “People who used the skin cream were more likely to get better than those who didn’t” or “People who used the skin cream were more likely to get worse than those who didn’t”.

    This, however, might be a factor in the tendency toward post-facto rationalization among the high-Numeracy subjects who chose the mathematically wrong answer.

    @8, eigenperson:

    The gun control case is presented as an observational study. Confounding factors are all over the place.

    The potential for unconsidered confounds may indicate the sample is biased, and thus challenge causation; it does not, however, change the basic question of correlation regarding whether “cities that enacted a ban on carrying concealed handguns were more likely to have a decrease in crime” or instead “cities that enacted a ban on carrying concealed handguns were more likely to have an increase in crime”.

    @13, francesc:

    Imaginary numbers are a mathematical construction wich fits some physical events and thus, are useful to compute expected outcomes. I wouldn’t say even that real numbers are “real” in a physical sense.

    Furthermore, most “real” numbers arguably have worse physical correspondence than the square-root-of-negative-one does. (No math degree, just a geek.)

  19. eigenperson says

    #18 Eric:

    First, people were asked to come to a conclusion about what the data they were given says. Numerate people should be able to perform that task without worrying about what confounding factors the data missed.

    People were asked to answer the question “What result does the study support,” and the only options were to say that the study supported one conclusion, or the opposite one. A reasonable answer might be “This study is too badly done to support any conclusion.” Another reasonable answer might be “Given the proposed mechanisms of action of gun bans on violence, I would expect that any [positive/negative] effect, if it existed, would be much larger than the effect observed here. Therefore, this study does not suggest that such an effect exists.” In the absence of the ability to provide one of those answers, people may choose the available option that they consider to be closest to the answer they would have given. A numerate person would, if anything, be more likely to give one of the alternative answers.

    Second, it doesn’t appear any of the participants made your complaint. None of the participants went to the study overseers and said “hey, I can’t answer this question because your data is crap and doesn’t tell me anything about confounding factors.”

    I don’t know why you say “it doesn’t appear” that this happened. To me, it does not appear that the authors have provided any information about subjects’ complaints, so it seems to me that we don’t know what the subjects may or may not have complained of.

    Third, if it had just been a problem of forcing a cognitive weakness on people, then the results would not correlate with poltical preconception. If you give 1,100 people unintelligible data and tell them to make a conclusion from it, you would expect the yes/no answers to be fairly randomly distributed. The results here show a strongly non-random distribution. So you can’t chalk the results up to mere GIGO.

    On the contrary, I absolutely would expect it to correlate with political preconception. It is a perfectly reasonable hypothesis that if the study itself doesn’t actually support either view, people are much more likely to project their own opinions on it.

    In any case, the fact is that the authors failed to ensure that the two fictitious studies differed only in the variable they were testing (subject matter), and they don’t seem to take this weakness into account in drawing their conclusions.

  20. eigenperson says

    #20 abb3w:

    The potential for unconsidered confounds may indicate the sample is biased, and thus challenge causation; it does not, however, change the basic question of correlation regarding whether “cities that enacted a ban on carrying concealed handguns were more likely to have a decrease in crime” or instead “cities that enacted a ban on carrying concealed handguns were more likely to have an increase in crime”.

    You’re assuming that participants interpreted the answer choices literally.

    This is unlikely, because the presentation of the fictitious study explicitly introduced it as an attempt to determine causation (“Government officials, subjects were told, were ‘unsure whether the law will be more likely to decrease crime by reducing the number of people carrying weapons or increase crime by making it harder for law-abiding citizens to defend themselves from violent criminals.’ To address this question, researchers had divided cities into two groups…”). This invites subjects to judge the study on the basis of whether it supported one causative effect or the other. Even though the subsequent question can be read literally as being about correlation, the subjects had just been primed to interpret it as being about causation.

  21. sezme says

    @20, abb3w:
    most “real” numbers arguably have worse physical correspondence than the square-root-of-negative-one does.

    I have no idea what you’re trying to assert with this statement. Please elucidate.

  22. francesc says

    Update: I’ve implied before that the test could be statistically wrong, meaning that the correlation might not be significant enough. Although I didn’t have the raw data, I did some simpler tests with the numbers I could extract, and the results seem consistent.
    At least, we can draw the conclusion that Republicans are more likely than equally numerate Democrats to get the right answer when the right answer is “gun ban increases violence”, and that Democrats are more likely to get the right answer when it is “gun ban decreases violence”.
    So… I will correct myself and say that yes, the test (assuming it is well designed) shows that political affiliation affects mathematical reasoning

  23. eric says

    @21:

    People were asked to answer the question “What result does the study support,” and the only options were to say that the study supported one conclusion, or the opposite one. A reasonable answer might be “This study is too badly done to support any conclusion.” Another reasonable answer might be “Given the proposed mechanisms of action of gun bans on violence, I would expect that any [positive/negative] effect, if it existed, would be much larger than the effect observed here. Therefore, this study does not suggest that such an effect exists.”

    This study design has been used before. Previous results support the conclusion that the ability to work other basic math problems is a strong predictor of a person’s ability to derive the ‘correct’ answer from these 2×2 matrices…when the topic is abstract. The citation for this is referenced in the paper. So you’re just plain wrong: study participants do not get hung up on the paucity of data or on the potential effect of missing confounding data. They treat the experimenter’s request as exactly what it is: a request to do some math on an abstract problem in a study that they’re participating in. They do not, contra your implication, consider factors beyond the data that is given when trying to answer the question (as long as the topic is abstract).

    It is a perfectly reasonable hypothesis that if the study itself doesn’t actually support either view, people are much more likely to project their own opinions on it.

    If that’s what was going on, participants would be projecting their opinions on to the skin cream studies as often as they are projecting their opinions on to the gun control studies. Because the skin cream version of the test doesn’t support the conclusions any better than the gun control version. But they don’t do that. The results of the control are pretty clear: when it’s about skin cream, people solve it like an abstract math test problem.

    Put another way, the participants only seemed to consider the data flaws you point out when doing so helps them maintain a preconceived belief in the face of data that challenges it. When the data “as-is” is on a subject they don’t care about, they work the problem as an abstract math problem and succeed at about the rate you’d expect based on their math skills. When the data “as-is” is on a subject they care about and supports their position, they “succeed” at rates higher than one would expect given their math skills. When the data “as-is” is on a subject they care about and undermines their belief, they fail to treat it as an abstract problem and succeed at rates lower than one would expect given their math skills.

    And lastly, probably most surprisingly, the people who were better at solving abstract math problems were more likely to fail to see the problem as an abstract math problem when the data given challenged their preconceived beliefs.

  24. abb3w says

    @23, sezme

    I have no idea what your trying to assert with this statement. Please elucidate.

    The square root of negative one has a physical correspondence; in particular (if I correctly recall classes from 20 years back), for some sorts of electrical engineering stuff, like viewing a capacitor as a screwy sort of resistor.

    However, most (all but countably many of the uncountably many) real numbers are incomputable (there are only countably many finite programs, hence only countably many computable numbers) — and thus either lack physical correspondence, or (for pretty complicated reasons) rule out the possibility of even partial resolution to Hume’s problem of induction, which is necessary for the concept of “physical correspondence” to be meaningful.

    @22, eigenperson

    Even though the subsequent question can be read literally as being about correlation, the subjects had just been primed to interpret it as being about causation.

    True. However, cue “correlation is not causation” and the associated alt-text joke from XKCD.

    @24, eric

    And lastly, probably most surprisingly, the people who were better at solving abstract math problems were more likely to fail to see the problem as a an abstract math problem when the data given challenged their preconceived beliefs.

    Indeed.
