Oh no! We may have to throw out the Dunning-Kruger Effect?


But it’s so intuitive! The idea that the less people know, the more unwarranted confidence they have that they know more than they do, seems to explain so much. There is now evidence that the Dunning-Kruger Effect is a statistical artifact.

The two papers, by Dr. Ed Nuhfer and colleagues, argued that the Dunning-Kruger effect could be replicated by using random data. “We all then believed the [1999] paper was valid,” Dr. Nuhfer told me via email. “The reasoning and argument just made so much sense. We never set out to disprove it; we were even fans of that paper.” In Dr. Nuhfer’s own papers, which used both computer-generated data and results from actual people undergoing a science literacy test, his team disproved the claim that most people who are unskilled are unaware of it (“a small number are: we saw about 5-6% that fit that in our data”) and instead showed that both experts and novices underestimate and overestimate their skills with the same frequency. “It’s just that experts do that over a narrower range,” he wrote to me.

Then I have to rethink who it applies to. We’re so used to pointing at stupid people doing stupid things and explaining it as Dunning-Kruger in action, and it’s not.

The most important mistake people make about the Dunning-Kruger effect, according to Dr. Dunning, has to do with who falls victim to it. “The effect is about us, not them,” he wrote to me. “The lesson of the effect was always about how we should be humble and cautious about ourselves.” The Dunning-Kruger effect is not about dumb people. It’s mostly about all of us when it comes to things we are not very competent at.

Wait wait wait. So I may have been a victim of the Dunning-Kruger Effect when I thought I knew what the Dunning-Kruger Effect was about? Dang. Well, that was a good solid punch in the balls to start my morning. But then, it’s always good to rethink your assumptions and reconsider your ideas, so thank you very much may I have another?

Are there dumb people who do not realize they are dumb? Sure, but that was never what the Dunning-Kruger effect was about. Are there people who are very confident and arrogant in their ignorance? Absolutely, but here too, Dunning and Kruger did not measure confidence or arrogance back in 1999. There are other effects known to psychologists, like the overconfidence bias and the better-than-average bias (where most car drivers believe themselves to be well above average, which makes no mathematical sense), so if the Dunning-Kruger effect is convincingly shown to be nothing but a mirage, it does not mean the human brain is spotless. And if researchers continue to believe in the effect in the face of weighty criticism, this is not a paradoxical example of the Dunning-Kruger effect. In the original classic experiments, students received no feedback when making their self-assessment. It is fair to say researchers are in a different position now.

Wait, what, so maybe I’m not afflicted with Dunning-Kruger? OK, I need to get out of the house and take a walk now.

Comments

  1. Marcus Ranum says

    There are other effects known to psychologists, like the overconfidence bias and the better-than-average bias (where most car drivers believe themselves to be well above average, which makes no mathematical sense)

    The better-than-average bias is often cited as supporting the Dunning-Kruger effect. It probably won’t replicate, either, as it was probably based on biased samples. If you think a bit about the problem you’d encounter coming up with an unbiased sample for a BAE study, you’ll see what I mean: “car drivers” are a self-selected sample already.

  2. Artor says

    “His team disproved the claim that most people that are unskilled are unaware of it…”
    I wasn’t aware that the hypothesis claimed MOST people exhibited the trait. But in my experience, I have known some people who clearly fit the description, and sometimes I have done it myself. “How hard can this thing be? I’ll probably ace it. Oh, really, really hard, it turns out…” And of course, we have President Dunning-Kruger himself, who almost seems like he set out to prove the concept deliberately.

  3. Matt G says

    I tend to prefer to use sophisticated terminology, but maybe we should just go back to calling them stupid.

  4. larrylyons says

    That is only one study. Before reaching any conclusion there has to be other data supporting that finding. It is only one possible snapshot of the relationship within the population. To explain, I’m going to have to get somewhat technical here. Essentially, what individual study statistics do is provide an estimator of the actual relationship within the population. However, due to a variety of factors, including sampling variation, range restriction, and test (un)reliability, the individual study results will randomly vary from the actual relationship within the population.

    While these factors are typically taken into account in conducting meta-analysis, they also have a substantial impact on replication studies. To give an example, you can do a Monte Carlo run with the population relationship between two variables arbitrarily set to r = .35 (note to self: I’ll have to set up a web demo of this one day in R). From that constructed population, draw 25 random samples of varying sizes without replacement, and calculate the relationship between the variables of interest in each sample. The results of the individual estimates will vary from .35 by a random amount, simply due to sampling variation alone.

    That is only one factor. Another study could try to replicate those results, and it will invariably vary at random from the first study’s result, due to differences in how the construct of expertise is measured compared to the original Dunning and Kruger studies, or in how closely together the different tests were administered.

    So when I see something like your headline ” We may have to throw out the Dunning-Kruger Effect?” I have to take it with more than just a grain of salt. I’ll consider the results important (and tossing the D-K Effect), when 2 or 3 well conducted large N experiments using closely related measures find the same results.
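The Monte Carlo run described above can be sketched in Python rather than R. This is a toy version under assumed parameters: the population correlation of .35 is the commenter's arbitrary choice, and the sample size of 50 per "study" is made up here.

```python
# Monte Carlo sketch of sampling variation (hypothetical parameters).
# Draw 25 "studies" from a bivariate population whose true correlation
# is rho = .35 and watch the per-study estimates scatter around it.
import math
import random

random.seed(1)
RHO = 0.35  # assumed population correlation

def sample_r(n):
    """Pearson correlation of n points from a bivariate normal with correlation RHO."""
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0, 1)
        # y shares variance with x in proportion to RHO
        y = RHO * x + math.sqrt(1 - RHO ** 2) * random.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# 25 studies of n = 50 each: individual estimates spread widely around .35
estimates = [sample_r(50) for _ in range(25)]
print(min(estimates), max(estimates))
```

Even with nothing varying but sampling error, the 25 estimates span a wide band around the true value, which is the point about single-study results above.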

  5. says

    When I looked into the Dunning Kruger effect years ago, I found the same stuff. Skepchick looked into it as well and independently came to similar conclusions.

    The Dunning-Kruger effect is classic pop psychology. It takes for granted the conclusions of a single paper that was never broadly accepted by psychologists. And my understanding is that Dunning, Kruger, and colleagues continue to defend their theory, but even if you take their side, the thing they’re actually defending does not resemble the popular understanding.

    In retrospect, it’s ironic how often the Dunning-Kruger effect was cited by people in the atheist/skeptical movements.

  6. says

    @Marcus Ranum #1,
    Speaking as a person who cannot drive, it strikes me as entirely possible that most car drivers are in fact above average at driving cars.

    @larrylyons #4,
    It’s not just one study disagreeing with Dunning-Kruger. If you click through, it cites three separate papers from 2002, 2016, and 2017. Also, the Skepchick article I linked in #5 cites another one from 2020. There are probably more we haven’t spotted.

    And you’re misunderstanding what these papers are actually saying. They’re not bringing in new data to disconfirm the Dunning-Kruger effect. They’re disputing the interpretation, using numerical simulations to show that the original results could be a statistical artifact.

  7. OptimalCynic says

    I’m ok with giving up the Dunning-Kruger effect, but I will never ever stop calling bitcoins Dunning-Krugerrands

  8. davidc1 says

    Pah ,i looked that Dunning -Kruger bloke upon the interweb ,he doesn’t know as much as he thinks he does.

  9. billyum says

    I’ve never read the original paper, and it is still paywalled, but I did take a look at the abstract. The abstract contains its own misinterpretation. To quote it:

    “People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.”

    That’s the conclusion, i.e., suggestion.

    Next:

    “Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error.”

    Note that the participants were not asked to distinguish accuracy from error. Instead they were asked to estimate how well they did in comparison with others, i.e., not their ability but their relative ability. Everybody thought they were better than average. To what extent is that a metacognitive error? To address their ability to distinguish accuracy from error, the right approach is to ask the person to estimate their own score on the test.

  10. garnetstar says

    “It’s just that experts do that (overestimate their skills) over a narrower range.”

    Except for physicists, who do that over everything.

  11. eddavies says

    “where most car drivers believe themselves to be well above average, which makes no mathematical sense”

    It’s entirely possible for most people to be above average. Most people have an above-average number of legs, for example. Being “well above average” is a bit harder but you’d need to define things a bit more carefully before you said it makes no mathematical sense, particularly if you admit it might not be a normal distribution.
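The legs example is easy to check numerically. A toy Python sketch with a made-up population of 1,000 people (999 with two legs, one with one leg):

```python
# Sketch of the "above-average legs" point: in a skewed distribution,
# most values can sit above the mean. Hypothetical population of 1,000.
legs = [2] * 999 + [1]
mean_legs = sum(legs) / len(legs)                     # 1.999
above_avg = sum(1 for n in legs if n > mean_legs)     # 999 people
print(mean_legs, above_avg)
```

The mean is dragged below 2 by a single outlier, so 999 of 1,000 people are above average. Whether "most drivers are above average" makes sense likewise depends on the shape of the skill distribution, not just on arithmetic.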

  12. vucodlak says

    Good fucking riddance.

    This touches on one of my pet peeves: that a significant proportion of the population has way too much self-esteem/self-confidence. How much stupid trouble has our species gotten into throughout our history because the people responsible for it didn’t have the sense to doubt themselves once in a while? For starters, every godsdamned crusade, and most wars, could have been avoided if everyone involved had just stopped and honestly said to themselves “maybe I’m wrong, maybe I shouldn’t do this.”

    When I was a kid, the big thing was to teach children to have immense self-esteem, to bolster their self-confidence by telling them they could do anything if they just believed in themselves. Plenty of us found out the hard way that that is complete bullshit, and that was bad enough, but the “lucky” ones weren’t served any better. A lot of those confident people who were lucky enough to find success start believing that everything they touch turns to gold, and that’s how we wind up with people like Donald Trump. The lucky-who-will-never-acknowledge-their-luck love him, because he’s the apotheosis of their kind, and the bitter-and-resentful love him because he’s the apotheosis of their kind, too.

    Children should be taught to question themselves (and authority), to doubt themselves, and to know that failing at things they try doesn’t make them failures as human beings. They should be taught to respect themselves, not to esteem themselves. Respect requires honesty; esteem does not.

    Honest interrogation of one’s own abilities is how one avoids falling victim to the phenomenon described in the Dunning-Kruger hypothesis.

  13. says

    Actually there’s a rather tricky epistemological issue here. The reason random data replicates the effect, in simple terms, is that people, whether real or computer generated, who score very low on the test have much more room to overestimate their performance. But that doesn’t negate the fact that they did, indeed, tend to overestimate their performance, whereas people who scored higher were much less likely to do so. The observation is still true.
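The artifact is easy to reproduce. In this minimal Python sketch, actual scores and self-estimates are entirely random and statistically independent of each other, yet splitting people by their actual score produces the classic plot: the bottom quartile "overestimates" and the top quartile "underestimates". All numbers are made up.

```python
# Why purely random data reproduce the classic Dunning-Kruger plot:
# if actual scores and self-estimates are independent, the bottom
# quartile can only overestimate and the top quartile can only
# underestimate, on average.
import random

random.seed(0)
N = 10_000
actual = [random.uniform(0, 100) for _ in range(N)]  # true percentile
guess = [random.uniform(0, 100) for _ in range(N)]   # self-estimate, unrelated

pairs = sorted(zip(actual, guess))                   # order by true score
quartile = N // 4
bottom = pairs[:quartile]
top = pairs[-quartile:]

def mean_gap(group):
    """Average (self-estimate minus actual score) for a group."""
    return sum(g - a for a, g in group) / len(group)

print(mean_gap(bottom))  # strongly positive: looks like "unskilled and unaware"
print(mean_gap(top))     # strongly negative: looks like experts underestimating
```

Both groups guess around 50 on average; only their actual scores differ, because that is how the groups were selected. That is the epistemological wrinkle: the low scorers really did overestimate, but the pattern needs no metacognitive deficit to appear.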

  14. says

    The Dunning-Kruger effect could be broader, not just about intelligence but also ignorance, trust and gullibility. Have you ever believed something your friends told you because they were your friends, and later found out it was untrue? And because it was someone you trusted, you didn’t try to verify it? When enough trust and belief is built up in a person or position of authority, it can easily be abused, with people willing to swallow and repeat lies they know aren’t true, as in the Asch Conformity Experiment.

    If I told you it rained here today, you’d probably believe me and not check because it’s an everyday thing, plus it’s not critical whether it’s true or not. When I posted about an earthquake a few weeks ago, I could have been lying but you could check the USGS and other sources to verify it. But I couldn’t get away with an outrageous lie because people reading don’t know or trust me enough.

  15. blf says

    where most car drivers believe themselves to be well above average

    That reminds me of an incident many many yonks ago when I got a (justified) traffic ticket and opted to attend a “class” (plus, if memory serves me right, a small fine). One of the first questions the “instructor” asked was for each person to self-evaluate their own driving skills. Most-to-all of the people preceding me — about 1/3rd of the people present — said “above average” or “average” and none explained their reasoning. My answer (paraphrasing): “equipment above average (citing model of car), skill obviously dubious (citing presence at the ‘class’).” Some of the 2/3rds(-ish) remaining indicated “poor” or similar (whether or not that was due to me changing the tone of the answers is unknown (to me)). Two notable exceptions stand out in my memory: An individual who (as later incidents in the “class” indicated) was, well, exceptionally stupid; and an elderly person who said they’d never ever before been ticketed or cited (in something like 40 years, they got a round of applause). Nonetheless, rather few people in this “class” for drivers ticketed for violations said or implied they were “below average”.

    I use scare-quoted “class” and “instructor” since, in my opinion, neither is too-accurate: The “class” was mostly hectoring by an “instructor” incapable of explaining anything. As one example, one of the first things the “instructor” said is they watched people arrive and were not impressed at the skills shown by anyone that morning. I then piped up and pointed out I had bicycled to the class, seemed to be the only one who had (and therefore should be easy to remember), and asked, specifically, what I did wrong… The “instructor” ignored me.

  16. bcw bcw says

    Clearly an incompetent paper with the authors so clueless as to be unable to understand how bad it is.

    But I don’t understand how the skilled/unskilled error can be described as equivalent when the widths of the ranges are described as different.

  17. John Morales says

    We’re so used to pointing at stupid people doing stupid things and explaining it as Dunning-Kruger in action, and it’s not.

    Always bugged me that people focused on that aspect, and not the other (experts tend to underestimate their ability).

    blf:

    … skill obviously dubious (citing presence at the ‘class’)

    Skill at not getting caught is not the same as skill at driving, though the latter sure helps the former.
    That is, were a professional race car driver to get pinged for breaking road rules, it’d not be because they lack skill at driving.

    So, bad justification.

  18. says

    instead showed that both experts and novices underestimate and overestimate their skills with the same frequency. “It’s just that experts do that over a narrower range,” he wrote to me.

    So, the conclusion is that everyone is susceptible to this effect, but it affects the novices more.
    Wasn’t that already what it was? I mean, I kinda assumed we were all already clear on the fact that this is a general effect and we can all fall prey to it, if we’re not careful. It was never as simple as “stupid people don’t know how stupid they are”. That was the funny headline, not the actual point.
    It’s great that this is being clarified further, but I’m worried that apparently it wasn’t already clear. I thought it was.

  19. Numenaster, whose eyes are up here says

    “tests of humor, grammar, and logic”

    Anyone but me surprised to see that the original paper assessed humor and grammar? We only hear about the results for the logic part.

    Humor is pretty far from being an objective topic. I question the value of anyone’s self-assessment of their ability to do humor, and also their ability to compare to others.

  20. Alex Gordon says

    Isn’t one problem of Dunning-Kruger who it makes the bad guys? Anyone might be wrong; it’s just that the ones who knew that (call it modesty or Dunning-Kruger) seem more culpable when they are?
    Thus those inclined to step forward, stone in hand, when only those without fault were called would indeed be top of the queue outside the kingdom of heaven. Or maybe the Almighty was testing us, to see if we could work out why the proud seem to have inherited what the meek might have thought was coming to them?
