Re-evaluating the Milgram experiment

One of the most famous experiments in psychology is that of Stanley Milgram, who in 1962 purported to show that a surprisingly high percentage (65%) of ordinary people could be persuaded by authority figures to inflict extreme levels of pain on others, well beyond what one might expect of normal people. I wrote about this back in 2008.

Now Cory Doctorow says that a new book Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments by Gina Perry suggests that Milgram may have fudged some of his conclusions, throwing doubt on what might have been learned from it.

After examining the original tapes of Milgram’s experiments and interviewing the surviving subjects and researchers, Perry concludes that Milgram’s experimenters didn’t stick to a set script (as has always been reported), but rather wheedled and nagged the subjects into turning up the shock dial. What’s more, it seems that a substantial fraction of the subjects saw through the ruse and realized that there were no actual shocks; yet they were still recorded as people willing to shock strangers to death on the say-so of a man in a lab coat.

If all Milgram had done was fudge his account of the dehoaxing process, his findings could still be completely valid. But Perry also caught Milgram cooking his data. In his articles, Milgram stressed the uniformity of his procedures, hoping to appear as scientific as possible. By his account, each time a subject protested or expressed doubt about continuing, the experimenter would employ a set series of four counter-prompts… But on the audiotapes in the Yale archives, Perry heard Milgram’s experimenter improvising, roaming further and further off script, coaxing or, depending on your point of view, coercing participants into continuing. Inconsistency in the standards meant that the line between obedience and disobedience was shifting from subject to subject, and from variation to variation—and that the famous 65 percent compliance rate had less to do with human nature than with arbitrary semantic distinctions.

The field of psychology has recently been reeling from repeated revelations of experiments that could not be replicated or were downright fraudulent. This latest report is not going to help the field in its attempt to rehabilitate itself.

But apart from the damage to psychology, a repudiation of the Milgram conclusions may be a good thing in that it restores some faith in the ability of people to resist pressure by those in authority to inflict harm on others. The original Milgram results were deeply discouraging on that score.

Comments

  1. Enkidum says

    The other big mess is the Stanford Prison Experiment, which suffers from so many methodological flaws it’s basically toilet paper. One of the biggest is a massive self-selection effect: those willing to apply for a study of prison life tend to score very highly on measures of aggressiveness, authoritarianism, narcissism, and Machiavellianism, and low on social empathy (http://www.ncbi.nlm.nih.gov/pubmed/17440210). And it’s completely unclear what criteria were used to determine who would be a guard and who would be a prisoner, but it’s certain that nothing resembling random assignment was used.

    So generalizing from that tiny sample to the populace as a whole? Bad idea.

  2. smrnda says

    I’ve seen quite a few experiments that could not be replicated, but the problem is that the bad conclusion becomes common knowledge while the correction reaches a far smaller audience.

    I’m thinking the field needs harder review standards – that experimental results should not get published until they’ve been replicated independently a few times.

  3. psweet says

    “I’m thinking the field needs harder review standards – that experimental results should not get published until they’ve been replicated independently a few times.”

    Sounds good, but there’s a problem — who does the replication, and if the work’s not published, how do they know to do so?

  4. Enkidum says

    The solution (or A solution) I think is to make replication a viable method of publication, which it isn’t at present, although several journals are talking about it. In physics it’s an acceptable use of your time to try and replicate people’s interesting results, which is why cold fusion was so quickly debunked. But in psychology you can’t get anything out of it unless you have a new positive finding, which is a stupid way of organizing a scientific discipline.

    (I’m a psychologist by training. And profession.)

  5. unbound says

    Not that the experiment didn’t have flaws, but I just saw a program recently (in the last 2-3 weeks) that replicated the same results…

  6. Glenn says

    The Milgram experiment retains its credibility in that 65% of the population of the USA were not disturbed by the Trayvon Martin verdict and neither were a similar number found to be disturbed by the My Lai atrocity.

    Need anyone come forth and claim that the population was “wheedled and nagged” into taking that position of indifference? Wheedling and nagging appears to me to be a real life simulation of exposure to propagandistic “news” sources, and even if it occurred it does not discredit the study in the least since it so closely parallels real life.

  7. psweet says

    Enkidum @4: I completely agree — saying that replication is a necessary part of science and then making it impossible to publish a replication is a logical error that I would think a field full of psychologists might notice!
