Go look for the big old mess


Jonah Lehrer. Recycling old material, aka self-plagiarism. Not correcting mistakes. A culture that fosters and rewards such things. Science writing. Carl Zimmer draws them all together to talk about the difficulty of doing good science writing and why it's important to keep that difficulty in mind.

I was willing to cut Lehrer some slack at first, but as the additional evidence came in, I wondered if I was making excuses for him. The breaking point came when I read about how he had warped a story about a memory prodigy, claiming that he had memorized all of Dante’s Inferno instead of just the first few lines. When someone noted the error, Lehrer blamed it on his editor, but kept on using the enhanced version of the story in his own blog and on Radiolab (which later had to correct their podcast). It’s easy to slip up with facts, but we have an obligation to admit when we’re wrong and not make the same mistake again. It would have been bad enough that Lehrer distorted the facts and continued to do so after having the facts pointed out to him. But he was also willing to damage other people’s reputations along the way. That’s when I signed off.

Really. Don’t mess with Radiolab.

The problem, Zimmer goes on, isn’t (as silly generalizations would have it) that all popular writing about neuroscience is crappy self-help, but “the trouble that arises when a science writer reduces complex science to a glib lesson.” Take Lehrer’s 2010 New Yorker article “The Decline Effect and the Scientific Method” for instance.

For years, a lot of scientists and science writers alike have grown concerned that flashy studies often turn out to be wrong. But Lehrer leaped to a flashy conclusion that science itself is hopelessly flawed.

That makes for great copy (29,000 people liked the story on Facebook), for which I’m sure his editors were grateful. But Lehrer himself didn’t believe what he was writing. If scientific studies were fundamentally unreliable, then why did he continue to publish articles and a book full of emphatic claims about how the brain works–all based on those same supposedly unreliable studies?

My guess is that it’s because both “work,” so the fact that they contradict each other is beside the point.

The reality is more complicated. After Lehrer’s piece came out, the Columbia statistician Andrew Gelman was asked what he thought of it. “My answer is Yes, there is something wrong with the scientific method,” he wrote–adding (and this is crucial)–“if this method is defined as running experiments and doing data analysis in a patternless way and then reporting, as true, results that pass a statistical significance threshold.”

In other words, this is not a matter about which we should simply issue Milan-Kundera-like utterances, like Lehrer does in his article: “Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.” In fact, this is a matter of statistical power, experimental design, posterior Bayesian distributions, and other decidedly unsexy issues (Gelman explains the gory details in this American Scientist article [pdf]).

I love the Milan Kundera line; all the more because I hate glib pronouncements like Lehrer’s. It’s so easy to say things like that.
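For what it’s worth, Gelman’s point about the significance filter is easy to see in a toy simulation. This sketch is mine, not Gelman’s or Zimmer’s, and the numbers in it (a true effect of 0.2, thirty subjects per group, 5,000 simulated studies) are made-up illustrative assumptions: underpowered studies that squeak past p < 0.05 report inflated effects, and honest replications then look like a mysterious “decline.”

```python
# A minimal sketch (illustrative assumptions only) of the significance-filter
# problem: publish only results with p < 0.05, and the published effect sizes
# come out inflated; unfiltered replications then drift back toward the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2        # small real effect, in standard-deviation units (assumed)
n = 30                   # per-group sample size -> low statistical power (assumed)
published, replications = [], []

for _ in range(5000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(b, a)
    if p < 0.05:                               # the significance filter
        published.append(b.mean() - a.mean())
        # an exact replication of the same true effect, with no filter applied
        a2 = rng.normal(0.0, 1.0, n)
        b2 = rng.normal(true_effect, 1.0, n)
        replications.append(b2.mean() - a2.mean())

print(f"true effect:             {true_effect:.2f}")
print(f"mean published effect:   {np.mean(published):.2f}")    # inflated
print(f"mean replication effect: {np.mean(replications):.2f}")  # back near the truth
```

Nothing mysterious, in other words: the “decline” falls out of the selection process itself.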

Zimmer goes on to discuss the impossibility of conveying that kind of complexity in a 1,500-word piece, and what is to be done about it.

Writers can either tackle this dilemma with eyes wide open, or they can look for a way to cut corners and pretend that the dilemma doesn’t exist. And readers can improve things too. When you find yourself captivated by someone talking to you about science in a way that makes you feel like everything’s wonderfully clear and simple (and conforms to your own way of looking at the world), turn away and go look for the big old mess.


Comments

  1. Stacy says

    Ah, “The Decline Effect and the Scientific Method.”

    I remember that. Lehrer used old ESP experiments as evidence for the Decline Effect. He mentioned that after the initial round of experiments was finished and published, scientists tried to replicate the original experiments and failed to get the same positive results. But Lehrer didn’t mention the fact that people examining the original experiments found extremely sloppy protocols. I thought at the time that was odd; he just threw the difference between the original and subsequent results out there and attributed it to the mysterious “Decline Effect.”

    (Ophelia, you might possibly be interested in knowing that the person who posted that article to a skeptics email list we both participate in, and who defended Lehrer to me when I expressed my skepticism, was Greg, he of the recent Facebook post.)
