Is this how an innocent man responds?


I already wrote about the data-faking scandal involving Jonathan Pruitt, but the one thing I was missing was any explanation from Pruitt himself. Science just covered the matter and got a statement from him.

At first, he was in the fray tweeting—but no longer. “There are so many voices and they are so loud and diverse, there’s no way to address it.” Instead, he says he’s focusing on his fieldwork, setting insect traps across the South Pacific before and after cyclones hit to learn how different species are affected by these tremendous storms. Last year, he reported on work in which he collected data on spiders before and after a U.S. hurricane. It’s one of the papers now being scrutinized.

That’s right, there are more papers under investigation, and he’s collecting more data that will have to be carefully scrutinized. What he ought to be doing, if he’s innocent, is working to validate his previous work, not flying off to the South Pacific. His career is in dire peril, and he knows it. Instead, he seems to have resigned himself to being caught, and to a bleak future.

Pruitt says he has no expectations that he will be able to continue in behavioral ecology, saying he knows he has lost the trust of his colleagues about his data. But these cyclone data will be useful no matter what happens, he says. “If I’m on fire and my longevity is [short], I will bequeath them to another researcher.” He is concerned, however, that as each retraction happens, even innocuous mistakes in his data or experiments will be cause for more retractions. It’s a worry that Dingemanse shares. Such careful inspection of data will often turn up something, no matter how well collected and compiled, he says. “If you looked at my data [this way], you might also come up with causes for concern,” Dingemanse says.

What? No. I’ve got a pile of data I’m sorting through right now, and I’d happily let anyone look at it. It’s just tables of counts of spider species in various locations, but I’ve got a paper trail — all the on-site notes for each site — and the numbers are honestly recorded. I have no fear that it can be misinterpreted.

Also, the colleagues who made this discovery have a vested interest in not seeing causes for concern, since they’ve had to retract published work. It has cost them to report the problems. You know they tested the heck out of the data set before making that difficult decision.

Also, there’s this little tell.

Simmons has spent the past 3 days poring over the 11 papers Pruitt has written for his journal, going back to a data repository now mandated by his journal and others to check raw data. Yet he laments that the initial hashtag—#Pruittgate—is too damning and thinks “we need to, as much as we can, avoid a witch hunt.”

Jeez. The “witch hunt” accusation has become as predictable and useless as the “-gate” suffix.

Comments

  1. anthrosciguy says

    By all means, don’t do a witch hunt. Do a hunt for a crooked and/or incompetent scientist instead.

  2. Artor says

    “Such careful inspection of data will often turn up something, no matter how well collected and compiled, he says.”

    It sounds to me like someone does a lot of sloppy work and assumes all other scientists do the same. I’d say a careful inspection of his data is even more advisable after such a statement. Even if he didn’t deliberately falsify data, he doesn’t seem to have been very rigorous in seeing that it is correct.

  3. jrkrideau says

    @ 2 Artor
    “Such careful inspection of data will often turn up something, no matter how well collected and compiled, he says.”

    At the nitpicking level this is almost certainly true. I do not know the research area, but small errors can creep in: the number 2.58 is transcribed as 2.85, or a name is misspelt. However, the problems that have been reported so far are far beyond minor errors. Duplicate blocks of data should not just appear.

    Even if he didn’t deliberately falsify data, he doesn’t seem to have been very rigorous in seeing that it is correct.
    I think you are being rather kind here.

  4. nomdeplume says

    I cannot understand how a scientist can cheat at science. It is like a chess player cheating at chess. Or an artist forging art. Yes, you can make money. But how do you live with yourself?

  5. bvsp says

    I’m someone who’s having to retract a number of papers because of this. For a good overview of how this first came about, please read Kate Laskowski’s blog post: https://laskowskilab.faculty.ucdavis.edu/2020/01/29/retractions/.

    Here’s a blog post by Dan Bolnick about how we’re all trying to sort through everything: http://ecoevoevoeco.blogspot.com/2020/01/the-pruitt-retraction-storm-part-1.html

    As well as his story of it: http://ecoevoevoeco.blogspot.com/2020/01/the-pruitt-retraction-storm-part-2.html.

  6. MadHatter says

    After having spent the last year reexamining, reanalysing, and re-QCing my data, I am seriously irritated by the comment about “careful inspection…turning anything up”. If your data collection is so shoddy that a careful inspection makes it look like you falsified data, then you shouldn’t be doing this.

    And any data he collects subsequent to this will be useless to his colleagues because they won’t be able to trust it. FFS

  7. jrkrideau says

    @ 5 bvsp
    My sincere sympathy and best wishes. It must be horrible getting caught in a shit storm like this.

    I have already read Kate Laskowski’s blog post and it is impressive. She seems to give a good explanation of some of the data problems.

  8. bvsp says

    @ PZ Myers – Yep, committee member and major early collaborator. It’s unbelievably frustrating as I’ve estimated that 2-3 years of productivity has been wasted because of this. Luckily, lots of other collaborators have been wonderful in reaching out and offering strong letters whenever they’re needed. The advantage to this blowing up all at once is that the entire field is well aware of the underlying issue. So his collaborators shouldn’t have major reputational harm… although they will have to struggle with large parts of their CV disappearing. And some of them heavily based their current research programs on those ‘problematic’ papers, so they’re in a rather major lurch, having to redo foundational work.

  9. jack16 says

    Someone should create cheat detection software. Most cheating violates various statistical laws (see Sir Cyril Burt); a rough sketch of that kind of check appears after the comments. The existence of such software would make cheating more dangerous and less “rewarding”.

    jack16
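
For what it’s worth, the kind of screen jack16 is describing is not exotic. Below is a minimal, purely illustrative Python sketch. It has nothing to do with Pruitt’s actual data or with the checks his colleagues ran; the column values, block length, and thresholds are invented for the example. It scans a column for verbatim repeated blocks of values and computes a crude chi-square statistic for terminal-digit uniformity, the sort of screen that flags a dataset for closer inspection rather than proving anything.

    from collections import Counter

    def duplicate_block_scan(values, block_len=5):
        # Flag runs of block_len consecutive values that recur verbatim
        # elsewhere in the column, the sort of repeated block that should
        # not appear in independently collected measurements.
        seen = {}
        hits = []
        for start in range(len(values) - block_len + 1):
            block = tuple(values[start:start + block_len])
            if block in seen:
                hits.append((seen[block], start, block))
            else:
                seen[block] = start
        return hits

    def last_digit_chi_square(values):
        # Chi-square statistic for uniformity of terminal digits.
        # Genuinely measured continuous data tend to have roughly uniform
        # last digits; a large statistic is a reason to look closer,
        # not proof of anything.
        digits = [str(v)[-1] for v in values if str(v)[-1].isdigit()]
        if not digits:
            return 0.0
        expected = len(digits) / 10.0
        counts = Counter(digits)
        return sum((counts.get(str(d), 0) - expected) ** 2 / expected
                   for d in range(10))

    if __name__ == "__main__":
        # Toy column with an obviously repeated five-value block.
        col = [2.58, 3.11, 4.02, 2.97, 3.50, 1.88,
               2.58, 3.11, 4.02, 2.97, 3.50]
        print(duplicate_block_scan(col))             # the repeated block shows up as a hit
        print(round(last_digit_chi_square(col), 2))  # compare to a chi-square table, 9 df

Again, this is only a sketch of the idea: real forensic work on a dataset, like the reanalyses described in the Laskowski and Bolnick posts linked above, involves far more context about how the measurements were actually collected.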