Encouraging reproducibility in science


There is a problem in the current scientific climate, which seems to reward original and exciting new research while placing less value on whether the results are actually true. I have written before about the problem of journals publishing papers whose results don’t hold up under subsequent examination, and about how difficult it is to get them to publish articles that contradict earlier ones.

This poses a danger to the self-correcting nature of science, since wrong information can become entrenched and, as Carl Zimmer points out, quite harmful.

C. Glenn Begley, who spent a decade in charge of global cancer research at the biotech giant Amgen, recently dispatched 100 Amgen scientists to replicate 53 landmark experiments in cancer—the kind of experiments that lead pharmaceutical companies to sink millions of dollars to turn the results into a drug. In March Begley published the results: They failed to replicate 47 of them.

Zimmer reports on an effort known as the Reproducibility Initiative, which seeks to address this deficiency by giving researchers a way to gain credibility by showing that their results have been independently reproduced. The Initiative’s website explains how it will work.

The Reproducibility Initiative is a new program to help scientists validate studies for publication or commercialization. Simply submit your study, and we’ll match you to one of our 1000+ expert providers for validation. Validations are conducted blind, on a fee-for-service basis.

Validated studies will receive a Certificate of Reproducibility acknowledging that their results have been independently reproduced as part of the Reproducibility Initiative. Researchers have the opportunity to publish the replicated results as an independent publication in the PLOS Reproducibility Collection, and can share their data via the figshare Reproducibility Collection repository.

In order to encourage researchers to submit their work to this Initiative, it would help if granting agencies required proof of reproducibility before funding is approved.

Unfortunately, the panel of experts does not at present appear to include expertise in psychology, the field where the problem of false positives seems to be most acute.

Comments

  1. Jared A says

    I am sorry to be harsh, but this is so naive.

    This is a pay service, no? In order to properly replicate a result, the cost must be enormous. This may not seem like an insurmountable problem in the US, where funding is relatively high, but that is not so in less wealthy nations. Many researchers are already at a major disadvantage because they can’t afford access to even mainstream journals. This seems like another level of punishment for researchers outside the US, western/central Europe, and eastern Asia.

    I have an even bigger issue with this. It looks like this service is currently aimed at pharmaceutical research. I can see how this might be feasible on the synthetic side, because so much of the work is about sampling an enormous multidimensional space: once a promising compound is identified, reproducing it might mean replicating just one or two experiments out of hundreds. It would still delay publication by a few weeks, but maybe some will find that reasonable (doubtful, in my opinion). But the idea of conducting a field trial for a drug twice is pretty crazy.

    And in practically every other field I am familiar with, I cannot see how you could ever contract out reproducing your results. Even if you could, I can think of many examples where I knew that results in my field were “wrong,” yet replicating the experiment would have given the same spurious result if you didn’t know what you were doing. It was frustratingly difficult, sometimes impossible, to set the record straight in these cases, because the people who cared were in one field while the mistake was obvious only to experts in a related field. Thus, “experts” tended to reproduce the wrong results, and the more rigorous results that debunked the status quo were difficult to publish because there was “no new science” (according to the editors). And when you did publish, many didn’t pay attention because it contradicted what they believed.

    I think that this type of mistake in science is quite common. Having a kluged-on army of mercenaries to double-check won’t fix it.

    “In order to encourage researchers to submit their work to this Initiative, it would help if granting agencies required proof of reproducibility before funding is approved.” Great. Let’s give the granting agencies even more ideas for draconian hurdles to jump through. Certainly having to describe in minute detail which grant funded each part of the paper is not enough. Anything to protect our PhD students from ever getting to see their advisors in the lab. You know, mentoring them.

  2. smrnda says

    I think I recall a psychology experiment by Simon Baron-Cohen which allegedly showed that boy babies were more likely to look at mobiles and girl babies were more likely to look at pictures of faces. Others were not only unable to replicate the findings, but pointed out that, given the age of the infants used, they would not have been able to turn their heads reliably on their own. I just recall that the initial report was presented as some kind of conclusive ‘evidence’ that male and female brains were hardwired differently, while the lack of reproducibility was confined to academic journals. So in some cases, even when people do follow up, there’s a huge problem on top of that with how journalists disseminate scientific knowledge to the public.
