There is a problem in the current science climate: original and exciting new research is rewarded, while whether the results are actually true is valued far less. I have written before about the problem of journals publishing papers whose results don’t hold up under subsequent examination, and about how difficult it is to get those journals to publish articles that contradict earlier ones.
This poses a danger to the self-correcting nature of science since wrong information can get entrenched, and, as Carl Zimmer points out, this can be quite harmful.
C. Glenn Begley, who spent a decade in charge of global cancer research at the biotech giant Amgen, recently dispatched 100 Amgen scientists to replicate 53 landmark experiments in cancer—the kind of experiments that lead pharmaceutical companies to sink millions of dollars into turning the results into a drug. In March Begley published the results: they failed to replicate 47 of them.
Zimmer reports on an attempt known as the Reproducibility Initiative that seeks to address this deficiency by providing researchers with a way to gain credibility by showing that their results have been independently reproduced. The Initiative’s website explains how it will work.
The Reproducibility Initiative is a new program to help scientists validate studies for publication or commercialization. Simply submit your study, and we’ll match you to one of our 1000+ expert providers for validation. Validations are conducted blind, on a fee-for-service basis.
Validated studies will receive a Certificate of Reproducibility acknowledging that their results have been independently reproduced as part of the Reproducibility Initiative. Researchers have the opportunity to publish the replicated results as an independent publication in the PLOS Reproducibility Collection, and can share their data via the figshare Reproducibility Collection repository.
To encourage researchers to submit their work to this Initiative, it would help if grant agencies required proof of reproducibility before approving funding.
Unfortunately, the panel of experts does not at present seem to include expertise in psychology, the field where the problem of false positives appears to be most acute.