Rooting out scientific fraud with cash rewards


The reward structure in American universities, especially in the sciences, puts a great deal of pressure on scientists. To get research grants, which are an important measure used in promotions and tenure decisions, scientists need to publish a lot of papers and show that they report major findings. This has resulted in some of them rushing to print without performing the due diligence needed to make sure that their results are robust and repeatable. In some cases, this is just sloppiness or allowing their prejudices to unduly guide the interpretation of results, though that is still dishonorable. In the more serious cases, fraud is involved, either by deliberately massaging data to get the required result or by actually manufacturing the data.

Science has long been based on trust because it takes a lot of effort to reproduce the work of others, and the custom has been to build on that work, not to check it. It is only when some anomaly turns up that people comb through the work to see what might have gone wrong. Because of the prevalence of recent scandals, there are now efforts underway to put in place mechanisms to root out problems, and one of them involves giving cash rewards to those who investigate and reveal such cases.

Scientific-misconduct accusations are leading to retractions of high-profile papers, forcing reckonings within fields and ending professorships, even presidencies. But there’s no telling how widespread errors are in research: As it is, they’re largely brought to light by unpaid volunteers.

A program launching this month is hoping to shake up that incentive structure. Backed by 250,000 Swiss francs, or roughly $285,000, in funding from the University of Bern, in Switzerland, it will pay reviewers to root out mistakes in influential papers, beginning with a handful in psychology. The more errors found, and the more severe they are, the more the sleuths stand to make.

“When I build my research on top of something that’s erroneous and I don’t know about it, that’s a cost because my research is built on false assumptions,” said Malte Elson, a psychologist at the University of Bern who is leading the new program with Ruben C. Arslan, a postdoctoral researcher at the University of Leipzig, in Germany.

About 20 percent of genetics papers are thought to contain errors introduced by Microsoft Excel, while an estimated one in four papers in general science journals has incorrect citations. Errors can be unintentional, but 2 percent of surveyed scientists admit to the more serious charges of fabricating or falsifying data. In just the last year, researchers at the Dana-Farber Cancer Institute, Harvard Medical School, Stanford University, and the University of Rochester, to name a few, have faced scrutiny over their work.

Over the next four years, the ERROR program — short for Estimating the Reliability and Robustness of Research — will aim to pay experts to scrutinize 100 widely cited papers that fit their technical or subject expertise. Psychology will be first up, but the organizers hope to branch out to other subjects, like economics, political science, and medicine.

Errors can take many forms, from differences between how experiments were done and how they were reported, to discrepancies between analyses and conclusions. Some errors could be clear miscalculations, and others more subjective and context-dependent, the organizers acknowledge, so reviewers will be allowed to determine how to look for them. They’ll also be allowed to ask the authors for help in fact-checking. Each will generate a report of any errors found, which will eventually be posted publicly.

A crucial caveat: A paper will be reviewed only if its authors agree. That’s because without full access to the underlying data, code, and other materials, there will always be questions the reviewer cannot answer, Elson said. “At this point, many people will be skeptical, and they will maybe rightfully think they can only lose if they say yes to this — all they do is put their paper at risk,” he said.

On the other hand, the prospect of a reputational boost may attract participants. “People can then point to the error report that will be public and say, ‘They checked my work and it’s fine,’” Elson said.

It is sad that it has come to this.

The requirement that authors must agree to have their papers reviewed this way is problematic. It would be better if journals required authors to allow such examination as a condition of considering their paper for publication.

Perhaps just the fear of being examined will make researchers more careful about what they publish.
