It’s Tuesday…that must mean it’s “Let’s point out flaws in the academic system!” day.
Here’s another example: some investigators did a study of the value of screening cancer patients for distress. They asked whether such screening actually contributed to patients’ feelings of well-being and willingness to follow medical recommendations, and whether it was cost-effective. Their answer was no on all counts. Kudos to the Journal of Clinical Oncology for publishing a negative result.
Raspberries to the Journal of Clinical Oncology for what they did next, though. They brought in a proponent of screening to write a dismissal of the study.
Hollingworth and colleagues were surely disappointed to discover that their article was accompanied by a negative editorial commentary. They had not been alerted or given an opportunity to offer a rebuttal. Their manuscript had made it through peer review, only to get whomped by a major proponent of screening, Linda Carlson.
After some faint praise, Carlson tried to neutralize the negative finding:

“despite several strengths, major study design limitations may explain this result, temper interpretations, and inform further clinical implementation of screening for distress programs.”
And if anyone tries to access Hollingworth’s article through Google Scholar or the Journal of Clinical Oncology website, they run smack into a paywall. Yet they can get through to Carlson’s commentary without obstruction and download a PDF for free. So it’s easier to access the trashing of the article than the article itself. Doubly unfair!
Why we need open access, reason #21035.
I also found it interesting that the critical opinion piece had references…but most were to the author herself, or to lab groups that had published with her. A circle-jerk in the citations is always a warning sign.
The opinion piece also talks at length about problems with the Hollingworth paper’s protocols. I think it’s important to point out such failings, but shouldn’t editors and reviewers raise them before publication? And why nitpick at studies that disagree with you while ignoring major methodological flaws in your own approach?
Try this experiment: ignore what is said in the abstracts of screening studies and instead check the results sections carefully. You will see that there are actually lots of negative studies out there, but they have been spun into positive ones. This is easily accomplished: authors ignore the results obtained for primary outcomes at pre-specified follow-up periods. They can hedge their bets by assessing outcomes with a full battery of measures at multiple timepoints, then choosing whichever findings make screening look best. Or they can simply ignore their actual results when writing abstracts and discussion sections.
Especially in their abstracts, articles report only the strongest results, at whichever time point makes the study look best. They emphasize unplanned subgroup analyses. Thus, they report that breast cancer patients did particularly well at 6 months and ignore that this was not true at the 3- or 12-month follow-up. Clever authors interested in getting published ignore the other groups of cancer patients who did not benefit, even when their actual hypothesis had been that all patients would improve and breast cancer patients had not been singled out ahead of time. With so many opportunities to lump, split, and selectively report the data, such results can be obtained by chance rather than fraud, but they won’t replicate.
Oh, boy, another of my peeves: fishing with statistics, gaming with your data.
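To make the arithmetic concrete, here is a minimal simulation sketch in Python. The design parameters (4 outcome measures, 3 timepoints, 3 subgroups, 50 patients per arm) are made-up illustrations, not taken from any of the studies discussed above. It estimates how often a trial of a completely ineffective screening program still hands its authors at least one “significant” comparison to headline.

```python
# A minimal sketch: how often does a null trial yield at least one
# "significant" result by chance, once it is sliced by outcome measure,
# timepoint, and subgroup? All design numbers below are illustrative
# assumptions, not drawn from any particular screening study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

N_TRIALS = 2000    # simulated null trials
N_PER_ARM = 50     # patients per arm within each comparison
OUTCOMES = 4       # e.g., distress, well-being, adherence, quality of life
TIMEPOINTS = 3     # e.g., 3, 6, and 12 months
SUBGROUPS = 3      # e.g., breast, prostate, lung cancer patients
ALPHA = 0.05

trials_with_a_win = 0
for _ in range(N_TRIALS):
    found_something = False
    for _ in range(OUTCOMES * TIMEPOINTS * SUBGROUPS):
        # Screening has NO true effect: both arms are drawn
        # from the same distribution.
        screened = rng.normal(0.0, 1.0, N_PER_ARM)
        usual_care = rng.normal(0.0, 1.0, N_PER_ARM)
        _, p = stats.ttest_ind(screened, usual_care)
        if p < ALPHA:
            found_something = True
            break  # one "win" is all an abstract needs
    trials_with_a_win += found_something

print(f"Comparisons per trial: {OUTCOMES * TIMEPOINTS * SUBGROUPS}")
print(f"Null trials with at least one p < {ALPHA}: "
      f"{trials_with_a_win / N_TRIALS:.0%}")
# With 36 independent comparisons, roughly 1 - 0.95**36, about 84%,
# of null trials hand the authors something "positive" to report.
```

In a real trial, repeated measures on the same patients are correlated, so the true false-positive rate would be somewhat lower than this independent-draws sketch suggests. But the basic point stands: fish across enough slices of the data and you are almost guaranteed a catch.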