Those sneaky forms of academic bias…


It’s Tuesday…that must mean it’s “Let’s point out flaws in the academic system!” day.

Here’s another example: some investigators did a study of the value of screening cancer patients for distress — they asked whether such screening actually contributed to patients’ feelings of well-being and willingness to follow medical recommendations, and whether it was cost-effective. Their answer was no on all counts. Kudos to the Journal of Clinical Oncology for publishing a negative result.

Raspberries to the Journal of Clinical Oncology for what they did next, though. They brought in a proponent of screening to write a dismissal of the study.

Hollingworth and colleagues were surely disappointed to discover that their article was accompanied by a negative editorial commentary. They had not been alerted or given an opportunity to offer a rebuttal. Their manuscript had made it through peer review, only to get whomped by a major proponent of screening, Linda Carlson.

After some faint praise, Carlson tried to neutralize the negative finding:

despite several strengths, major study design limitations may explain this result, temper interpretations, and inform further clinical implementation of screening for distress programs.

And if anyone tries to access Hollingworth’s article through Google Scholar or the Journal of Clinical Oncology website, they run smack into a paywall. Yet they can get through to Carlson’s commentary without obstruction and download a PDF for free. So it’s easier to access the trashing of the article than the article itself. Doubly unfair!

Why we need open access, reason #21035.

I also found it interesting that the critical opinion piece had references…but most were to the author herself, or to lab groups that had published with her. A citation circle-jerk like that is always a warning sign.

The opinion piece also talks at length about problems with the Hollingworth paper’s protocols. I think it’s important to point out such failings, but shouldn’t that be done by editors and reviewers before publication? And why nitpick at studies that disagree with you, while ignoring major methodological flaws in your own approach?

Try this experiment: ignore what is said in the abstracts of screening studies and instead check the results sections carefully. You will see that there are actually lots of negative studies out there, but they have been spun into positive studies. This is easily accomplished by ignoring the results obtained for primary outcomes at pre-specified follow-up periods. Authors can hedge their bets by assessing outcome with a full battery of measures at multiple timepoints and then choosing the findings that make screening look best. Or they can just ignore their actual results when writing abstracts and discussion sections.

Especially in their abstracts, articles report only the strongest results at the particular time point that makes the study look best. They emphasize unplanned subgroup analyses. Thus, they report that breast cancer patients did particularly well at 6 months, and ignore that this was not true at the 3- or 12-month follow-up. Clever authors interested in getting published ignore other groups of cancer patients who did not benefit, even when their actual hypothesis had been that all patients would show an improvement and breast cancer patients had not been singled out ahead of time. With lots of opportunities to lump, split, and selectively report the data, such results can be obtained by chance rather than fraud, but they won’t replicate.

Oh, boy, another of my peeves: fishing with statistics, gaming with your data.
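To make the mechanism concrete, here is a minimal simulation of my own sketching (not drawn from any of the studies discussed; the measures, timepoints, subgroups, and sample sizes are all made up). Screening has no effect at all in this toy trial, yet testing every measure at every timepoint in every subgroup still dredges up a few "significant" results:

```python
# Toy simulation of outcome-fishing: screening truly does nothing here,
# but testing every measure x timepoint x subgroup combination at
# alpha = 0.05 still produces "positive" findings by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_arm = 100                          # hypothetical patients per arm
measures = ["distress", "QoL", "adherence", "anxiety"]
timepoints = [3, 6, 12]                  # months of follow-up
subgroups = ["all", "breast", "prostate", "colorectal"]

hits = []
for m in measures:
    for t in timepoints:
        for g in subgroups:
            # Both arms drawn from the SAME distribution: no real effect.
            control = rng.normal(0.0, 1.0, n_per_arm)
            screened = rng.normal(0.0, 1.0, n_per_arm)
            _, p = stats.ttest_ind(screened, control)
            if p < 0.05:
                hits.append((m, f"{t} mo", g, round(p, 3)))

n_tests = len(measures) * len(timepoints) * len(subgroups)
print(f"{len(hits)} 'significant' results out of {n_tests} tests:")
for hit in hits:
    print("  ", hit)
```

With 48 tests at alpha = 0.05, you expect two or three hits by pure chance. Put only those in the abstract, and a completely null trial reads like a win for screening.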

Comments

  1. Raucous Indignation says

    An editorial rebuttal in a journal like JCO is just that, an editorial. We are trained to evaluate data as oncologists. Our field is very, VERY data driven. Most of our big questions have clear answers that were derived from high-level clinical trial data. I, for one, know, and so do most of my colleagues, that an editorial is worth no more than the paper on which it is printed. And most of us read JCO online.

  2. Kevin Kehres says

    Oh yes, fun with numbers is one of my favorite sports.

    What is it with abstracts not matching the actual results of the paper, though? Don’t the abstracts go through the same peer-review process as the paper? Or is the abstract added later by the authors?

    Can’t tell you how many times I’ve downloaded a paper (or paid $30 for it) based on an interesting abstract, only to find that the actual data are shit in a rusty can.

  3. dianne says

    Most of our big questions have clear answers that were derived from high level clinical trial data.

    Sort of. On a good day. After you filter for trial design bias. Assuming no active fraud. Sorry, in a cynical mood today.

  4. Raucous Indignation says

    Yes, Dianne, you are correct in all those qualifications. But I was formally taught about those things. The critical appraisal of data should include acknowledgement of those uncertainties. Alas, I do not have the time to engage you in an extended discussion of epistemology.

  5. Kristof says

    My little story…
    Couple of years ago I did some research, nothing too fancy but not particularly bad. I could honestly say it expanded knowledge on the subject by a tiny bit. I found a journal in an appropriate area which had even published research similar to mine, in both topic and level (although mine went into more detail and covered slightly different aspects). I sent my paper and… the response I got was… weird… It was something along the lines of “oh, that’s nice, but Very Important Professor Whatshisface has already published same (sic!) research nearly 15 years ago and it was absolutely awesome and yours is actually total shit”. I thought that was possible and spent a good couple of days trying to find any of this guy’s papers for future reference and found… nothing on that particular subject! Even when he was working on the same plant species, he was not working on the same protein family. That made me wonder whether my reviewer was the Very Important Professor himself, blowing his own trumpet, or – more likely – a brown-nosing subordinate of his. Or a team actually working on a similar subject, trying to get rid of competitors… Unfortunately my boss back then was a total coward and told me not to challenge the review, e.g. by asking for references. (In the end I published it in a different journal.)

  6. garnetstar says

    I feel for you, Kristof. I think you’re quite right on why you got that response.

    The three responses to work that’s new and/or threatening to researchers who are consumed by self-importance are 1) “I did that years ago”, 2) “You didn’t reference my extremely important prior work”, and/or 3) “Your work isn’t any good and isn’t worth publishing.”

    But where do these editors get off publishing a negative editorial without letting the authors respond in the same issue? Some guy pulled that on me once, and, though I shamed him into letting me publish my rebuttal in the next issue, that seemingly-unrefuted editorial is still out there, and casual readers may not ever see the rebuttal.

    I asked the editor whether he would have pulled the same stunt on a male scientist, to which he replied “Of course I haven’t any gender bias, I would have done that to a male as well”, so I congratulated him on being an equal-opportunity bad editor.

  7. ragdish says

    As I make my foray into academic medicine, I was told to be prepared for pile-driving angry rebuttals in the editorials. And after a single rejection my paper got accepted in a peer-reviewed journal and then holy crap!!! I have never been raked over the coals like that in my entire life. But does it matter, even though I got published and even if the rebuttal is total BS? Hell yes!! I was told that negative rebuttals can influence reviewers for subsequent grants. The ever-so-coveted golden fleece of awards is the R01 funded by the NIH. Not that I even have a wing and a prayer of getting that prize, but negative reviews, even if they are total BS, certainly don’t help.

    And for the record, I prefer the pile-driving rebuttals I get from FTB. You folks are such sweethearts.

  8. dianne says

    You folks are such sweethearts.

    Hey! We’re evil baby-eating atheists, not sweethearts! And I’m jealous. I practically never get angry rebuttals to my articles.

  9. shockna says

    On the topic of paywalls: Can someone explain to me why pre-print servers a la arxiv in the physics/astronomy community aren’t commonly used in the life sciences? I know of a few attempts at it, but I’ve never understood why it hasn’t caught on like wildfire.

  10. Rich Woods says

    I know of a few attempts at it, but I’ve never understood why it hasn’t caught on like wildfire.

    Biologists keep waiting for it to evolve, while physicists expect 90% of it to be superseded within their productive lifetime.

    Joking. Obviously.

    What?

  11. vereverum says

    @ shockna #9
    Just checked arxiv: quantitative biology has 791 articles from 2014, while astrophysics alone (one of a multitude of physics subgroups) has 6982. Part of the problem may be the kingdom issue expressed in PZ’s post today: biology is a hard problem.

    The article points out that one of the things that has made tracking down the genetic cause of this disorder difficult is academic competition. Lots of people are born with novel genetic disorders, and they go to their high-powered geneticist/MD, and they get part of or their entire genome sequenced, and then the sequence is kept private. This is now the doctor’s discovery: making it open knowledge would make it likely that someone else would use it and publish it, and that the doctor wouldn’t get credit for it. That doesn’t help patients, but it does help careers.

  12. blbt5 says

    Still surprised when an eminent scientist says during a presentation: “There’s a difference, but it’s not statistically significant”.
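    To see why that phrasing undercuts itself, here’s a toy example (all the numbers are made up): two samples drawn from the identical distribution almost always show some numeric difference, and a large p-value means you can’t distinguish the observed gap from that noise.

    ```python
    # Two samples from the SAME distribution: there will be a numeric
    # "difference", but the t-test (rightly) refuses to call it real.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.normal(50, 10, 20)   # hypothetical scores, arm A
    b = rng.normal(50, 10, 20)   # arm B, drawn identically

    _, p = stats.ttest_ind(a, b)
    print(f"mean difference: {a.mean() - b.mean():.2f}, p = {p:.2f}")
    # A nonzero difference with a big p-value is exactly what noise looks
    # like; "a difference that isn't significant" is a claim about nothing.
    ```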