Not so fast

I like PLoS ONE. I know a lot of scientists don’t. I think there’s a niche for what my PhD advisor called ‘bricks’: papers that may not be groundbreaking, but present rigorous results that contribute to building a larger structure (he was, BTW, describing one of my papers, and it wasn’t a compliment). It’s also possible that research that doesn’t seem obviously important when it’s done turns out to have big implications later. PLoS ONE explicitly aims to ignore the ‘impact’ of a paper when deciding whether to accept it, focusing only on the rigor of the results:

PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership, who are the most qualified to determine what is of interest to them.

But a couple of recent developments are worrying. First, as I’ve previously written, a recent article reporting a phylogenetic tree for eukaryotes was published in a form that never should have survived peer review (“A cautionary tale on reading phylogenetic trees,” “PLoS ONE responds”). The article contains numerous misinterpretations of the tree, unexplained contradictions in the inferred divergence times, and, most importantly, a choice of outgroup that pretty much invalidates all of the phylogenetic inferences.

I have contacted the editors by Twitter and by email, and so far I haven’t gotten much more than “we’re looking into it.” I am very interested to see what the journal does about this, because, as I said before,

The only thing that separates a high-volume, open access journal like PLoS ONE from the dark underbelly of scholarly publishing is a rigorous peer review process.

Now there’s a whole new reason to worry.

Okay, that line above is exactly how far I got before reading the paper I’m about to write about. I’m leaving it in as a caution against rushing to judgement.

From the abstract, I was ready to castigate PLoS ONE over favoritism bestowed by handling editors on their coauthors. After reading the paper, I don’t think it shows that at all.

The paper I’m talking about is by Emre Sarigöl, David Garcia, Ingo Scholtes, and Frank Schweitzer and appears in the journal Scientometrics. The abstract does seem pretty worrisome. Based on a sample of over 100,000 PLoS ONE papers,

Our analysis reveals (1) that editors handle papers co-authored by previous collaborators significantly more often than expected at random, and (2) that such prior co-author relations are significantly related to faster manuscript handling…Our findings show that, even when correcting for other factors like time, experience, and performance, prior co-authorship relations have a large and significant influence on manuscript handling times, speeding up the editorial decision on average by 19 days.

That sounds pretty bad (the original title for this post was “This is not good”). But the devil is in the details, and on more sober reflection I don’t think that their results show clear evidence of favoritism or bias.

The analyses are, to the best of my ability to judge something this far outside my wheelhouse, exceptionally thorough. They consider all kinds of possible confounding factors that I never would have thought to check and subject all the data to rigorous statistical tests. Their conclusions are scrupulously conservative, and I do think that they’ve convincingly shown what they claim to have shown. I don’t think, though, that any of this adds up to evidence of favoritism.

First of all, it’s important to note that the article doesn’t address the acceptance of manuscripts at all:

Unfortunately, using the publicly available data introduced above, we cannot investigate whether handling editors are more likely to accept submissions from previous co-authors than from other authors. This is due to the fact that we do not have data on rejected manuscripts.

What it does address is ‘handling time’, which the authors define as

the time span from initial submission to final acceptance.

Of course, bias in handling time would still be a problem. Everyone wants their papers published faster, and if editors were fast-tracking their friends’ manuscripts, there’s no question that this would be unfair. And it does seem clear that coauthors’ manuscripts are accepted faster:


Figure 3 from Sarigöl et al. 2017. Kernel density plots (bandwidth 0.8) of the conditional distributions of W given D = 1 (red; the handling editor and authors co-authored one or more papers) and D > 1 (blue; the handling editor and authors have not co-authored a paper). There is a significant shift in medians of 19 days.

Nineteen days faster, on average (it’s actually a difference in medians, but ‘on median’ sounds weird). Among the many confounding factors the authors considered was the possibility that articles submitted by editors’ coauthors were of higher quality and therefore breezed through the review process.
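To make the ‘shift in medians’ idea concrete, here’s a small sketch with synthetic data (not the actual Sarigöl et al. dataset; the distribution parameters are made up to roughly match the skewed, long-tailed shape visible in Figure 3):

```python
import numpy as np

# Synthetic illustration only -- NOT the Sarigöl et al. data.
# Handling times are right-skewed (medians around 75-90 days, with a long
# tail out past 250 days), so lognormal distributions are a plausible stand-in.
rng = np.random.default_rng(42)

# D = 1: the handling editor previously co-authored with the submitting authors
coauthor_times = rng.lognormal(mean=np.log(75), sigma=0.5, size=5000)
# D > 1: no prior co-authorship between editor and authors
other_times = rng.lognormal(mean=np.log(94), sigma=0.5, size=5000)

shift = np.median(other_times) - np.median(coauthor_times)
print(f"median handling time (D = 1): {np.median(coauthor_times):.0f} days")
print(f"median handling time (D > 1): {np.median(other_times):.0f} days")
print(f"median shift: {shift:.0f} days")
```

The point of comparing medians rather than means is exactly the long tail: a handful of 250-day manuscripts would drag the mean around, while the median stays put.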

It is tempting to attribute the fact that submissions from previous co-authors are accepted significantly faster to social biases or favoritism. However, a simple alternative explanation could be that these publications are accepted faster because they are, in some objective sense, better. A reason for this could be that handling editors, who are likely to be reputed and experienced scholars, are likely to have co-authored articles with other reputed scientists. As such, the conjectured bias could, in fact, be a quality bias rather than a social bias that is due to social relations between authors and editor

Of course ‘quality’ is difficult to measure, so they settled for some easier metrics, such as number of times cited and downloaded. It turns out that handling time does decrease with increased quality metrics, but not enough to explain the whole effect. Even controlling for article quality (or rather proxies for quality), handling time really is shorter when the editors and authors have co-authored one or more papers.

Editors handle the submissions of previous co-authors (D = 1) significantly faster, with a reduction of 19 days on average, as compared to the rest (D > 1). The reduced handling time for previous co-authors is a robust finding, even when controlling for other factors, such as the quality of the submission, the experience of the editor, and the topical similarity. This means that the shorter handling time of the submissions of previous co-authors cannot be explained by the fact that these submissions are of better quality or more related to the expertise of the handling editor.

So how is this not evidence of bias? The problem is with the concept of ‘handling time’. The authors define handling time as

…the time taken between the submission and acceptance of a manuscript.

See the problem? In the quote above, the authors say “Editors handle” manuscripts by collaborators faster, but editors aren’t the only people who contribute to handling time. Look back at Figure 3: the median handling times are around 75-90 days, but a substantial number of papers take 250 days or more! Do you think the manuscript is sitting on the editor’s desk that whole time? Of course not.

For the majority of the handling time, in most cases, the manuscript will be in the hands of the reviewers and/or the authors themselves. Reviewers are usually expected to return their reviews within about three weeks, but this soft deadline is regularly exceeded, sometimes by months (and of course it only takes one slacker…it’s the slowest reviewer that is the rate-limiting step). Differences in reviewer handling times could conceivably be explained by bias: perhaps editors apply more pressure on reviewers of their friends’ manuscripts to get their reviews done on time. I think this is a stretch, though; editors have very little effective power over reviewers.

And there’s a third handling time here that’s unlikely to be explained by editor bias: that of the authors themselves. Scientific manuscripts are almost never accepted ‘as is’ the first time they’re submitted. There are almost always revisions. Unless it’s just me (it’s not just me). Sometimes the revisions are minor, but the journal usually won’t say ‘accepted’ until they’re done. Often they are major, requiring substantial rewriting, new analyses, or even additional experiments. Obviously, the time required to complete the revisions is similarly variable.

The article mostly assumes that differences in handling time are due to editors; what about the time authors take to revise? Could this explain any of the 19-day difference in median handling times? It could, if authors get their revisions done faster when they know the handling editor. I could easily imagine that this might be the case; the authors might be motivated to prioritize the revision when the editor is a senior colleague.

The authors have lumped the activities of three separate groups of people into one metric, ‘handling time’. This is fine; they worked with the information that was available. The problem is that they then attribute differences in handling time to just one of those groups; the possibility that reviewers and/or authors could have been responsible for some of the differences is not considered.

This is a pretty minor point, but for me it’s enough to question the conclusion of editorial bias. I’ll be convinced when it’s shown that the editors are actually the ones responsible for the differences in handling time. That’s not to say the results aren’t troubling. Bias is entirely plausible, and this raises the question of why editors are handling their co-authors’ manuscripts in the first place. In fact, they are selectively handling manuscripts from co-authors:

The case that editors handle a submission of previous co-authors (D = 1) occurs more than twenty times more often than expected at random. Even the case that the handling editor and the submitting authors have a common co-author in another publication (D = 2) occurs more than three times more often than at random.

There’s nothing nefarious about this; the editor with whom an author has previously co-authored a paper is the most likely to be expert in the subject matter. But it does raise the concerning possibility of conflicts of interest, as the authors point out:

This finding points to a rather strong social relation coming from previous collaborations. It should be less surprising when keeping in mind that authors and handling editors often belong to the same scientific community. Still, it bears potential conflicts of interest when handling the submissions of close collaborators.

It’s also unnecessary. PLoS ONE’s editorial board has over 6,000 editors. It ought to be possible to find one who has relevant expertise, but who has not collaborated with a manuscript’s authors.
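A back-of-envelope sketch puts the ‘twenty times more often than at random’ figure in perspective. The numbers below are hypothetical (I’m assuming, for illustration, that a typical author has co-authored with a handful of the ~6,000 editors); only the 20× enrichment and the board size come from the sources above:

```python
# Hypothetical illustration: suppose an author has previously co-authored
# papers with 5 of PLoS ONE's ~6,000 academic editors.
n_editors = 6000
coauthor_editors = 5  # assumed number, for illustration only

# Under random editor assignment, the chance of drawing a prior co-author:
p_random = coauthor_editors / n_editors
print(f"random expectation: {p_random:.4%} of submissions")

# Sarigöl et al. report the D = 1 case occurring more than 20x more often
# than expected at random:
p_observed_floor = 20 * p_random
print(f">20x enrichment implies at least {p_observed_floor:.3%} of submissions")
```

Even with a 20-fold enrichment, the absolute rate stays small under these assumptions, which is consistent with the benign explanation: the co-authoring editor is often simply the best-matched expert, so assignments tilt that way without anything nefarious going on.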


Stable links:

Sarigöl E, Garcia D, Scholtes I, Schweitzer F. 2017 Quantifying the effect of editor–author relations on manuscript handling times. Scientometrics 113, 609–631. doi:10.1007/s11192-017-2309-y

