That journals can sometimes be fooled into publishing nonsense papers is well established from a few high-profile cases, but those tended to be seen as isolated incidents, done deliberately, with the papers carefully constructed to prove a point.
Now the journal Nature reports on fake publications on a much larger scale, using computer-generated papers. More than 120 papers were published, not in peer-reviewed journals, but in conference proceedings published by IEEE and Springer. Conference proceedings face widely varying, and usually lighter, levels of review before publication, but the fraud is disturbing nonetheless because Springer says that these proceedings were supposed to have been peer-reviewed. IEEE has not said whether its papers were peer-reviewed.
The fake papers were discovered by computer scientist Cyril Labbé of Joseph Fourier University in Grenoble, France, who says that the papers were generated using a freely available software program.
Labbé developed a way to automatically detect manuscripts composed by a piece of software called SCIgen, which randomly combines strings of words to produce fake computer-science papers. SCIgen was invented in 2005 by researchers at the Massachusetts Institute of Technology (MIT) in Cambridge to prove that conferences would accept meaningless papers — and, as they put it, “to maximize amusement” (see “Computer conference welcomes gobbledegook paper”). A related program generates random physics manuscript titles on the satirical website arXiv vs. snarXiv. SCIgen is free to download and use, and it is unclear how many people have done so, or for what purposes. SCIgen’s output has occasionally popped up at conferences, when researchers have submitted nonsense papers and then revealed the trick.
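The basic trick behind generators like SCIgen is a context-free grammar: a table of templates that is expanded recursively, with a random choice made at each step. The sketch below is purely illustrative — the grammar, the made-up system names, and the function names are my own, not SCIgen's actual rules — but it shows how a handful of templates can churn out endless grammatical-looking nonsense.

```python
import random

# A toy context-free grammar in the spirit of SCIgen (illustrative only;
# not SCIgen's actual grammar). UPPERCASE keys are nonterminals, each
# mapping to a list of possible expansions; plain words are terminals.
GRAMMAR = {
    "SENTENCE": [
        ["We", "present", "SYSTEM", ",", "a", "ADJ", "tool", "for", "TASK", "."],
        ["Our", "ADJ", "approach", "to", "TASK", "builds", "on", "SYSTEM", "."],
    ],
    "SYSTEM": [["Flob"], ["Quark"], ["Zanzibar"]],
    "ADJ": [["scalable"], ["probabilistic"], ["pervasive"], ["metamorphic"]],
    "TASK": [
        ["the", "analysis", "of", "red-black", "trees"],
        ["the", "refinement", "of", "Lamport", "clocks"],
    ],
}

def expand(symbol, rng):
    """Recursively expand a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word: emit as-is
    words = []
    for part in rng.choice(GRAMMAR[symbol]):  # pick one production at random
        words.extend(expand(part, rng))
    return words

def generate_sentence(rng=None):
    """Produce one random 'paper-like' sentence from the grammar."""
    rng = rng or random.Random()
    text = ""
    for w in expand("SENTENCE", rng):
        # Keep punctuation attached to the preceding token.
        text += w if w in {",", "."} else ((" " + w) if text else w)
    return text
```

Because every choice is independent and local, the output is always syntactically plausible but semantically empty — which is exactly why a reviewer skimming for structure rather than meaning can be fooled, and why Labbé could detect such papers automatically by looking for the grammar's statistical fingerprints.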
…Labbé says that the latest discovery is merely one symptom of a “spamming war started at the heart of science” in which researchers feel pressured to rush out papers to publish as much as possible.
Some of the authors deny having anything to do with the papers published under their names, while others have not responded to requests for comment.
David Marjanović says
…But these aren’t real conferences. They’re spam. The “Computer conference welcomes gobbledegook paper” article ends in:
I myself regularly receive “invitations” to such “conferences” with ridiculously broad topics; invariably, I haven’t heard of any of the “organizers” or any of the people they’ve “asked” to be “keynote speakers”, and even though the topics are so broad, they can rarely be claimed to include what I work on! They harvest my e-mail address from my published papers, as do the spammers who “invite” me to publish in new “journals”.
Their goal is to collect the “participation fee”. The fake journals tend to claim to be open-access author-pays journals and to offer me a first-time discount on the publication fee…
Usually they address me as “Dear Professor/Researcher”. It’s like the “Dear Sir/Ma” used by the people who desperately want to foist tens of megabucks on me that they inherited from “cocoa and GOLD merchants” who were “poisoned by business associates” anywhere from Ghana to Nigeria.
The new paper reveals that Springer (one of the big four science publishers) and the Institute of Electrical and Electronics Engineers have repeatedly been fooled into publishing the “proceedings” of such “conferences”, or perhaps simply not cared. That is a scandal, but a different one than your post makes it sound like.
Again, these are fake conferences. From the new paper:
and:
Looks like they weren’t peer-reviewed, because the whole conference was a scam in the first place – each one of them.
Yeah, most or all of those journals were scams as I describe above.
Mano Singham says
@David,
So, fake papers submitted to fake conferences. How appropriate.
jamessweet says
I don’t review papers, but I do participate on a screening panel at the company where I work, where we decide which patent proposals will actually get sent to the lawyers to be patented (which is expensive). The process is pretty different in practice from peer review, but it’s a similar idea.
Once I got a proposal that, while clearly composed by a human, was completely unworkable. I think what happened is that in a meeting somewhere, somebody noticed that the description of algorithm X used some of the same words as problem Y, and said, “Hey, we should apply X to Y! You there, write up an invention proposal!” Except the words happened to mean different things in these different contexts, and the combination ended up being gibberish.
Here’s the thing: The first time I read it, I said to myself, “Oh man, that one is highly technical… I’ll come back to it later.” The next two times I thought, “Wow, this is quite subtle. It will take me some time to grasp the math.” It wasn’t until about the fourth read when I realized the problem was not with my understanding, but rather with the proposal itself. Even then, I had a co-worker read it over to make sure I wasn’t missing some key point. I think I spent more time with the nonsense proposal than I do with most serious ones.
I can definitely sympathize with these faulty peer reviewers, is what I am saying. If I was on a tight schedule, and all that was expected of me was a yea or a nay, and especially if I thought one or two other peer reviewers were going to be double-checking the paper, I might be pretty tempted to say, “Well, I don’t get it, but that’s because I’m not familiar with that particular algorithm. Whatever, I’m sure they did due diligence…” And of course that would have been a mistaken assumption, but a tempting one nonetheless!
astrosmashley says
surely you know about this
http://www.elsewhere.org/pomo/
Sokal’s postmodernism generator: a new and unique paper with every click.
Mano Singham says
@jamessweet,
I have reviewed papers and proposals and it is not easy. It is time-consuming and difficult, exacerbated by the fact that many scientists are simply not good writers. So I too am sympathetic to papers passing through the filter, unless glaring errors were overlooked.
I think part of the problem is that reviewers may be reluctant to tell authors “Look, I just don’t understand what you are trying to say and/or do. Please rewrite it so that it is more clear to the reader.”
jamessweet says
Yes, precisely. There are two parts of our invention review process that I think help immensely:
First, the primary reviewer is supposed to come to the panel with a short (usually 3-5 sentences) summary of what the invention is all about, in their own words. You can kinda fake it, and I’ve seen cases where the inventor basically writes the summary… but forcing the reviewer to at least attempt a summary makes that less likely, I think. With the example at hand, if all I had to do was check a box, I might have been tempted to just shrug my shoulders and move on (especially if I was really busy). But I knew I didn’t understand it well enough to write a summary, so that sort of forced me to either make a conscious decision to circumvent the process, or else press on until I understood.
Second, you are supposed to talk to the inventor(s), at minimum via a brief email exchange, to make sure you understand. This, also, doesn’t always happen, but then there is at least a record that the process wasn’t followed to the letter.
Alas, the latter is probably not practical in academic peer review, since getting all of these disparate people in contact would be a logistical nightmare. (Even in the example I am giving, where everyone works at the same company, logistics sometimes sabotage that part of the process.) It’s too bad, because what a difference that could make…
David Marjanović says
Exactly.