Here we go again. Another paper, this time in Radiology Case Reports, has been published containing obvious AI-generated text. I haven’t read the paper, since it’s been pulled, but it’s easy to see where it went wrong.
It begins:
In summary, the management of bilateral iatrogenic I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model.
That is enraging. The author of this paper is churning papers out so heedlessly, with so little time or care, that they’ve given up writing and have now given up even reading their own work. Back in the day when I was publishing with coauthors, we were meticulous to the point of tedium in proofreading: we’d have long sessions where we’d read alternate sentences of the paper aloud to each other to catch typos and review the content. Ever since, I’ve assumed that most authors follow some variation of that procedure. I was wrong.
If I knew an author was this sloppy and lazy in their work, I wouldn’t trust anything they ever wrote. How can you put all that thought and effort into the science, and then just hand off the communication of that science to an unthinking machine? It suggests to me that as little thought went into the research as into the writing.
No wonder there is such a glut of scientific literature.