In praise of rudeness


You may have heard that the replicability of much biomedical research has been called into question, in particular by the Ioannidis paper in 2005, which demonstrated that a heck of a lot of junk got into print, largely as a consequence of statistical noise being treated as significant for a host of reasons. It was a bit of a wake-up call (unfortunately, most people just rolled over and smacked the snooze button), but one person who is on full alert is Dan Graur. Graur is being impolite again, and has a recommendation for fixing the problem.

Interestingly, the rate with which junk claims are published in the field of experimental physics is nowhere near the stratospheric rates that are found in biology and medicine. Why the difference? Dr. Ioannidis thinks that there are two reasons for the difference. First, it seems that in the biomedical research community there exists an aversion to publish negative results, especially negative results of failed replications.

Second, it seems that there are sociological differences between the physics community and the biomedical community. In physics, there seems to be a higher “community standard for shaming reputations.” If people step out of line and make unsubstantiated claims, they are shamed in public.

Wait — I have to call a foul on the play. The Ioannidis paper certainly does make the first point, but the second… nope. The quoted phrase about shaming doesn’t appear anywhere in the source Graur links to — I’d like to see where it came from.

I’m inclined to agree with it, except that I don’t have any evidence of any public shaming going on in the physics community. I’d like to know more about how physicists police their own than is given here.

This is the abstract from Ioannidis — you can easily see that the focus is on removing bias and improving statistics; there isn’t anything about using shame as a tool.

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.

But don’t let that stop Graur, he’s on a roll!

In biomedicine, the search for truth is no longer a virtue, politeness is. According to an editorial in Nature Methods that singled out our work for criticism, one should avoid “harsh and offensive words” at all costs. “Civility in discourse is essential,” proclaim the editors of Nature Methods. Do not shame reputations! Well… by not shaming reputations, we have built a field of study where bombast thumps substance, and where wasters of public money are rewarded. By paying attention to “manners” we have prostituted science to a degree where “most published research findings are false.”

Rudeness has nothing to do with science. Science is not about abiding by a code of behavior put forward by Miss Manners. In criticizing ENCODE, our style of writing was meant to bring to the attention of the public a problem generated by the ENCODE propaganda barrage.

Face it, ENCODE for creationists was like “water memory” for people believing in homeopathic medicine. ENCODE deserved the same treatment as the “water memory” paper that was published in Nature by Jacques Benveniste. ENCODE needed to be shredded to pieces in a manner similar to the way Great Randy [the Amazing Randi] shredded “water memory.” The Great Randy [the Amazing Randi] was so rude, that his criticism was likened by Benveniste to a “Salem witch hunt.”

In science, sometimes a strong rude voice is needed to fight self-promotion and self-delusion. My favorite example of a rude voice concerns Theodore Roosevelt and his refutation of Abbott Thayer’s theory on all coloration in nature being “concealing” (e.g., the famous flamingoes in the sunset). Thayer’s book was shredded to pieces by Theodore Roosevelt (one year after completing his presidency). I wish my mastery of the English language would allow me to emulate Roosevelt’s viciousness. Alas, English is my third language.

We need strong and impolite voices to fight “stem cells created by acid baths,” “cold fusion,” “arsenic-based life,” and other feats of self-delusion. People still believe that Svante Pääbo sequenced ancient DNA from an Egyptian mummy in 1985. Why? Because there were no strong and impolite rebuttals. Every criticism was whispered and “sotto voce.” Science has become a collection of Yes Men (and Women) afraid of the big shots and their own shadows.

I agree, and I’d like to see more vigorous responses to the boring lot of trivial phenomenology that is cluttering up some of the journals I like to read. We’re getting to a point where the literature is swamped with kipple that could benefit from some housecleaning, and a little less emphasis on publishing for the sake of publishing. But we rely so often on the quantity of articles published as a metric for academic success, rather than the quality.

But I’d also like to see a stronger analysis comparing the literature in physics and biomedicine — is it really that different?

Comments

  1. infraredeyes says

Physicists are more inclined than almost any other discipline to publish preprints, for example on arXiv. Maybe the junk is filtered out at that level, before full, peer-reviewed publication.

  2. Sastra says

    One of the best things about alternative medicine is supposed to be how little internal criticism there is — or so I have been told by some of its proponents. Everything “works” for some people and not for others. You just have to find the one that’s right for you.

If you then go to a Holistic Wellness convention or clinic, you’ll be offered a broad smorgasbord of approaches — many of which contradict other approaches. No matter, it’s all good. You won’t see arguments between the booths, you won’t hear one speaker contradicting another. You can’t obtain good health if there’s going to be negativity!

Thus, the only real criticism is reserved for allopathic medicine and the pharmaceutical companies — and it’s brutal. Vicious conspiracies; medical professionals supposedly enriching themselves by knowingly killing off the weak. Scientists trying to force their views on others, refusing to listen to the personal stories and experiences of those who have learned for themselves. Mockery and derision of the Brave Maverick Doctors who aren’t afraid to think outside the hegemonic Western Science box and borrow the wisdom and insights of the ancients, the non-western, and the ancient non-western. The unenlightened attack what they cannot understand and bully what they will not accept.

    But oh, within the walls of the alternative thinkers, it’s all sweetness and light. The mutual acceptance is heartwarming, the sort of thing you’d see at a music festival or interfaith alliance. This is how science should work: there are no wrong answers. There’s only different people discovering their own paths and respecting the paths of others.

    Has mainstream biology been infected with this Happy Clappy version of Mutual Validation? I don’t know. But when it comes to health there seems to be a fair chunk of the mainstream population which finds this reassuring and plausible.

  3. says

I once did summer research with a major experimental physics collaboration, and they told us never to leak anything. Not because we were afraid of losing secrets, but because we were afraid that the media might misinterpret an internal rumor as an unannounced discovery, which would be a huge embarrassment to thousands of scientists. When a big collaboration like that makes an announcement, they want to be sure of it.

Take, as another example, those faster-than-light neutrinos. The group that made that announcement was embarrassed to have to announce it, since it most likely indicated a failure to correct for some experimental error (and this turned out to be true). But they announced it anyway, because they had accumulated lots of evidence and needed the help of the wider physics community to resolve it.

  4. Enkidum says

I dunno about biomedicine per se, but in psychology there simply isn’t a culture of replication at all. It’s not just a lack of ability to publish null results, although that’s definitely a problem; it’s a lack of ability to publish a failure to replicate (or, for that matter, a successful replication). This may be changing with some new article formats in Psychological Science and Attention, Perception and Psychophysics, both of which are high-prestige journals adopting a format where you submit a proposal, basically, and your results are guaranteed to be published, along with someone else’s attempt to replicate your experiment. I don’t know if this style of paper is the solution, but something needs to happen.

Contrast this with physics. I don’t know that much about the details of physics journals, but I do know people who, immediately after Pons & Fleischmann came out with cold fusion, were able to publish refutations within months in high-prestige journals. This just isn’t possible, at least not in a straightforward way, in psychology.

  5. brett says

    @PZ Myers

    I’m inclined to agree with it, except that I don’t have any evidence of any public shaming going on in the physics community. I’d like to know more about how physicists police their own than is given here.

I’d say theoretical physics has it just as bad. It’s still dominated by String Theory, even though there isn’t an iota of experimental evidence to support it. And when the proponents get called out for that, they… try to switch the standard of evidence (see Sean Carroll trying to retire falsifiability), or start rambling about untestable parallel universes that explain everything and nothing.

  6. robro says

    I would assume that economics drives at least some junk publication of biomedicine research, and contributes to the reluctance to replicate.

  7. says

    As a physicist, I’m struggling to think of many instances of shaming in my field.

If there is any significant difference between biomedicine and physics with respect to false-positive rates in the literature (which I’m not at all sure of), perhaps it is related to physicists’ less frequent reliance on p-values as a metric.

    In my former field, studying quantum dots embedded in semiconductors, it is clear that every sample is unique. Nobody could ever say that a measurement of mine was wrong, unless they measured the same sample, making my work in that field effectively unfalsifiable!

    About being rude:

    During my PhD, in The Netherlands, I was the only native English speaker in our group. A colleague had received feedback from a peer reviewer including a recommendation to have a native speaker look at the grammar and style of the paper.

I was delighted to help, but one or two sentences in the introduction seemed beyond remedy, and the author seemed reluctant to be specific about what his point was. After some forensic interrogation, it turned out he was trying to draw attention to a shortcoming of an earlier method, but one of the authors of that method was a co-author on the current paper. He wanted to make his point without being rude, which amounted to not actually stating the content of his point. I told him, “You are on your own with that one; logic doesn’t permit what you are trying to achieve.”

  8. R Johnston says

    The reason for all the marginal results and statistical noise getting published in the field of biomedical research is simple: drug and other medical patents are phenomenally big business. Similarly, negative results don’t generate dollars. Biomedical journals are published primarily as a way to generate money for investors, not as a way to advance and memorialize academic credibility and competence and not as a way to advance knowledge. Biomedical research that isn’t readily verifiable can be quite valuable so long as it’s not obviously and readily falsifiable.

    Our medical patents system is a terrible thing on many, many levels.

  9. gingerbaker says

    “the literature in physics and biomedicine — is it really that different?”

The research is way different – there is very little in the way of a truly controlled experiment in medicine. Compared to, say, a biochemistry experiment, human trials are unbelievably messy. There are confounders galore, ridiculous amounts of variation in human subjects, placebo effects, etc.

Plus, most clinical researchers know even less about statistics than pure-science researchers do, and have zero incentive to report negative findings. So they wind up designing protocols with enough statistical power to generate tiny, albeit statistically significant, differences. The actual clinical significance of these tiny statistically significant results is rarely addressed (a toy illustration follows this comment). They also frequently report on secondary outcomes, which is a no-no.

Hate to be pompous, but most clinical research is done by physicians, not science PhDs. It shows.
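
    To make that statistical-versus-clinical point concrete, here is a minimal sketch in Python; the numbers (blood-pressure means, spread, sample size) are invented for illustration and are not from any study mentioned above.

    ```python
    # Hypothetical numbers: with enough subjects, a clinically trivial
    # effect still clears p < 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 20_000                            # subjects per arm
    control = rng.normal(140.0, 15.0, n)  # systolic BP, mmHg
    treated = rng.normal(139.5, 15.0, n)  # a 0.5 mmHg "improvement"

    t, p = stats.ttest_ind(treated, control)
    print(f"difference = {treated.mean() - control.mean():.2f} mmHg, p = {p:.4f}")
    # Typically prints p well under 0.05 for a difference no patient
    # would ever notice in the clinic.
    ```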

  10. wpjoe says

RE: the difference between biomedicine types and physics types. I was once told by the editor of Science (years ago) that biologists are really cut-throat in reviews compared to the physical science people, and that we needed to lighten up.

RE: repeatability of biology experiments. I think there are a lot of unknown variables that affect biological processes, and procedures can fail to reproduce a result because we don’t know that we should be measuring those variables. An example: my lab routinely uses the enzyme DNA ligase, which we purchase from a commercial supplier. Use of this enzyme is a hit-and-miss affair, in that a third of your ligation reactions may fail to produce ligated DNA. Why? A little too much salt in your DNA preparation, or some LPS from the E. coli, or some other subtle impurity caused the reaction not to go. So as a biologist, I am reluctant to see my lab’s failure to reproduce a result right away as proof that the published result was wrong. We might have to try many conditions to get the positive and negative controls to work, and then spend a lot of time trying to replicate some published result. It is not always worth the effort when you could be doing the experiments you wanted to do.

  11. twas brillig (stevem) says

    Biomedical research that isn’t readily verifiable can be quite valuable so long as it’s not obviously and readily falsifiable.

True, but as with the subject of the OP: be careful with the words you are using. “Falsifiable” has a special and precise use in Science: it means a theory provides ways that it CAN be falsified, not that an argument is so weak that it is most likely false. I’m such a nitpicker: e.g., “theory” doesn’t mean “a guess” in Science, but a proposed system to explain all the facts of the subject. Maybe I’m misreading your use of the word… I see it now. Quite right, you are. Sorry to nitpick so rudely. </nitpick>

  12. Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says

    People still believe that Svante Pääbo sequenced ancient DNA from an Egyptian mummy in 1985.

    Wait, what?

    **I’m** one of those people.

So what’s the current state of analyzing/sequencing ancient DNA? Are the Neanderthal DNA claims bogus? The Denisovan claims? Was ancient DNA isolation a failed idea in 1985, but successful later?

Plus: was this someone falsifying data, or someone missing some serious sources of error?

    Wow, do I feel ignorant. I have totally missed this.

  13. Asad Aboobaker says

IMO, the biggest difference between physics and other fields is the standard for what’s considered ‘significant’. In many fields, a p-value of p < 0.05 is sufficient. In physics, it's not even close — p < 0.003 ("3 sigma") is the bare minimum for considering something significant, and the typical standard is p < 0.0000003 ("5 sigma"). That's one in 3.5 million. (A quick conversion sketch follows this comment.)
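
    For anyone who wants to check those thresholds, here is a minimal sketch in Python (my construction, not the commenter’s) converting “n sigma” to a p-value with scipy:

    ```python
    # Convert "n sigma" thresholds to p-values under a normal null.
    from scipy.stats import norm

    for n_sigma in (3, 5):
        one_sided = norm.sf(n_sigma)      # P(Z > n)
        two_sided = 2 * norm.sf(n_sigma)  # P(|Z| > n)
        print(f"{n_sigma} sigma: one-sided p = {one_sided:.2e}, "
              f"two-sided p = {two_sided:.2e}")

    # 3 sigma: two-sided p ~ 2.7e-03 (the ~0.003 above)
    # 5 sigma: one-sided p ~ 2.9e-07, i.e. about 1 in 3.5 million
    # (particle-physics discovery claims conventionally use the one-sided tail)
    ```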

  14. gillt says

    Gruar:

    People still believe that Svante Pääbo sequenced ancient DNA from an Egyptian mummy in 1985.

    Whatever. Who still believes this? Anyway, Svante Pääbo just published a book where he’s apparently upfront about the whole affair.

    There are better things to criticize Pääbo over–reputation among women, duplicitousness, ego, hogging last author slot on Institute papers, etc.

  15. Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says

    @gillt:

    1. You don’t read the comments, do you? Isn’t it a little ungenerous to believe in commenting enough to think other people should read your observations, but that there’s no reason you should read the observations of others?

    2. Graur (note spelling) wasn’t criticizing Pääbo. Graur was criticizing a research culture that is too deferential to manners and insufficiently deferential to evidence. Graur was criticizing **everyone else** for failing to take on Pääbo’s results in the way Graur feels was deserved.

    Poor reading of the OP and failure to read the comments makes a dull contribution…

  16. Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says

    @gillt #16:

Now, thanks for that: that was a good article on the Egyptian side.

Should one presume that if the data are so challenged on 2400-year-old samples, then Neanderthal & Denisovan DNA is even more challenged? The reason I ask is that the article stresses the importance of temperature. But does the heat of Egypt increase the DNA decay rate by 10%, or 10k%, or 10m%? If the heat’s effects are outsized, then perhaps the much colder environment of the Denisovan cave would preserve DNA – perhaps particularly in a tooth, as I remember was used for at least one sample – long enough for current techniques to be successful.

    But the article, by stressing Egypt and heat, leaves me unclear on the (to me) more interesting questions that affect our understanding of human evolution.

    Again, thanks for the useful link.

  17. gillt says

    1. Of course I read the comments but sometimes I also skim them and may miss some stuff. Beyond that I don’t understand the point you’re trying to make.

2. Graur is clearly criticizing everyone, not everyone else. “Science has become a collection of Yes Men (and Women) afraid of the big shots and their own shadows.” Big shot with Yes Men is not exactly a neutral or objective description of Pääbo’s role as one of the heads of Leipzig’s Max Planck Institute.

    In his other writings, it’s pretty clear Graur has a personal beef with Pääbo.

    The paper she is referring to is entitled “Complete Mitochondrial Genomes of Ancient Canids Suggest a European Origin of Domestic Dogs” and was written by Thalmann et al., where the “et al.” stands for some of the biggest luminaries of ancient DNA, such as Svante Pääbo, the person who gave us a piece of Egyptian mummy mitochondrial DNA that turned out to be his own.

    http://judgestarling.tumblr.com/post/79974811093/shaming-reputations-as-a-means-of-reducing-the

    Moreover, the field of ancient DNA is ruled today by people that essentially made their careers out of artifacts and exaggerated claims (e.g., Poinar, Pääbo)

    http://judgestarling.tumblr.com/post/61115020130/a-short-history-of-ancient-dna-or-why-do-i-remain-a

Regarding your #13 comment: note that Graur contributes nothing to criticisms of contemporary attempts to sequence ancient DNA. Apparently, to him, Denisovan DNA is guilty by association.

  18. David Marjanović says

    Contrast this with physics. I don’t know that much about the details of physics journals, but I do know people who immediately after Pons & Fleischmann came out with cold fusion, were able to publish refutations within months, in high-prestige journals. This just isn’t possible, at least not in a straightforward way, in psychology.

    …OK. I’ll state it as a fact now: psychology is not currently being done as a science. It’s not science if you can’t publish “we’ve found that this paper is wrong”.

    I would assume that economics drives at least some junk publication of biomedicine research, and contributes to the reluctance to replicate.

    Yeah. Actual fraud seems to be limited to the disciplines where money is involved, apart from a few psychologists who… eh, see above. In the absence of money, the worst you get is people scooping others for personal glory like in Aëtogate.

    RE: difference in biomedicine types and physics types. I was once told by the editor of Science (years ago) that biologists are really cut-throat on reviews as compared to the physical science people and that we needed to lighten up.

    Biologists rarely recommend rejection, though.

    I’m such a nitpicker:

    Science with a capital S is the journal.

    People still believe that Svante Pääbo sequenced ancient DNA from an Egyptian mummy in 1985.

    Wait, what?

    **I’m** one of those people.

    So what’s the current state of analyzing/sequencing ancient DNA?

    Rule of thumb: all ancient DNA sequenced in the 80s and 90s is contamination from humans or their turkey sandwiches (see also “Triceratops was a giant horned turkey”). Both cleanliness and sequencing have advanced a great deal.

    There’s no way random contaminations could create such a thing as the Neandertaler or Denisovan genomes. I’m sure they’re real. Hey, they’re not even 0.04 million years old.

  19. gillt says

    @Crip Dyke #18

Maybe someone more qualified than I am can address your questions on DNA integrity. Until then: it’s my understanding that a novel method involving single-stranded DNA and high-throughput sequencing (Illumina short reads) yielded ~31x, apparently sufficient, coverage of the genome from a female Denisovan finger bone. (The female part is important, as it allowed them to estimate male DNA contamination; see the sketch after this comment.)

    From the 2012 paper in Science
    A High-Coverage Genome Sequence from an Archaic Denisovan Individual

The paper notes previous failed attempts at sequencing archaic human DNA (Sanger sequencing?) and gives a justification for their use of single-stranded DNA (technical/practical: better yield and lower error rate). Another milestone over past attempts is better AT representation, which they chalk up to the single-stranded method. Apparently ancient DNA is GC-biased. Overall, the paper concludes <0.5% contemporary human contamination in their finger bone sample.
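
    As an aside on how that contamination estimate works in principle, here is a hedged sketch (my construction, not from the paper; the contig names and BAM file are hypothetical) of the logic: a female individual should carry essentially no Y-chromosome DNA, so reads mapping to the Y bound the male contamination.

    ```python
    # Sketch only: real ancient-DNA pipelines restrict to Y-unique regions
    # and model mapping artifacts; this just shows the idea.
    import pysam  # assumes a coordinate-sorted, indexed BAM

    def male_contamination_estimate(bam_path, y_contig="chrY", auto_contig="chr1"):
        bam = pysam.AlignmentFile(bam_path, "rb")
        # Read densities, normalized by contig length so they are comparable.
        y_density = bam.count(contig=y_contig) / bam.get_reference_length(y_contig)
        auto_density = bam.count(contig=auto_contig) / bam.get_reference_length(auto_contig)
        # A pure female sample gives y_density ~ 0. A male contaminant carries
        # one Y per two autosome copies, hence the factor of 2.
        return 2 * y_density / auto_density
    ```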

  20. chrislawson says

I’ll add my voice to those who’ve said that the big difference between medical research and the rest of science is the enormous financial and social forces at play. A huge proportion of medical papers are funded or supported by industry, almost always with an interest in the outcome. Even where money isn’t at play, there are social agendas that warp the field, such as the American Academy of Pediatrics’ appalling 2013 position paper on male circumcision, which is full of junk science in support of a cultural practice that doesn’t earn people a lot of money.

    I’m not saying these forces don’t apply at all in other fields, but they are endemic to medicine. How many other fields have had journals created by a major publisher like Elsevier for the sole purpose of laundering industry research?

  21. Enkidum says

    @Marjanović 20:

“I’ll state it as a fact now: psychology is not currently being done as a science.”

Weeeelll… no, that’s not really fair. It’s harder to publish refutations in psych, and virtually impossible to publish simple replications. You can actually publish refutations, provided they are of the “this paper is wrong, and here is the true explanation for the apparent effect” style, and not simply “this paper is wrong, because I couldn’t replicate it”. And there are good reasons for this: failure to replicate is extraordinarily hard to interpret when you’re dealing with something as messy as behaviour and brains. Someone mentioned salt in the DNA preparation above; well, now you’ve got things like the music the subject was listening to before they came in the room, the shirt the experimenter was wearing, yadda yadda yadda. So in order to say a previous result is spurious, you have to have a good explanation of why.

    Personally I think the lack of an ability to publish replications is more of a problem. At the end of the day, if you can replicate it at will, it’s real, and that’s the only thing that matters. If there was some quantifiable metric of how many unrelated researchers had replicated a given result, and some way of giving them career payoffs for having done so, then we’d have a much better reason to be confident in specific results, particularly in messy fields like psychology.

  22. mildlymagnificent says

    I don’t have any evidence of any public shaming going on in the physics community

    Don’t know about the literature, but I do recall a comment somewhere by a physicist who took a mathematician friend along to a physics conference. Friend was severely taken aback by the “vigour” of the “discussions”. He said that nothing so overtly combative happened at mathematics conferences. He’d never seen speakers and other participants taken apart the way they were in that environment.

As for results and significance in medical papers: a mathematician friend of my husband’s was working on a project that was taking years, and consequently he didn’t have any of his own work to publish. So he kept a supply of medical journals in his bottom drawer. (This was 30+ years ago, so nobody used standard computer statistics packages as they do now.) Whenever he thought the time was getting too long between publications, he’d flip through a few pages till he found a paper where the claimed significance was miscalculated, or absent, or reversed, write a paper that took virtually no effort, and go happily back to working on the topic that really interested him for several more months.

  23. martinhafner says

Just wondering how those physicists who entered the biomedical field behave compared to physicians and biologists. Do they do better? And what about those who participated in ENCODE? Maybe it’s just a feeling, but aren’t physicists over-represented in the higher ENCODE ranks?

  24. David Marjanović says

    Fuck off

    I’m serious. If you don’t have time to read a thread, you don’t have time to add to it. At the very least use Ctrl+F to find out if what you want to say has already been addressed.

    You can actually publish refutations, provided they are of the “this paper is wrong, and here is the true explanation for the apparent effect” style, and not simply “this paper is wrong, because I couldn’t replicate it”.

    Ah, OK.

    Personally I think the lack of an ability to publish replications is more of a problem. At the end of the day, if you can replicate it at will, it’s real, and that’s the only thing that matters.

    Can you publish “I tested the hypothesis on a larger dataset/under a larger number of different conditions, and the effect reported by the earlier paper is still there”? That’s routine in my field, for a wide definition of “my field”.

    and consequently he didn’t have any of his own work to publish. So he kept a supply of medical journals in his bottom drawer.

    …Awesome.

  25. gillt says

    #28

    I’m serious. If you don’t have time to read a thread, you don’t have time to add to it. At the very least use Ctrl+F to find out if what you want to say has already been addressed.

You know, not that an asshole regular with hair-trigger privilege deserves an explanation, but I was being overly humble in admitting to skimming comments – I never skim before posting – because I really had no idea at the time what Crip Dyke was upset about and wanted to defuse it. I hadn’t refreshed the page before I submitted my comment, so it looked like I was responding to something Crip Dyke wrote when I was only responding to what Graur was referring to, which was basically, in my opinion, the media not knowing about Pääbo’s Pharaoh fail (which is bullshit), and not any particular individual. Get it? I never saw Crip Dyke’s post. I cross-posted. Sorry, Crip Dyke. Now are you going to fuck off or do you have something else to add?

  26. Lagerbaer says

In physics, it depends highly on the field you’re in. It is definitely possible to publish junk, and I know an example from my own field, condensed matter physics:

For a few decades now, we’ve been in possession of a computational method called “Density Functional Theory”. It’s a truly amazing way to compute certain properties of materials and, with some care, to extract the electronic structure. However, to use this method, you have to really know what you’re doing and what you’re looking for, because there are many knobs to turn and fiddle with.

Now, some groups treat their DFT code like a complete black box, feed it input without any clear idea of what they’re doing, and then publish the results. It’s easy enough: there are so many compounds and combinations of materials to analyze that you’d never run out of things to run your code on. Sure, you won’t get into a high-tier journal, but you can generate for yourself a steady flow of low-tier papers. And since yet another DFT study of yet another boring compound is so boring, it’s unlikely that anyone will bother to check your computations.

On the other hand, if you publish crap or even fraudulent things that are interesting, then you can bet that someone will try to replicate your results. Sure, replications themselves don’t make for the most interesting papers, but I think it’s the geek factor here: people will say, “Oh, that’s a cool new thing. I wonder if I can get one of my grad students to build the thing, and then we could tweak it like this or like that…”

    But I don’t think it’s so much about ‘shaming’ and more about everyone trying to get stuff to work, and then speaking up if it doesn’t work.

  27. says

It’s an easy answer, folks: just follow the money. There are financial repercussions to biomed findings. In physics, a field not that incentivised by money, the repercussions are to your reputation. Also, in physics there’s probably a lot less secrecy (again, a result of market-driven research) and not as many weirdly diverse types of research endeavors (again… money) that involve more specialized, non-generalized expertise. Biology is MESSY…

  28. Rey Fox says

    Now are you going to fuck off or do you have something else to add?

    Who died and made you the Pharyngula bouncer?

  29. Enkidum says

    Can you publish “I tested the hypothesis on a larger dataset/under a larger number of different conditions, and the effect reported by the earlier paper is still there”? That’s routine in my field, for a wide definition of “my field”.

    Well… probably? I’m having trouble thinking of any examples, but there might be some out there. I would guess those would be significantly lower-profile publications, though, unless the original finding was extremely controversial. (I’m a lowly post-doc with only about 10 pubs to my name, so I may be missing something here.) And it’s certainly not common.

    It’s an easy answer folks. just follow the money.

Well… the money matters, but not in psychology, where the problem is probably almost as endemic as in medicine, other than “money” in the sense of getting or keeping academic jobs, the standard publish-or-perish mentality, etc. I’m sure there’s a lot of corruption, both conscious and unconscious, in the medical research field, but frankly I doubt there’s that much in psych.