Andy Schlafly, the blinkered pudknocker at Conservapædia, has been on an impotent crusade against Richard Lenski for some time, and to his own routine self-humiliation. A while back, Schlafly wrote a petty, silly demand to Lenski that he turn over all of his data to the Conservapædians…Lenski wrote back and scorched him. Schlafly kept whining, mewling, and carping for the data (which he wouldn’t know what to do with if he got it, anyway), and Lenski slammed him again.
Schlafly, demonstrating the causal relationship between arrogance and incompetence, has done it once more. He wrote to the Proceedings of the National Academy of Sciences with a letter listing his gripes with the Lenski research, which he expected to be published. It’s a joke. He lists experimental errors that aren’t errors, cites statistical flaws that don’t exist, and snootily rejects their interpretation of the results. And, of course, he whimpers again that the data hasn’t been publicly released. Once again, he openly reveals that he doesn’t understand the research.
The editorial board reviewed his letter and rejected it…no surprise at all. They’re also not likely to publish letters from schizophrenic hobos, random assortments of flyspecks on a sheet of urine-stained toilet paper, or the crayon scribblings of spoiled 3rd grade children who are outraged that the hot-lunch menu is inadequately stocked with pizza. Here is their reply:
A member of the Editorial Board has evaluated the letter and concluded that PNAS cannot publish it for the following reasons:
From what I take to be the underlying issue from the numbered points, Mr. Schlafly’s main concern has to do with the fact that one experiment failed to yield a statistically significant result, and this happened to be the experiment with the largest sample size. Every experiment has limited power to detect a difference of any given magnitude, and so in a series of experiments some may yield non-significant results even when the null hypothesis is false. The non-significant experiment may even be the one with the largest sample size. There is nothing exceptional in this–it is a matter of chance. Nevertheless, from a statistical point of view, it is proper to combine the results of independent experiments, as Blount et al. did correctly in their original paper. If the overall result is significant, as it is in this case, then the whole series of tests is regarded as significant. Mr. Schlafly seems to suggest that experiments differing in sample size cannot be combined in an overall analysis, and if this is what he is suggesting, he is wrong.
I think Letters published in PNAS should raise points that in themselves, or in conjunction with the authors’ response, should be of wide interest to the readership of PNAS or should illuminate some obscure or subtle point. The issues raised by Mr. Schlafly are neither obscure nor subtle, but are part of everyday statistical analysis at a level too elementary to need rehearsal in the pages of PNAS.
Mr. Schlafly’s final comment about release of data is uncalled for. My understanding is that the authors have made the relevant materials available on their web site. This seems to me to meet the requirement that “data collected with public funds belong in the public domain.” If Mr. Schlafly believes that the disclosure is incomplete, that is an issue that needs to be argued with the original funding agency, not with the readers of PNAS.
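The board’s statistical point — that independent experiments can legitimately be pooled, and that the combined result can be significant even when one experiment (even the largest) is not — can be illustrated with Fisher’s method, one standard way to combine p-values. This is a generic sketch: the p-values below are made up for illustration, and Blount et al.’s actual analysis may have used a different combination procedure.

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method: combine p-values from k independent tests.

    Under the joint null hypothesis, X = -2 * sum(ln p_i) follows a
    chi-squared distribution with 2k degrees of freedom. For even
    degrees of freedom the chi-squared survival function has a closed
    form, so no stats library is needed:
        P(chi2_{2k} > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

# Illustrative p-values only (NOT the Lenski data): the third experiment,
# standing in for the one with the largest sample size, is individually
# non-significant at p = 0.20 ...
pvals = [0.04, 0.03, 0.20]
combined = fisher_combined_p(pvals)
print(f"combined p = {combined:.4f}")
# ... yet the pooled result comes out around p ~ 0.011, well under 0.05 --
# exactly the situation the editor describes as unexceptional.
```

The design point is the editor’s: power is finite, so in any series of tests some may miss significance by chance, and it is the combined analysis that carries the conclusion.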
“The issues raised by Mr. Schlafly are neither obscure nor subtle, but are part of everyday statistical analysis at a level too elementary to need rehearsal in the pages of PNAS.” Oh, snap.
Oh, yeah, and … “he is wrong.”