If you’ve also been wondering what the answer to this question might be, you’ll just have to come to Cafe Scientifique in Morris next week.
I’m going to guess that the answer is that it’s a prerequisite to getting into biology classes? Maybe?
If you’re in the West Central Minnesota neighborhood, join us! There are activities planned for the day: we have our Undergraduate Research Symposium in the morning, then the march, and we end up at the Morris Theater, where CURE will be hosting a free documentary on the environment.
Ugh. I got up at 5am and tried to read a statistics paper to put myself back to sleep, and it didn’t work. Dang numbers, stop being interesting! Anyway, this paper was a meta-meta-analysis trying to dig up the biases that might be driving the reproducibility crisis in the scientific literature. Here’s the abstract from the Fanelli, Costas, and Ioannidis (2017) paper; my emphasis on some of the key points.
Numerous biases are believed to affect the scientific literature, but their actual prevalence across disciplines is unknown. To gain a comprehensive picture of the potential imprint of bias in science, we probed for the most commonly postulated bias-related patterns and risk factors, in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was overall relatively small. However, we consistently observed a significant risk of small, early, and highly cited studies to overestimate effects and of studies not published in peer-reviewed journals to underestimate them. We also found at least partial confirmation of previous evidence suggesting that US studies and early studies might report more extreme effects, although these effects were smaller and more heterogeneously distributed across meta-analyses and disciplines. Authors publishing at high rates and receiving many citations were, overall, not at greater risk of bias. However, effect sizes were likely to be overestimated by early-career researchers, those working in small or long-distance collaborations, and those responsible for scientific misconduct, supporting hypotheses that connect bias to situational factors, lack of mutual control, and individual integrity. Some of these patterns and risk factors might have modestly increased in intensity over time, particularly in the social sciences. Our findings suggest that, besides one being routinely cautious that published small, highly-cited, and earlier studies may yield inflated results, the feasibility and costs of interventions to attenuate biases in the literature might need to be discussed on a discipline-specific and topic-specific basis.
So, in part, the reproducibility problem is caused by new researchers scrambling to get a flashy result that will get them some attention, it’s worsened if they’re working in isolation rather than as part of a team, and there are a few ethically compromised scientists who have been spoiling the whole barrel of apples. That all makes sense to me.
It’s hard to police against individuals with little scientific integrity — rascals are present in every field. Catching them after the fact doesn’t necessarily help, because they’ve already tainted the literature with a flash-in-the-pan compromised paper.
Scientists who had one or more papers retracted were significantly more likely to report overestimated effect sizes, albeit solely in the case of first authors. This result, consistently observed across most robustness tests, offers partial support to the individual integrity hypothesis.
A scientist caught publishing bad data is already severely punished, so I don’t think that’s an avenue for improving the reliability of papers. It shouldn’t be ignored, obviously, but the other observations might lead to more improvement.
The mutual control hypothesis was supported overall, suggesting a negative association of bias with team size and a positive one with country-to-author ratio. Geographic distance exhibited a negative association, against predictions, but this result was not observed in any robustness test, unlike the other two.
Collaboration is good. In the days when I was in a large lab, it was always a little suspicious when someone suddenly plopped a whole, completed paper down in the lab meeting and announced that they’d finished the experiment, and by the way, would you like to be an author on the paper? I always turned those offers down, because a co-authorship ought to be the product of ongoing involvement in the work, not some attempt at fishing for external approval. But more cooperation and vetting of each other’s work ought to be a general hallmark of good science.
I’m not in a big research lab anymore, but I still try to get that across in student labs. There’s always someone who objects to having to work with those other students and wants to do their lab projects all by themselves, and I have to turn them down and tell them they have to work in teams. They probably think it’s so I’ll have fewer lab reports to grade (OK, maybe that’s part of it…), but it’s mainly because teamwork is an essential part of the toolkit of science.
And now I’m getting confirmation that it also helps reduce spurious results.
The biggest effect, though, is associated with small study size.
Our study asked the following question: “If we draw at random from the literature a scientific topic that has been summarized by a meta-analysis, how likely are we to encounter the bias patterns and postulated risk factors most commonly discussed, and how strong are their effects likely to be?” Our results consistently suggest that small-study effects, gray literature bias, and citation bias might be the most common and influential issues. Small-study effects, in particular, had by far the largest magnitude, suggesting that these are the most important source of bias in meta-analysis, which may be the consequence either of selective reporting of results or of genuine differences in study design between small and large studies. Furthermore, we found consistent support for common speculations that, independent of small-study effects, bias is more likely among early-career researchers, those working in small or long-distance collaborations, and those that might be involved with scientific misconduct.
More data! This is also helpful information for my undergraduate labs, since I’m currently in the process of cracking the whip over my genetics students and telling them to count more flies. Only a thousand? Count more. MORE!
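The small-study effect is easy to see in a quick simulation (my own sketch, not anything from the paper): if only “significant” results get reported, then small studies that clear the significance bar must, on average, exaggerate the true effect, while large studies barely do. All the numbers here (a true effect of 0.3 standard deviations, groups of 10 vs. 200) are invented for illustration.

```python
import random
from statistics import mean

random.seed(1)
TRUE_EFFECT = 0.3  # true standardized mean difference between groups
CRIT = 1.96        # two-sided 5% threshold for a z-test

def simulate(n_per_group, n_studies=4000):
    """Mean estimated effect among studies that came out 'significant'."""
    significant = []
    for _ in range(n_studies):
        a = [random.gauss(TRUE_EFFECT, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        est = mean(a) - mean(b)
        se = (2 / n_per_group) ** 0.5  # standard error, known unit variance
        if abs(est / se) > CRIT:
            significant.append(est)
    return mean(significant)

small = simulate(10)   # only inflated estimates reach significance
large = simulate(200)  # nearly unbiased: most honest estimates pass
print(f"small studies (n=10):  mean significant effect = {small:.2f}")
print(f"large studies (n=200): mean significant effect = {large:.2f}")
```

With small groups, the significance cutoff sits far above the true effect, so the published subset is badly inflated; with large groups it sits below it, and the filter barely distorts anything. Count more flies.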
The paper does end on a positive note. They’ve identified some potential sources of bias, but overall, science is in fairly good shape.
In conclusion, our analysis offered a “bird’s-eye view” of bias in science. It is likely that more complex, fine-grained analyses targeted to specific research fields will be able to detect stronger signals of bias and its causes. However, such results would be hard to generalize and compare across disciplines, which was the main objective of this study. Our results should reassure scientists that the scientific enterprise is not in jeopardy, that our understanding of bias in science is improving and that efforts to improve scientific reliability are addressing the right priorities. However, our results also suggest that feasibility and costs of interventions to attenuate distortions in the literature might need to be discussed on a discipline- and topic-specific basis and adapted to the specific conditions of individual fields. Besides a general recommendation to interpret with caution results of small, highly cited, and early studies, there may be no one-size-fits-all solution that can rid science efficiently of even the most common forms of bias.
Fanelli D, Costas R, Ioannidis JPA (2017) Meta-assessment of bias in science. Proc Natl Acad Sci USA. doi: 10.1073/pnas.1618569114.
Daniel Dennett is still battling dualism, as seen in this New Yorker profile. He’s still arguing with Chalmers, and he’s still going strong…with some exasperation.
Despite his affability, Dennett sometimes expresses a weary frustration with the immovable intuitions of the people he is trying to convince. “You shouldn’t trust your intuitions,” he told the philosophers on the Rembrandt. “Conceivability or inconceivability is a life’s work—it’s not something where you just screw up your head for a second!” He feels that Darwin’s central lesson—that everything in biology is gradual; that it arrives “not in a miraculous, instantaneous whoosh, but slowly, slowly”—is too easily swept aside by our categorical habits of mind. It could be that he is struggling with the nature of language, which imposes a hierarchical clarity upon the world that’s powerful but sometimes false. It could also be that he is wrong. For him, the struggle—a Darwinian struggle, at the level of ideas—continues. “I have devoted half a century, my entire academic life, to the project, in a dozen books and hundreds of articles tackling various pieces of the puzzle, without managing to move all that many readers from wary agnosticism to calm conviction,” he writes, in “From Bacteria to Bach and Back.” “Undaunted, I am trying once again.”
There’s something about this concept that some people cannot accept: that the mind is a product of the physics, chemistry, and biology of the brain. I have an equally strong intuition that it is, so it’s hard to fault people for wanting to disbelieve it; I can still fault them for ignoring the growing evidence for a purely material basis of the mind, the absurdity and poor quality of the evidence for dualism, and their inability to come up with a mechanism, even an outline of an idea, for how dualism would work.
Chris Dixon has written an excellent history of mathematics. When most of us think of math, we go “ugh” and call it boring and turn away, but really, it’s so fundamental that we should be far more excited about it. Most of the major turning points in my education involved math: it was geometry when I was in the 8th grade that sparked my first interest, and learning algebra and logarithms in high school chemistry got me focused on science. When I started teaching myself how to program computers (I was an inadequate teacher, and quickly signed up for courses in the CS department), I also had to teach myself basic Boolean logic, because in those ancient days your only recourse was to learn assembly language, and ANDs, NANDs, NORs, and ORs were the name of the game. Transistors are just logic implemented in silicon.
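That NAND remark is worth unpacking: NAND happens to be a universal gate, and every other Boolean operation can be wired up from it alone, which is roughly what the silicon is doing. A toy sketch (my own illustration, not something from Dixon’s article):

```python
def NAND(a: int, b: int) -> int:
    """The one gate you need: 1 unless both inputs are 1."""
    return 0 if (a and b) else 1

# Every other Boolean gate, built only from NANDs:
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def NOR(a, b): return NOT(OR(a, b))
def XOR(a, b):
    c = NAND(a, b)
    return NAND(NAND(a, c), NAND(b, c))

# Print a truth table to check the wiring:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

Stack enough of these and you have an adder; stack enough adders and you have a computer. The leap from “hopelessly abstract” logic to the modern world is just this, repeated a few billion times.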
I agree when Dixon writes,
Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.
I would add that in the 1970s public education system, we wouldn’t have imagined that, either. I had teachers who thought math was stuff you only needed to know for business school — you know, accounting. You can still see that attitude when people wonder why they need to learn this algebra stuff, anyway — they’ll never use it. They’re wrong. You’ll just use it in unexpected ways, because what you’re being given is a creative toolbox for thinking about the world.
The historical context in this article is useful, though, for making a case that math isn’t just practical, it’s also a foundation for thought that belongs in the liberal arts canon. And also that it’s a significant part of philosophy, which too many scientific pragmatists also tend to dismiss.
I once knew someone who was a contributor to the Nobel Prize sperm bank. He wasn’t a Nobelist, but he was a smart and accomplished scientist — he just had a dewar of liquid nitrogen next to his bed, where he’d make an occasional deposit (with the assistance of his wife, he assured me), and then the samples would be shipped off for processing and…insemination, I presume. It’s not something I’d ever do, and apparently very few Nobelists actually contributed to it.
The Repository for Germinal Choice was a real thing (it’s been discontinued since the death of its founder, Robert Graham), kind of the last gasp of the scientific eugenics movement. Its premise is typical crankery. I’ve met a fair number of Nobelists and big name scientists, and I’m sorry, they’re just people, with the usual range from nice people to total assholes — and actually, I suspect that they’re enriched for the nasty end of the scale. Scientist sperm plucked out of a vat of self-selected donors is probably less valuable than sperm hand-picked from a donor you know and like. Since this vat also contained sperm from notorious racist William Shockley, you’re probably best off avoiding it altogether. Also note: all of the donors were white, because of course they were, and oh no, insisted Graham, he was not a racist.
Anyway, one of the offspring of the Repository tracked down his biological “father”. The result was disappointing and troubling. I’m more troubled by the idea that people still think there’d be some great advantage to having an absentee father who had an advanced degree.
While the Repository is defunct, there are still individuals, like this one, who advertise their willingness to inseminate people. I’d also be worried…what if extreme narcissism is a heritable trait?
It’s officially the first day of Spring. I looked outside to see if flowers had suddenly erupted, but it’s too early and too dark to see.
It’s also the end of our Spring Break, and I have to get back to work, although it’s not as if I took it easy this last week. I’m actually prepared! This is my agenda for the week:
Genetics: We’ve been working through chromosomal changes, and I’ve been a little concerned about some of the students not quite understanding what’s going on, so we’re going to spend the first half of class with me leading them through some visualization exercises. I’m going to give them some word problems and have them draw the answers — it should also be a gentle warm-up to the class. Then it’s all sex and mapping for a while.
Genetics lab: Our mapping experiment is done; we just have to collate the results and do the calculations. Simultaneously, we’re starting a new experiment, a complementation assay.
Ecological Development: Endocrine disruptors! That’s always a fun way to start your week. Even more fun: an exam! An oral exam! The last half of this week and the first half of next week are going to be dedicated to meeting one-on-one with students to grill them on general concepts.
Biological Communications: I don’t think I’ve mentioned it before, but I’m also teaching a course in science writing — this semester it’s more of an independent study sort of thing, where they’re supposed to be putting together a substantial term paper on a subject of their choice. So far, it’s been little stuff — come up with a topic, do the preliminary research, give me short writing samples to demonstrate that you’re actually working on it — but their first full rough draft is due this week, so I’m getting stacks of papers to grade over the coming weekend.
We also have a guest seminar this week from an immunologist, Amy Weinmann, who is going to talk to us about epigenetics and development, which will fit in just fine with my eco-devo course.
I’m actually all planned out for the next two or three weeks. I just have to do the actual work. At least I think I know what I’m doing.
Got it done. My lab is all cleaned up and shiny and organized, except maybe for the bits behind the camera that you can’t see. So many beakers and flasks and bottles scrubbed! So many jagged shards of glass tidied up, pools of toxic chemicals siphoned off, untriggered bombs detonated, bones of previous adventurers interred!
Tomorrow…my office. This was just the warm-up.