Scott Adams embarks on the Johnny Hart road

I don’t normally read Dilbert — I’ve seen far too much of the benighted ignorant psyche of its creator — but this one was just laid out on a table at the coffee shop yesterday, and I knew I’d have to deal with it. In this one, Dilbert goes full climate science denialist. This might be fun, to dissect Dilbert, because even though it will kill what little humor is present in it, at least we’ll have a good time laughing at Scott Adams. Let’s dissect the shit out of this thing.

Here’s the setup.

OK, this is sort of fine. I think it’s a good idea for companies to think about what impact climate change will have on them, and how they affect the environment. I’m at a green university, and we’ve had these sorts of discussions. Still do, all the time.

It is definitely true that human activity is warming the Earth. It will lead to a global catastrophe, depending on how you define catastrophe: it will cause acute economic disruption, resource wars, and the death of millions. Is that catastrophic enough for you?

By the way, I notice that the scientist is a goateed and balding white man in a lab coat. It’s either unconscious bias (that’s how scientists are supposed to look!), or, I can’t help but notice a weak resemblance to Michael Mann.

Next panel, Dilbert asks Scott Adams’ idea of a smart question.

On the face of it, yes, that is a good question. I’d encourage students to ask that every time an instructor told them something. But consider the context. The answer to that question is readily available — google it. You can read the papers. You should have the answer to that from your high school earth science class. So why is Dilbert being made to ask this trivial question right at the start of this meeting? I can tell right away that this is not a sincere question, this is a derailing tactic to justify a software engineer speaking out of his ass to the scientific expert. Sound familiar?

Then we get the eternal dilemma of the science popularizer. Do you just scorch this ass with contempt because you can see right through him, or do you try to take the question seriously and give the primer in kindergarten climatology he’s asking for?

You can’t win, you know. The game is rigged. If you do the former, you’ll be accused of being hostile and mean. If you do the latter, you’re patronizing and people will write scornful blog posts about how you think raw data dumps will cure all the scientific misunderstandings in the world.

So what do you do? Most of us will take the generous view and try to explain exactly what the questioner is asking for, like our Michael Mann surrogate here:

And that’s also fine. So far, the strip has been true to the characters, and the nature of their interactions. It’s denialist vs. scientist, familiar territory, and now it’s time for the funny, clever twist…but Adams can’t deliver. He has to resort to sticking words in the mouth of the scientist that are not at all true to the character.

That’s just wrong. It’s not what climate scientists say or even think. It’s what Scott Adams, who is no scientist of any kind, says and thinks. And with that betrayal of the premise of the joke, it abruptly falls flat and dies. If all you can do to discredit a point of view is to lie and make puppets say falsehoods, it’s your position that fails. Adams does this because he lacks any insightful response to the honest arguments of scientists.

I guess there’s supposed to be a punchline of some sort next. Once again, Adams fails to meet the minimal standards of his medium.

I think the punchline is supposed to imply that science supporters can only defend their position by calling True Skeptics mean names. Of course, the entire point of the two panels just above it is to call climate scientists conscious liars.

The only people who will find this at all funny are the denialists who see the panels in which the climate scientist openly maligns his methodology as affirmations of their beliefs. That's OK, it'll finally be the death of Dilbert — I skimmed the comments and noticed several people were shocked that Scott Adams would endorse an anti-scientific claim. Apparently they've never read his blog before.

I shouldn’t claim it’ll kill Dilbert, though. Nothing kills syndicated comics. Johnny Hart went full-blown creationist/evangelical Christian/anti-Muslim bigot, and newspapers just kept right on buying up the strips. Hart died in 2007, and B.C. is still going.

And people think tenured professors have it easy.

Creationists need better evidence than that

I found Mark Armitage's claim to have determined that a triceratops fossil was only a few thousand years old ridiculous. Armitage has a defender, Jay Wile, who disagrees with me. Wile makes two main points.

First, I said that carbon dating a dinosaur fossil is absurd — the ¹⁴C levels will be too low to get a reliable ratio. Wile thinks that you can, and that being able to cite a number makes it true.

Well, had Dr. Myers bothered to click on the link given in my post, he would have seen that an age was reported: 41,010 ± 220 years. As I state in that link, this is well within the accepted range of carbon-14 dating, and it is younger than many other carbon-14 dates published in the literature. In addition, the process used to make the sample ready for dating has been spelled out in the peer-reviewed literature, and it is designed to free the sample of all contamination except for carbon that comes from the original fossil. Now as I said in my original post, it’s possible that the reading comes from contamination. However, I find that unlikely, given the process used on the sample, the cellular evidence that Armitage found, and the fact that such carbon-14 dates are common in all manner of fossils that are supposedly millions of years old or older.

There are two sources of ¹⁴C we have to be concerned with. The bulk of it is cosmogenic, formed in the upper atmosphere by cosmic ray bombardment of ordinary, stable ¹⁴N. This ¹⁴C decays at a geologically rapid rate, with a half-life of 5730 years. Living things respire and tend to equilibrate their ¹⁴C levels with the environment. Another source, though, is the radioactive decay of other elements, which generates high-energy particles that can also bang into atoms and create unstable radioactive isotopes. This is a much rarer event, though, so objects that are dead and buried and isolated from the atmosphere tend to equilibrate to a much lower concentration of ¹⁴C.

In carbon dating, the ¹⁴C to ¹²C ratio is measured. If it's close to that of the atmosphere, the sample was recently exchanging carbon with the atmosphere. If it's somewhere above the level of dead carbon buried deep in rocks (which has a non-zero level of ¹⁴C), it's older, and we can estimate how much older from the ratio. You can always calculate a ratio. You can always calculate a date. However, it will hit a ceiling of about 50,000 years, because of the limits of precision and because the ratio converges to a value indistinguishable from the background level of ¹⁴C. Date a carbon sample that's a hundred thousand years old; it will return an age of 50,000 years. Carbon date a chunk of coal from the Carboniferous, 300 million years ago, and it will return an age of 50,000 years.
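To make that concrete, here's a quick illustrative sketch of how an apparent age falls out of a measured ratio, and why every sufficiently old sample converges on roughly the same ceiling. The background/contamination level I plug in below is my own assumed figure for illustration, not a number from Wile or Armitage:

```python
# Radiocarbon arithmetic, as an illustrative sketch only (not a lab protocol).
import math

HALF_LIFE = 5730.0  # years, the half-life of 14C

def apparent_age(fraction_of_modern):
    """Apparent age in years for a 14C/12C ratio given as a fraction of the modern atmospheric ratio."""
    return -HALF_LIFE * math.log2(fraction_of_modern)

def fraction_remaining(age_years):
    """Fraction of the original 14C left after a given true age."""
    return 0.5 ** (age_years / HALF_LIFE)

# The reported 41,010-year "age" corresponds to about 7 half-lives,
# i.e. well under 1% of the modern 14C level:
print(fraction_remaining(41010))       # ~0.007, or ~0.7% of modern

# Any trace of 14C near the background/contamination floor (here I assume
# 0.2-0.5% of modern, purely for illustration) reads as an "age" of roughly
# 44,000-51,000 years, whether the sample is 100,000 or 300 million years old:
for floor in (0.005, 0.002):
    print(round(apparent_age(floor)))  # ~43,800 and ~51,400 years
```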

That an "age" was reported is meaningless. An age of 40,000 years means that about 7 ¹⁴C half-lives had passed, or that less than 1% of the atmospheric levels of ¹⁴C were present in the sample. Wile doesn't understand this at all. He doesn't seem to comprehend that there could be a source of ¹⁴C other than equilibration with the atmosphere. He thinks it is significant that ancient carbon can have non-zero amounts of ¹⁴C.

However, creation scientists have carbon-dated fossils, diamonds, and coal that are all supposed to be millions of years old. Nevertheless, they all have detectable amounts of carbon-14 in them. For example, this study shows detectable levels of carbon-14 in a range of carbon-containing materials that are supposedly 1-500 million years old. Surprisingly, the study includes diamonds from several different locations! Another study showed that fossil ammonites and wood from a lower Cretaceous formation, which is supposed to be 112-120 million years old, also have detectable levels of carbon-14 in them. If these studies are accurate, they show that there is something wrong with the old-earth view: Either carbon dating is not the reliable tool it is thought to be for “recent” dating, or the fossils and materials that are supposed to be millions of years old are not really that old. Of course, both options could also be true.

Or that there are underground sources of radioactive decay that can generate low levels of ¹⁴C, and that Jay Wile doesn't understand basic principles of radiometric dating.

Wile also dismisses the possibility that an inclusion of recent biological material in the sample might have skewed the apparent age younger, which is unjustifiable. Armitage himself writes that "Soft, moist, muddy material can be seen surrounding pores of bone vessels on inner horn surfaces and rootlets penetrating lower, interior surface of samples", in the very samples where he claims to spot intact Triceratops cells.

But contamination can’t possibly be a confounding problem, oh no.

The second main point Wile makes is that gosh, those cells sure look like osteocytes, which have a distinctive shape with many branching processes. How would osteocytes have gotten in there?

Armitage did not compromise his own results. He simply wrote truthfully about his fossil. In addition, anyone with a basic understanding of histology would know why plant roots, fungal hyphae, and insect remains do not compromise his results in any way. Based on all the visual evidence, the cells he found are osteocytes. They are not only the shape and size one expects from osteocytes, they have the filipodial extensions that are characteristic of osteocytes. They also have the cell-to-cell junctions one expects in groups of osteocytes. Thus, they cannot be the result of contamination, since plants, fungi, and insects do not have osteocytes.

My answer to that is…I don’t know. It’s weird. And Armitage doesn’t know either, and everything he says about the sample is incompatible with these being intact, preserved osteocytes.

The fact that any soft tissues were present in this heavily fossilized horn specimen would suggest a selective fossilization process, or a sequestration of certain deep tissues as a result of the deep mineralization of the outer dinosaur bone as described by Schweitzer et al. (2007b). As described previously, however, the horn was not desiccated when recovered and actually had a muddy matrix deeply embedded within it, which became evident when the horn fractured. Additionally, in the selected pieces of this horn that were processed, soft tissues seemed to be restricted to narrow slivers or voids within the highly vascular bone, but further work is needed to fully characterize those portions of the horn that contained soft material. It is unclear why these narrow areas resisted permineralization and retained a soft and pliable nature. Nevertheless it is apparent that certain areas of the horn were only lightly impacted by the degradation that accompanied infiltration by matrix and microbial activity. If these elastic sheets of reddish brown soft tissues are biofilm remains, there is still no good explanation of how microorganisms could have replicated the fine structure of osteocyte filipodia and their internal microstructures resembling cellular organelles. Filipodial processes show no evidence of crystallization as do the fractured vessels and some filipodial processes taper elegantly to 500 nm widths.

So…

  • The tissue is not isolated or protected in any way. It’s wet, unmineralized, and filled with a “muddy matrix”. Some of the soft tissues, the “vessels”, are crystallized.

  • The “osteocytes”, though, are perfectly preserved down to the level of organelles, ultrastructural junctions, and delicate processes.

Doesn't anyone else have a problem with this? I've had to struggle with fixative cocktails to get good preservation of single-cell levels of detail; I've had animal tissue bathed in a soothing, perfectly balanced medium under my microscope, and seen bacterial infections turn it into disintegrating, collapsing blobs of blebbed-out fragments of decaying cells within minutes.

Yet somehow Armitage finds picture-perfect “osteocytes” in tissues that have been soaking in mud, perforated by plant roots, and presumably have been lying there rotting since, by his measure, some time around the Great Flood, a few thousand years ago.

I’m just curious. As an experiment, if we killed a cow and then left it to rot in a damp field for just a month, would that be a good way to make useful histological samples of bone tissue?

How about if we left it there for a year? Or 40,000 years?

The Schweitzer papers on preserved cells in dinosaur bone at least demonstrate careful technique to minimize contamination and artifacts. They also don’t include comments that reveal the author doesn’t understand the basic principles of radiometric dating. The Armitage papers, on the other hand, are sloppy, get improbable results, and reveal a lot of biased reasoning.

I don’t know how cells that look like osteocytes got there, but I’m very suspicious.

Davies and Lineweaver are back on the atavisms bandwagon

I’ve been complaining about the cancer nonsense peddled by Paul Davies and Charles Lineweaver for years. I imagine nothing will stop them. They’ve got this weird idea that cancers are atavisms — that what they are is a reactivation of an ancestral program that was constructed a billion years ago by free-living single-celled organisms, that was then shackled and constrained by the evolution of multicellularity, but which can then re-emerge when the old program is liberated by mutations in the multicellularity genes that are its jailers. Cancers are your protistan ancestors, yearning to break free.

It’s nonsense. Davies and Lineweaver are physicists with no comprehension of how evolution works. There are no ‘genetic programs’ that can linger for a billion years in a suppressed state to functionally rebound with the accidental removal of a metazoan control program. When you add that Davies justifies his model by invoking Haeckelian recapitulation, a theory that’s been dead and wrong for over a century, you’ve got a recipe for raging crackpotterism from a sober, respected physicist.

Yet they still get published. They still manage to sucker in scientists who ought to know better. They've published again in PLOS ONE, in a paper titled "Ancient genes establish stress-induced mutation as a hallmark of cancer". Here's the abstract.

Cancer is sometimes depicted as a reversion to single cell behavior in cells adapted to live in a multicellular assembly. If this is the case, one would expect that mutation in cancer disrupts functional mechanisms that suppress cell-level traits detrimental to multicellularity. Such mechanisms should have evolved with or after the emergence of multicellularity. This leads to two related, but distinct hypotheses: 1) Somatic mutations in cancer will occur in genes that are younger than the emergence of multicellularity (1000 million years [MY]); and 2) genes that are frequently mutated in cancer and whose mutations are functionally important for the emergence of the cancer phenotype evolved within the past 1000 million years, and thus would exhibit an age distribution that is skewed to younger genes. In order to investigate these hypotheses we estimated the evolutionary ages of all human genes and then studied the probability of mutation and their biological function in relation to their age and genomic location for both normal germline and cancer contexts. We observed that under a model of uniform random mutation across the genome, controlled for gene size, genes less than 500 MY were more frequently mutated in both cases. Paradoxically, causal genes, defined in the COSMIC Cancer Gene Census, were depleted in this age group. When we used functional enrichment analysis to explain this unexpected result we discovered that COSMIC genes with recessive disease phenotypes were enriched for DNA repair and cell cycle control. The non-mutated genes in these pathways are orthologous to those underlying stress-induced mutation in bacteria, which results in the clustering of single nucleotide variations. COSMIC genes were less common in regions where the probability of observing mutational clusters is high, although they are approximately 2-fold more likely to harbor mutational clusters compared to other human genes. Our results suggest this ancient mutational response to stress that evolved among prokaryotes was co-opted to maintain diversity in the germline and immune system, while the original phenotype is restored in cancer. Reversion to a stress-induced mutational response is a hallmark of cancer that allows for effectively searching “protected” genome space where genes causally implicated in cancer are located and underlies the high adaptive potential and concomitant therapeutic resistance that is characteristic of cancer.

A translation:

  • Some have said that cancer is a reversion to the single-celled state. The “some” just happens to be us.

  • We naively predict that the genes involved in a disease of multicellularity, cancer, would be genes that have evolved after the emergence of multicellularity. We helpfully defined a colossal window of one billion years to assay.

  • We compared the distribution of cancer-causing mutations to a uniform, random distribution of mutations. Surprise: some mutations cause cancer, others don’t, so it doesn’t fit a random distribution. There are lots of genes that are less than 500 million years old that are implicated in cancer.

  • But wait! When we looked in a database of cancer-causing gene mutations, COSMIC, we find that the database is enriched for old genes. This is contrary to our hypothesis, therefore we need to find a new rationalization, rather than rejecting our hypothesis.

  • The genes that are commonly broken in cancer are involved in DNA repair and cell cycle control, which are truly primitive, ancient functions — prokaryotes have genes for this. What possible “ancient program” could be reactivated to fit our story?

  • I know! Stress-induced hypermutation! Cancer cells are invoking multi-billion year old processes; it can’t be that breaking genes produces a loss of specificity and efficiency in repair, they are switching on genes to cause mutations. They’re doing it on purpose! Yeah, that’s the ticket!

It’s a truly awful mess of a paper, in which the authors juggle lots of data to make it fit their preconceptions, and in which apparently no result can possibly cause them to reject their hypothesis. And their conclusions are strange.

Here we present evidence demonstrating that cancer manifests as an atavistic recapitulation of pre-metazoan mechanisms of stress-induced mutation in somatic cells, explaining its capacity to evolve resistance to therapy. The mechanistic roots of this behavior are retained over evolutionary time scales because they are critical to the successful function of the germline and immune system. In addition to generating base-line diversity in both the innate and adaptive immune system, normal germline mutational patterns maintain diversity in recently evolved gene families governing functions such as toxin detection and detoxification. In cancer the controlled restriction of this phenomenon to the germline and immune system is disrupted, allowing somatic cells to effectively search ancient genome space for solutions to the stress-induced pressures they are experiencing. We propose stressed-induced mutation as a hallmark of cancer reflected by genomic instability.

There is a phenomenon in the immune system where, for instance, the immunoglobulin genes are prone to greater mutation rates during cell proliferation — it is a way to increase the diversity of antibody types. So yes, there are mechanisms to increase the rate of errors. What Davies and Lineweaver are proposing is that cancer cells are actively and purposefully switching on these mutation-generating processes to “search ancient genome space”, whatever the hell that is and however the hell inducing a greater frequency of random mutations would find and restore these imaginary “ancient genes”. Note also that they’re babbling about “atavistic recapitulation of pre-metazoan mechanisms” while discussing properties of the adaptive immune system. This makes no sense.

Genomic instability is one of the hallmarks of cancer. It’s not necessarily a planned sort of thing. If you disrupt DNA repair mechanisms with a mutation, you’ll get more mutations in other genes. If you mutate the gatekeeping proteins that act to ensure that cell division does not proceed if you have gross chromosomal errors, you will have more gross chromosomal errors. The fact that populations of cancer cells routinely accumulate new mutations as they progress is not an observation that supports the idea that cancers are a reversion to an ancestral healthy, single-celled state.

If you need to cleanse your palate after that dog’s breakfast of bad science, here’s a much more interesting paper on cancer: Carbon dating cancer: defining the chronology of metastatic progression in colorectal cancer. The investigators took advantage of a tragic series of events in a cancer patient: an initial biopsy initiated needle track seeding. That is, a few cells trickled out of the biopsy needle, which acted as the mechanism of metastasis, and started new tumors, so they knew precisely when these tumors were initiated. This allowed them to use standard phylogenetic methods on sequences from samples of the cancer as it progressed, and put together a history of mutations for the diversifying cells of the cancer.
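The paper does this with proper whole-genome phylogenetics, but the underlying trick is just a calibrated molecular clock, and it can be sketched with toy numbers (all invented here for illustration): the needle-track metastasis gives you one branch whose starting date is known exactly, which pins down a mutation rate you can then apply to the other branch points.

```python
# Toy calibrated-clock sketch. All numbers are invented for illustration;
# the actual study used whole-genome data and formal phylogenetic inference.

# Suppose the needle-track (chest wall) tumour, seeded at a precisely known
# date, accumulated 120 private mutations over the 2.0 years before sampling:
calibration_mutations = 120
calibration_years = 2.0
mutations_per_year = calibration_mutations / calibration_years  # 60/year (toy rate)

# That rate can then date other branch points from their private mutation counts:
lung_met_private_mutations = 300  # invented
years_since_lung_met_arose = lung_met_private_mutations / mutations_per_year
print(years_since_lung_met_arose)  # 5.0 years before sampling, in this toy example
```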

Chronology of the patient's CRC evolution. (A) Whole-genome sequencing of multiple lesions from the patient's malignancy allowed phylogenetic reconstruction of the tumour tree. In the phylogenetic tree, dates within brackets indicate the time of clinical diagnosis whereas dates in italic highlight the estimated times of the different lesions. (B) Illustrative cartoon of the patient disease progression and samples taken. At time t_c the first colorectal cancer cell arose, giving rise to the primary tumour. At time t_l the first metastatic clone emerged, giving rise to the lung metastasis, quickly followed by a second metastasis to the thyroid emerging at time t_t. At time t_cR the sample from the primary tumour was collected (resection) and analysed. During the lung biopsy, the needle tract seeding event spread cancer cells in the chest wall at time t_cw. A few weeks later, at time t_lR, the lung metastasis was resected and profiled. Finally at time t_cwR the chest wall metastasis was also sampled.

I know this excerpt from the paper is rather dense, but all you need to take away from it is that the different branches of the cancer pedigree have some common mutations, and also a constellation of unique mutations. Look, TP53, often called the "guardian of the genome", is completely taken out with a missense mutation, and whole chromosomes show LOH (loss of heterozygosity) — that is, the cancer cells have lost entire copies of chromosome 5, and in one subset, chromosomes 2, 9, and 22. This kind of wholesale chaos would not have been an adaptive response by a protistan ancestor.

Targeted sequencing of 409 cancer-related genes in both the primary CRC and all metastatic sites (lung, thyroid, chest wall and urinary tract) of our case revealed the presence of clonal non-sense mutations in APC (Gln1367*), as well as missense mutations in CTNNB1 (Leu156Gln), KRAS (Gly12Asp) and TP53 (Ser215Ile). The same mutations were not detected in the normal tissue or in the Hurthle adenoma. Two genes included in our panel showed discordance between primary and metastatic cancer: ADAMTS20, a metalloproteinase involved with cancer invasion and migration, and AKAP9 an A-kinase anchor protein which binds to the regulatory subunit of protein kinase A. The ADAMTS20 missense mutation Arg1885Thr was observed in the primary cancer but was not detected in either the lung, thyroid or chest wall metastases. Conversely, the AKAP9 missense mutation Ala3077Pro was found in all the metastatic sites but was not detected in the primary cancer. All these mutations were also found using WGS [Whole Genome Sequencing], furthermore ADAMTS20 and AKAP9 were validated by Sanger Sequencing. All lesions were microsatellite stable (MSS). Copy number analysis based on WGS data revealed a relative low level of chromosomal instability (CIN), with LOH of chromosome (chr) 5, gain of chr7, and a focal amplification on chr13q12.2-12.3, encompassing the VEGFR1 and CDX2 genes. These aberrations were clonal in all the CRC lesions, whereas the Hurthle adenoma showed a distinct profile characterized only by loss of chr2, chr9, and chr22. Primary tumour and metastatic sites displayed the same dominant, age related, mutational signature 1 which has previously observed in CRC as well as other cancer types.

Now that’s interesting stuff. Unfortunately, one of the lessons learned from analysis of other cancers is that every cancer is different — there are common modalities, such as the loss of certain checkpoint proteins, but there are multiple ways to achieve certain outcomes, and they don’t have to appear in any particular order. It definitely looks far less like a reactivation of a specific ancestral state and more like a derangement of regulatory functions.

Davies’ approach is kind of silly, too, a test designed to give whatever results the researchers want, with a set of observations that were basically a given with their parameters. For example, a lot of cancers mess up cell signaling to trigger uncontrolled growth. The signaling molecules are often factors like receptor tyrosine kinases (RTKs), which are only found in multicellular animals and one group of protists. So yes, it’s inevitable that you’ll find a lot of “young” (less than a billion years old) genes among your cancer-causing candidates. That observation does not support the idea of cancers being atavisms.

Perfectly innocuous, mundane video inspires hyperbole on the internet

I am mystified — the most trivial things get slapped with extravagant labels on the internet, and I'm experiencing hyperbole fatigue (actually, I wouldn't be surprised if some of the annoying ads that are generated for this post are full of this crap). The latest example is this "viral video" that is being described as "bizarre", "uncomfortable", "revolting", and "gross". It's none of those things. It's routine and commonplace. It's just a razor clam on an Oregon beach.

This copy has the most ordinary title: “clam digs into sand”.

My family used to dig for razor clams, and we knew how fast they could burrow. It wasn’t gross, it was wonderful: they dig by anchoring themselves with that muscular foot and expelling water to fluidize the sand around them, and then contracting muscles to pull themselves deeper into the muck, which then firms up around them. They were so fast at burrowing in that you needed special tools to keep up with them — a clam gun, which was a tube you’d push around the clam and then pull up to remove the clam and all the wet sand around it (that could be heavy work), or these narrow shovels that would let you dig fast. We’d walk along the beach or in the shallows, looking for spurts from their siphons or the little dimples they’d leave on the surface, and then you’d race to excavate them before they got away.

Here’s a video from the Washington state parks department on how to dig for razor clams.

They’re delicious, by the way. That clam is just one big hunk of almost pure muscle.

Also, that video shows what I’ve always thought of as a real beach: gray, cloudy, foggy, and wet, and going to the beach meant putting on denim and flannel and good solid boots, getting cold and damp, and coming home to a seafood feast. It was kind of the opposite of glamorous and weird, internet.

Suddenly, this seems important

So…when the giant meteor strikes, how will you die? There’s actually a study of potential deaths in an asteroid impact. I guess it shouldn’t surprise anyone, but the odds of being instantly disintegrated because you are replaced by a crater are very low; the most likely cause of death is from violent winds and the shock wave. Your house, for instance, will be shattered and you’ll just be one more piece of debris flung outwards amidst a tumbling mass of jagged shards of wood and metal and stone.

This study only analyzed short-term consequences, though. I suspect that even more deaths would follow from exposure and starvation and disease and roving feral gangs of Trumplicans.

The earth was a complicated place for intelligent species a quarter million years ago

When I first heard about Homo naledi, the question at the top of my mind was "How old is it?" It was a hominin, it looked fairly primitive with a small brain the size of a gorilla's, yet it was found in a mass "grave", where part of the mystery was how so many dead hominins ended up in this difficult-to-reach, hidden cave system in South Africa. The authors didn't report a date. Speculation ran from 3 million years old to 300 thousand years old, both dates seeming extreme and unlikely.

Now we have a date: between 236,000 and 335,000 years old. Astonishing.

That’s really young. Furthermore, they’ve found another chamber in the cave network with even more bones.

All indications are that this was a thriving population of little, primitive people.

The bones, remarkably, show few signs of disease or stress from poor development, suggesting that Homo naledi may have been the dominant species in the area at the time. “They are the healthiest dead things you’ll ever see,” said Berger.

Homo naledi stood about 150cm tall fully grown and weighed about 45kg. But it is extraordinary for its mixture of ancient and modern features. It has a small brain and curved fingers that are well-adapted for climbing, but the wrists, hands, legs and feet are more like those found on Neanderthals or modern humans. If the dating is accurate, Homo naledi may have emerged in Africa about two million years ago but held on to some of its more ancient features even as modern humans evolved.

It’s still a mystery how all these bones ended up in the caves. These don’t seem to be ceremonial burials, it’s more like they were chucking their dead down some hole to drop them in a deep cave.

Why wasn’t this machine in my life 35 years ago?

Let me tell you about this miserable year I had in grad school. Judith Eisen and I had figured out that there was this repeating pattern of spinal motoneurons in zebrafish — this was special because it meant that we had a new set of identified neurons, cells that we could name and recognize and come back to in fish after fish, and that had specific locations and targets. I had flippantly suggested that we name them Primary Zebrafish Motoneurons (PZM cells, get it?), but a colleague, Walt Metcalfe, talked me down from that bit of vanity — it is so 19th century to name a cell after yourself, even indirectly — and I came up with the rather more mundane names of CaP, MiP, and RoP, for caudal, middle, and rostral primary motoneurons, for their location within each segment. So yay, interesting result, and it fit well within the overarching project I was working on for my thesis, which was on the development of connectivity in the spinal cord.

Specifically, I was looking at how another famously named neuron, the Mauthner cell, grew an axon down the length of the spinal cord and hooked up to the motor neurons there. Mauthner is a command neuron; when it fires, it sends a signal to one side of the spinal cord, triggering the motoneurons on that side to make all the muscles contract — the fish bends vigorously and quickly to one side as part of an escape response. Finding out that our one named cell, Mauthner, was making synapses on another set of named cells, our primary motoneurons, was an opportunity to look at connectivity in an even more detailed way.

But then my committee asked a really annoying question: how do you know Mauthner is making synapses on CaP? Have you looked? Thus began my miserable year. I said no, but how hard can it be? I’ll just make a few ultrathin sections, look at them in the transmission electron microscope, snap a few pictures, and presto, mission accomplished. Except, of course, I hadn’t done EM work before. Our EM tech, Eric Shabtach, made it look easy.

So I started learning how to fix and section zebrafish embryos for EM. It turns out that was non-trivial. I was working with nasty chemicals, cocktails of paraformaldehyde, glutaraldehyde, and acetaldehyde, which all had to be just right or you’d end up with tissue blown up full of holes. I had to postfix with osmium tetroxide, with all the fun warnings about how just the fumes can fix your corneas. And then I had to master using an ultramicrotome and making glass knives, and cutting those fish just right. There were times I’d get the fixation perfect and then find I’d screwed up on the sectioning, and produced a lot of crap as the knife chattered across the section, or there was a bit of a nick in the blade that gouged furrows across every one. And then the way we got these extremely thin slices into the scope was to scoop them up on these delicate copper grids, and of course every time you were closing in on the synapse you wanted, that section would have the most interesting part fall right on an opaque copper grid wire. Or you’d find that that was the section you lost.

It takes a lot of skill and practice to do electron microscopy well, and it also takes a little luck, at least in the old days, to find the one thing you were looking for. I failed. I struggled for about a year, going in every day and prepping samples and spending hours slicing away at tiny dead embryos embedded in epoxy, before finally giving up and deciding I needed to do stuff that was more immediately successful, because I needed to do this graduation thing.

I still kind of cringe remembering that long fruitless year, but now I can ease my conscience by just telling myself the technology wasn’t yet ready. Here’s a cool new paper, Whole-Brain Serial-Section Electron Microscopy In Larval Zebrafish. They’ve automated the process. Just look at this goddamn machine, it’s beautiful:

Serial sectioning and ultrathin section library assembly for a 5.5 dpf larval zebrafish. a, Serial sections of resin-embedded samples were picked up with an automated tape-collecting ultramicrotome modified for compatibility with larger reels containing enough tape to accommodate tens of thousands of sections. b–c, Direct-to-tape sectioning resulted in consistent section spacing and orientation. Just as a section left the diamond knife (blue), it was caught by the tape. d, After serial sectioning, the tape was divided onto silicon wafers that functioned as a stage in a scanning electron microscope and formed an ultrathin section library. For a series containing all of a 5.5 dpf larval zebrafish brain, ~68 m of tape was divided onto 80 wafers (with ~227 sections per wafer). e, Wafer images were used as a coarse guide for targeting electron microscopic imaging. Fiducial markers (copper circles) further provided a reference for a per-wafer coordinate system, enabling storage of the position associated with each section and, thus, multiple rounds of re-imaging at varying resolutions as needed. f, Low-resolution overview micrographs (758.8×758.8×60 nm³ vx⁻¹) were acquired for each section to ascertain sectioning reliability and determine the extents of the ultrathin section library. Scale boxes: a, 5×5×5 cm³; b, 1×1×1 cm³; c, 1×1×1 mm³. Scale bars: e, 1 cm; f, 250 µm.

Then they scanned in all those tidily organized thin sections into the computer for reconstruction. I am impressed.

We next selected sub-regions within this imaging volume to capture areas of interest at higher resolutions using multi-scale imaging. We first performed nearly isotropic EM imaging by setting lateral resolution to match section thickness over the anterior-most 16000 sections. All cells are labelled in ssEM, so this volume offers a dense picture of the fine anatomy across the anterior quarter of the larval zebrafish including brain, sensory organs (e.g., eyes, ears, and olfactory pits), and other tissues. Furthermore, this resolution of 56.4×56.4×60 nm³/vx is ~500× greater than that afforded by diffraction-limited light microscopy. The imaged volume spanned 2.28×10⁸ µm³, consisted of 1.12×10¹² voxels, and occupied 2.4 terabytes (TB). In this data, one can reliably identify cell nuclei and track large calibre myelinated axons. To further resolve densely packed neuronal structures, a third round of imaging at 18.8×18.8×60 nm³/vx was performed to generate a high-resolution atlas specifically of the brain. The resulting image volume spanned 12546 sections, contained a volume of 5.49×10⁷ µm³, consisted of 2.36×10¹² voxels, and occupied 4.9 TB. Additional acquisition at higher magnifications was used to further inspect regions of interest, to resolve finer axons and dendrites, and to identify synaptic connections between neurons.

Thirty-five years ago we were storing most of our image data on VHS tape, and our computers all used floppies with about 100K capacity. I wonder how many floppies we would have needed to store all that? Oh, I did get my very first hard drive about the time I graduated, which held five million bytes. I was very proud.
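For the curious, here's the rough arithmetic, treating one of those old floppies as a round 100 kilobytes and using the 2.4 TB and 4.9 TB figures quoted above:

```python
# Rough floppy-disk arithmetic for the two EM image volumes reported in the paper.
FLOPPY_BYTES = 100_000               # ~100 KB per disk, a round figure
dataset_bytes = (2.4 + 4.9) * 1e12   # 2.4 TB + 4.9 TB

floppies_needed = dataset_bytes / FLOPPY_BYTES
print(f"{floppies_needed:,.0f} floppies")  # roughly 73,000,000 disks
```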

I was wondering if they actually had the EM section demonstrating the Mauthner-to-CaP synapse. Probably. Now it's such a minor issue, one that has already been demonstrated elsewhere with multiple techniques, that it isn't even mentioned. It's in their data set, though, I'm sure. They've reconstructed the entire axon arbor of CaP in serial EM sections.

The position of a caudal primary (CaP) motor neuron in the spinal cord and its innervation of myotome 6 projected onto a reslice through ~2200 serial sections.

2200 sections! I spent a year on that project and probably got half that number. I don’t know whether to cry or steal the data, invent a time machine, and go back and hand myself a photo at the start of that year.