Axon guidance mechanisms are thoroughly evolutionary in origin

The Discovery Institute thinks axon guidance mechanisms are evidence for intelligent design. I think they just trawl the scientific literature for the words “complex” and “purpose” and get really excited about the imaginary interpretations in their head of papers they don’t really understand.

There’s no mention of evolution here, nor in the full paper in Science. The paper, however, does use a notable word: purpose. “These findings identify NELL2 as an axon guidance cue and establish Robo3 as a multifunctional regulator of pathfinding that simultaneously mediates NELL2 repulsion, inhibits Slit repulsion, and facilitates Netrin attraction to achieve a common guidance purpose.” In fact, they use it again in their concluding sentence:

Our results also show that Robo3.1 serves as an integrative hub: Its three diverse actions in response to three different cues — mediating NELL2 repulsion from the motor column, potentiating midline Netrin-1 attraction, and antagonizing midline Slit repulsion — act simultaneously, are mutually reinforcing, and serve the common purpose of steering commissural axons toward and across the midline. This multiplicity of mechanisms likely helps ensure high-fidelity steering of axons to their targets.

It’s one of those occasions in biology (not rare) when the term “intelligent design,” despite other merits, falls flat as a description. This is super-intelligent ultra-design.

Getting axons in the nervous system to their proper destinations actually is a very complex problem: much wires, many connections, wow. If you look at complex systems like the brain, you shouldn’t be surprised that the mechanisms are complex. And further, because the functional requirements of those systems may demand that Neuron A navigate to Target B in order for the pattern to work, it’s easy to say that the purpose of those mechanisms is to hook A up to B. That does not imply the existence of a designer, only the existence of functional constraints.

But also, they picked a system with which I’m fairly familiar. Way back in the 1990s, this is what I did: try to figure out the rules behind commissural neuron migration, the very stuff the DI is talking about. I was focusing on a cellular approach — I was observing neurons that grew across the midline to contact cells on the opposite side of the nervous system — and I reached some of the same general answers that more recent research has discovered. The question was why an axon would grow all the way across the nervous system to reach a target that had a closer equivalent right next door, on the same side.

Here’s a simple cartoon version of the problem. Neuron A is supposed to, has the function of, has the purpose of connecting to Neuron B; in the normal animal, it grows all the way across the midline to touch the contralateral (on the opposite side) B neuron.


But the question remains: there’s a left B and a right B. Why doesn’t neuron A on the left side take the lazy shortcut and grow straight to the left B?


The answer we came up with in my work is that there is a hierarchy of interactions. A finds the midline much more attractive than B at first, so it grows to the middle of the animal; then, after a brief flirtation with the midline, it changes its priorities to favor B cells after all, and just keeps growing across the midline to find the other B. (Actually, what we found mattered most in changing the left A’s affinities was contact with the right A, which arrived at the midline at about the same time.)

We worked that out with direct observations of neuron behavior, and also a series of experiments in which we killed various cells A would interact with. What we didn’t know at all was what molecules were involved.

And that’s where the Discovery Institute is so wrong. We had a cellular description, but when other laboratories in the late 1990s started discovering the molecular signals involved, molecules like Netrin and Robo and Slit, it was a wonderful revelation. It’s like how on one level, you can see a car and watch it run and figure out general things like wheels and steering, but when you get out the wrenches and start taking the engine apart, you can really see the mechanistic basis of its operation. Every step deeper into the guts of the problem tends to reinforce our understanding that it’s fully natural, and was built around natural processes.

The other big shift was that we could now generalize to other organisms and pick apart the evolutionary foundations to these mechanisms. I was looking at specific cells in the grasshopper embryo, and we could see that those very same cells are present in other arthropods, but we didn’t have the tools to do molecular comparisons. Identifying the molecules responsible meant that we could ask if they were present in other organisms, whether they were conserved, and whether these molecular processes were used in multiple cells, rather than just the few I studied.

If the Discovery Institute had looked just a little bit harder (or had not intentionally chosen to ignore all the papers that studied the evolution of axon guidance mechanisms), they might have noticed that there’s a very interesting literature on how these molecules evolved. There are plenty of papers that survey the evolutionary pattern of axon guidance mechanisms.

When did axons and their guidance mechanisms originate in animal evolution? Many axon guidance receptors (e.g. type II RPTPs, Eph RTKs, and the DCC, UNC5 and Robo families) are related to CAMs of the immunoglobulin superfamily, suggesting that axon guidance mechanisms evolved from signaling pathways involved in general cell–cell or cell–ECM adhesion in an ancestral animal. The simplest animals with nervous systems are cnidarians, which have isopolar neurons arranged in ‘nerve nets’; simpler animals (e.g. sponges, mesozoans) have no recognizable neurons. Thus, neurons and their guidance mechanisms must have evolved in a common ancestor of all metazoans, but after the divergence of sponges (Figure 1). Intriguing recent work suggests that sponges, which have no discernible nervous systems, nevertheless contain a diverse set of receptor tyrosine kinases and RPTPs [54,102]. Thus, many of these molecules could have evolved prior to (and may have been necessary for) the evolution of nervous systems in the urbilaterian.

Many axon guidance mechanisms are not only conserved at the molecular level, but also at the level of the body plan (reviewed in [103]). For example, netrins are secreted from ectodermal cells at the ventral midline of nematodes and insects and from the floorplate of the spinal cord of vertebrates (dorsal midline ectoderm, homologous to the ventral ectoderm of insects). Thus, in an ancestral animal, circumferential movements of axons or cells around the dorsoventral axis were probably oriented towards or away from a midline netrin source, and perhaps also from a midline Slit source. Studies in the coming years are likely to reveal the extent to which the patterning roles of other guidance mechanisms have been retained during the evolution of different body plans, and may help further outline the likely organization of the nervous system of our primitive ancestors.

These molecules are also multifunctional and play roles in systems other than the nervous system. They’re important in organogenesis and the maturation of the reproductive system, and are part of an interactive network of cell signaling molecules. It’s really complex, but what the DI doesn’t appreciate is that biology and evolutionary processes are really, really good at generating complexity. Look at all the things the SLIT-ROBO system does!


You might notice that they play a role in cancer signaling, too, but then everything does.

Once again, the Intelligent Design creationists completely miss the point. The work on these axon patterning systems has been deeply informed by evolutionary perspectives, while the DI is reduced to mining for mentions of “complexity” in papers, as if that somehow supports their ignorance-based position.

Chisholm A, Tessier-Lavigne M (1999) Conservation and divergence of axon guidance mechanisms. Curr Opin Neurobiol. 9(5):603-15. (Note that this paper came out very shortly after the discovery of netrins, by the fellow who discovered them — evolutionary biology has been part of this story from the very beginning.)

Dickinson RE, Duncan WC (2010) The SLIT-ROBO pathway: a regulator of cell function with implications for the reproductive system. Reproduction 139(4):697-704.

Casey Luskin vs. Homo naledi

The Intelligent Design Creationists are always getting annoyed at the third word in that label — they’re not creationists, they insist, but something completely different. They’re scientists, they think. They’re just scientists who favor a different explanation for the diversity of life on Earth than those horrible Darwinist notions. But of course, everything about them just affirms that they’re simply jumped-up creationists with airs, from their founding by an evangelical Christian, Phillip Johnson, to their crop of fellows like Paul Nelson and William Dembski, who happily profess their science-denying faith to audiences of fellow evangelicals, to their stance on every single damn discovery that comes out of paleontology and molecular biology. The real misnomer is that they work at a think-tank called the Discovery Institute, when their response to every scientific discovery that confirms evolution is a spasm of jerking knees and a chorus of “uh-uh” and “no way”.

It makes no sense. They completely lack an intellectual framework for dealing with new findings in science, so instead of explaining how Intelligent Design “Science” better explains an observed phenomenon, they instead dredge up some entirely unqualified spokesperson to mumble half-baked, pseudo-scientific excuses for why those Darwinists have it all wrong.

Case in point: Homo naledi, the newly discovered South African species. If they actually were Intelligent Design “Scientists”, they’d respond with the same puzzled happiness that real scientists do: we’re not sure where to place this species in our family tree, but it’s very exciting, and it fits with our growing knowledge about the diversity of early hominins — there were lots of different hominin species, ancestral lines and dead-end branches alike, living at the same time on Earth right up to less than 100,000 years ago. This fact of the fossil data has been known since I was a wee young lad growing up reading about Louis and Mary Leakey in National Geographic. That multiple hominin species coexisted and overlapped in time is part of the body of data that we have, and it fits just fine with evolutionary theory. The history of a lineage is a braided stream, with populations branching off and diverging, sometimes dying off, other times merging with other branches. And we explain this pattern with theories about common descent, genetic drift, and selection.

[Read more…]

2 + 2 = 17, for certain values of 2

A while back, I responded to Behe’s claim (echoed by Luskin) that his model proving the impossibility of evolving chloroquine resistance had been vindicated. I pointed out (as did Ken Miller) that showing that a particular trait required multiple point mutations did not affect the probability in the naive way that Behe and Luskin calculated — in particular, it did not require that the mutations be simultaneous. We’re familiar with a great many known mutations that require multiple sequential hits to have their effect. I mentioned the work on steroid receptor evolution, and how cancer is an amazing example of the power of the accumulation of sequential variants.

Behe fired back, issuing a challenge to ‘show my numbers’. If he could have said anything to confirm that he was obliviously ignoring my point, that was it, and so I blew him a raspberry and ignored his challenge. I wasn’t arguing with his numbers, and we could even use his very own set of numbers — my point was in the operation he was doing with those numbers. His assumption is that you must have two mutations occur simultaneously, in the same individual, so that you simplistically multiply the probabilities together to get an improbably low frequency. I’m saying that’s invalid: these mutations can happen independently, they can accumulate to some frequency in the population, and then a second mutation can occur.

Now Larry Moran has carried through on the calculations. Using the known data on mutation rates, and throwing away Behe’s bogus demand that everything occur in the very same instant, he shows that the evolution of chloroquine resistance ought to be rare, but not at all impossible, and with frequencies that are in the ballpark of what is observed.
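To see why the operation matters more than the numbers, here’s a toy calculation; every figure in it (mutation rate, population size, carrier frequency) is an illustrative assumption, not Moran’s or Behe’s actual value:

```python
# Toy comparison of "simultaneous" vs. sequential two-mutation models.
# All numbers are illustrative assumptions, not anyone's published figures.
mu = 1e-9   # assumed per-site mutation rate per replication
N = 1e12    # assumed number of parasites in one infected host

# Behe-style requirement: both mutations in the same replication event.
p_simultaneous = mu * mu            # 1e-18 per parasite replication

# Sequential route: single mutants arise constantly, so the first hit
# is effectively free; once carriers make up a fraction f of the
# population, second hits occur among them every generation.
new_single_mutants = N * mu         # ~1000 new carriers per generation
f = 0.01                            # assumed carrier frequency after drift/selection
new_double_mutants = f * N * mu     # ~10 double mutants per generation
```

The point of the sketch: multiplying the two probabilities only describes the simultaneous case; allow the first mutation to accumulate in the population and the second hit becomes an ordinary, regularly occurring event.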

Furthermore, Moran describes a paper that quantifies the presence of malaria strains in the population containing pieces of the resistant combinations, and that further describes the sequential series of mutational events that led to the most resistant strains.

All of the strains (except D17) are found in naturally occurring Plasmodium populations and the probable pathways to each of the major chloroquine resistant strains are shown. It takes at least four sequential steps with one mutation becoming established in the population before another one occurs.

None of the mutations occurred simultaneously as Behe claimed in his book.

The intelligent design creationists are somehow still crowing victory. I don’t quite understand how — their premises have been demolished, they’ve been cut off at the knees, but I guess their followers are easily bamboozled if they shout “math!” loud enough. Even if the math is wrong.

There’s a secular argument for wearing underpants on your head. So?

Sarah Moglia points out that David Silverman has been saying some weird things recently.

Yesterday, an article was published about atheists at CPAC (Conservative Political Action Conference). Featured prominently in the article was Dave Silverman, president of American Atheists. In it, Dave was quoted as saying, “I will admit there is a secular argument against abortion. You can’t deny that it’s there, and it’s maybe not as clean cut as school prayer, right to die, and gay marriage.” Is that so?

I’m trying to figure out what this ‘secular argument’ actually is; he didn’t say. I have encountered anti-choice people tabling at an atheist convention, and they couldn’t say either — I got the impression these were actually religious people trying to evangelize to the atheists with a pretense, and they stood out oddly from the rest of the crowd…rather like an atheist shilling at CPAC. So speak up, Dave, tell us what these secular arguments are.

I’m also wary because in my business we’ve run into folks peddling religious bullshit under the guise of being secular before: we call them intelligent design creationists. No one is fooled. Similarly, the anti-choicers who claim to be making a rational secular argument are easy to see through, since they ultimately always rely on some magical perspective on the embryo.

But here’s the bottom line: it is not enough to make a purely secular argument. It has to also be a good argument, unless atheism is to become a smokescreen for nonsense, to be accepted purely because of its godless label. And then atheism might as well just be another religion.

Magic RNA editing!

One of those wacky Intelligent Design creationists (Jonathan McLatchie, an arrogant ignoramus I’ve actually met in Scotland) has a theory, which is his, to get around that obnoxious problem of pseudogenes. Pseudogenes are relics, broken copies of genes that litter the genome, and when you’ve got a gang of ideologues who are morally committed to the idea that every scrap of the genome is designed and functional, they put an obvious crimp in their claims.

So here’s this shattered gene littered with stop codons and with whole exons deleted and gone; how are you gonna call that “functional”, creationist? McLatchie’s solution: declare that it must still be functional, it’s just edited back into functionality. He uses the example of GULOP, a gene responsible for vitamin C synthesis, which is pretty much wrecked in us. Nonfunctional. Missing big bits. Scrambled. With missing regulatory elements, so it isn’t even transcribed. No problem: it’s just edited.

As I mentioned previously, the GULO gene in humans is rendered inactive by multiple stop codons and indel mutations. These prevent the mRNA transcript of the gene from being translated into a functional protein. If the GULO gene really is functional in utero, therefore, presumably it would require that the gene’s mRNA transcript undergo editing so that it can produce a functional protein. It’s not at all difficult to understand how this could occur.

Yes, RNA editing is a real thing. RNA does get processed before it’s translated into protein. McLatchie has a teeny-tiny bit of knowledge and is abusing it flagrantly.

I’ve hammered out dents in a car, and I’ve touched up rust spots with a little steel wool and a can of spray paint. My father was also an auto mechanic and could do wonders with a wrench. Auto repair exists, therefore…


…patching up that vehicle should be no problem at all, right? I expect to see it cruisin’ down the highway any time now.

Maybe two cans of spray paint this time…?

Historical and observational science

Dealing with various creationists, you quickly begin to recognize the different popular flavors out there.

The Intelligent Design creationists believe in argument from pseudoscientific assertion; “No natural process can produce complex specified information, other than Design,” they will thunder at you, and point to books by people with Ph.D.s and try to tell you they are scientific. They aren’t. Their central premise is false, and trivially so.

Followers of Eric Hovind are, I find, the most repellently ignorant of the bunch. They love that presuppositional apologetics wankery: presuppose god exists, therefore god exists. It’s like debating a particularly smug solipsist — don’t bother.

The most popular approach I’ve found, though, is the one that Ken Ham pushes. It’s got that delightful combination of arrogant pretense, in which the Bible-walloper gets to claim he understands science better than scientists, while simultaneously letting him deny every scientific observation, ever. This is the argument where they declare what kinds of science there are: evolutionary biologists are using the weak kind, historical science, while creationists use only the strong kind, observational science. They use the distinction wrongly and without any understanding of how science works, and they have no business claiming that they’re doing any kind of science at all.

A recent example of this behavior comes from Whirled Nut Daily, where I’m getting double-teamed by Ray Comfort and Ken Ham (don’t worry, I’m undaunted by the prospect of being ganged up on by clowns.)

According to Ken Ham’s blog at Answers in Genesis, Minnesota professor PZ Myers, who was interviewed by Comfort, said: “Lie harder, little man … Ray Comfort is pushing his new creationist movie with a lie. … What actually happened is that I briefly discussed the evidence for evolution – genetics and molecular biology of fish, transitional fossils, known phylogenies relating extant groups, and experimental work on bacterial evolution in the lab, and Ray Comfort simply denied it all – the bacteria were still bacteria, the fish were still fish.”

But Ham explained that Comfort “asks a question something like this: ‘Is there scientific evidence – observable evidence – to support evolution?’ Well, none of them could provide anything remotely scientific. Oh, they give the usual examples about changes in bacteria, different species of fish (like stickleback fish) and, as to be expected, Darwin’s finches. But as Ray points out over and over again in ‘Evolution vs. God,’ the bacteria are still bacteria, the fish are still fish, and the finches are still finches!”

Isn’t that what I said? I gave him evidence, which he denied by falling back on a typological fallacy: the bacteria are still bacteria. What he refuses to recognize is that they were quantitatively different bacteria, physiologically and genetically. To say that something is still X, where X is an incredibly large and diverse group like fish and bacteria, is to deny variation and diversity, observable properties of the natural world which are the fundamental bedrock of evolutionary theory.

But the giveaway is that brief phrase “scientific evidence — observable evidence”. That’s where the real sleight of hand occurs: both Comfort and Ham try to claim that all the evidence for evolution doesn’t count, because it’s not “observational”. “Were you there?” they ask, meaning that the only evidence they’ll accept is an eyewitness watching a complete transformation of one species into another. That is, they want the least reliable kind of evidence, for phenomena that are not visual. They’re freakin’ lying fools.

All scientific evidence is observational, but not in the naive sense that all that counts is what you see with your eyes. There is a sense in which some science is regarded as historical, but it’s not used in the way creationists do; it does not refer to science that describes events in the past.

Maybe some examples will make that clearer.

We can reconstruct the evolutionary history of fruit flies. We do this by observation. That does not mean we watch different species of fruit flies speciate before our eyes (although it has been found to occur in reasonable spans of time in the lab and the wild), it means we extract and analyze information from extant species — we take invisible genetic properties of the flies’ genomes and turn them into tables of data and strings of publishable code. We observe patterns in their genetics that allow us to determine patterns of historical change. Observation and history are intertwined. To deny the history is to deny the observations.
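As a cartoon of that kind of inference, here’s a minimal sketch with made-up 12-base sequences (hypothetical species names, not real fly data): count pairwise differences and treat the closest pair as the most recently diverged:

```python
# Minimal sketch of distance-based comparison, using made-up 12-base
# sequences (not real Drosophila data) to show how extant genomes
# carry a historical signal.
seqs = {
    "species_A": "ACGTACGTACGT",
    "species_B": "ACGTACGAACGT",  # 1 difference from A
    "species_C": "ACCTATGAAGGT",  # more divergent
}

def hamming(a, b):
    """Count mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

# Pairwise differences: the pair with the fewest changes is inferred
# to share the most recent common ancestor.
names = sorted(seqs)
dist = {(p, q): hamming(seqs[p], seqs[q])
        for i, p in enumerate(names) for q in names[i + 1:]}
closest = min(dist, key=dist.get)
```

Real phylogenetics uses explicit substitution models and far more data, but the principle is the same: present-day observations constrain past branching order.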

Paleontology is often labeled a historical science, but it doesn’t have the pejorative sense in which creationists use it, and it is definitely founded in observation. For instance, plesiosaurs: do you think scientists just invented them? No. We found their bones — we observed their remains embedded in rock — and further, we found evidence of a long history of variation and diversity. The sense in which the study of plesiosaurs is historical is that they’re all extinct, so there are no extant forms to examine, but it is still soundly based on observation. Paleontology may be largely historical, but it is still a legitimate science built on observation, measurement, and even prediction, and it also relies heavily on analysis of extant processes in geology, physics, and biology.

The reliance on falsehoods like this bizarre distinction between observational and historical science that the Hamites and Comfortians constantly make is one of the reasons you all ought to appreciate my saintly forbearance: every time I hear them make it, I feel a most uncivilized urge to strangle someone. I suppress it every time, though; I just tell myself it’s not their fault their brains were poisoned by Jesus.

The last intelligent creationist


Earlier today, Maggie Koerth-Baker posted this tweet:

I dig this graph, but I think it misses an outreach opportunity by ascribing common misconceptions to creationists only

It links to a diagram showing evolution as a linear path rather than a branching tree, and it got me thinking about terribly popular misconceptions about evolution that were started by smart people, and a doozy came to mind. A whole collection of doozies, actually, from one single terribly clever person.

You’ve all heard the stupid creationist objection to evolution — “if evolution is true, how come there are still monkeys?” — but have you ever wondered who the first person to come up with that criticism was? You might be surprised.

The first instance I’ve been able to find was by Richard Owen, head of the British Museum and one of the premier scientists of his day, and it was said in a rather notorious review of Darwin’s Origin, published in the Edinburgh Review in 1860. So not a stupid fellow, but one with an axe to grind, and also a creationist…but then, just about everyone was a creationist in 1860. Still, it’s a remarkable document.

Some background you need to know, though. This review was authored by Owen. When it needs to cite a scientist for its claims, it cites…Professor Richard Owen. It does so 11 times. Reading it with knowledge of its authorship really diminishes its authority to an amazing degree, and greatly inflates Owen’s appearance of pomposity.

It’s also an agonizing read. Darwin sometimes sounds a bit quaint and wordy nowadays, but at least he’s lucid and logical, and his writing flows well: I found Owen’s review to be a rough read, turgid and inelegant. I know I’ve got a bit of a bias which colors my opinion, but seriously, when you read the excerpt below, you’ll see what I mean.

On the other hand, if you read the whole thing, you’ll be struck by how it uses a whole collection of arguments that sound little different from what creationists say now, though it is considerably more erudite. I hate to give them advice, but if creationists tossed out the trash written by Gish and Ham and any of the hacks at the Discovery Institute and just regurgitated Owen’s words, there is a great deal that most of the warriors for evolution would have a tough time rebutting. Owen knew a lot of zoology, and he deploys it effectively to buttress some fundamentally flawed arguments.

Like this one. He doesn’t literally say “if evolution is true, how come there are still monkeys?” — he uses much more obscure examples and far more convoluted language, but it’s the same sentiment.

But has the free-swimming medusa, which bursts its way out of the ovicapsule of a campanularia, been developed out of inorganic particles? Or have certain elemental atoms suddenly flashed up into acalephal form? Has the polype-parent of the acalephe necessarily become extinct by virtue of such anomalous birth? May it not, and does it not proceed to propagate its own lower species in regard to form and organisation, notwithstanding its occasional production of another very different and higher kind. Is the fact of one animal giving birth to another not merely specifically, but generically and ordinally, distinct, a solitary one? Has not Cuvier, in a score or more of instances, placed the parent in one class, and the fruitful offspring in another class, of animals? Are the entire series of parthenogenetic phenomena to be of no account in the consideration of the supreme problem of the introduction of fresh specific forms into this planet? Are the transmutationists to monopolise the privilege of conceiving the possibility of the occurrence of unknown phenomena, to be the exclusive propounders of beliefs and surmises, to cry down every kindred barren speculation, and to allow no indulgence in any mere hypothesis save their own? Is it to be endured that every observer who points out a case to which transmutation, under whatever term disguised, is inapplicable, is to be set down by the refuted theorist as a believer in a mode of manufacturing a species which he never did believe in, and which may be inconceivable?

Doesn’t it sound so much more intelligent to ask, if evolution is true, why haven’t inorganic particles evolved into free-swimming medusae, and hey, why are there still polype-parents of the acalephe? Why aren’t we observing new forms bursting up out of the inanimate world in the same way they must have in Darwin’s version of the past?

The intelligent design creationists are also missing an opportunity. This is one of my favorite parts: Owen is snidely berating Darwin for thinking up this cunning new mechanism and then discarding the other ‘scientific’ mode of biological change…that is, divine creation. Transmutationists, as he calls evolutionists, are unable to see other ways that creation might work. “You can’t handle the truth!” is what he’s saying here.

Here it is assumed, as by Mr. Darwin, that no other mode of operation of a secondary law in the foundation of a form with distinct specific characters, can have been adopted by the Author of all creative laws than the one which the transmutationists have imagined. Any physiologist who may find the Lamarckian, or the more diffused and attenuated Darwinian, exposition of the law inapplicable to a species, such as the gorilla, considered as a step in the transmutative production of man, is forthwith clamoured against as one who swallows up every fact and every phenomenon regarding the origin and continuance of species ‘in the gigantic conception of a power intermittently exercised in the development, out of inorganic elements, of organisms the most bulky and complex, as well as the most minute and simple.’ Significantly characteristic of the partial view of organic phenomena taken by the transmutationists, and of their inadequacy to grapple with the working out and discovery of a great natural law, is their incompetency to discern the indications of any other origin of one specific form out of another preceding it, save by their way of gradual change through a series of varieties assumed to have become extinct.

Similarly, Owen seizes on Darwin’s remark that all life descended from one primordial form “into which life was first breathed” to chastise him for limiting god:

By the latter scriptural phrase, it may be inferred that Mr. Darwin formally recognises, in the so-limited beginning, a direct creative act, something like that supernatural or miraculous one which, in the preceding page, he defines, as ‘certain elemental atoms which have been commanded suddenly to flash into living tissues.’ He has, doubtless, framed in his imagination some idea of the common organic prototype; but he refrains from submitting it to criticism. He leaves us to imagine our globe, void, but so advanced as to be under the conditions which render life possible; and he then restricts the Divine power of breathing life into organic form to its minimum of direct operation.

I have some sympathy for this argument, and I think Darwin himself regretted making that one concession, because as we can see, creationists will seize any excuse to invoke their personal god.

There’s also a section where he chides Darwin for not giving enough credit to Lamarck, and another where he favorably cites Buffon for his idea that species are mutable to a limited degree (Owen himself accepted some range of change over time), and calculated that all mammals could be reduced to 15 basic stocks. Creationists calculating storage space on the ark, take notice.

So yes, a lot of creationist arguments have their source not in really stupid people, but in some very intelligent and scientifically conservative people in the past. The problem is that modern creationists are clinging to rotten antique ideas that have long been dismantled. I’d also point out that creationist arguments have decayed: Owen’s writing, opaque and pretentious as it is, is far more challenging than anything I’ve seen from his degraded intellectual descendants.

I think if I were teaching a course in anti-creationism, I’d give this essay to my students and we’d spend about a week taking it apart — it would be a good exercise for them. And oh, they would hate me for it.

The Genetic Code is not a synonym for the Bible Code

Oh, boy. The Intelligent Design creationists are all excited about a new paper that purports to have identified an intelligent signal in the genetic code.

Here’s a new paper that can be added to the growing stack of intelligent-design articles in peer-reviewed journals. Even though the authors do not use the phrase “intelligent design,” their reasoning centers on the detection of an intelligent signal embedded in the genetic code — a mathematical and semantic message that cannot be accounted for by a natural cause, “be it Darwinian, Lamarckian,” chemical affinities or energetics, or any other.

I’ve read the paper by shCherbak and Makukov, and by golly, the Discovery Institute flack really has accurately summarized the paper: it does explicitly and clearly claim to have identified evidence of design in the genetic code! That’s newsworthy in itself, that the creationists can accurately summarize a scientific paper…as long as the results conform to their ideological expectations.

Unfortunately, what they’ve so honestly described is good old honest garbage.

Here’s the short summary of what they do: they jigger the identities of the amino acids coded for by each codon into a number, a nucleon sum. What is that, you might ask? It’s determined by adding up the number of protons and neutrons in the amino acid, which is simply the mass number of the compound. Further, you can divide the amino acid into its R group and the atoms that make up the peptide chain proper, which they call the B group, for standard block. The mass number of the B group is always 74, except for proline, so they transfer a hydrogen from the R group to the proline B group to bring it up to 74, and by the way, did you notice that 74 is two times 37, which is a prime number? Now if you take all the three-digit decimals with identical digits (111, 222, 333…999) and divide each one by the sum of its digits (111/3, 222/6, 333/9, etc.), the quotient is always…37!!!1!!
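The repdigit trick dissolves under a couple of lines of arithmetic. A minimal sketch (plain Python, nothing from the paper itself): every three-digit repdigit is d × 111, its digit sum is 3d, and so the quotient is 111/3 = 37 by construction, for every d.

```python
# The "miraculous" 37: every three-digit repdigit ddd equals d * 111,
# and its digit sum is 3 * d, so the quotient is always 111 / 3 = 37.
for d in range(1, 10):
    repdigit = int(str(d) * 3)   # 111, 222, ..., 999
    digit_sum = 3 * d            # e.g. 3 + 3 + 3 = 9
    assert repdigit // digit_sum == 37
print("repdigit / digit sum is always 37 -- because 111 = 3 * 37")
```

In other words, the "pattern" is a tautology about base-10 notation, not a property of amino acids.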

Are you impressed yet? This is simply numerology, juggling highly derived quantities that have little to do with functional properties of the molecules to come up with arbitrary numerical relationships, and then claiming that they’re somehow significant. They also play games with the sums of the mass numbers of just the R groups for certain codons, adding or subtracting the B number, finagling things until they get numbers that are evenly divisible by their magic prime number of 37, etc. It’s pure nonsense through and through.

But every once in a while, something sensible emerges out of the murk. Here’s the logic of their argument:

To be considered unambiguously as an intelligent signal, any patterns in the code must satisfy the following two criteria: (1) they must be highly significant statistically and (2) not only must they possess intelligent-like features, but they should be inconsistent in principle with any natural process, be it Darwinian or Lamarckian evolution, driven by amino acid biosynthesis, genomic changes, affinities between (anti)codons and amino acids, selection for the increased diversity of proteins, energetics of codon-anticodon interactions, or various pre-translational mechanisms.

(1) is simply saying that there must be a pattern of some sort — if the code were purely random assignment of arbitrary nucleotides to each amino acid, it wouldn’t be much of a sign — it would suggest that the sequence is noise, not signal. (2) is the really hard part, the one where you’d have to do a lot of work: you’d have to show that natural processes did not contribute to the pattern. They do not do that. They can’t do that. They take a different and curious tack.

They literally argue that because organizing the code by their nucleon sums makes no sense and has no reasonable functional consequences…therefore it must be an artificial and intentional feature. I’ve heard this argument before. It’s called the Chewbacca defense. Ladies and gentlemen, think about it: that does not make sense! If nucleon numbers show a mathematical pattern of any kind in their relationship to codons, you must accept the existence of a designer.

However, if we can show a natural property that leads to the organization of the genetic code, then I’m afraid their argument evaporates. Even more so than building an argument on the Chewbacca defense, that is.

There’s a very good discussion of the genetic code in Nick Lane’s book, Life Ascending: The Ten Great Inventions of Evolution, and I’ll briefly summarize it.

First, there is a pattern to the genetic code! No one has ever denied that; it’s obviously not the case that amino acids are randomly assigned to trios of nucleotides. Here’s the code:


Let’s look at one amino acid, glycine (Gly), down in the bottom right corner. The genetic code is degenerate: that means that most amino acids have multiple combinations of nucleotides that can specify them. Glycine’s codons are GGU, GGC, GGA, and GGG. Do you see a pattern? The code is actually GG_, where the third position has a lot of slack or wobble, and any nucleotide will do. We see similar cases where just the first two nucleotides are sufficient to specify leucine, valine, serine, proline, threonine, alanine, and arginine. Even with the other amino acids, there are some constraints; CA_ can identify histidine or glutamine, but if the third letter is a pyrimidine (U or C), you get histidine, while if it’s a purine (A or G), you get glutamine. There are patterns all over the place here! So of course shCherbak and Makukov could find evidence of significant organization.
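The wobble pattern is easy to check mechanically. A minimal sketch, using a hand-entered excerpt of the standard code table covering just the codons mentioned above:

```python
# Hand-entered excerpt of the standard genetic code (RNA alphabet),
# just the glycine and histidine/glutamine codons discussed in the text.
codon_table = {
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "CAU": "His", "CAC": "His", "CAA": "Gln", "CAG": "Gln",
}

# Glycine: the first two letters fully determine the amino acid (GG_).
assert {aa for codon, aa in codon_table.items() if codon.startswith("GG")} == {"Gly"}

# CA_ narrows it to two amino acids; the purine/pyrimidine identity of
# the third letter decides between them.
for codon, aa in codon_table.items():
    if codon.startswith("CA"):
        expected = "His" if codon[2] in "UC" else "Gln"  # pyrimidine vs. purine
        assert aa == expected
print("GG_ is always Gly; CA(U/C) is His, CA(A/G) is Gln")
```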

But there’s more. There are other rules associated with this pattern.

In the synthesis of these amino acids, biochemistry typically modifies a raw starting material. The first letter of the codon says something about the biosynthesis of the associated amino acid.

If the first letter is:
• C, then the amino acid is derived from alpha-ketoglutarate.
• A, then the amino acid is derived from oxaloacetate.
• T, then the amino acid is derived from pyruvate.
• G, then the amino acid is derived in a single step from simple precursors.

The second letter of the codon is correlated with chemical properties of the amino acid.

If the second letter is:
• A, then the amino acid is hydrophilic.
• T, then the amino acid is hydrophobic.
• G or C, the amino acid has an intermediate hydrophobicity.
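Those two correlations can be written down as simple lookup tables, which is all they are. A sketch using the DNA letters from the lists above; the helper name `codon_chemistry` is my invention, and real biosynthesis is messier than any one-letter rule:

```python
# The two correlations from the lists above, as lookup tables.
# Illustrative only -- the underlying biochemistry is not this tidy.
precursor_by_first = {
    "C": "alpha-ketoglutarate",
    "A": "oxaloacetate",
    "T": "pyruvate",
    "G": "simple precursors (single step)",
}
hydrophobicity_by_second = {
    "A": "hydrophilic",
    "T": "hydrophobic",
    "G": "intermediate",
    "C": "intermediate",
}

def codon_chemistry(codon):
    """Return the (precursor, hydrophobicity) hints for a DNA codon."""
    return precursor_by_first[codon[0]], hydrophobicity_by_second[codon[1]]

# e.g. a GT_ codon: per the lists, simple precursors + hydrophobic
print(codon_chemistry("GTT"))
```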

Wait…so there’s a pattern to the genetic code, and that pattern is associated with the physical properties of the amino acids? Why, that makes sense. Chewbacca is routed! The most likely origin of the code lies in the catalytic properties of dinucleotides: pairs of nucleotides in ancient organisms initially functioned as proto-enzymes before they were incorporated into strings of coding information. At least that provides a historical physico-chemical route to the particular code we now have that does not require weird numerological masturbation.

It’s rather pathetic that the Discovery Institute thinks this is a beautiful piece of science. It’s not. It’s nonsense. But look how the DI spins this story:

How will evolutionists respond to this paper? It’s hard to see how they could dismiss it. Maybe they will try to mock it as old Arabian numerology, or religiously inspired (since Kazakhstan, which funded the study, is 70% Muslim). Those would be unfair criticisms. The authors have Russian names, certified doctorates, and wrote in collaboration with leading lights in the West. Or perhaps critics could argue that the authors hail from a foreign country whose name has too many adjacent consonants in it to take them seriously.

No, it appears the only way out for Darwinists would be the “Dawkins Dodge.” You may remember that one from the documentary Expelled, where Dawkins admits the possibility of panspermia for Earth, so long as the designers themselves evolved by a Darwinian process.

What’s most notable about this paper is the similarity in design reasoning between the authors and the more familiar advocates of intelligent design theory. No appeals to religion or religious texts; no identifying the designer; just logical reasoning from effect to sufficient cause. The authors even applied the “design filter” by considering chance and natural law, including natural selection, before inferring design.

If Darwinists want to go on equating intelligent design with creationism, they will now have to take on the very secular journal Icarus.

I didn’t even consider the religious or ethnic basis of this study; it didn’t come to mind at all. It is clearly simple stupid numerology, though. Look at the rationale given for all of the conclusions, which consist entirely of mathematical manipulations of arbitrary derived properties of the molecules, to arrive at a claim of prime number significance.

We certainly don’t need to invoke panspermia. Nothing in the genetic code requires design, and the authors haven’t demonstrated otherwise.

I am most amused by the cute parallelism of claiming surprise that the authors of this paper use “design reasoning” similar to that used by American Intelligent Design creationists. They’ve been slinging this slop for decades; why be impressed that another set of Intelligent Design creationists in Kazakhstan are using the same tired tropes?

I’m also not impressed with the failure of implementation of their logic. OK, they have a ‘design filter’ that they apply, but so what? Their methods failed to recognize a well-known functional association in the genetic code; they did not rule out the operation of natural law before rushing to falsely infer design.

And that last bit…I don’t care what journal it was published in. The prestige of a journal does not confer infallibility, and even the best of journals will occasionally publish crap. They will be especially likely to publish garbage when they stretch beyond the expertise of their reviewers. Icarus is a journal of planetary science that publishes primarily on astronomy and geology. This particular paper conveniently falls between the cracks — it’s a weird paper full of trivial arithmetical manipulations for arcane purposes with no scientific justification for any of its procedures. I don’t know how it got accepted for publication, other than by boring the reviewers with its incomprehensible digit fiddling.

One last thing: don’t rush to claim a secular purpose behind this work. It’s already been appropriated by freaky strange religious fanatics and lovers of the bible codes. You can’t blame shCherbak directly for this weirdo’s interpretations, but he certainly isn’t far removed in temperament.

The facts presented on this site, when combined with those now revealed to us by shCherbak, constitute invincible evidence of the truth of the Judeo-Christian Scriptures, and of the Being and Sovereignty of their Divine Author.

Yeah, numerology. Nothing but wanking over tables.

Larry Moran has more — it turns out that Uncommon Descent and Cornelius Hunter also liked this paper. Flies are drawn to shit, I guess.

ENCODE gets a public reaming

I rarely laugh out loud when reading science papers, but sometimes one comes along that triggers the response automatically. Although, in this case, it wasn’t so much a belly laugh as an evil chortle, and an occasional grim snicker. Dan Graur and his colleagues have written a rebuttal to the claims of the ENCODE research consortium — the group that claimed to have identified function in 80% of the genome, but actually discovered that a formula of 80% hype gets you the attention of the world press. It was a sad event: a huge amount of work on analyzing the genome by hundreds of labs got sidetracked by a few clueless statements made up front in the primary paper, making it look like they were led by ignoramuses who had no conception of the biology behind their project.

Now Graur and friends haven’t just poked a hole in the balloon, they’ve set it on fire (the humanity!), pissed on the ashes, and dumped them in a cesspit. At times it feels a bit…excessive, you know, but still, they make some very strong arguments. And look, you can read the whole article, On the immortality of television sets: “function” in the human genome according to the evolution-free gospel of ENCODE, for free — it’s open access. So I’ll just mention a few of the highlights.

I’d originally criticized it because the ENCODE argument was patently ridiculous. Their claim to have assigned ‘function’ to 80% (and Ewan Birney even expected it to converge on 100%) of the genome boiled down to this:

The vast majority (80.4%) of the human genome participates in at least one biochemical RNA- and/or chromatin-associated event in at least one cell type.

So if a transcription factor ever, in any cell, bound however briefly to a stretch of DNA, they declared it to be functional. That’s nonsense. The activity of the cell is biochemical: it’s stochastic. Individual proteins will adhere to any isolated stretch of DNA that might have a sequence that matches a binding pocket, but that doesn’t necessarily mean that the constellation of enhancers and promoters is present and that the whole weight of the transcriptional machinery will regularly operate there. This is a noisy system.

The Graur paper rips into the ENCODE interpretations on many other grounds, however. Here’s the abstract to give you a summary of the violations of logic and evidence that ENCODE made, and also to give you a taste of the snark level in the rest of the paper.

A recent slew of ENCODE Consortium publications, specifically the article signed by all Consortium members, put forward the idea that more than 80% of the human genome is functional. This claim flies in the face of current estimates according to which the fraction of the genome that is evolutionarily conserved through purifying selection is under 10%. Thus, according to the ENCODE Consortium, a biological function can be maintained indefinitely without selection, which implies that at least 80 − 10 = 70% of the genome is perfectly invulnerable to deleterious mutations, either because no mutation can ever occur in these “functional” regions, or because no mutation in these regions can ever be deleterious. This absurd conclusion was reached through various means, chiefly (1) by employing the seldom used “causal role” definition of biological function and then applying it inconsistently to different biochemical properties, (2) by committing a logical fallacy known as “affirming the consequent,” (3) by failing to appreciate the crucial difference between “junk DNA” and “garbage DNA,” (4) by using analytical methods that yield biased errors and inflate estimates of functionality, (5) by favoring statistical sensitivity over specificity, and (6) by emphasizing statistical significance rather than the magnitude of the effect. Here, we detail the many logical and methodological transgressions involved in assigning functionality to almost every nucleotide in the human genome. The ENCODE results were predicted by one of its authors to necessitate the rewriting of textbooks. We agree, many textbooks dealing with marketing, mass-media hype, and public relations may well have to be rewritten.

You may be wondering about the curious title of the paper and its reference to immortal televisions. That comes from (1): that function has to be defined in a context, and that the only reasonable context for a gene sequence is to identify its contribution to evolutionary fitness.

The causal role concept of function can lead to bizarre outcomes in the biological sciences. For example, while the selected effect function of the heart can be stated unambiguously to be the pumping of blood, the heart may be assigned many additional causal role functions, such as adding 300 grams to body weight, producing sounds, and preventing the pericardium from deflating onto itself. As a result, most biologists use the selected effect concept of function, following the Dobzhanskyan dictum according to which biological sense can only be derived from evolutionary context.

The ENCODE group could only declare function for a sequence by ignoring all other context than the local and immediate effect of a chemical interaction — it was the work of short-sighted chemists who grind the organism into slime, or worse yet, only see it as a set of bits in a highly reduced form in a computer database.

From an evolutionary viewpoint, a function can be assigned to a DNA sequence if and only if it is possible to destroy it. All functional entities in the universe can be rendered nonfunctional by the ravages of time, entropy, mutation, and what have you. Unless a genomic functionality is actively protected by selection, it will accumulate deleterious mutations and will cease to be functional. The absurd alternative, which unfortunately was adopted by ENCODE, is to assume that no deleterious mutations can ever occur in the regions they have deemed to be functional. Such an assumption is akin to claiming that a television set left on and unattended will still be in working condition after a million years because no natural events, such as rust, erosion, static electricity, and earthquakes can affect it. The convoluted rationale for the decision to discard evolutionary conservation and constraint as the arbiters of functionality put forward by a lead ENCODE author (Stamatoyannopoulos 2012) is groundless and self-serving.
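The television-set argument has a simple quantitative shape: without purifying selection removing changes, mutations accumulate and sequence identity decays toward randomness. A toy simulation of that decay (arbitrary rates and lengths, not a real evolutionary model):

```python
import random

# Toy "unattended television": a sequence mutates neutrally, with no
# selection to remove changes, and identity to the original decays.
# All numbers here are arbitrary illustration values.
random.seed(0)
bases = "ACGT"
original = "".join(random.choice(bases) for _ in range(1000))
seq = original

for _ in range(2000):  # 2000 unselected point mutations
    i = random.randrange(len(seq))
    seq = seq[:i] + random.choice(bases) + seq[i + 1:]

identity = sum(a == b for a, b in zip(seq, original)) / len(original)
print(f"identity after 2000 unselected mutations: {identity:.0%}")
```

Left alone long enough, the sequence becomes indistinguishable from random noise, which is exactly why evolutionary conservation is evidence of function.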

There is a lot of very useful material in the rest of the paper — in particular, if you’re not familiar with this stuff, it’s a very good primer in elementary genomics. The subtext here is that there are some dunces at ENCODE who need to be sat down and taught the basics of their field. I am not by any means a genomics expert, but I know enough to be embarrassed (and cruelly amused) at the dressing down being given.

One thing in particular leapt out at me as fundamental and insightful, though. A common theme in these kinds of studies is the compromise between sensitivity and selectivity, between false negatives and false positives, between Type II and Type I errors. This isn’t just a failure to understand basic biology and biochemistry, but incomprehension about basic statistics.

At this point, we must ask ourselves, what is the aim of ENCODE: Is it to identify every possible functional element at the expense of increasing the number of elements that are falsely identified as functional? Or is it to create a list of functional elements that is as free of false positives as possible. If the former, then sensitivity should be favored over selectivity; if the latter then selectivity should be favored over sensitivity. ENCODE chose to bias its results by excessively favoring sensitivity over specificity. In fact, they could have saved millions of dollars and many thousands of research hours by ignoring selectivity altogether, and proclaiming a priori that 100% of the genome is functional. Not one functional element would have been missed by using this procedure.
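Graur et al.’s point can be made with toy numbers. Assume, purely for illustration, a genome in which ~10% is truly functional; calling every base functional then gives perfect sensitivity and zero specificity by construction (all figures and names below are made up):

```python
# Toy genome: assume ~10% is truly functional (illustrative numbers).
genome_size = 1_000_000
truly_functional = 100_000

def scores(called_functional, true_positives):
    """Sensitivity and specificity for a call set of the given size."""
    sensitivity = true_positives / truly_functional
    false_positives = called_functional - true_positives
    true_negatives = (genome_size - truly_functional) - false_positives
    specificity = true_negatives / (genome_size - truly_functional)
    return sensitivity, specificity

# "Everything is functional": catches every real element -- and everything else.
sens, spec = scores(called_functional=genome_size, true_positives=truly_functional)
print(sens, spec)  # 1.0 0.0
```

A procedure that can never miss anything because it never says no isn’t a detector; it’s a rubber stamp.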

This is a huge problem in ENCODE’s work. Reading Birney’s commentary on the process, you get a clear impression that they regarded it as a triumph every time they got even the slightest hint that a stretch of DNA might be bound by some protein — they were terribly uncritical and grasped at the feeblest straws to rationalize ‘function’ everywhere they looked. They wanted everything to be functional, and rather than taking the critical scientific view of trying to disprove their own claims, they went wild and accepted every feeble excuse to justify them.

The Intelligent Design creationists get a shout-out — they’ll be pleased and claim it confirms the validity of their contributions to real science. Unfortunately for the IDiots, it is not a kind mention, but a flat rejection.

We urge biologists not be afraid of junk DNA. The only people that should be afraid are those claiming that natural processes are insufficient to explain life and that evolutionary theory should be supplemented or supplanted by an intelligent designer (e.g., Dembski 1998; Wells 2004). ENCODE’s take-home message that everything has a function implies purpose, and purpose is the only thing that evolution cannot provide. Needless to say, in light of our investigation of the ENCODE publication, it is safe to state that the news concerning the death of “junk DNA” have been greatly exaggerated.

Another interesting point is the contrast between big science and small science. As a microscopically tiny science guy, getting by on a shoestring budget and undergraduate assistance, I like this summary.

The Editor-in-Chief of Science, Bruce Alberts, has recently expressed concern about the future of “small science,” given that ENCODE-style Big Science grabs the headlines that decision makers so dearly love (Alberts 2012). Actually, the main function of Big Science is to generate massive amounts of reliable and easily accessible data. The road from data to wisdom is quite long and convoluted (Royar 1994). Insight, understanding, and scientific progress are generally achieved by “small science.” The Human Genome Project is a marvelous example of “big science,” as are the Sloan Digital Sky Survey (Abazajian et al. 2009) and the Tree of Life Web Project (Maddison et al. 2007).

Probably the most controversial part of the paper, though, is that the authors conclude that ENCODE fails as a provider of Big Science.

Unfortunately, the ENCODE data are neither easily accessible nor very useful—without ENCODE, researchers would have had to examine 3.5 billion nucleotides in search of function, with ENCODE, they would have to sift through 2.7 billion nucleotides. ENCODE’s biggest scientific sin was not being satisfied with its role as data provider; it assumed the small-science role of interpreter of the data, thereby performing a kind of textual hermeneutics on a 3.5-billion-long DNA text. Unfortunately, ENCODE disregarded the rules of scientific interpretation and adopted a position common to many types of theological hermeneutics, whereby every letter in a text is assumed a priori to have a meaning.

Ouch. Did he just compare ENCODE to theology? Yes, he did. Which also explains why the Intelligent Design creationists are so happy with its bogus conclusions.

The Gumby Gambit

Tom Bethell is a fellow traveller with the Intelligent Design creationists of the Discovery Institute; he often publishes on their website, and he’s the author of quite a few books questioning the dogma of science. He also thinks he’s a polymath: he wrote Questioning Einstein: Is Relativity Necessary?, which claims that Einstein was wrong, and he also wrote The Politically Incorrect Guide to Science, which claims that radiation is good for you, there is no global climate change going on, Shakespeare didn’t write those plays, and evolution is bunk, among many other remarkable assertions.

He’s a gumbyesque crackpot, in other words.

His latest effort is a rant on l’affaire greenscreen in which he explains natural selection to us. Read on; you will be in awe as Mr Gumby bellows out his definitions and explanations. He gets everything absolutely backwards.

An analogous situation arises with varieties of bacteria that are immune to antibiotics. The immune varieties are suddenly “fit” and so they survive. But the word “adaptation” is misleading because the immune varieties have to appear first. They don’t “adapt,” or reshape themselves in recognition of the suddenly hostile environment. They are not like people who “adapt” to cold weather by putting on overcoats. They are like people who accidentally had overcoats on before the cold snap came.

NS is not supposed to be an explanation of how we get more of something; a dark moth, for example. It’s supposed to show how the moth itself arose. And that is what the Darwinists have never been able to demonstrate; not just with moths but with anything else. That’s why I hesitate to call NS “real.” Well, I guess it is, as long as it’s defined narrowly enough.

Read that last paragraph again. It’s a marvel. Tom Bethell doesn’t have even a basic understanding of the principle of natural selection; he doesn’t even understand it as well as Darwin, who wrote it up in 1859.

Natural selection is an explanation of how we get more (or less) of something; it describes one mode of change in the frequency of a trait in a population over multiple generations. It is not about physiological adaptation, but about changes in allele frequency. That’s all biologists have claimed for the concept, ever; it’s one of the things population geneticists have lots of math to describe.
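That allele-frequency bookkeeping is standard population genetics, and it takes only a few lines to show. A minimal sketch of one-locus haploid selection; the “overcoat” allele and the fitness values are illustrative inventions, not data:

```python
# One locus, two alleles, haploid selection: the standard recursion
# p' = p * w_A / w_bar. Selection changes the *frequency* of a variant
# that already exists; it does not conjure the variant into being.
p = 0.01                       # initial frequency of the "overcoat" allele
w_coat, w_bare = 1.05, 1.00    # arbitrary relative fitnesses

for generation in range(200):
    mean_fitness = p * w_coat + (1 - p) * w_bare
    p = p * w_coat / mean_fitness

print(f"allele frequency after 200 generations: {p:.2f}")
```

A rare variant with a modest fitness edge sweeps toward fixation over a few hundred generations — that spread through the population is what natural selection describes, nothing more.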

Natural selection is not an explanation for how evolutionary novelties arise in the first place. For that, we have to look at mutations and subtler enabling changes that facilitate the emergence of new phenotypes, like recombination and genetic accommodation. The idea that variation in the environment can induce appropriate changes in heritable traits of organisms is the discarded notion of Lamarckian inheritance — we don’t see evidence of that.

He gets it all completely wrong. Even more remarkably, he gets it wrong after giving a useful analogy with his overcoat example.

Yes, natural selection works exactly like “people who accidentally had overcoats on before the cold snap came.” That’s Darwin’s key insight and Bethell’s key failure: natural selection isn’t about how individuals adapt, it’s about how populations adapt by winnowing out less fit individuals (those who don’t have an overcoat) and promoting the more fit individuals (those who happened to have an overcoat, and will pass it on to their children).

I really don’t understand how someone could write a whole book with chapters about evolution and not grasp that beautiful, simple, elegant idea. I suppose it’s the same way someone with no understanding of physics could write a whole book with no math in it disproving Einstein.

Isn’t it revealing, though, how the Discovery Institute promotes people like Bethell and Gauger who have no understanding of the field they aim to disprove? It’s as if the only people they can find who share their goals are all incompetents with delusions of understanding the science about as well as a reasonable high school student.