Reading for non-psychopaths

I’ve met Jon Ronson a few times, including this past weekend, but I can’t say I really know him — we’ve exchanged a few words, I’ve heard him give a talk, I know him as the short intense guy with the very bad hair, slightly neurotic, expressive, and funny. But I read his book, The Psychopath Test: A Journey Through the Madness Industry(amzn/b&n/abe/pwll) this afternoon while on the train from Cheltenham to Heathrow, and I have this idea of who he is.

He’s Hemingway. Yeah, you know, the macho, hard-drinking man’s man who would run with the bulls or fish for the marlin or pretend to be a war correspondent, who was all about his image as a man of danger or adventure. Only Ronson is more of a nerd’s nerd, and instead of physical danger, he’s all about deep psychic weirdness. So he writes about conspiracy theorists in Them(amzn/b&n/abe/pwll), or weirdos with purported psychic powers in The Men Who Stare at Goats(amzn/b&n/abe/pwll), or he charges off to interview the Insane Clown Posse. And now, in this book, he hangs out with psychopaths. He’s much freakier than Papa.

So the book is a tour of psychopaths and the people who study them. I don’t quite understand how a guy as anxious as Ronson could do it, which again speaks to his nerd machismo. Early on, we get introduced to the criteria for psychopathy:

1 Glibness/superficial charm
2 Grandiose sense of self-worth
3 Need for stimulation/proneness to boredom
4 Pathological lying
5 Cunning/manipulative
6 Lack of remorse or guilt
7 Shallow affect
8 Callous/lack of empathy
9 Parasitic lifestyle
10 Poor behavioural controls
11 Promiscuous sexual behaviour
12 Early behaviour problems
13 Lack of realistic long-term goals
14 Impulsivity
15 Irresponsibility
16 Failure to accept responsibility for own actions
17 Many short-term marital relationships
18 Juvenile delinquency
19 Revocation of conditional release
20 Criminal versatility

I read that and mentally checked off which characteristics fit me (I know you’re doing it yourself) — surprisingly, none of them fit me at all, which makes me a kind of anti-psychopath. But then I started thinking…isn’t that just exactly what a psychopath would say? All I’d need to do is be really good at #4 to conceal everything else. This is what happens as you read the book; everything gets all twisty and you start getting paranoid and confused, because you start applying the criteria to everyone you know.

Or to entire institutions. Just try assessing the Republican party or the Catholic church by that list. The psychos are everywhere.

It’s good reading, anyway. It kept me engrossed for the whole trip, and left me wide-awake and a little jumpy as I worked my way through the airport.

The new phrenology

Morphological variation is important, it’s interesting…and it’s also common. It’s one of my major scientific interests — I’m actually beginning a new research project this spring with a student, doing some pilot experiments to evaluate variation in wild populations here in western Minnesota, so I’m even putting my research time where my mouth is in this case. There has been some wonderful prior work in this area: I’ll just mention a paper by Shubin, Wake, and Crawford from 1995 that examined limb skeletal morphology in a population of newts, and found notable variation in the wrist elements — only about 70% had the canonical organization of limb bones.

[Figure: the newt Taricha]

[Figure: variation in the human aorta]

I’ve also mentioned the fascinating variation in the morphology of the human aorta. Anatomy textbooks lay out the most common patterns, but anyone who has taught the subject knows that once you start dissecting, you always find surprises, and that’s OK: variation is the raw material of evolution, so it’s what we expect.

The interesting part is trying to figure out what causes these differences in populations. We can sort explanations into three major categories.

  1. Genetic variation. It may be that the reason different morphs are found is that they carry different alleles for traits that influence the developmental processes that build features of the organism. Consider family resemblances, for instance: your nose or chin might be a recognizable family trait that you’ve inherited from one of your parents, and may pass on to your children.

  2. Environmental variation. The specific pattern of expression of some features may be modified by environmental factors. In larval zebrafish, for instance, the final number of somites varies to a small degree, and can be biased by the temperature at which they are raised. They’re also susceptible to heat shock, which can generate segmentation abnormalities.

  3. Developmental noise. Sometimes, maybe often, the specific details of formation of a structure may not be precisely determined — they wobble a bit. The limb variation Shubin and others saw, for example, was almost entirely asymmetric, so it’s not likely to have been either genetic or environmental (there’s a toy simulation of this logic just after this list). It was just a consequence of common micro-accidents that almost certainly had no significant effect on limb function.
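
To make the asymmetry argument concrete, here’s a minimal toy simulation (my own illustration with invented numbers, not the Shubin data): genetic and environmental influences act on both sides of an individual, while developmental noise hits each side independently, so only the noise-dominated case produces many left-right mismatches.

```python
import random

random.seed(1)

def fraction_asymmetric(n_individuals, sd_genetic, sd_environment, sd_noise):
    """Fraction of individuals whose left and right sides disagree.

    Genetic and environmental effects are shared by both sides of an
    individual; developmental noise is drawn independently for each side.
    A side counts as showing the variant morphology if it crosses an
    arbitrary threshold.
    """
    threshold = 1.0
    mismatched = 0
    for _ in range(n_individuals):
        genetic = random.gauss(0, sd_genetic)          # same alleles on both sides
        environment = random.gauss(0, sd_environment)  # same temperature, diet, etc.
        left = genetic + environment + random.gauss(0, sd_noise)
        right = genetic + environment + random.gauss(0, sd_noise)
        if (left > threshold) != (right > threshold):
            mismatched += 1
    return mismatched / n_individuals

# Variation dominated by shared causes: the two sides usually agree.
print("mostly genetic/environmental:", fraction_asymmetric(10000, 1.0, 0.5, 0.1))
# Variation dominated by developmental noise: the two sides often disagree.
print("mostly developmental noise:  ", fraction_asymmetric(10000, 0.1, 0.1, 1.0))
```

When a variant shows up on one side of an animal but not the other far more often than a shared-cause model predicts, developmental noise becomes the most plausible explanation, which is the inference being drawn from the newt wrists.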

When I see variation, the first question that pops into my head is which of the above three categories it falls into. The second question is usually whether the variation does anything — while some may have consequences on physiology or movement or sexual attractiveness, for instance, others may really be entirely neutral, representing equivalent functional alternatives. Those are the interesting questions that begin inquiry; observing variation is just a starting point for asking good questions about causes and effects, if any.

I bring up this subject as a roundabout introduction to why I find myself extremely peeved by a recent bit of nonsense in the press: the claim that liberal and conservative brains have a different organization, with conservatives having larger amygdalas (“associated with anxiety and emotions”) and liberals having a larger anterior cingulate (“associated with courage and looking on the bright side of life”).

Gag.

I don’t deny the existence of anatomical variation in the brain — I expect it (see above). I don’t question the ability of the technique, using MRI, to measure the dimensions of internal structures. I even think these kinds of structural variations warrant more investigation — I think there are great opportunities for future research to use these tools to look for potential effects of these differences.

What offends me are a number of things. One is that the interesting questions are ignored. Is this variation genetic, environmental, or simply a product of slop in the system? Does it actually have behavioral consequences? The authors babble about some correlation with political preferences, but they have no theoretical basis for drawing that conclusion, and they can’t even address the direction of causality (which they assume is there) — does having a larger amygdala make you conservative, or does exercising conservative views enlarge the amygdala?

I really resent the foolish categorization of the functions of these brain regions. Courage is an awfully complex aspect of personality and emotion and cognition to simply assign to one part of the brain; I don’t even know how to define “courage” neurologically. Are we still playing the magical game of phrenology here? This is not how the brain works!

Furthermore, they’re picking on a complex phenomenon and making it binary. Isn’t there more than one way to be a conservative, and more than one way to be a liberal? Aren’t these complicated human beings who vary in an incredibly large number of dimensions, too many to be simply lumped into one of two types on the basis of a simple survey?

This is bad science in a number of other ways. It was done at the request of a British radio channel; they essentially wanted some easily digestible fluff for their audience. The investigator, Geraint Rees, has published quite a few papers in credible journals — is this really the kind of dubious pop-culture crap he wants to be known for? The data is also feeble, based on scans of two politicians, followed by digging through scans and questionnaires filled out by 90 students. This is blatant statistical fishing, dredging a complex data set for correlations after the fact. I really, really, really detest studies like that.
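
To see why that kind of after-the-fact dredging is a problem, here’s a minimal sketch (all numbers hypothetical; this is not the actual study’s data): give 90 simulated subjects a random political label and a few dozen random “brain region volumes” that, by construction, have no relationship to politics at all, then test every region for a group difference. Some regions will come out “significant” by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 90   # same size as the student sample described above
n_regions = 40    # hypothetical number of regions measured per scan

# Political label assigned at random: no region truly differs between groups.
is_conservative = rng.integers(0, 2, size=n_subjects).astype(bool)
volumes = rng.normal(loc=1000.0, scale=50.0, size=(n_subjects, n_regions))

false_hits = []
for region in range(n_regions):
    conservative_vols = volumes[is_conservative, region]
    liberal_vols = volumes[~is_conservative, region]
    t_stat, p_value = stats.ttest_ind(conservative_vols, liberal_vols)
    if p_value < 0.05:
        false_hits.append((region, round(p_value, 3)))

print(f"{len(false_hits)} of {n_regions} regions 'differ' at p < 0.05 by chance:", false_hits)
```

Without pre-specified hypotheses and correction for multiple comparisons, correlations pulled out of a data set this way are exactly the kind of marginal result that later fails to replicate.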

And here’s a remarkable thing: I haven’t seen the actual data yet. I don’t know how much variation there is, or how weak or strong their correlations are, because I can’t: this work was done as a radio stunt, is now being touted in various other media, and the paper hasn’t been published yet. It’ll be out sometime this year, in an unnamed journal.

We were just discussing the so-called “decline effect”, to which my answer was that science is hard, it takes rigor and discipline to overcome errors in analysis and interpretation, and sometimes marginal effects take a great deal of time to be resolved one way or the other…and in particular, sometimes these marginal results get over-inflated into undeserved significance, and it takes years to clear up the record.

This study is a perfect example of the kind of inept methodology and lazy fishing for data instead of information that is the root of the real problem. Science is fine, but sometimes gets obscured by the kind of noise this paper is promoting.

I have to acknowledge that I ran across this tripe via Blue Girl, who dismisses it as “sweeping proclamations about the neurophysiological superiority of the liberal brain”, and Amanda Marcotte, who rejects it because “This kind of thing is inexcusable, both from a fact-based perspective and because the implication is that people who are conservative can’t help themselves.” Exactly right. This kind of story is complete crap from the premise to the data to the interpretations.

Optogenetics!

The journal Nature has selected optogenetics as its “Method of the Year”, and it certainly is cool. But what really impressed me is this video, which explains the technique. It doesn’t talk down to the viewer, it doesn’t overhype, it doesn’t rely on telling you how it will cure cancer (it doesn’t), it just explains and shows how you can use light pulses to trigger changes in electrical activity in cells. Well done!
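
For anyone who wants to poke at the idea rather than just watch the video, here’s a cartoonish sketch (my own toy model, nothing from the Nature feature, and far simpler than real channelrhodopsin kinetics): a leaky integrate-and-fire “neuron” that receives depolarizing current only while a simulated light pulse is on, so its spikes line up with the light.

```python
# Toy leaky integrate-and-fire neuron driven by a light-gated current.
# Purely illustrative; real opsin kinetics and real neurons are messier.
dt = 0.1                                          # time step, ms
v_rest, v_thresh, v_reset = -70.0, -55.0, -70.0   # membrane voltages, mV
tau = 10.0                                        # membrane time constant, ms
light_drive = 2.0                                 # depolarizing drive while light is on (mV/ms)

v = v_rest
spike_times = []
for step in range(3000):                          # 300 ms of simulated time
    t = step * dt
    light_on = 50.0 <= t < 100.0 or 200.0 <= t < 250.0   # two light pulses
    drive = light_drive if light_on else 0.0
    v += dt * (-(v - v_rest) / tau + drive)
    if v >= v_thresh:
        spike_times.append(round(t, 1))
        v = v_reset

print("spikes occurred at (ms):", spike_times)    # all fall inside the light pulses
```

The appeal of the real method is that this kind of millisecond-scale on/off control can be delivered to genetically defined cell types.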

The new pick-up line: “I have a very large MRI device…”

A team of neuroscientists has made the coolest nerd porn film ever. They gave 16 women vibrators and asked them to bring themselves to orgasm while they made a movie…of their brains, using an MRI scanner. It’s going to premiere at the Society for Neuroscience meetings.

While it sounds like they have some interesting results — there is a consistent, widespread pattern of brain activity during orgasm, and specific areas reliably fire up — the article is not going to do a lot for Professor Barry Komisaruk’s reputation. The interviewer asked a few too many trivial questions.

“In women, orgasm produces a very extensive response across the brain and body,” said Barry Komisaruk, professor of psychology at Rutgers University in New Jersey, who oversaw the research.

“In one experiment we asked women to self-stimulate and then raise their hands each time they orgasmed. Some women raised their hands several times each session, often just a few seconds apart,” Professor Komisaruk said. “So the evidence is that women tend to have longer orgasms and can experience several of them.”

So…the women in Professor Komisaruk’s life have never had a satisfactory sexual experience with him, so he needed a multi-million dollar machine to figure this out? I’m glad he finally learned this!

(I joke—I’m sure there are interesting neurological results, this article just highlighted the obvious, as if it were news.)

We don’t have a single mind, we have a series of fabulator modules

Everyone should read this very good, very clear article in Seed magazine. It stomps on the concept of a soul from the perspective of modern neuroscience.

The evidence supports another view: Our brains create an illusion of unity and control where there really isn’t any. Within the wide range of works arranged along the axis of soulism, from Life After Death: The Evidence, by Dinesh D’Souza, to Absence of Mind, by Marilynne Robinson, it is clear there is very little understanding of the brain. In fact, to advance their ideas, these authors have to be almost completely unaware of neurology and neuroscience. For example, Robinson tells us, “Our religious traditions give us as the name of God two deeply mysterious words, one deeply mysterious utterance: I AM.” The translation might be, “indoctrination tells us we have a soul, it feels like we are a unified little god in control of our bodies, so we are.”

It’s all illusions, all the way through.

The sexist brain

It looks like I have to add another book to my currently neglected reading list. In an interview, Cordelia Fine, author of a new book, Delusions of Gender: How Our Minds, Society, and Neurosexism Create Difference(amzn/b&n/abe/pwll), has a few provocative things to say about gender stereotypes and the flimsy neuroscience used to justify them.

So women aren’t really more receptive than men to other people’s emotions?

There is a very common social perception that women are better at understanding other people’s thoughts and feelings. When you look at one of the most realistic tests of mind reading, you find that men and women are just as good at getting what their interaction partners were thinking and feeling. It even surprised the researchers. They went on to discover that once you make gender salient when you test these abilities [like having subjects check a box with their sex before a test], you have this self-fulfilling effect.

The idea that women are better at mind reading might be true in the sense that our environments often remind women they should be good at it and remind men they should be bad at it. But that doesn’t mean that men are worse at this kind of ability.

But it seems like a Catch-22: Women who pursue careers in math are being handicapped by the fact that there are so few women pursuing careers in math.

Gender equality is increasing in pretty much all domains, and the psychological effects of that can only be beneficial. The real issue is when people in the popular media say things like, “Male brains are just better at this kind of stuff, and women’s brains are better at that kind of stuff.” When we say to women, “Look, men are better at math, but it’s because they work harder,” you don’t see the same harmful effects. But if you say, “Men are better at math genetically,” then you do. These stem from the implicit assumption that the gender stereotypes are based on hard-wired truths.

Here we have a brain, receptive and plastic and sensitive to learning, constantly rewiring itself, with a core of common human traits hardwired into it, and over here we have scientists who have been the recipients of years of training, often brought up in a culture that fosters an interest in science and math…and somehow, many of these scientists are resistant to the idea that the brain is easily skewed in different directions by the social environment. I don’t get it. I was brought up as a boy, and I know that throughout my childhood I was constantly being hammered by male-affirmative messages and biases, and I think it’s obvious that girls were also hit with lots of their gender-specific cultural influences. Yet somehow we’re supposed to believe that the differences between men and women are largely set by our biology? That women aren’t as good at math because hormones wire up their brain in a different way than the brains of men, and it’s not because our plastic brains receive different environmental signals?

Fine appeals to my biases about the importance of environmental influences, I’ll admit; the interview is a bit thin on the details. But I’ll definitely have to read her book.

Ray Kurzweil does not understand the brain

There he goes again, making up nonsense and making ridiculous claims that have no relationship to reality. Ray Kurzweil must be able to spin out a good line of bafflegab, because he seems to have the tech media convinced that he’s a genius, when he’s actually just another Deepak Chopra for the computer science cognoscenti.

His latest claim is that we’ll be able to reverse engineer the human brain within a decade. By reverse engineer, he means that we’ll be able to write software that simulates all the functions of the human brain. He’s not just speculating optimistically, though: he’s building his case on such awfully bad logic that I’m surprised anyone still pays attention to that kook.

Sejnowski says he agrees with Kurzweil’s assessment that about a million lines of code may be enough to simulate the human brain.

Here’s how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.

I’m very disappointed in Terrence Sejnowski for going along with that nonsense.
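
For what it’s worth, here is the quoted arithmetic spelled out (these are Kurzweil’s asserted figures, not mine; the roughly 15:1 compression and the implied 25 bytes per line of code are simply what his numbers work out to):

```python
# Kurzweil's back-of-the-envelope arithmetic, as quoted above.
base_pairs = 3_000_000_000                 # "three billion base pairs"
bits = base_pairs * 2                      # 2 bits per base (A, C, G, T) = 6 billion bits
raw_bytes = bits // 8                      # 750 million bytes; he rounds to "about 800 million"
compressed_bytes = 50_000_000              # his claimed size after lossless compression
brain_bytes = compressed_bytes // 2        # "about half of that is the brain" = 25 million bytes
lines_of_code = 1_000_000                  # his stated equivalent
implied_bytes_per_line = brain_bytes // lines_of_code   # = 25 bytes per line of code

print(raw_bytes, compressed_bytes, brain_bytes, implied_bytes_per_line)
```

None of that arithmetic is in dispute; the problem, as argued below, is the premise that those 25 million bytes constitute a “design” of the brain at all.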

See that sentence up there, “The design of the brain is in the genome”? That’s his fundamental premise, and it is utterly false. Kurzweil knows nothing about how the brain works. Its design is not encoded in the genome: what’s in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction. The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins. He would have to simulate all of development from his codebase in order to generate a brain simulator, and he isn’t even aware of the magnitude of that problem.

We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. We haven’t even solved the sequence-to-protein-folding problem, which is an essential first step to executing Kurzweil’s clueless algorithm. And we have absolutely no way to calculate in principle all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell!

Let me give you a few specific examples of just how wrong Kurzweil’s calculations are. Here are a few proteins that I plucked at random from the NIH database; all play a role in the human brain.

First up is RHEB (Ras Homolog Enriched in Brain). It’s a small protein, only 184 amino acids, which Kurzweil pretends can be reduced to about 12 bytes of code in his simulation. Here’s the short description.

MTOR (FRAP1; 601231) integrates protein translation with cellular nutrient status and growth signals through its participation in 2 biochemically and functionally distinct protein complexes, MTORC1 and MTORC2. MTORC1 is sensitive to rapamycin and signals downstream to activate protein translation, whereas MTORC2 is resistant to rapamycin and signals upstream to activate AKT (see 164730). The GTPase RHEB is a proximal activator of MTORC1 and translation initiation. It has the opposite effect on MTORC2, producing inhibition of the upstream AKT pathway (Mavrakis et al., 2008).

Got that? You can’t understand RHEB until you understand how it interacts with three other proteins, and how it fits into a complex regulatory pathway. Is that trivially deducible from the structure of the protein? No. It had to be worked out operationally, by doing experiments to modulate one protein and measure what happened to others. If you read deeper into the description, you discover that the overall effect of RHEB is to modulate cell proliferation in a tightly controlled quantitative way. You aren’t going to be able to simulate a whole brain until you know precisely and in complete detail exactly how this one protein works.

And it’s not just the one. It’s all of the proteins. Here’s another: FABP7 (Fatty Acid Binding Protein 7). This one is only 132 amino acids long, so Kurzweil would compress it to 8 bytes. What does it do?

Anthony et al. (2005) identified a Cbf1 (147183)-binding site in the promoter of the mouse Blbp gene. They found that this binding site was essential for all Blbp transcription in radial glial cells during central nervous system (CNS) development. Blbp expression was also significantly reduced in the forebrains of mice lacking the Notch1 (190198) and Notch3 (600276) receptors. Anthony et al. (2005) concluded that Blbp is a CNS-specific Notch target gene and suggested that Blbp mediates some aspects of Notch signaling in radial glial cells during development.

Again, what we know of its function is experimentally determined, not calculated from the sequence. It would be wonderful to be able to take a sequence, plug it into a computer, and have it spit back a quantitative assessment of all of its interactions with other proteins, but we can’t do that, and even if we could, it wouldn’t answer all the questions we’d have about its function, because we’d also need to know the state of all of the proteins in the cell, and the state of all of the proteins in adjacent cells, and the state of global and local signaling proteins in the environment. It’s an insanely complicated situation, and Kurzweil thinks he can reduce it to a triviality.

To simplify it so a computer science guy can get it, Kurzweil has everything completely wrong. The genome is not the program; it’s the data. The program is the ontogeny of the organism, which is an emergent property of interactions between the regulatory components of the genome and the environment, which uses that data to build species-specific properties of the organism. He doesn’t even comprehend the nature of the problem, and here he is pontificating on magic solutions completely free of facts and reason.

I’ll make a prediction, too. We will not be able to plug a single unknown protein sequence into a computer and have it derive a complete description of all of its functions by 2020. Conceivably, we could replace this step with a complete, experimentally derived quantitative summary of all of the functions and interactions of every protein involved in brain development and function, but I guarantee you that won’t happen either. And that’s just the first step in building a simulation of the human brain derived from genomic data. It gets harder from there.

I’ll make one more prediction. The media will not end their infatuation with this pseudo-scientific dingbat, Kurzweil, no matter how uninformed and ridiculous his claims get.

(via Mo Costandi)


I’ve noticed an odd thing. Criticizing Ray Kurzweil brings out swarms of defenders, very few of whom demonstrate much ability to engage in critical thinking.

If you are complaining that I’ve claimed it will be impossible to build a computer with all the capabilities of the human brain, or that I’m arguing for dualism, look again. The brain is a computer of sorts, and I’m in the camp that says there is no problem in principle with replicating it artificially.

What I am saying is this:

Reverse engineering the human brain has complexities that are hugely underestimated by Kurzweil, because he demonstrates little understanding of how the brain works.

His timeline is absurd. I’m a developmental neuroscientist; I have a very good idea of the immensity of what we don’t understand about how the brain works. No one with any knowledge of the field is claiming that we’ll understand how the brain works within 10 years. And if we don’t understand all but a fraction of the functionality of the brain, that makes reverse engineering extremely difficult.

Kurzweil makes extravagant claims from an obviously extremely impoverished understanding of biology. His claim that “The design of the brain is in the genome”? That’s completely wrong. That makes him a walking talking demo of the Dunning-Kruger effect.

Most of the functions of the genome, which Kurzweil himself uses as the starting point for his analysis, are not understood. I don’t expect a brain simulator to slavishly imitate every protein, but you will need to understand how the molecules work if you’re going to reverse engineer the whole.

If you’re an acolyte of Kurzweil, you’ve been bamboozled. He’s a kook.

By the way, this story was picked up by Slashdot and Gizmodo.

The secret life of babies

Years ago, when the Trophy Wife™ was a psychology grad student, she participated in research on what babies think. It was interesting stuff because it was methodologically tricky — they can’t talk, they barely respond in a comprehensible way to the world, but as it turns out you can get surprisingly consistent, robust results from techniques like tracking their gaze, observing how long they stare at something, or even the rate at which they suck on a pacifier (Maggie, on The Simpsons, is known to communicate quite a bit with simple pauses in sucking).

There is a fascinating article in the NY Times Magazine on infant morality. Set babies to watching puppet shows with nonverbal moral messages acted out, and their responses afterward indicate a preference for helpful agents and an avoidance of hindering agents, and they can express surprise and puzzlement when puppet actors make bad or unexpected choices. There are rudiments of moral foundations churning about in infant brains, things like empathy and likes and dislikes, and they acquire these abilities untaught.

This, of course, plays into a common argument from morality for religion. It’s unfortunate that the article cites deranged dullard Dinesh D’Souza as a source — is there no more credible proponent of this idea? That would say volumes right there — but at least the author is tearing him down.

A few years ago, in his book “What’s So Great About Christianity,” the social and cultural critic Dinesh D’Souza revived this argument [that a godly force must intervene to create morality]. He conceded that evolution can explain our niceness in instances like kindness to kin, where the niceness has a clear genetic payoff, but he drew the line at “high altruism,” acts of entirely disinterested kindness. For D’Souza, “there is no Darwinian rationale” for why you would give up your seat for an old lady on a bus, an act of nice-guyness that does nothing for your genes. And what about those who donate blood to strangers or sacrifice their lives for a worthy cause? D’Souza reasoned that these stirrings of conscience are best explained not by evolution or psychology but by “the voice of God within our souls.”

The evolutionary psychologist has a quick response to this: To say that a biological trait evolves for a purpose doesn’t mean that it always functions, in the here and now, for that purpose. Sexual arousal, for instance, presumably evolved because of its connection to making babies; but of course we can get aroused in all sorts of situations in which baby-making just isn’t an option — for instance, while looking at pornography. Similarly, our impulse to help others has likely evolved because of the reproductive benefit that it gives us in certain contexts — and it’s not a problem for this argument that some acts of niceness that people perform don’t provide this sort of benefit. (And for what it’s worth, giving up a bus seat for an old lady, although the motives might be psychologically pure, turns out to be a coldbloodedly smart move from a Darwinian standpoint, an easy way to show off yourself as an attractively good person.)

So far, so good. I think this next bit gives far too much credit to Alfred Russel Wallace and D’Souza, but don’t worry — he’ll eventually get around to showing how they’re wrong again.

The general argument that critics like Wallace and D’Souza put forward, however, still needs to be taken seriously. The morality of contemporary humans really does outstrip what evolution could possibly have endowed us with; moral actions are often of a sort that have no plausible relation to our reproductive success and don’t appear to be accidental byproducts of evolved adaptations. Many of us care about strangers in faraway lands, sometimes to the extent that we give up resources that could be used for our friends and family; many of us care about the fates of nonhuman animals, so much so that we deprive ourselves of pleasures like rib-eye steak and veal scaloppine. We possess abstract moral notions of equality and freedom for all; we see racism and sexism as evil; we reject slavery and genocide; we try to love our enemies. Of course, our actions typically fall short, often far short, of our moral principles, but these principles do shape, in a substantial way, the world that we live in. It makes sense then to marvel at the extent of our moral insight and to reject the notion that it can be explained in the language of natural selection. If this higher morality or higher altruism were found in babies, the case for divine creation would get just a bit stronger.

No, I disagree with the rationale here. It is not a problem for evolution at all to find that humans exhibit an excessive altruism. Chance plays a role; our ancestors did not necessarily get a choice of a fine-tuned altruism that works exclusively to the benefit of our kin — we may well have acquired a sloppy and indiscriminate innate tendency towards altruism because that’s all chance variation in a protein or two can give us. There’s no reason to suppose that a mutation could even exist that would enable us to feel empathy for cousins but completely abolish empathy by Americans for Lithuanians, for instance, or that is neatly coupled to kin recognition modules in the brain. It could be that a broad genetic predisposition to be nice to fellow human beings could have been good enough to be favored by selection, even if its execution caused benefits to splash onto other individuals who did not contribute to the well-being of the possessor.

But that idea may be entirely moot, because there is some evidence that babies are born (or soon become) bigoted little bastards who quickly cobble together a kind of biased preferential morality. Evolution has granted us a general “Be nice!” brain, but we also acquire capacities that put up boundaries and foster a kind of primitive tribalism.

But it is not present in babies. In fact, our initial moral sense appears to be biased toward our own kind. There’s plenty of research showing that babies have within-group preferences: 3-month-olds prefer the faces of the race that is most familiar to them to those of other races; 11-month-olds prefer individuals who share their own taste in food and expect these individuals to be nicer than those with different tastes; 12-month-olds prefer to learn from someone who speaks their own language over someone who speaks a foreign language. And studies with young children have found that once they are segregated into different groups — even under the most arbitrary of schemes, like wearing different colored T-shirts — they eagerly favor their own groups in their attitudes and their actions.

That’s kind of cool, if horrifying. It also, though, points out that you can’t separate culture from biological predispositions. Babies can’t learn who their own kind is without some kind of socialization first, so part of this is all about learned identity. And also, we can understand why people become vegetarians as adults, or join the Peace Corps to help strangers in faraway lands — it’s because human beings have a capacity for rational thought that they can use to override the more selfish, piggy biases of our infancy.

Again, no gods or spirits or souls are required to understand how any of this works.

Although, if they did a study in which babies were given crackers and the little Catholic babies all made the sign of the cross before eating them, while all the little Lutheran babies would crawl off to make coffee and babble about the weather, then I might reconsider whether we’re born religious. I don’t expect that result, though.

The Ubiquity of Exaptation

On Thursday, I gave a talk at the University of Minnesota at the request of the CASH group on a rather broad subject: evolution and development of the nervous system. That’s a rather big umbrella, and I had to narrow it down a lot. I say, a lot. The details of this subject are voluminous and complex, and this was a lecture to a general audience, so I couldn’t even assume a basic science background. So I had to think a bit.

I started the process of working up this talk by asking a basic question: how did something as complex as the nervous system form? That’s actually not a difficult problem — evolution excels at generating complexity — but I knew from experience that the first hurdle to overcome would be a common assumption: the idea that it was all the product of purposeful processes, ranging from adaptationist compulsion to god’s own intent, that drive organisms to produce smarter creatures. I decided that what I wanted to make clear is that the origin of many fundamental traits of the nervous system is by way of chance and historical constraints, that the primitive utility of some of the things we take for granted in the physiology of the brain does not lie in anything even close to cognition. The roots of the nervous system are in surprisingly rocky ground for brains, and selection’s role has been to sculpt the gnarly, weird branches of chance into a graceful and useful shape.

So I put together a talk called The Ubiquity of Exaptation (2.7M pdf). The barebones presentation itself might not be very informative, I’m afraid, since it’s a lot of pictures and diagrams, so I’ll try to give a brief summary of the argument here.

The subtitle of the talk is “Nothing evolved for a purpose”, and I mean that most seriously. Evolved innovations find utility in promoting survival, and can be honed by selection, but they aren’t put there in the organism for a purpose. The rule in evolution is exaptation, the cooption of elements for use in new properties, with a following shift in function. It’s difficult to just explain, so I picked three examples from the evolution of the nervous system that I hoped would clarify the point. The three were 1) the electrical properties of the cell membrane, which are really a byproduct of mechanisms of maintaining salt balance; 2) synaptic signaling, which coopts cellular machinery that evolved for secretion and detecting external signals; and 3) pathfinding by neurons, the process that generates patterned connectivity between cells, and which uses the same mechanisms of cellular motility that we find in free-living single celled organisms.

  1. Excitability. This was the toughest of the three to explain, because I wasn’t talking to an audience of biophysicists. Our neurons (actually, all of our cells; even egg cells have interesting electrical properties) maintain an electrical potential, a voltage, across their membranes that you can measure with very tiny electrodes. This voltage undergoes short, sharp transient changes that produce action potentials, waves of current that move down the length of the cell. How do they do that? Where did this amazing electrical trick come from?

    The explanation lies in a common problem. Our cells have membranes that are permeable to water, and they also must contain a collection of proteins that are not present in the external environment. The presence of these functional solutes inside the cell should create an osmotic gradient, so that water would flow in constantly, trying to dilute the interior until it is iso-osmotic with (the same concentration as) the outside. Some cells have different ways to cope: one way is to build cell walls that retain the concentration in the interior with pressure; another is to have specialized organelles to constantly pump out water. Our cells use a clever and rather lazy scheme: they compensate for the high internal concentration of essential proteins by creating a high external concentration of some other substance, which is impermeant to the cell membrane. Water has the same concentration inside and outside, but there are different distributions of solutes inside and outside.

    What we use to generate these differential distributions are ionic salts, charged molecules. Positively charged sodium ions are high in concentration outside, while positively charged potassium ions and negatively (on average) charged proteins are high in concentration on the inside. Because these are charged ions, their distribution also coincidentally sets up a voltage difference. I confess, I did show the audience the Goldman equation (it’s reproduced after this list for the curious), which is a little scary, but I reassured them that they didn’t have to calculate it — they just needed to understand that the arrangements of salts in cells and the extracellular space generate a voltage that is simply derived from the physical and chemical properties of the situation.

    We use variations in these voltages to send electrical signals down the length of our nerves, but they initially evolved as a mechanism to cope with maintaining our salt balance. We’re also used to thinking of these electrical abilities as being part of a complicated nervous apparatus, but initially, they found utility in single-celled organisms. As an example, I described the behavior of paramecia. The paramecium swims about by beating cilia, like little oars; the membrane of the paramecium maintains an electrical potential, and also contains selectively permeable ion channels that can be switched open or closed. When the organism bumps into an obstacle, the channels open, calcium rushes in as the potential changes, and the cilia all reverse the direction of their beating, making the paramecium tumble backwards. The electrical properties of your brain are also functionally useful to single-celled organisms.

    I concluded this section by trying to reassure everyone that their brain is something more than just a collection of paramecia swimming about. Although the general properties of the membrane are the same, evolution has also refined and expanded the capabilities of the neuronal membrane: there are many different kinds of ion channels, which we can see by their homology to one another are also products of evolution, and each one is specialized in unique ways to add flexibility to the behavioral repertoire of the cell. The origins of the electrical properties are a byproduct of salt homeostasis, but once that little bit of function is available, selection can amplify and hone the response of the system to get some remarkably sophisticated results.

  2. Synaptic signaling. Shuttling electrical signals across the membrane of a cell is one thing, but a nervous system is another: that requires that multiple cells send signals to one another. A wave of current flowing through a membrane in one cell needs to be transmitted to an adjacent cell, and the way we do that is through specialized connections called synapses. A chemical synapse is a specialized junction between two cells: on one side, the presynaptic side, a change in membrane voltage triggers the release of chemicals into the extracellular space; on the receiving side, the postsynaptic side, there are localized collections of receptors for that chemical signal, and when they bind the chemical (called a neurotransmitter), they cause changes in the membrane voltage on their side.

    Once again, the cell simply reuses machinery that evolved for other purposes to carry out these functions. Cells use a secretory apparatus all over the place; we package up hormones or enzymes or other chemicals into small balloons of membrane called vesicles, and we can export them to the outside of the cell by simply fusing the vesicle with the cell membrane. Lots of our cells do this, not just neurons, and it’s also a common function in single celled organisms. Brewer’s yeast, for instance, contain significant pieces of the membrane-associated signaling complex, or MASC, although they of course don’t make true synapses, which require two cells working together in a complementary fashion.

    I described the situation in Trichoplax, an extremely simple multicellular organism which only has four cell types. The Trichoplax genome has been sequenced, and found to contain a surprising number of the proteins used in synaptic signaling…but it doesn’t have a brain or any kind of nervous system, and none of its four cell types are neurons. What a mindless slug like Trichoplax uses these proteins for is secretion: it makes digestive enzymes, not neurotransmitters, and sprays them out onto the substrate to dissolve its food. Again, in more derived organisms with nervous systems, they have simply coopted this machinery to use in signaling between neurons.

    As usual, I had to make sure that nobody came away from this thinking their brain was a conglomeration of Trichoplax squirting digestive enzymes around. Yeast, choanoflagellates, and sponges have very primitive precursors to the synapse; we can look at the evolutionary history of the structure and see extensive refinement and elaboration. The modern vertebrate synapse is built from over 1500 different proteins — it’s grown and grown and grown from its simpler beginnings.

  3. Pathfinding. How do we make circuits of neurons? I’ve just explained how we can conduct electrical signals down single cells, and how pairs of cells can communicate with each other, but we also need to be able to connect up neurons in reliable and useful ways, making complex patterned arrangements of cells in the brain. We actually know a fair amount about how neurons in the developing nervous system do that.

    Young nerve cells form a structure called the growth cone, an amoeboid process that contains growing pieces of the cell skeleton (fibers made of proteins like tubulin and actin), enzymes that act as motor proteins, cytoplasm, and membrane. These structures move: veils of membrane called lamellipodia flutter about, antennae-like rods called filopodia extend and probe the environment, and the whole bloblike mass expands in particular directions by the bulk flow of cytoplasm. The cell body stays in place, usually, and it sends out this little engine of movement that trundles away, leaving an axon behind it.

    “Amoeboid” is the magic word. The growth cone uses the same cellular machinery single-celled organisms use for movement on a substrate. Once again, exaptation strikes, and the processes that amoebae use to move and find microorganismal prey are the same ones that the cells in your brain used to lay down pathways of circuitry in your brain.

    Furthermore, there is no grand blueprint of the brain anywhere in the system. Growing neurons are best thought of as simple cellular automata which contain a fairly simple set of rules that lead them to follow entirely local cues to a final destination (see the toy sketch after this list). I described some of the work that David Bentley did years ago (and also some of my old grasshopper work) that showed not only that the cues can be identified in the environment, but that experimental ablation of those intermediate targets can produce cells that are very confused and make erroneous navigational decisions.

    We also contain a great many possible signals: long- and short-range cues, signals that attract or repel, and also signals that can change gene expression inside the neuron and change its behavior in even more complicated ways. It’s still at its core an elaboration of behaviors found in protists and even bacteria; we are looking at amazingly powerful emergent behaviors that arise from simple mechanisms.
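
For reference, the Goldman equation mentioned in the first item above is the standard Goldman-Hodgkin-Katz voltage equation; it gives the resting membrane potential as a permeability-weighted balance of the ion gradients across the membrane:

```latex
V_m = \frac{RT}{F}\,
\ln\!\left(
  \frac{P_{\mathrm{K}}[\mathrm{K}^+]_{\mathrm{out}}
      + P_{\mathrm{Na}}[\mathrm{Na}^+]_{\mathrm{out}}
      + P_{\mathrm{Cl}}[\mathrm{Cl}^-]_{\mathrm{in}}}
       {P_{\mathrm{K}}[\mathrm{K}^+]_{\mathrm{in}}
      + P_{\mathrm{Na}}[\mathrm{Na}^+]_{\mathrm{in}}
      + P_{\mathrm{Cl}}[\mathrm{Cl}^-]_{\mathrm{out}}}
\right)
```

Here R is the gas constant, T the absolute temperature, F the Faraday constant, and each P is the membrane’s relative permeability to that ion. Nothing in the formula refers to neurons specifically, which is the point: the voltage falls straight out of how the salts happen to be distributed.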
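
And to make the “simple cellular automata” picture in the third item concrete, here is a toy sketch (my own illustration with made-up numbers, not a model of any real guidance molecule): a growth cone on a grid repeatedly steps to whichever neighboring square has the highest local attractant concentration, leaving an “axon” trail behind it. A handful of purely local rules gets it to the target with no global blueprint anywhere in the system.

```python
# Toy growth cone: purely local gradient-following on a grid.
# The attractant field is invented; the point is that only local comparisons are used.

GRID = 20
target = (17, 15)            # location of an intermediate guidepost cell

def attractant(x, y):
    """A made-up concentration that falls off with distance from the guidepost."""
    return -((x - target[0]) ** 2 + (y - target[1]) ** 2)

def step(pos):
    """Move to the in-bounds neighboring square with the highest concentration."""
    x, y = pos
    neighbors = [(x + dx, y + dy)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)
                 and 0 <= x + dx < GRID and 0 <= y + dy < GRID]
    return max(neighbors, key=lambda p: attractant(*p))

growth_cone = (2, 3)         # the cell body stays put; this is the motile tip
axon = [growth_cone]         # the trail left behind as the growth cone advances
while growth_cone != target:
    growth_cone = step(growth_cone)
    axon.append(growth_cone)

print("axon trail:", axon)
```

Layer a few overlapping attractive and repulsive cues onto a model like this, or delete the guidepost partway along, and you can see how the intermediate-target navigation and the ablation-induced confusion described above both fall out of the same local logic.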

And that was the story. Properties of the nervous system that are key to its function and that many of us naively regard as unique to neurons are actually expanded, elaborated, specialized versions of properties that are also present in organisms that lack brains, nervous systems, or even neurons…and that aren’t even multicellular. This is precisely what we’d expect from evolutionary origins, that everything would have its source in simpler precursors. Furthermore, it’s a mistake to try and shoehorn those precursors into necessarily filling the same functions as their descendants today. Cooption is the rule. Even the brains of which we are so proud are byblows of more fundamental functions, like homeostasis, feeding, and locomotion.