I hate OnSwipe

This is a bit of privileged peevishness. There is this absolutely horrible piece of JavaScript that is widely used on many big-name sites — it targets portable devices like iPads and iPhones, and what it does is completely change the displayed formatting of the site. Suddenly, pages aren’t scrollable but are broken up into screen-sized chunks, and you no longer change views by scrolling up and down, but by swiping side to side. And it throws a couple of cryptic icons on the bottom of the screen (do I want to know what the rocketship does? No, I do not). It is classic too-clever-by-half web design, and I hate it.

When I’m browsing on my iPad, and I run into a site with OnSwipe enabled, I just abandon it. Nope, not worth the hassle.

What I really want to know from sites that use it is a) what made you think your readers want to abandon all the comfortable conventions of the web experience when they read your site on a portable device, because that makes no sense at all, and b) did you pay money for this piece of shit?

Tone-deaf Twitter

There are some serious problems with how Twitter handles blocking — in particular, if I block some obnoxious twit, but they post to a hashtag I follow (a conference hashtag, for instance), their messages are still displayed. This is the major reason why the BlockBot emerged — that automated widget that simply refuses to display tweets from a collection of known harassers, so that you can follow it instead of the hashtag — because Twitter won’t do the job.
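
The mechanism is simple enough to sketch. Here’s a hypothetical filter in Python (my own toy, not the BlockBot’s actual code; all the names are invented):

```python
# Minimal sketch of a shared-blocklist filter (hypothetical code, not the
# actual BlockBot): drop tweets from a community-maintained list of known
# harassers before showing a hashtag stream to the reader.
blocklist = {"harasser_1", "harasser_2"}   # maintained by the community, not by you

def filtered_stream(tweets, blocked=blocklist):
    """Return only the tweets whose author is not on the shared blocklist."""
    return [t for t in tweets if t["author"] not in blocked]

tweets = [
    {"author": "colleague", "text": "See you at the conference!"},
    {"author": "harasser_1", "text": "something vile"},
]
print(filtered_stream(tweets))   # only the colleague's tweet survives
```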

Now, finally, Twitter gets around to changing the blocking behavior …and makes it worse. It used to be that blocking someone also made them unfollow you, which made it very slightly harder for the harassers to stalk you. Apparently, inconveniencing assholes was intolerable to Twitter, so they’ve now changed it so that blocking only mutes them, but still allows them to easily follow your every word, flag your tweets, and echo them to their clinging flock of fellow harassers. The harassers are now simply made invisible to the people they want to harass.

Imagine if the police were this helpful: if you complained about someone and asked for protection, their response would be to magically make that person invisible to you.

Why did they do this? I have no idea, except that there must be some assholes on the Twitter staff, which should surprise no one.

It’s probably futile, but there’s a petition asking them to stop making life easier for the jerks. I have no confidence they’ll listen or care, but go ahead, ask Twitter nicely.

Otherwise…hey, world, did you know there’s an available niche for a Twitter-like service that also offers reasonable blocking and a little protection for users, and that doesn’t pander to misogynistic scumbags? Twitter really could use some competition.


I found someone who likes the new policy (warning: links to creep pretending to masturbate…and just the description is enough, don’t you think?) That’s what we’re dealing with. That’s who Twitter’s policy panders to.


Twitter has reversed their changes. They’ve got a rather weird explanation for the earlier change, though.

In reverting this change to the block function, users will once again be able to tell that they’ve been blocked. We believe this is not ideal, largely due to the retaliation against blocking users by blocked users (and sometimes their friends) that often occurs. Some users worry just as much about post-blocking retaliation as they do about pre-blocking abuse. Moving forward, we will continue to explore features designed to protect users from abuse and prevent retaliation.

WTF? So I was supposed to worry that harassers I block might retaliate by…what? More online harassment? I assure you, they were going to do that anyway.

Proud to be a mutant, then

Chris Mooney has done it again: he has rationalized a piece of the conventional wisdom to support the status quo and to make it harder to break the creationist habit. He has an article on Mother Jones titled Seven Evolutionary Reasons People Deny Evolution, and I gagged at every word, from the linkbait in the first word to the presumption of the second to the obvious promise of self-justification in the third.

And I let it go.

It’s Mooney, after all. But then Larry Moran got on his case, and Hemant Mehta was hoodwinked into accepting this nonsense, so I feel compelled to dig into it. One interesting thing about the article, though: in putting his argument together, Mooney unself-consciously practices the same delusions he claims are natural — you’d think he’d be a little bit aware…but no.

Let us begin with the last and work backwards through the essay. It’s his conclusion that pissed me off the most.

In any event, the evidence is clear that both our cognitive architecture, and also our emotional dispositions, make it difficult or unnatural for many people to accept evolution. “Natural selection is like quantum physics…we might intellectually grasp it, with considerable effort, but it will never feel right to us,” writes the Yale psychologist Paul Bloom. Often, people express surprise that in an age so suffused with science, science causes so much angst and resistance.

Oh, really?

I was brought up Lutheran, and got the full weekly dose of church and Sunday School; I was also a zealous reader from an early age, and discovered the science section of my local library when I was probably in first or second grade. I was exposed to both religion and science from an early age, and guess which one I found difficult and unnatural to grasp? That’s right. Somehow, these people think recognizing natural causes in the world around us, something we experience every moment of our lives, is difficult, while accepting the existence of magical, supernatural forces that we never ever see is “natural”.

Yet there I am at 8 years old, given a choice between a book about Noah’s Ark and Jesus, or one of Asimov’s collections of science essays, and there’s no question, no doubt in my mind, no niggling worries or hesitation — I choose Asimov every time, hands down. If I do read the Ark book (because I was bored and eventually devoured everything), it’s with arched brow and child-like incredulousness, because the story is so goddamned stupid.

And now, as an adult, I not only intellectually grasp what we know about evolution, it feels right to me.

That either makes me a super-powered mutant, or it means someone has got everything all wrong in their attempt to argue that religion is natural and evolved: perhaps I’m just a mundane, ordinary guy who had a freedom to explore ideas not usually granted to most people, and all the barriers to the acceptance of evolution are artificial, cultural, and recently imposed.

I assume most of my readers here also accept the idea of evolution. So which is it? Are we all special little flowers, the shining brilliant lucky recipients of a wisdom-bestowing mutation, while hoi polloi are dullard turds lacking in our biologically glorious powers, or do essentially all humans have flexible, plastic brains capable of responding to the environment in reasonable (if limited) ways? I rather lean towards the latter view, as flattering as the former might be. I’m pushed even further towards that view by the bad arguments Mooney gives.

But first, let’s dismiss a standard rhetorical trick: before you give your grand conclusion, bring out a couple of caveats that you can dismiss. It gives the appearance that you’ve carefully weighed the evidence and examined the opposing position. Mooney does not fail. He dredges up two irrelevant counter-arguments.

First, this doesn’t mean science and religion are fundamentally incompatible. The conflict may run very deep indeed, but nevertheless, some individuals can and do find a way to retain their religious beliefs and also accept evolution—including the aforementioned biology textbook author Kenneth Miller of Brown University, a Catholic.

You cannot know how much I despise the Ken Miller Argument. Not Miller himself, of course, but this weird happy relief people seem to find in dragging him out and gleefully pointing at his Catholicism and his science credentials. They are irrelevant to the question of whether science and religion are incompatible — people are complex and embody all kinds of contradictions. You simply cannot point to a single human being as a test for the logical incompatibility of two ideas. That presupposes that we’re capable of nothing but rational thought, and that our brains contain flawless logic circuits that do not allow clashing ideas to exist in our heads.

What would be interesting and relevant would be to ask Miller, or any scientist no matter what their religious beliefs, whether they found evolutionary thinking difficult or unnatural. Mooney does not do this, despite the fact that it is the only question relevant to his premise. He does raise a second irrelevant caveat, though.

Second, while there are many reasons to think that the traits above [his list of properties that predispose us to religion –pzm] comprise a core part of who we are, it doesn’t automatically follow that religion is the direct result of evolution by natural selection. It is also possible that religion arises as a byproduct of more basic traits that were, in turn, selected for because they conferred greater fitness (such as agency detection). This “byproduct” view is defended by Steven Pinker here.

I personally think the byproduct explanation is the best one I’ve heard, but so what? It says that religion exploits properties of the mind that evolved for other reasons. We could say exactly the same thing of mathematics: we didn’t evolve to solve calculus problems, there has never been significant selection pressure for people whose expertise is specifically in calculus, but we do have some properties — a surplus of capacity, perhaps, or general flexibility — that could be reworked by experience to enable the exercise of mathematics. Is math natural or unnatural?

But when you actually think about it, it is a killer argument against Mooney’s thesis…it’s too bad that he doesn’t think about it much at all. What does it even mean to talk about “natural” or “evolved” cognitive traits, like religion or science? Mooney is blithely talking about a “core part of who we are”, while describing culturally conditioned superficial myths held by a tribe of talking apes, all while ignoring the fact that his core traits are not universal.

Let’s consider his list of natural core traits.

Fear and the Need for Certainty. Finally, there appears to be something about fear and doubt that impels religiosity and dispels acceptance of evolution. “People seem to take more comfort from a stance that says, someone designed the world with good intentions, instead of that the world is just an intention-less, random place,” says Norenzayan. “This is especially true when we feel a sense of threat, or a feeling of not being in control.”

Fear and uncertainty are real — even we self-confident, happy atheists have to deal with them. Every one of us has felt loss and grief, and we know that these are also great tools for manipulating people. But take it a step further. Fear is a universal, but how we cope with it is variable. This person Norenzayan is asserting that religious thinking is an evolved coping strategy for dealing with fear, but none of the evidence given supports that claim. It does support the idea that people with religious concepts stuck in their head tend to reinforce those concepts when stressed.

But that statement by Norenzayan is ridiculous. You’re in a difficult situation: which is more reassuring, the idea that a) it’s an accident of chance, or b) there really is a super-powerful being who is doing this on purpose to you? If you’re just looking for a good way to deal with your troubles, I find (a) immensely more comforting than (b). And when I have experienced stress and suffering, I don’t find myself suddenly looking for a supernatural agent out to get me. That view is a product of long cultural conditioning.

Turn this view on Mooney: he lives in a world full of superstitious people. How to explain this uncomfortable situation? Why, there must be an external agency, evolution, that has shaped them to be inimical to science.

Group Morality and Tribalism. All of these cognitive factors seem to make evolution hard to grasp, even as they render religion (or creationist ideas) simpler and more natural to us. But beyond these cognitive factors, there are also emotional reasons why a lot of people don’t want to believe in evolution. When we see resistance to its teaching, after all, it is usually because a religious community fears that this body of science will undermine a belief system—in the US, usually fundamentalist Christianity—deemed to serve as the foundation for shared values and understanding. In other words, evolution is resisted because it is perceived as a threat to the group.

Again, tribalism is real, and we apes do turn to our friendly support groups all the time, and we need our social glue to live happily. Nothing in this predisposes people to think religiously rather than scientifically, however. That we allow and encourage our fellow apes to cultivate their own peculiar rituals and behaviors to forge coherence actually says that those individual sets of rules are arbitrary, not determined. We can make subtribes within the human community that are bound together by religion, by atheism, by science (yes, science certainly does have its quaint bonding practices), or by such things as anime fandom.

Place the blame where it belongs. Not on human tribalism, but on the fact that a huge, successful, widespread binding factor, religion, also frequently imposes absurdly anti-scientific views on its members. And even there, the blame must be apportioned to historical contingencies, not fixed inevitabilities.

Turn this view on Mooney: he’s arguing for a tribe that cannot grasp basic principles of science for biological reasons. Yet I suspect that he does not place himself in the category of a person doomed to struggle against the unnatural weirdness of science. It’s always comforting to separate the other, the Creationists, from ourselves, the Scientists, isn’t it? But I can tell you that they’re just as smart and capable as we are — just that their intelligence is shunted off in unproductive directions.

Inability to Comprehend Vast Time Scales. According to Norenzayan, there’s one more basic cognitive factor that prevents us from easily understanding evolution. Evolution occurred due to the accumulation of many small changes over vast time periods—which means that it is unlike anything we’ve experienced. So even thinking about it isn’t very easy. “The only way you can appreciate the process of evolution is in an abstract way,” says Norenzayan. “Over millions of years, small changes accumulate, but it’s not intuitive. There’s nothing in our brain that says that’s true. We have to override our incredulity.”

I will give him this: I cannot imagine the magnitude of millions of years. Billions are right out. Heck, even a thousand years is a strain.

But I find it bizarre to argue that visualizing the accumulation of small changes is not intuitive — it’s actually embedded deeply in the human experience. Every child knows that once they were a baby, and talks about growing up; Mom and Dad make a pencil mark on the door jamb for our height every year; we go through that awkward transition at puberty, and then we spend the rest of our lives aging, feeling every creak and every fading ability. Old people rail against this new generation and praise the previous one. We live lives full of change, and usually spend our time complaining about it.

“Once things were different” is such a natural and easy sentiment that it is perverse to suddenly claim that no one thinks that way. That we have a hard time appreciating the magnitude of time over which changes occur is one thing; but inability to perceive change? Pfft.

Turn this view on Mooney: One thing I often find irritating in discussions of creationism is all these people who think the creationism of the last 60 years is the way religion has always been. Go back a century, and you might be surprised: deeply religious people were struggling to reconcile faith and science, and you don’t typically find that the depth of the geological record was a serious obstacle. You don’t find that change was a problem: the book of Genesis is all about a rather abrupt change, and of course theologians are dab hands at rationalizing and accepting the grand changes we see in the shift from the Old Testament to the New. Young Earth Creationism, as practiced in America today, is a relatively new and odd phenomenon.

Dualism. Yet another apparent feature of our cognitive architecture is the tendency to think that minds (or the “self” and the “soul”) are somehow separate from brains. Once again, this inclination has been found in young children, suggesting that it emerges early in human development. “Preschool children will claim that the brain is responsible for some aspects of mental life, typically those involving deliberative mental work, such as solving math problems,” write Yale psychologists Paul Bloom and Deena Skolnick Weisberg. “But preschoolers will also claim that the brain is not involved in a host of other activities, such as pretending to be a kangaroo, loving one’s brother, or brushing one’s teeth.”

This is another one that I will grant to Mooney, to a degree. Consciousness is a ubiquitous illusion, and it is all about taking a material substrate, the brain, and generating a perception of a self-aware monitor floating above it all. It’s a hardware/software distinction, in many ways, and it’s natural to interpret others’ behavior with a theory of mind.

But I gotta love studies that rely on the perspectives of pre-schoolers. Babies love to play peek-a-boo, and are endlessly surprised when you open your hands and…and there you are! Giggle and coo! But you know, when we get older, we would consider it grossly unnatural if an individual failed to acquire the concept of object permanence. Did you know that human minds mature over time? Inclinations in young children are not necessarily retained by adults.

Also, creationists and religious people are not children, nor do they have child-like brains. Give ’em some credit, they can learn and adapt and grow just like us Grown-Up-Sciencey-Types.

Turn this view on Mooney: Grow up.

Overactive Agency Detection. But how do you know the designer is “God”? That too may be the result of a default brain setting.

Another trait, closely related to teleological thinking, is our tendency to treat any number of inanimate objects as if they have minds and intentions. Examples of faulty agency detection, explains University of British Columbia origins of religion scholar Ara Norenzayan, range from seeing “faces in the clouds” to “getting really angry at your computer when it starts to malfunction.” People engage in such “anthropomorphizing” all the time; it seems to come naturally. And it’s a short step to religion: “When people anthropomorphize gods, they are inferring mental states,” says Norenzayan.

Yes? Isn’t that what we were just talking about under Dualism? The logic of listicles is always reinforced when you use a magic number like 7 or 10, isn’t it?

It is true, I have been known to snarl at the TV (especially on, say, Sunday morning, when the pundits are babbling, or when I accidentally flip through Fox News), even though it is an inanimate object that is not responsible for the idiocy displayed on it. But you know, I’m slightly less literal-minded than the story makes me out to be: even when my irrational side is getting tickled by the provocation of the noises from the magic box on the wall or the shapes of clouds, I’m quite able to draw myself up short and recognize reality. I know that turning the television off does not make the annoying people disappear, and that the bunny hopping about in the sky is no threat to my salad. While we recognize that the brain contains fallible perception generators, could we please also recognize that it has more sophisticated processors to interpret those phenomena? And guess what — creationists and religious people have them too!

Where I object is that “short step to religion”. Fine; we can see that it’s an easy first step. But when I take a step, I try to keep on walking. What’s unnatural is to take a step and then just stop when there’s a wide open path ahead of me. Why assume that those Religious Others are incapable of thinking beyond first impressions? Why not assume that they are just as capable of going on beyond that preliminary, primitive perception?

Turn this view on Mooney: It’s always tempting to find that first confirming impression and stop. No need to think further, we’ve already got the answer. It’s particularly tempting when we’ve got a thesis that we want affirmed, and there it is: by golly, I have grumbled at my computer, therefore, GOD is an entirely reasonable conclusion. Keep on walking, Mooney, there are more steps beyond the first.

Teleological Thinking. Essentialism is just one basic cognitive trait, observed in young children, that seems to hinder evolutionary thinking. Another is “teleology,” or the tendency to ascribe purposes to things and objects so as to assume they exist to serve some goal.

Recent research suggests that 4 and 5 year old children are highly teleological in their thinking, tending to opine, for instance, that clouds are “for raining” and that the purpose of lions is “to go in the zoo.” The same tendency has been observed in 7 and 8 year olds who, when asked why “prehistoric rocks are pointy,” offered answers like “so that animals could scratch on them when they got itchy” and “so that animals wouldn’t sit on them and smash them.”

Oh, jebus, more kiddie minds. Could we stop this please? Religious people aren’t primitive children. A lot of religious thinking is abstract, convoluted, elaborate, and sophisticated — it’s also deeply flawed, but let’s not pretend that we can find the roots of Catholic or Jewish theology in the simplistic thinking of seven year olds. The Summa Theologica, the Talmud, and the Hadiths are not one-off guesses produced by some kids on the playground at the prompting of psychologists. They are highly unnatural (in the sense that Mooney is using the word) products of exquisitely higher order thinking on the part of thousands of people deeply embedded in an elaborate cultural tradition.

Simplistic biases like those we see in those kids are easily overcome. I’m an example, so is Mooney, so is Ken Miller, so is every scientist brought up in a Christian or Jewish or Muslim or Hindu society. That isn’t the problem. The problem lies in very detailed rationalizations assembled by scholars and leaders and influential social messengers that might play on intrinsic biases in our thinking, but I would say that that is also true of science. Shall we argue that Newtonian laws are built on our childlike appreciation of “funny man falls down” slapstick? That would be just as naive.

Turn this view on Mooney: You’re looking for intent and purpose in the pervasiveness of religious thought; you’re seeking simple causal agents behind a presumed bias towards religion. Have you considered the possibility that you’re engaging in the same teleological thinking that you’re labeling as a foundation for religion?

Biological Essentialism. First, we seem to have a deep tendency to think about biology in a way that is “essentialist”—in other words, assuming that each separate kind of animal species has a fundamental, unique nature that unites all members of that species, and that is inviolate. Fish have gills, birds have wings, fish make more fish, birds make more birds, and that’s how it all works. Essentialist thinking has been demonstrated in young children. “Little kids as young as my 2 and a half year old granddaughter are quite clear that puppies don’t have ponies for mommies and daddies,” explains McCauley.

“we seem to have a deep tendency to think about biology”…stop right there. We do? People think deeply about biology? Really?

Biologists think deeply about biology, and thinking deeply about biology seems to have produced the theory of evolution. Thinking superficially about the causes and origins of species, treating them as merely descriptive categories of convenience, seems to have produced creationism. Again with the little kids — are we seriously going to consider that 2½ year old kids have been “thinking deeply” about biology?

There are other people who think deeply about biology — hunters, for instance, or farmers. And they come up with schemes that also group animals and plants in patterns, but they generally aren’t trying to explain origins; they’re trying to explain behavior. It’s when you actually think about where things come from that the assumptions of pre-schoolers readily begin to fall apart.

Turn this view on Mooney: Mooney has long been an opponent of “data dump” science, the idea that merely educating people about science is sufficient to persuade them to abandon religion. I’m somewhat sympathetic to that myself; it does take more than just hammering with facts to get an idea accepted. But I would also suggest that one major problem with Mooney’s argument is that he is so willing to assume that the first impressions of children represent deep thoughts — that he doesn’t seem to appreciate the importance of data and evidence in shaping how people think. This stuff matters. I am not the boy I was who first stepped into a library. Learning is actually central to the human experience.

Our cognitive architecture and emotional dispositions are certainly natural and biological, but they can equally well incline us to accept natural explanations as well as supernatural ones — we are shaped by our experiences as well as by crude preliminary suppositions. The argument that religion is more “natural” than science is a bow to the naturalistic fallacy and a nod towards the status quo. Do not try to tell me that my mode of thinking, as easily and simply as I fell into it (and as readily as many in the scientific community find it), is somehow weird, unnatural, inhuman, or in defiance of my evolved instincts. It is not. Every person has the capacity to think scientifically or religiously or even both, and there are still other modes of thought not limited by the presumptions of our particular culture.

It is a logical failure to assume that what is is what must be, or that the currently dominant elements of our culture are representative of how the human mind must work.

A new journal

It’s true that science publishing has some serious problems — can you access the latest results from federally funded research? Do you think Science and Nature are really the best science journals in the world? — so it’s good that some people are taking the lead in changing their approaches and developing alternative publishing models.

Leading academic journals are distorting the scientific process and represent a "tyranny" that must be broken, according to a Nobel prize winner who has declared a boycott on the publications.

Randy Schekman, a US biologist who won the Nobel prize in physiology or medicine this year and receives his prize in Stockholm on Tuesday, said his lab would no longer send research papers to the top-tier journals, Nature, Cell and Science.

Schekman said pressure to publish in "luxury" journals encouraged researchers to cut corners and pursue trendy fields of science instead of doing more important work. The problem was exacerbated, he said, by editors who were not active scientists but professionals who favoured studies that were likely to make a splash.

Easy for Schekman to do. He’s got a Nobel; I don’t think he has to worry about getting and maintaining a position, or even about getting published wherever he wants, anymore. Cutting out the “luxury” (I think they prefer to be called “prestige”) journals doesn’t discomfit him in the slightest.

Schekman is scathing in his assessment of the popular big name journals. But at least he’s also trying to do something to correct the situation: he is promoting a new open-access journal, eLife, of which he is the editor.

I took a look. It was a bit off-putting at first: Schekman’s face is plastered in the middle of the page, and there’s a link up top to “Follow Randy’s Nobel Journey”, and I thought…uh-oh, are we going to replace “luxury” journals with vanity journals? But then I browsed the several hundred currently published articles, and they’re not bad, at least if you’re interested in cell and molecular biology (oh, hey, I am!).

Looks like I’m adding another journal to the list I regularly check.

A cautionary note about fMRI studies

I’ve been distracted lately — it’s end of the world semester time — and so I didn’t have time to comment on this recent PNAS paper that reports on dramatic sex differences in the brains of men and women. Fortunately, I can just tell you to go read Christian Jarrett, who explains most of the flaws in the study, or you can look at these graphical illustrations of the magnitude of the differences. I just want to add two lesser points.

First, let’s all be really careful about the overselling of fMRI, ‘k? It’s a powerful tool, but it’s got serious spatial and temporal resolution limitations, and it is not, as many in the public seem to think, directly visualizing the electrical signaling of neurons. It’s imaging the broader physiological activity — respiration, oxygen flux, vascular changes — in small chunks of the brain. If you’re ever going to talk about fMRI, I recommend that you read Nikos Logothetis’s paper that coolly assesses the state of affairs with fMRI.

The limitations of fMRI are not related to physics or poor engineering, and are unlikely to be resolved by increasing the sophistication and power of the scanners; they are instead due to the circuitry and functional organization of the brain, as well as to inappropriate experimental protocols that ignore this organization. The fMRI signal cannot easily differentiate between function-specific processing and neuromodulation, between bottom-up and top-down signals, and it may potentially confuse excitation and inhibition. The magnitude of the fMRI signal cannot be quantified to reflect accurately differences between brain regions, or between tasks within the same region. The origin of the latter problem is not due to our current inability to estimate accurately cerebral metabolic rate of oxygen (CMRO2) from the BOLD signal, but to the fact that haemodynamic responses are sensitive to the size of the activated population, which may change as the sparsity of neural representations varies spatially and temporally. In cortical regions in which stimulus- or task-related perceptual or cognitive capacities are sparsely represented (for example, instantiated in the activity of a very small number of neurons), volume transmission—which probably underlies the altered states of motivation, attention, learning and memory—may dominate haemodynamic responses and make it impossible to deduce the exact role of the area in the task at hand. Neuromodulation is also likely to affect the ultimate spatiotemporal resolution of the signal.

Just so you don’t think this is a paper ragging on the technique, let me balance that with another quote. It’s a very even-handed paper that discusses fMRI honestly.

This having been said, and despite its shortcomings, fMRI is currently the best tool we have for gaining insights into brain function and formulating interesting and eventually testable hypotheses, even though the plausibility of these hypotheses critically depends on used magnetic resonance technology, experimental protocol, statistical analysis and insightful modelling. Theories on the brain’s functional organization (not just modelling of data) will probably be the best strategy for optimizing all of the above. Hypotheses formulated on the basis of fMRI experiments are unlikely to be analytically tested with fMRI itself in terms of neural mechanisms, and this is unlikely to change any time in the near future.
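
To make the temporal limitation concrete, here’s a toy model of my own (a textbook-style illustration, not anything from the Logothetis paper): treat the BOLD signal as brief neural events convolved with a slow hemodynamic response function, and watch events half a second apart smear into one broad bump.

```python
# Toy model of BOLD temporal blurring (illustrative only): brief neural
# events convolved with a slow, gamma-like hemodynamic response function.
import numpy as np

dt = 0.1
t = np.arange(0, 30, dt)                      # seconds
hrf = (t / 5.0) ** 2 * np.exp(-t / 2.5)       # crude HRF peaking ~5 s after an event
hrf /= hrf.sum()

events = np.zeros_like(t)
events[[10, 15, 20]] = 1.0                    # three brief events at 1.0, 1.5, and 2.0 s

bold = np.convolve(events, hrf)[: len(t)]     # roughly what the scanner sees
# Three distinct events half a second apart come out as one slow bump
# peaking several seconds later:
print("BOLD peak at ~%.1f s" % t[bold.argmax()])
```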

The other point I want to mention is that there’s a lot of extremely cool data visualization going on in fMRI studies, but what you’re really seeing is data that has been grandly massaged. Imagine that I take a photo of my wife’s hand, and my hand. If I just showed you the raw images, the differences would be obvious, and you’d probably have no problem recognizing which was the man’s and which was the woman’s. This is not true of the raw data from two brain scans from a woman and a man — without all kinds of processing and data extraction (legitimate operations, mind you) it would look like a hash of noise. But do we look at two people’s hands, with obvious differences, and announce that we’ve made a dramatic discovery that sex differences are hardwired? So why do scientists get away with it when it involves sticking heads in a very expensive machine that makes funny noises?

Furthermore, the processing done in this instance was designed to abstract and highlight the differences, amplifying their apparent magnitude. Take the photos of my wife’s hand and mine, and now do some jazzy enhancement to subtract out anything that is the same, so the bulk of the images are erased as unimportant, and then pseudocolor the remainder into neon reds and blues, and display it in 3 dimensions, rotating. That would be a weird, complex image far removed from the mundane familiarity of the shape of the hand, but it would emphasize real differences to an extraordinary degree, while obscuring all of the similarities, and give a false impression of the magnitude of the differences.
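
If you want to see how effective that trick is, here’s a toy numpy sketch (my own illustration, not any real fMRI pipeline): two arrays that are almost entirely identical, with the shared structure subtracted away and the residue stretched to fill the whole display range.

```python
# Toy illustration (not any real fMRI pipeline): subtract what two nearly
# identical "images" share, stretch the residue over the full display
# range, and a ~2% difference looks dramatic.
import numpy as np

rng = np.random.default_rng(0)
shared = rng.normal(size=(64, 64))                 # structure common to both "hands"
a = shared + 0.02 * rng.normal(size=(64, 64))      # tiny individual variation
b = shared + 0.02 * rng.normal(size=(64, 64))

print("raw images correlate at r = %.4f" % np.corrcoef(a.ravel(), b.ravel())[0, 1])

diff = a - b                                       # erase everything shared
stretched = (diff - diff.min()) / (diff.max() - diff.min())
print("residue rescaled to span %.1f..%.1f" % (stretched.min(), stretched.max()))
# Pseudocoloring `stretched` (say, matplotlib's cmap='seismic') would paint
# that residue in saturated reds and blues, while everything the two images
# share has vanished from the picture entirely.
```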

Let’s not assign all the differences to something genetic, either (although of course, some are modulated by biological — but not really genetic — differences). If you were to do the same comparison of my hand to my father’s, you’d see much grander differences than between mine and my wife’s. He was a manual laborer and mechanic, and I recall doing the comparison myself: his hands were muscular, powerful, calloused, deeply lined. I should have gotten a photo while he was alive so I could publish it in PNAS, touting significant biological differences between father and son.

(via Stephanie)


Logothetis NK (2008) What we can do and what we cannot do with fMRI. Nature 453(7197):869-78. doi: 10.1038/nature06976.

The reification of the gene

Razib Khan poked me on Twitter yesterday on the topic of David Dobbs’ controversial article, which I’ve already discussed (I liked it). I’m in the minority here; Jerry Coyne has two rebuttals, and Richard Dawkins himself has replied. There has also been a lot of pushback in the comments here. I think they all miss the mark, and represent an attempt to shoehorn everything into an established, successful research program without acknowledging any of the inadequacies of genetic reductionism.

Before I continue, let’s get one thing clear: I am saying that understanding genes is fundamental, important, and productive, but it is not sufficient to explain evolution, development, or cell biology.

But what the hell do we mean by a “gene”? Sure, it’s a transcribed sequence in the genome that produces a functional product; its activity is dependent to a significant degree on the sequence of nucleotides within it, and we can identify similar genes in multiple lineages and analyze variations, both as a measure of evolutionary history and, often, of adaptive function. This is great stuff that keeps science careers humming just figuring it out at that level. Again, I’m not dissing that level of analysis, nor do I think it is trivial.

However, I look at it as a cell and developmental biologist, and there’s so much more. That gene’s transcriptional state is going to depend on the histones that enfold it and the enzymes that may have modified it; it’s going to depend on its genetic neighborhood and other genes around it; it’s not just sitting there, doing its own thing solo. And you will cry out, but those are just products of other genes, histone genes and methylation enzymes and DNA binding proteins, and their sequences of nucleotides! And I will agree, but there’s nothing “just” about it. Expression of each of those genes is dependent on their histones and methylation state. And further, those properties are contingent on the history and environment of the cell — you can’t describe the state of the first gene by reciting the sequences of all of those other genes.

Furthermore, the state of that gene is dependent on activators and repressors, enhancer and silencer sequences. And once again, I will be told that those are just genetic sequences and we can compile all those patterns, no problem. And I will say again, the sequence is not sufficient: you also need to know the history of all the interlinked bits and pieces. What activators and repressors are present is simply not derivable from the genes alone.

And I can go further and point out that once the gene is transcribed, the RNA may be spliced (sometimes alternatively) and edited, processed thoroughly, and be subject to yet more opportunities for control. I will be told again that those processes are ultimately a product of genes, and I will say in vain…but you don’t account for all the cellular and environmental events with sequence information!

And then that RNA is exported to the cytoplasm, where it encounters microRNAs and finds itself in a rich and complex environment, competing with other gene products for translation, while also being turned over by enzymes that are breaking it down.

Yes, it is in an environment full of gene products. You know my objection by now.

And then it is translated into protein at some rate regulated by other factors in the cell (yeah, gene products in many cases), and it is chaperoned and transported and methylated and acetylated and glycosylated and ubiquitinated and phosphorylated, and assembled into protein complexes with all these other gene products, and its behavior will depend on signals and the phosphorylation etc. state of other proteins, and I will freely and happily stipulate that you can trace many of those events back to other genes, and that they respond in interesting ways to changes in the sequences of those genes.

But I will also rudely tell you that we don’t understand the process yet. Knowing the genes is not enough.
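
If it helps to see the argument in a different notation, here’s a toy model (entirely my own invention, with made-up names and numbers) of why reciting the sequence doesn’t give you the expression state:

```python
# Toy model (invented for illustration; the names and numbers are mine):
# the same nucleotide sequence yields different expression depending on
# cellular context that the sequence alone doesn't encode.
from dataclasses import dataclass, field

@dataclass
class GeneState:
    sequence: str                        # the part sequence-centric analysis sees
    promoter_methylated: bool = False    # epigenetic mark, set by cell history
    chromatin_open: bool = True          # histone/packaging state
    activators: set = field(default_factory=set)  # present depending on lineage

def transcription_rate(g: GeneState) -> float:
    """Expression as a function of context, not just sequence."""
    if g.promoter_methylated or not g.chromatin_open:
        return 0.0                       # silenced, whatever the sequence says
    rate = 1.0
    if "enhancer_complex" in g.activators:
        rate *= 10.0                     # same gene, tenfold different output
    return rate

seq = "ATGGTGCACCTGACT"                  # an arbitrary fragment, identical in all three
print(transcription_rate(GeneState(seq)))                                   # 1.0
print(transcription_rate(GeneState(seq, promoter_methylated=True)))         # 0.0
print(transcription_rate(GeneState(seq, activators={"enhancer_complex"})))  # 10.0
```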

It’s as if we’re looking at a single point on a hologram and describing it in detail, and making guesses about its contribution to the whole, but failing to appreciate the importance of the diffraction patterns at every point in the image to our perception of the whole. And further, we wave off any criticism that demands a more holistic perspective by saying that those other points? They’re just like the point I’m studying. Once I understand this one, we’ll know what’s going on with the others.

That’s the peril of a historically successful, productive research program. We get locked in to a model; there is the appeal of being able to use solid, established protocols to gather lots of publishable data, and to keep on doing it over and over. It’s real information, and useful, but it also propagates the illusion of comprehension. We are not motivated to step away from the busy, churning machine of data gathering and rethink our theories.

We forget that our theories are purely human constructs designed to help us simplify and make sense of a complex universe, and most seriously we fail to see how our theories shape our interpretation of the data…and they shape what data we look for! That’s my objection to the model of evolution in The Selfish Gene: it sure is useful, too useful, and there are looming barriers to our understanding of biology, and whatever idea finally gets us past them is going to require another Dawkins to disseminate it.

Let me try to explain with a metaphor — always a dangerous thing, but especially dangerous because I’m going to use a computer metaphor, and those things always grip people’s brains a little bit too hard.

In the early days of home computing, we had these boxes where the input to memory was direct: you’d manually step through the addresses, and then there was a set of switches on the front that you’d use to toggle the bits at that location on and off. When a program was running, you’d see the lights blinking on and off as the processor stepped through each instruction. Later, we had other tools: I recall tinkering with antique 8-bit computers by opening them up and clipping voltmeters or an oscilloscope to pins on the memory board and watching bits changing during execution. Then as the tools got better, we had monitors/debuggers we could run that would step-trace and display the contents of memory locations. Or you could pick any memory location and instantly change the value stored there.

That’s where we’re at in biology right now, staring at the blinking lights of the genome. We can look at a location in the genome — a gene — and we can compare how the data stored there changes over developmental or evolutionary time. There’s no mistaking that it is real and interesting information, but it tells us about as much about how the whole organism works and changes as a readout displaying the number stored at 0x03A574DC on our iPhone tells us how iOS works. Maybe it’s useful; maybe there’s a number stored there that tells you something about the time, or the version, or if you set it to zero it causes the phone to reboot, but let’s not pretend that we know much about what the machine is actually doing. We’re looking at it from the wrong perspective to figure that out.

You could, after all, describe the operation of a computer by cataloging the state of all of its memory bits in each clock cycle. You might see patterns. You might infer the presence of interesting and significant bits, and you could even experimentally tweak them and see what happens. Is that the best way to understand how it works? I’d say you’re missing a whole ‘nother conceptual level that would do a better job of explaining it.
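
Here’s a toy version of that cataloging exercise (my own sketch, nothing more): log one “memory location” per clock cycle of a machine whose program you can’t see, and notice how little the trace tells you.

```python
# Toy version of bit-watching (illustration only): record one "memory
# location" per clock cycle of a machine whose program is hidden.
def toy_machine(cycles=10):
    memory = {0x3A: 0, 0x3B: 1}
    trace = []
    for _ in range(cycles):
        # the hidden "program": a Fibonacci-style update each cycle
        memory[0x3A], memory[0x3B] = memory[0x3B], memory[0x3A] + memory[0x3B]
        trace.append(memory[0x3A])       # all the observer ever records
    return trace

print(toy_machine())   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
# You might spot the pattern in the blinking values, even "tweak the bit"
# by poking memory[0x3A], but you still haven't learned what the machine
# as a whole is doing, or why.
```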

Only we lack that theory that would help us understand that level right now. It’s fine to keep step-tracing the genome, and maybe that will provide the insight some bright mind will need to come up with a higher order explanation, but let’s not elide the fact that we don’t have it yet. Maybe we should step back and look for it.