We need a sociologist of science…or a philosopher

There’s another paper out debunking the ENCODE consortium’s absurd interpretation of their data. ENCODE, you may recall, published a rather controversial paper in which they claimed to have found that 80% of the human genome was ‘functional’ — for an extraordinarily loose definition of function — and further revealed that several of the project leaders were working with the peculiar assumption that 100% must be functional. It was a godawful mess, and compromised the value of a huge investment in big science.

Now W. Ford Doolittle has joined the ranks of the many scientists who leapt immediately into the argument. He has published “Is junk DNA bunk? A critique of ENCODE” in PNAS.

Do data from the Encyclopedia Of DNA Elements (ENCODE) project render the notion of junk DNA obsolete? Here, I review older arguments for junk grounded in the C-value paradox and propose a thought experiment to challenge ENCODE’s ontology. Specifically, what would we expect for the number of functional elements (as ENCODE defines them) in genomes much larger than our own genome? If the number were to stay more or less constant, it would seem sensible to consider the rest of the DNA of larger genomes to be junk or, at least, assign it a different sort of role (structural rather than informational). If, however, the number of functional elements were to rise significantly with C-value then, (i) organisms with genomes larger than our genome are more complex phenotypically than we are, (ii) ENCODE’s definition of functional element identifies many sites that would not be considered functional or phenotype-determining by standard uses in biology, or (iii) the same phenotypic functions are often determined in a more diffuse fashion in larger-genomed organisms. Good cases can be made for propositions ii and iii. A larger theoretical framework, embracing informational and structural roles for DNA, neutral as well as adaptive causes of complexity, and selection as a multilevel phenomenon, is needed.

In the paper, he makes an argument similar to one T. Ryan Gregory has made many times before. There are organisms that have much larger genomes than humans; lungfish, for example, have 130 billion base pairs, compared to the 3 billion humans have. If the ENCODE consortium had studied lungfish instead, would they still be arguing that the organism had function for 104 billion bases (80% of 130 billion)? Or would they be suggesting that yes, lungfish were full of junk DNA?

If they claim that lungfish have roughly 43 times as much functional sequence as we do, well, what is it all doing? Does that imply that lungfish are far more phenotypically complex than we are? And if they grant that junk DNA exists in great abundance in some species, just not in ours, does that imply that we’re somehow sitting in the perfect sweet spot of genetic optimality? If that’s the case, what about species like fugu, which have genomes one-eighth the size of ours?
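The arithmetic in that comparison is easy to check. Here’s a minimal back-of-the-envelope sketch in Python, using the rounded genome sizes quoted above (roughly 3 billion base pairs for humans, 130 billion for lungfish); the exact figures vary by source, so treat the outputs as ballpark numbers rather than precise measurements.

```python
# Back-of-the-envelope check of the lungfish/human "functional DNA" comparison.
# Genome sizes are the rounded figures used in this post, not precise measurements.
HUMAN_GENOME_BP = 3e9             # ~3 billion base pairs
LUNGFISH_GENOME_BP = 130e9        # ~130 billion base pairs
ENCODE_FUNCTIONAL_FRACTION = 0.8  # ENCODE's claimed "functional" fraction

human_functional = ENCODE_FUNCTIONAL_FRACTION * HUMAN_GENOME_BP        # ~2.4e9 bp
lungfish_functional = ENCODE_FUNCTIONAL_FRACTION * LUNGFISH_GENOME_BP  # ~1.04e11 bp

print(f"Lungfish 'functional' DNA: {lungfish_functional:.3g} bp")
print(f"Ratio to human 'functional' DNA: {lungfish_functional / human_functional:.0f}x")
# Prints about 1.04e+11 bp for the lungfish and a ratio of roughly 43x.
```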

It’s really a devastating argument, but then, all of the arguments against ENCODE’s interpretations have been solid, and together they knock the whole thing out of the park. It’s been thoroughly demonstrated that the conclusions of the ENCODE program were shit.

[Cover image: Yale Medicine, “Junk No More”]

So why, Yale, why? The Winter issue of Yale Medicine magazine features as its cover story “Junk No More,” an awful piece of PR fluff that announces in its first line “R.I.P., junk DNA” and goes on to tout the same nonsense that paper after paper has refuted since the ENCODE announcement.

The consortium found biological activity in 80 percent of the genome and identified about 4 million sites that play a role in regulating genes. Some noncoding sections, as had long been known, regulate genes. Some noncoding regions bind regulatory proteins, while others code for strands of RNA that regulate gene expression. Yale scientists, who played a key role in this project, also found “fossils,” genes that date to our nonhuman ancestors and may still have a function. Mark B. Gerstein, Ph.D., the Albert L. Williams Professor of Biomedical Informatics and professor of molecular biophysics and biochemistry, and computer science, led a team that unraveled the network of connections between coding and noncoding sections of the genome.

Arguably the project’s greatest achievement is the repository of new information that will give scientists a stronger grasp of human biology and disease, and pave the way for novel medical treatments. Once verified for accuracy, the data sets generated by the project are posted on the Internet, available to anyone. Even before the project’s September announcement, more than 150 scientists not connected to ENCODE had used its data in their research.

“We’ve come a long way,” said Ewan Birney, Ph.D., of the European Bioinformatics Institute (EBI) in the United Kingdom, lead analysis coordinator for ENCODE. “By carefully piecing together a simply staggering variety of data, we’ve shown that the human genome is simply alive with switches, turning our genes on and off and controlling when and where proteins are produced. ENCODE has taken our knowledge of the genome to the next level, and all of that knowledge is being shared openly.”

Oh, Christ. Not only is it claiming that the 80% figure is for biological activity (it isn’t), but it trots out the usual university press relations crap about how the study is all about medicine. It wasn’t and isn’t. It’s just that dumbasses can only think of one way to explain biological research to the public, and that is to suggest that it will cure cancer.

As for Birney’s remarks, they are offensively ignorant. No, the ENCODE research did not show that the human genome is actively regulated. We’ve known that for fifty years.

That’s not the only ahistorical part of the article. They also claim that the idea of junk DNA has been discredited for years.

Some early press coverage credited ENCODE with discovering that so-called junk DNA has a function, but that was old news. The term had been floating around since the 1990s and suggested that the bulk of noncoding DNA serves no purpose; however, articles in scholarly journals had reported for decades that DNA in these “junk” regions does play a regulatory role. In a 2007 issue of Genome Research, Gerstein had suggested that the ENCODE project might prompt a new definition of what a gene is, based on “the discrepancy between our previous protein-centric view of the gene and one that is revealed by the extensive transcriptional activity of the genome.” Researchers had known for some time that the noncoding regions are alive with activity. ENCODE demonstrated just how much action there is and defined what is happening in 80 percent of the genome. That is not to say that 80 percent was found to have a regulatory function, only that some biochemical activity is going on. The space between genes was also found to contain sites where DNA transcription into RNA begins and areas that encode RNA transcripts that might have regulatory roles even though they are not translated into proteins.

I swear, I’m reading this article and finding it indistinguishable from the kind of bad science I’d see from ICR or Answers in Genesis.

I have to mention one other revelation from the article. There has been a tendency to throw a lot of the blame for the inane 80% number on Ewan Birney alone…he threw in that interpretation in the lead paper, but it wasn’t endorsed by every participant in the project. But look at this:

The day in September that the news embargo on the ENCODE project’s findings was lifted, Gerstein saw an article about the project in The New York Times on his smartphone. There was a problem. A graphic hadn’t been reproduced accurately. “I was just so panicked,” he recalled. “I was literally walking around Sterling Hall of Medicine between meetings talking with The Times on the phone.” He finally reached a graphics editor who fixed it.

So Gerstein was so concerned about accuracy that he panicked over a graphic in the popular press, yet the big claim in the Birney paper, the one that would utterly undermine confidence in the whole body of work, didn’t perturb him at all? And now, months later, he’s collaborating with the Yale PR department on a puff piece that blithely sails past all the objections people have raised? Remarkable.

This is what boggles my mind, and why I hope some sociologist of science is studying this whole process right now. It’s a revealing peek at the politics and culture of science. We have a body of very well funded, high ranking scientists working at prestigious institutions who are actively and obviously fitting the data to a set of unworkable theoretical presuppositions, and completely ignoring the rebuttals that are appearing at a rapid clip. The idea that the entirety of the genome is both functional and adaptive is untenable and unsupportable; we instead have hundreds of scientists who have been bamboozled into treating noise as evidence of function. It’s looking like N rays or polywater on a large and extremely richly budgeted level. And it’s going on right now.

If we can’t have a sociologist making an academic study of it all, can we at least have a science journalist writing a book about it? This stuff is fascinating.

I have my own explanation for what is going on. What I think we’re seeing is an emerging clash between scientists and technicians. I’ve seen a lot of biomedical grad students going through training in pushing buttons and running gels and sucking numerical data out of machines, and we’ve got the tools to generate so much data right now that we need people who can manage that. But it’s not science. It’s technology. There’s a difference.

A scientist has to be able to think about the data they’re generating, put it into a larger context, and ask the kinds of questions that probe deeper than any superficial analysis can reach. A scientist has to be more broadly trained than the person who runs the gadgetry.

This might get me burned at the stake worse than sneering at ENCODE, but a good scientist has to be…a philosopher. They may not have formal training in philosophy, but the good ones have to be at least roughly intuitive natural philosophers (ooh, I’ve heard that phrase somewhere before). If I were designing a biology curriculum today, I’d want to make at least some basic introduction to the philosophy of science an essential and early part of the training.

I know, I’m going against the grain — there have been a lot of big name scientists who openly dismiss philosophy. Richard Feynman, for instance, said “Philosophy of science is about as useful to scientists as ornithology is to birds.” But Feynman was wrong, and ironically so. Reading Feynman is actually like reading philosophy — a strange kind of philosophy that squirms and wiggles trying to avoid the hated label, but it’s still philosophy.

I think the conflict arises because, like everything, 90% of philosophy is garbage, and scientists don’t want to be associated with a lot of the masturbatory nonsense some philosophers pump out. But let’s not lose sight of the fact that some science, like ENCODE, is nonsense, too — and the quantity of garbage is only going to rise if we don’t pay attention to understanding as much as we do accumulating data. We need the input of philosophy.

A quote from Ed Abbey, who died 24 years ago today

The geologic approach is certainly primary and fundamental, underlying the attitude and outlook that best support all others, including the insights of poetry and the wisdom of religion. Just as the earth itself forms the indispensable ground for the only kind of life we know, providing the sole sustenance of our minds and bodies, so does empirical truth constitute the foundation of higher truths. (If there is such a thing as higher truth.)

It seems to me that Keats was wrong when he asked, rhetorically, “Do not all charms fly … at the mere touch of cold philosophy?” The word “philosophy” standing, in his day, for what we now call “physical science.” But Keats was wrong, I say, because there is more charm in one “mere” fact, confirmed by test and observation, linked to other facts through coherent theory into a rational system, than in a whole brainful of fancy and fantasy. I see more poetry in a chunk of quartzite than in a make-believe wood nymph, more beauty in the revelations of a verifiable intellectual construction than in whole misty empires of obsolete mythology.

The moral I labor toward is that a landscape as splendid as that of the Colorado Plateau can best be understood and given human significance by poets who have their feet planted in concrete — concrete data — and by scientists whose heads and hearts have not lost the capacity for wonder. Any good poet, in our age at least, must begin with the scientific view of the world; and any scientist worth listening to must be something of a poet, must possess the ability to communicate to the rest of us his sense of love and wonder at what his work discovers.

I can defend both Lawrence Krauss and philosophy!

Philosophers are still grumbling about Lawrence Krauss, who openly dissed philosophy (word to the philosophers reading this: he recanted, so you can put down the thumbscrews and hot irons for now). This is one of those areas where I’m very much a middle-of-the-road person: I am not a philosopher, at least I’m definitely not as committed to the discipline as someone like Massimo Pigliucci, but I do think philosophy is an essential part of our intellectual toolkit — you can only dismiss it if you haven’t thought much about it, i.e., aren’t using philosophy at all.

So I’m pretty much in agreement with this post about the complementarity of philosophy and science. In fact, I’ll emphatically agree with this bit:

Scientists and mathematicians are really doing philosophy. It’s just that they’ve specialised in a particular branch, and they’re employing the carefully honed tools of their specific shard just for that particular job. So specialised, and so established is that toolkit, that they don’t consider them philosophers any more.

I’ll also agree with the flip side, where he defines philosophy:

Philosophy’s method is bounded only by the finite capacities of human thought. To the extent that something can be reckoned, philosophy can get there. As such, philosophy will never stop asking “why”.

But then I start to quibble (oh, no! I must be infected with philosophy!).

So what this is really saying is that science is a bounded domain of philosophy, while philosophy is unlimited, which sounds like philosophy has the better deal. But I’d argue otherwise: what’s missing in philosophy is that anvil of reality — that something to push against that allows us to test our conclusions against something other than internal consistency. It means philosophy is excellent at solving imaginary problems (which may be essential for understanding more mundane concerns), while science is excellent at solving the narrower domain of real problems. Science has something philosophy lacks: a solid foundation in empiricism. That’s a strength, not a weakness.

I think that’s where philosophers begin to annoy us, when they try to pass judgment using inappropriate referents — which is also how scientists like Krauss can annoy philosophers. And philosophers are so good at rationalizing disagreement away while carping at others. For instance…

Because scientists have a rather poor track record when it comes to doing philosophy.

Sam Harris’s attempt to provide a scientific basis for morality springs to mind, where he poo poos metaethics only to tread squarely in a metaethical dilemma. Or Richard Dawkins and his dismissal of religion as a false belief system, meanwhile dismissing the rather significant psychological and cultural functional roles it has played throughout human history, and may still play today.

Or Krauss, who without a hint of irony, suggests that good philosophers are really just bad scientists, when in fact he’s a good scientist doing philosophy badly. His definition of “nothing” comes not from within science, but is a grope in the dark for a definition that conforms with his particular theoretical predilections. That’s not how one defines things in polite (philosophical) circles, as David Albert pointed out.

After stating that scientists are philosophers and that science is a branch of philosophy, we’re now told that scientists do philosophy poorly. So is he saying that scientists must do science poorly? I know who’s not going to get invited to my next cotillion, that’s for sure.

Rather, scientists do their brand of philosophy very, very well — philosophers seem to be playing a two-faced game here of wanting to claim science as one of their own when they like what it accomplishes, but washing their hands of it when they don’t like it. Nuh-uh, people, you want to call us philosophers, you have to live with the stinking chemicals and the high energy discharges and the reeking cadavers now too.

His examples aren’t persuasive. I’ll skip over Harris, since I’m not particularly fond of his efforts to explain morality, but the Dawkins complaint is weird. He does not disregard the immense psychological and cultural roles of religion: in fact, those are reasons why he and I both detest religion, because we’re aware of all the harm it does and has done. That we think the physical and psychological harm is enough that we should change it is not a sign that we’re doing bad philosophy at all; it’s a sign that we scientifical philosophers consider reality and empiricism to be extremely important factors in our thinking…apparently to a greater degree than many non-scientifical philosophers.

As for Krauss, I thought the Albert review was awful — typical unbounded philosophy with no anchor to the truth. Krauss’s definition of “nothing” was not just a grope in the dark. It was a definition built on empirical and theoretical knowledge of what “nothing” is like. Krauss is describing the nothing we have, Albert is describing the nothing he thinks we ought to have. Krauss is being the scientist, Albert is being the philosopher, and the conflict is driven because the philosopher is unable to recognize the prerequisites to doing science well.

I think that appreciating the boundaries of both disciplines as well as their strengths is important for getting along. Krauss may not have appreciated what philosophy has to offer, but a substantial reason for the friction is the smugness of philosophers who disrespect the functional constraints required for doing good science. Scientists don’t get to be “bounded only by the finite capacities of human thought”. We also have to honor the physical nature of reality.

In my head I have the capacity to flap my arms and fly. In the real working world…not so much.

I can only read Plantinga for the lulz

In a review of a new book by Alvin Plantinga, Christopher Tollefsen claims that Plantinga, “one of the most influential philosophers of the twentieth and early twenty-first centuries”, has “systematically dismantled…the claims of the new atheists”. I think we can take that about as seriously as his assessment of Crazy Alvin’s status as a philosopher.

I have zero interest in Plantinga’s “philosophy” — what I’ve read of it convinces me that it’s nothing but deranged Christian apologetics gussied up in academic dress, and the words of Plantinga himself have pretty much persuaded me that I couldn’t address him without frequent invocations of the frolics of shithouse rats. But I am interested in how these gyring rodents see the New Atheists — it’s always such an easy confirmation that they don’t know what they’re talking about.

The claim of the new atheists is that Darwin’s “Dangerous idea,” as Dennett calls it, proves that there is no divine agency responsible for the world. As Dennett explains, “an impersonal, unreflective, robotic, mindless little scrap of molecular machinery is the ultimate basis of all the agency, and hence meaning, and hence consciousness, in the universe.” But the claims of Darwin show no such thing: even if Darwinism accurately identifies the mechanism by which evolution has occurred, Plantinga notes, “It is perfectly possible that the process of natural selection has been guided and superintended by God, and that it could not have produced our world without that guidance.”

The emphasis is there in the review; the first sentence just sings with ignorance. Here they are, carping at those New Atheists, and the first thing they say about them is blatantly false: no New Atheist claims to have a disproof of any god. We’re extremely forthright about saying it, too: Richard Dawkins explicitly laid out a scale of belief in The God Delusion, and it seems that every ferocious critic of that book never bothered to read it, because they were all stunned when Dawkins repeated the same thing on television: we don’t have absolute certainty about the nonexistence of a deity. We’re very confident that you might as well go on about your life as if gods don’t exist, but that’s about it.

Plantinga’s reservations at the end of that paragraph are also very silly. Evolution is the mechanism by which species have been shaped throughout the history of the world, and that is a fact; we can concede that there are other mechanisms besides natural selection (and in fact, we study them), and that one possibility, offered so far without evidence, is that intelligent entities have manipulated our ancestors. We don’t think it’s likely or necessary, but OK, it’s possible that one could find evidence supporting such a scenario.

As for Plantinga’s assertion that intelligence was required to create the diversity of life we have, well, please Alvin, at least wipe your filthy feet before spasming out on the carpet.

But there are things that I, as a New Atheist, am certain about, even if I remain open to the possibility of evolutionary interventionists of an undefined nature.

I am certain that “god” is a useless term. It’s utterly incoherent; some people babble about the god of the Christian Bible, which is an anthropomorphic being with vast magic powers and the emotional stability of an 8-year-old on meth. Others talk about an all-pervasive force in the universe, or use meaningless phrases like “the ground state of all being”, or chatter about a reified emotion like “love”. The really annoying thing about discussions with these people is that they’ll cheerfully switch definitions on you in mid-stream. Getting battered because the whole concept of an omnipotent being existing in the form of an Iron Age patriarch in the sky is silly? No problem! Just announce that god is everywhere and in you and that god is love. Trying hard to justify your regressive social policies using an amorphous principle like love, and finding the atheists turning the whole principle of benignity back on you? No problem! Just announce that god so loved us that he became a man, and if your opponents reject that concept, they’ll be thrown into Hell by God the Judge.

Plantinga is an excellent example of this theological muddle. On the one hand, he wants to argue for a cosmos-spanning Mind; on the other, a bigoted narrow being with a chosen race and a preferred position for sexual intercourse, who wants to be cosseted and praised for all eternity. Pick a clear definition for god, and be consistent about it, please. And then persuade all the other theologians that your definition is the correct one. Then come argue with the atheists when you know what the hell you’re talking about.

I am certain that theists have no credible evidence for their claims. Oh, sure, they can say their holy book says so, which is a kind of weak evidence; they can recite anecdotes; they can point to people who believe. But that’s about it, and it’s not adequate. I want to see verifiable empirical evidence that can be assessed independently of one’s sociocultural background; I want to see the stuff that would convince a Christian that Islam is an accurate description of the universe, or vice versa, and that would persuade a scientist that there is at least preliminary support for a phenomenon worth pursuing. Theologians don’t have that. They’ve never had that.

Religions have grown most often by the sword, or by fostering fear and emotional dependency, or by hijacking secular institutions and forcing beliefs on others, but they never expand by right of reason. Why isn’t a specific god-belief a universal, like mathematics or physics? Because unlike math or physics, religion doesn’t actually deliver on its promises. The power of religion has always been in psychological manipulation of the human mind, empowering a priesthood at the expense of genuine human advancement and understanding.

I am certain that evolution occurred. The evidence is in; the process occurred and is occurring, there are no known barriers to natural processes producing modern life from proto-life/chemistry over the course of 3.8 billion years, and all the evidence we do have shows modern forms being incrementally modified versions of earlier forms. We don’t know all the details, of course, and just maybe someone somewhere could discover a real hurdle that could not have been overcome without intelligent aid, but I know for a fact that no creationist has ever come up with a defensible objection, and that nearly all the creationists who pontificate so ponderously on the impossibility of biology, Plantinga among them, always turn out to be profoundly ignorant of the science. There’s a good inverse correlation between knowledge of biology and certainty that evolution can’t work.

So Plantinga-style arguments (that evolution cannot occur without intelligent guidance, therefore god) leave me cold. They begin with a false premise, easily refuted by the evidence, and so the credibility of their entire line of reasoning collapses. This is true of all the Intelligent Design creationism arguments that rest on showing that natural selection (the only mechanism they’ve heard of, sadly) doesn’t work, and that therefore you must accept the only other alternative they offer, which is godly intervention. Not only is it bad science, it’s bad logic.

Unfortunately for them, the alternative to taking potshots at an explanation that works is to provide specific positive evidence for design, and they can’t do that. For instance, they could say, “My designer enhanced human brain performance by introducing a specific allele of microcephalin into select populations 37,000 years ago”, but then they’d have to face those awkward, demanding questions from scientists: “How do you know? What’s your evidence? Why couldn’t natural mechanisms of genetic variance have produced that specific allele?” And we know they can’t cope with those questions, because their only reason for believing that is that they wish it were so.

That’s all Plantinga has got: a peculiar historical myth-figure that he can’t define without making his whole enterprise look ridiculous, a total lack of reasonable objective evidence to support his myth, and a reliance on criticizing a science he doesn’t understand in the hope that if he stirs up enough doubt, people will cling to his myth rather than all the other myths swirling about in the confusion of his own creation. It’s pathetic. And this is from “one of the most influential philosophers of the twentieth and early twenty-first centuries”? How sad that would be for philosophy, if it were true.

A Krauss concession

Lawrence Krauss annoyed quite a few people with his jokes about the uselessness of philosophy in recent talks. He has now published an apology — it turns out he has a qualified dislike of only certain kinds of philosophy, the kind that ignores empirical evidence, and otherwise appreciates the views of many philosophers.

So, to those philosophers I may have unjustly offended by seemingly blanket statements about the field, I apologize. I value your intelligent conversation and the insights of anyone who thinks carefully about our universe and who is willing to guide their thinking based on the evidence of reality. To those who wish to impose their definition of reality abstractly, independent of emerging empirical knowledge and the changing questions that go with it, and call that either philosophy or theology, I would say this: Please go on talking to each other, and let the rest of us get on with the goal of learning more about nature.

Conservative self-identifies with single-celled brainless organism

Among my usual flood of daily email, I frequently get tossed onto mailing lists for conservative think tanks. Why? I don’t know. I suspect that it’s for the same reason I also get a lot of gay porn in my email: not because I follow it or asked to be added, but because some tired d-bag with no imagination thinks it’s funny to dun me with more junk. The joke’s on them, though: I might keep it around and skim the stuff now and then to get inspiration for a blog post, and then click-click — a few presses of a button and I add the source to my junk mail filter, and never see it again.

No, I didn’t get inspired by gay porn today, but by drivel from some freakish conservative think tank called the Witherspoon Institute, about which I know next to nothing except that they’re another of those organizations that cloak themselves in the Holy Founding Fathers of America to promote illiberal non-freethinking anti-government BS. This latest is by a philosopher criticizing a book about modern reproductive biotechnologies. He doesn’t like ’em. Not one bit, no sir.

But you know an essay from a philosopher is going to be pretty much worthless when it opens and closes with references to… C.S. Lewis. I don’t know why that man gets so much happy clappy press from believers. I suspect he must have sold his soul to the devil.

Anyway, the bizarre part is in the middle, where Justin Barnard is poleaxed by the willingness of the book’s author, Steven Potter, to destroy human embryos. Potter apparently considers several sides of the debate, but fails to come down on the side of the Religious Right (that embryos are absolutely and undeniably full human beings from the instant of fertilization), instead espousing the dreadful notion that the definition of personhood falls into a huge gray area.

Potter’s own attempt to wrestle with the morality of destroying human embryos is philosophically, if not biologically, confused from the start. He begins by claiming that “each egg and sperm has the potential to make a person.” Biologically, this is simply false. Gametes, by themselves, have no intrinsic developmental potential for human personhood. Of course, Potter knows this. So his use of “potential” is likely more latitudinarian. Still, three pages later, Potter describes the zygote as having “remarkable potential.” “It can,” he explains, “turn itself into a person.” Ironically, Potter fails to recognize that this potentialist understanding of human personhood is at odds with his rather surprising admission of the embryological facts. Potter writes, “Of course we all began as a zygote. Everyone does.” What is shocking about this concession is what it so obviously entails–an entailment that seems lost on Potter. If I, the human being I am today, “began as a zygote,” then the zygote that began the-human-being-I-am-today was me–i.e., it was a human person. It was not merely a cell with “remarkable potential” to become me. It was me.

If anyone is confused here, it’s Barnard. Of course each egg and sperm has the potential to form a person, especially when we throw biotechnology into the equation, as the book he’s reviewing explicitly does. We already have techniques to revert and differentiate a sperm cell into an egg. For that matter, given time and research, we’ll be able to reprogram just about any cell into a totipotent state, and clone someone from a cheek swab. Does Mr Barnard regard every cell he sheds as a potential person?

Perhaps he wants to argue that a sperm or egg cell doesn’t have the potential for personhood without a human assist. But then by that limitation the zygote has to be excluded as well — no human zygote can develop to term without the extreme cooperation of another individual. Try it; extract a fertilized egg and set it in a beaker by your nightstand, and wait for a baby to crawl out. Won’t happen. A uterus and attendant physiological and behavioral meat construct, i.e., woman, is also an amazing piece of biotechnology that is a necessary component of the developmental process.

But the real blow to this whole “potential” argument comes from Barnard’s own last few sentences, which damage it irreparably. Was he going for a reductio here? Is the entire essay an exercise in irony? ’Cause that dope was dumb.

Yes, Mr Barnard began as a zygote. That does not mean the zygote was Mr Barnard. My car began as a stack of metal ingots and barrels of plastics; that does not imply that an ingot of iron is a car. My house began as a set of blueprints and an idea in an architect’s mind; nobody is going to pay the architect rent for living in his cranium or on a stack of paper in a cabinet. The zygote was not Justin Barnard, unless Justin Barnard is still a vegetating single-celled blob, in which case I’d like to know how he typed his essay.

Since Barnard claims to be a philosopher, I’ll cite another, a guy named Aristotle. This is a quote I use in the classroom when I try to explain to my students how epigenesis works, in contrast to preformation. Aristotle did some basic poking around in chicken eggs and in semen, and he noticed something rather obvious—there were no bones in there, nor blood, nor anything meatlike or gristly or brainy. So he made the simple suggestion that they weren’t there.

Why not admit straight away that the semen…is such that out of it blood and flesh can be formed, instead of maintaining that semen is both blood and flesh?

Barnard is making the classic preformationist error of assuming that everything had to be there in the beginning: I am made of bones and blood and flesh and brains and guts and consciousness and self-identity, therefore the zygote must have contained bones and blood and flesh and brains and guts and consciousness and self-identity.

It didn’t.

Why not admit straight away that the zygote is such that out of it selfhood may arise, rather than maintaining that the zygote is the self?

In that case we have to recognize that the person is not present instantaneously at one discrete moment, but emerges gradually over months to years of time, that there were moments when self was not present and other moments when self clearly was present, and moments in between where there is ambiguity or partial identity or otherwise blurry gray boundaries. This is a conclusion that makes conservative ideologues wince and shy away — I think it’s too complicated for their brains, which may in some ways be equivalent to the gormless reflexive metabolic state of the zygote — but it is how science understands the process of development.

Kim Stanley Robinson at Duke

I haven’t had a chance yet to listen to the whole of Kim Stanley Robinson’s talk at Duke, but what I’ve seen so far is very good. I’m more posting this here so I have a reminder to watch the rest once I get home, but nothing is stopping you all from enjoying it now.

science is a Utopian project; it began as a Utopian project and it has remained so ever since, an attempt to make a better world. And this is not always the view taken of science because its origins and its life have been so completely wrapped up with capitalism itself. They began together. You could consider them to be some kind of conjoined twins, Siamese twins that hate each other, Hindu gods that are permanently at odds, or even just a DNA strand wrapped around each other forever: some kind of completely imbricated and implicated co-leadership of the world, cultural dominance–so that science is not capitalism’s research and development division, or enabler, but a counterforce within it. And so despite the fact that as Galileo says that science was born with a gun to its head, and has always been under orders to facilitate the rise and expansion of capital, the two of them in their increasing power together are what you might call semi-autonomous, and science has been the Utopian thrust to alleviate suffering and make a better world.

A bit farther in, though, I have to disagree — he equates science with a new kind of religion. I understand why he’s making that argument, but I consider it lazy thinking; it’s like saying a car is a horse, because they share some basic function, but at some point in the transformation of a concept, you have to stop and say, “Wait a minute…this is something new.” Both a car and a horse may be useful for transportation, but a car is not a horse: we have a very different relationship to the two, their prevalence bends culture in very different ways, their differences are far, far greater than their similarities. In the same way, Robinson can say “It’s a religion in the sense of religio, it’s what binds us together. It’s a form of devotion: the scientific study of the world is simply a kind of worship of it, a very detailed, painstaking, and often tedious daily worship, like Zen,” but that glosses over the fundamental differences. Science changes the world and our understanding of it in ways that religion cannot.

A Natural History of Seeing: The Art and Science of Vision

Simon Ings has written a wonderful survey of the eye, called A Natural History of Seeing: The Art and Science of Vision (amzn/b&n/abe/pwll), and it’s another of those books you ought to be sticking on your Christmas lists right now. The title gives you an idea of its content. It’s a “natural history”, so don’t expect some dry exposition on deep details, but instead look forward to a light and readable exploration of the many facets of vision.

There is a discussion of the evolution of eyes, of course, but the topics are wide-ranging — Ings covers optics, chemistry, physiology, optical illusions, decapitated heads, Edgar Rice Burroughs’ many-legged, compound-eyed apts, pointillisme, cephalopods (how could he not?), scurvy, phacopids, Purkinje shifts…you get the idea. It’s a hodge-podge, a little bit of everything, a fascinating cabinet of curiosities where every door opened reveals some peculiar variant of an eye.

Don’t think it’s lacking in science, though, or that it’s entirely superficial. This is a book that asks the good questions: how do we know what we know? Each topic is addressed by digging deep to see how scientists came to their conclusions, and often that means we get an entertaining story from history or philosophy or the lab. Explaining the evolution of our theories of vision, for example, leads to the story of Abu’Ali al-Hasan ibn al-Hasan ibn al-Haythem, who pretended to be mad to avoid the cruelty of a despotic Caliph, who spent 12 years in a darkened house doing experiments in optics (perhaps calling him “mad” really wasn’t much of a stretch), and who emerged at the death of the tyrant with an understanding of refraction and a good theory of optics that involved light, instead of mysterious vision rays emerging from an eye. Ings is also a novelist, and it shows — these are stories that inform and lead to a deeper understanding.

If the book has any shortcoming, though, it is that some subjects are barely touched upon. Signal transduction and molecular evolution are given short shrift, for example, but then, if every sub-discipline received the depth of treatment given to basic optics, this book would be unmanageably immense. Enjoy it for what it is: a literate exploration of the major questions people have asked about eyes and vision for the last few thousand years.