Please, please people: stop using the naive dictionary meaning of words in place of context

I know I’m notorious for complaining about those goofy definitions of atheism (“it just means ‘not believing in god’, nothing more!”, the superficial mob will say) because the word has acquired much deeper resonances that we ignore at our peril, and has implications far greater than simple rejection of one assertion. But the other word that people love to abuse is “freethought”. The same superficial twits all think it means simply that you’re allowed to think whatever you want (which, it turns out, you can do even in a theocracy) and that it’s a kind of hedonism of the mind in which all things are permissible.

It’s not. It’s a word with a long history, a real meaning, and a greater substance than the poseurs know of. Alex Gabriel does a marvelous job scouring the ignoramuses on the meaning of freethought.

Objections to Freethought‘s place in our masthead are among the laziest, glibbest soundbites our critics have, but more than that display a failure to grasp even the term’s most basic history. Freethought is not ‘free thought’ or uninhibited inquiry – to think so boasts the same green literalism as thinking a Friends’ Meeting House is a shared beach hut or that Scotch pancakes contain Scotch – though even if it were, it’s silly and inane to assume one’s critics are automatons or say loose collective viewpoints mean dictatorship. Freethought is a specified tradition, European in the main, whose constituents have by and large been countercultural, radical and leftist, everything Condell and cohorts viscerally despise.

I am so fed up with people who say that they understand the meaning of the word “free” and the meaning of the word “thought”, and therefore they understand everything they need to know about “freethought”. And these are often the same people who claim that their tradition is one of knowledge and learning and skepticism, yet they want to replace the complex world of knowledge with a kind of naive literalism.

A philosopher agrees with me

I don’t know whether this is a good thing, or a bad thing, but at least he’s agreeing for a different reason. On the question of whether we’ll someday be able to download brains into a computer, I’ve said no for largely procedural reasons: there is no way to capture the state of all the complex molecules (or the simple ions, either) of the brain in any kind of snapshot, and the excuses of the download proponents are ludicrously ignorant of even the current state of biology. John Wilkins says no for a different and interesting reason: a simulation (and that’s all a computer version of a person could be) is not the thing itself. The map is not the territory. So even surrendering the idea of a high-fidelity transfer and saying that you’ll just develop a model of a brain doesn’t get you anywhere close to solving the problem of immortalizing “me” in a machine.

Sorry, trans-humanists. I can believe that there can be a post-humanity, but it won’t include us, so I don’t know why you aspire to it. I can sort of imagine an earth transformed by human activities into a warmer, wetter, even more oceanic place that allows more squid to flourish, and it’s even a somewhat attractive future, but it’s not a human transformation.

Oh, those secular ethics

In case you’re interested, DJ Grothe will be speaking at the Midwest Philosophy Colloquium on the University of Minnesota Morris campus next week. I can’t attend; it’s scheduled at the same time as one of our HHMI student research events.

He’s speaking on secular ethics.

By the way, of no possible relevance at all, I’m sure, Grothe is threatening legal action against Women Thinking, Inc., and is holding up publication of a survey on vaccination outreach, because he doesn’t like that someone reported a bad joke that he made. Which he denies.

Secular ethics in action!

Man, am I glad I have a good excuse to not attend that talk. I’m going to enjoy celebrating students’ summer research instead.

Oh, yay! More examples of secular ethics!

Can we rehabilitate post-modernism, please?

I’m about to alienate even more knee-jerk skeptics (and good riddance!) by saying something incredibly daring: post-modernism isn’t so bad. Skeptics ought to embrace it. It’s sad that so few do: Mano Singham seems to be the rare one. I think maybe because he actually understands it.

Many scientists hate what they think of as postmodernism, mainly because of its denial of the possibility of an objective truth and its questioning of the concomitant idea that knowledge is somehow progressing. The idea that scientific knowledge is not necessarily advancing towards something that we can call ‘truth’ disturbs them. This radical break with past ideas that scientific progress was necessarily leading towards truth was one of Thomas Kuhn’s key ideas in his highly influential monograph The Structure of Scientific Revolutions. But rather than engage with this important idea (and it is difficult to refute and has not been done, as far as I am aware), the term ‘postmodernism’ is often used as an epithet by scientists against their critics, the way that ‘scientism’ is used as a weapon against science.

Most people don’t seem to know anything about post-modernism other than the Sokal hoax. This was a notorious paper submitted by a physics professor to the postmodern journal, Social Text, in which he cobbled together strings of buzzwords and nonsense into a jabberwocky of a paper…and it got accepted. Cue immediate jeers and contempt for the entirety of post-modernism.

There is no excuse for the Sokal paper — it was total garbage, and the editors should have been embarrassed. But somehow it became cause to dismiss the entire field. Why, it’s as if we decided that developmental biology was a total joke because we have journals with a fondness for publishing bad science about donuts.

But you know what post-modernism is, right? It’s a skeptical approach to literature, art, even science, that attempts to deconstruct the premises and presuppositions and cultural influences on a work. It’s an acknowledgment that nothing humans create appears out of a vacuum and that perfect objectivity is an illusion. Yeah, it’s got jargon, lots of jargon, that can be abused and that allows airheads to give the illusion of wisdom by babbling in cliches, but it’s also a useful tool that is used wisely by many academics.

For instance, there’s a lot of wisdom in what Michael Bérubé has written about the subject. Try reading this one paragraph and thinking about it. It will sound very familiar to those of us who have been actively opposing the pretense of absolute objective knowledge, and suggesting that maybe there are other, unscientific phenomena that we ought to engage with.

Sokal’s admirers have projected almost anything they desire–and they have desired many things. In early 1997, Sokal came to the University of Illinois, and quite graciously offered to share the stage with me so that we could have a debate about the relation of postmodern philosophy to politics. It was there that I first unveiled my counterargument, namely, that the world really is divvied up into “brute fact” and “social fact,” just as philosopher John Searle says it is, but the distinction between brute fact and social fact is itself a social fact, not a brute fact, which is why the history of science is so interesting. Moreover, there are many things–like Down syndrome, as my second son has taught me–that reside squarely at the intersection between brute fact and social fact, such that new social facts (like policies of inclusion and early intervention) can help determine the brute facts of people’s lives (like their health and well-being).

I had to emphasize that one sentence in the middle because it says so much about why the demarcation problem is non-trivial, but that last sentence is also essential — what we shall do with science and technology is as important as the science and technology themselves.

There have been many battles and many books published both for and against a postmodernist view of science, and I think the opposition is largely wrong. Post-modernism did not begin and end with Sokal. And while there is a painful amount of lefty nuttiness in post-modernist circles, there’s also a lot that’s worth learning.

A couple of physicists had clearly read Paul Gross and Norman Levitt’s then-recent book, Higher Superstition: The Academic Left and Its Quarrels with Science, a free-swinging polemic against science studies, feminism, Jeremy Rifkin, jargon, and much more, and they were mightily pissed off about this Andrew Ross fellow, who had written a science-studies book, Strange Weather, which he dedicated to “all the science teachers I never had. It could only have been written without them.”

Well, yes, I had to admit, Ross’s dedication was rather cheeky. But it was not in itself evidence that Ross did not know his subject matter. Besides, I added, when in Strange Weather Ross called for science “that will be publicly answerable and of some service to progressive interests,” and Gross and Levitt responded by writing, “ ‘Of some service to progressive interests’ seems reasonably clear, if frighteningly Stalinist in tone and root,” weren’t Gross and Levitt being kind of…nutty? Hysterical, perhaps? What was wrong with wanting medicine or engineering or environmental science to be publicly answerable and of some service to progressive interests? Why shouldn’t we try to build a world that affords greater public access to people with disabilities, for instance? And since conservatives had even then largely abandoned their early-twentieth-century commitment to conserve the Earth’s natural resources, wasn’t “environmental science” now a “progressive interest” in and of itself? It’s not as if Ross was calling for a Liberation Astronomy. Would Ross’s sentence sound out of place in a bulletin issued by the Union of Concerned Scientists?

Science has to be answerable to public interest, and the goals of scientists (and atheists!) should include progressive values. We live to make a better world, right? So why should we not respect and appreciate a critical analysis of the social context of what we do?

I’m happy to accept Bérubé’s deal.

So these days, when I talk to my scientist friends, I offer them a deal. I say: I’ll admit that you were right about the potential for science studies to go horribly wrong and give fuel to deeply ignorant and/or reactionary people. And in return, you’ll admit that I was right about the culture wars, and right that the natural sciences would not be held harmless from the right-wing noise machine. And if you’ll go further, and acknowledge that some circumspect, well-informed critiques of actually existing science have merit (such as the criticism that the postwar medicalization of pregnancy and childbirth had some ill effects), I’ll go further too, and acknowledge that many humanists’ critiques of science and reason are neither circumspect nor well-informed. Then perhaps we can get down to the business of how to develop safe, sustainable energy and other social practices that will keep the planet habitable.

I’ll also extend the deal and say that we are obligated to pursue a humanist agenda ourselves — that simply accumulating deeper understanding of the universe without consideration of our place in it is ultimately destructive. I’m reminded of my late genetics mentor, George Streisinger, who considered ethical issues as important as the science, and spoke out in the 1980s about what were the important concerns.

I see the danger of global nuclear war as imminent. The use of poison warfare, the widespread use of chemicals that may be hazardous, the lack of any serious attempt to deal with population growth, the lack of any real concern about the just incredibly unequal distribution of wealth.

People have to be part of our equations.

Repudiating scientism, rather than surrendering to it

When I heard that Steven Pinker had written a new piece decrying the accusations of scientism, I was anxious to read it. “Scientism” is a blunt instrument that gets swung in my direction often enough; I consider it entirely inappropriate in almost every case I hear it used.

Here’s the thing: when I say that there is no evidence for a god, that there’s no sign that there is a single specific thing this imagined being has done, I am not unfairly asking people to adopt the protocols of science — I am judging them by their own standards and expectations. They are praying to Jesus in the expectation of a reward, not as, for instance, an exercise in artistic expression, so it is perfectly legitimate to point out that they aren’t getting anything, and their concept of Jesus contradicts their own expectations. When I mock Karen Armstrong’s goofy deepities praising her nebulous cosmic being, I’m not saying she’s wrong because her god won’t fit in a test tube or grow in a petri dish, but because she’s doing bad philosophy and reasoning poorly — disciplines which are greater than and more universal than science.

Science is a fantastic tool (our only tool, actually) for probing material realities. Respect it for what it is. But please, also recognize that there’s more to the human experience than measurement and the acquisition of knowledge about physical processes, and that science is a relatively recent and revolutionary way of thinking, but not the only one — and that humans lived and thrived and progressed for thousands of years (and many still do, even within our technological culture!) without even the concept of science.

Scientism is the idea that only science is the proper mode of human thought, and in particular, a blinkered, narrow notion that every human advance is the product of scientific, rational, empirical thinking. Much as I love science, and am personally a committed practitioner who also has a hard time shaking myself out of this path (I find scientific thinking very natural), I’ve got enough breadth in my education and current experience to recognize that there are other ways of progressing. Notice that I don’t use the phrase “ways of knowing” here — I have a rigorous enough expectation of what knowledge represents to reject other claims of knowledge outside of the empirical collection of information.

It’s the curse of teaching at a liberal arts university and rubbing elbows with people in the arts and humanities all the time.

Which is why I was disappointed with Pinker’s article. I expected two things: an explanation that science is one valid path to knowledge with wide applicability, so simply applying science is not the same as scientism; and an acknowledgment that other disciplines have made significant contributions to human well-being, and therefore science should not pretend to be all-encompassing.

And then I read the first couple of paragraphs of his essay, and was aghast. This was unbelievable hubris; he actually is practicing scientism!

The great thinkers of the Age of Reason and the Enlightenment were scientists. Not only did many of them contribute to mathematics, physics, and physiology, but all of them were avid theorists in the sciences of human nature. They were cognitive neuroscientists, who tried to explain thought and emotion in terms of physical mechanisms of the nervous system. They were evolutionary psychologists, who speculated on life in a state of nature and on animal instincts that are “infused into our bosoms.” And they were social psychologists, who wrote of the moral sentiments that draw us together, the selfish passions that inflame us, and the foibles of shortsightedness that frustrate our best-laid plans.

These thinkers—Descartes, Spinoza, Hobbes, Locke, Hume, Rousseau, Leibniz, Kant, Smith—are all the more remarkable for having crafted their ideas in the absence of formal theory and empirical data. The mathematical theories of information, computation, and games had yet to be invented. The words “neuron,” “hormone,” and “gene” meant nothing to them. When reading these thinkers, I often long to travel back in time and offer them some bit of twenty-first-century freshman science that would fill a gap in their arguments or guide them around a stumbling block. What would these Fausts have given for such knowledge? What could they have done with it?

Hooooly craaaaaap.

Look, there’s some reasonable stuff deeper in, but that opening…could he possibly have been more arrogant, patronizing, and ahistorical? Not only is he appropriating philosophers into the fold of science, but worse, he’s placing them in his favored disciplines of cognitive neuroscience, evolutionary psychology, and social psychology. Does the man ever step outside of his office building on the Harvard campus?

Descartes and Hume were not evolutionary psychologists. He’s doing great violence to the intellectual contributions of those men — and further, he’s turning evolutionary psychology into an amorphous and meaningless grab-bag which can swallow up every thought in the world. The latter, at least, is a common practice within evo psych, but please. Hume was a philosopher. He was not a psychologist, a biologist, or a chemist. He was not doing science, even though he thought a lot about science.

I probably know more about the biological side of how the brain functions than Pinker does, with my background in neuroscience, cell biology, and molecular biology. But I have no illusions. If I could travel in time to visit Hume or Spinoza, I might be able to deliver the occasional enlightening fact that they would find interesting, but most of my knowledge would be irrelevant to their concerns, while their ideas would have broader applicability and would enlighten me. When I imagine visiting these great contributors to the philosophy of science (Hume and Bacon would be at the top of my list), I see myself as a supplicant, hoping to learn more, not as the font of wisdom come to deliver them from their errors. Alright, I might argue some with them, but Jesus…they have their own domains of understanding in which they are acknowledged masters, domains in which I am only a dabbler.

He’s committing the fallacy of progress and scientism. There is no denying that we have better knowledge of science and engineering now, but that does not mean that we’re universally better, smarter, wiser, and more informed about everything. What I know would be utterly useless to a native hunter in New Guinea, or to an 18th century philosopher; it’s useful within a specific context, in a narrow subdomain of a 21st-century technological society. I think Pinker’s fantasy is not one of informing a knowledgeable person, but of imposing the imagined authority of modern science on someone from a less technologically advanced culture.

It’s actually an encounter I’d love to see happen. I don’t think evolutionary psychology would hold up at all under the inquisitorial scrutiny of Hume.

I tried to put myself in the place of one of my colleagues outside the sciences reading that essay, and when I did that, I choked on the title: “Science Is Not Your Enemy: An impassioned plea to neglected novelists, embattled professors, and tenure-less historians”. How condescending! I know there are a few odd professors out there who have some bizarre ideas about science — they’re as ignorant of science as Pinker seems to be of the humanities — but the majority of the people I talk to who are professors of English or Philosophy or Art or whatever do not have the idea at all that science is an enemy. They see it as a complementary discipline that’s prone to a kind of overweening imperialism. I get that: I feel the same way when I see physicists condescend to mere biologists. We’re just a subset of physics, don’t you know, and don’t really have an independent history, a novel perspective and a deep understanding of a very different set of problems than the ones physicists study.

Just as biologists freely use the tools of physics, scholars in the humanities will use the tools of science where appropriate and helpful. They do not therefore bow down in fealty to the one true intellectual discipline, great Science. I have never known one to reject rigor, analysis, data collection, or statistics and measurement…although they can get rather pissy if you try to tell them that the basic tools of the academic are copyright Science.

And, dear god, Pinker tells this ridiculous and offensive anecdote:

Several university presidents and provosts have lamented to me that when a scientist comes into their office, it’s to announce some exciting new research opportunity and demand the resources to pursue it. When a humanities scholar drops by, it’s to plead for respect for the way things have always been done.

Oh, fucking nonsense. Humanities scholars are just as interested in making new discoveries as evolutionary psychologists, and are just as enthusiastic about pursuing ideas. What I’ve seen is that university presidents and provosts are typically completely clueless about what scholars do — does anyone really believe Larry Summers had the slightest appreciation of the virtues of knowledge? — so it’s bizarre in the first place to cite the opinions of our administrative bureaucrats. What this anecdote actually translates to is that a scientist stops by with an idea that needs funding that will lead to big grants and possible patent opportunities, and the president’s brain goes KA-CHING; a humanities scholar stops by with a great insight about French Impressionism or the history of the Spanish Civil War, asks for travel funds (or more likely, pennies for paper and ink), and the president’s brain fizzles and can’t figure out how this will bring in a million-dollar NIH grant, so what good is it? Why can’t this deadwood get with it and do something with cancer genes or clinical trials?

Perhaps I would have been more receptive to Pinker’s message if I hadn’t sat through a meeting this afternoon with an administrator from the big campus in Minneapolis/St Paul. It was a strange meeting; he’s clearly got grand plans that are of benefit to us, he’s supportive of science, but this was a meeting attended by research faculty in all of the campus disciplines: science, humanities, social sciences, the arts. It was odd to hear all the talk that was focused on a purely science-oriented strategy, when there were all these people around me who are doing research that doesn’t involve equipment grants, NIH funding, and patent opportunities. One of my colleagues spoke up and mentioned that he seemed to be treating the humanities as supporting infrastructure for biology or chemistry, rather than as a respectable scholarly endeavor in its own right.

I was feeling the same way. I’m a biologist, but I do biology because it’s beautiful and I love it and it inspires students. This was all about doing science because it brings in big money to the university. What was in my head was this quote from D’Arcy Wentworth Thompson:

“The harmony of the world is made manifest in Form and Number, and the heart and soul and all the poetry of Natural Philosophy are embodied in the concept of mathematical beauty.”

Substitute “Science” for “Natural Philosophy” (a perfectly reasonable replacement, given what Thompson understood the phrase to mean), and you’ve got a rebuttal to scientism. Heart, soul, poetry, beauty are not grist for the analytical mill of science, but they really are the core, and if you don’t appreciate that, the breadth of your education is lacking.

I’ve been harsh to Pinker’s claims, but you probably shouldn’t see it as a disagreement. Read further into his essay, if you can bear it, and you’ll discover that rather than rejecting scientism he proudly claims it for his own. To accuse him of scientism is no insult, then; it’s only the term for what he happily embraces.

I don’t think I’ll join him in that isolation tank, though.

Dammit. It used to be that I was the guy with a reputation for vehemence, but I’ve got nothin’ on this.

This is the kind of thing that gives philosophy a bad name

In the NY Times Opinionator, Gary Gutting indulges in a little public philosophical masturbation: did Zeus exist? And he concludes that we can’t decide that he didn’t.

On reflection, then, I’m inclined to say that an atheistic denial of Zeus is ungrounded. There is no current evidence of his present existence, but to deny that he existed in his Grecian heyday we need to assume that there was no good evidence for his existence available to the ancient Greeks. We have no reason to make this assumption. Further, supposing that Zeus did exist in ancient times, do we really have evidence that he has ceased to exist? He may, for all we know, just be in hiding (as Heine’s delightful “Gods in Exile” suggests), now that other gods have won humankind’s allegiance. Or it may be that we have lost the ability to perceive the divine. In any case, to the question, “May we properly remain agnostic about whether Zeus ever existed?” the answer is “Yes, we may.”

I’d tear that up, except I don’t have to: The Digital Cuttlefish beat me to it, and includes a poem, too.

Two things, then. One, I’m surprised that a philosophy prof is conflating ideas of belief with ideas of knowledge. Disbelief in Zeus is absolutely grounded. Without convincing evidence (this is where “knowledge” comes in, and where his objections actually matter), Zeus has not passed the threshold for my belief. I have no obligation to believe in something that has no positive evidence for it, just because there is no evidence against it.

Which leads to my second thing. Presuppositional arguments may be logically airtight, but this example shows why good logic can lead to bad conclusions. It is absolutely true that science has to presuppose that there are no supernatural entities intervening, in order to examine the natural world. And we, therefore, cannot conclude there is no supernatural, since that would simply be circular logic, assuming our conclusions. And since our conclusions about the supernatural depend on our assumptions, the logic is no help at all.

I have two things, too, though. One is that DC is using philosophy to argue against Gutting, so let’s not make this a blanket condemnation of all philosophy.

The other is a point of disagreement: “It is absolutely true that science has to presuppose that there are no supernatural entities intervening”. I disagree strongly with that. If they are intervening, they are having an effect on the natural world that can be examined with the tools of science, even if the supernatural entities themselves are completely invisible to us. If every time I mumbled a magic word before throwing a die, it came up six, and this effect was statistically robust and worked with such reliability that I could clean up at the craps table in Vegas, I’d have to postulate a force outside of our understanding to explain it. I’d still be able to investigate the effect scientifically, however, and clearly it would demand extensive replication…say, a grand tour of every casino in the country.
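The die example isn’t hand-waving; it’s an ordinary statistical test. As a minimal sketch (the numbers here are hypothetical, invented just for illustration), here’s how you’d quantify “statistically robust”: compare the observed count of sixes against what a fair die predicts, using a one-sided binomial test.

```python
# Hypothetical illustration of the magic-word die: nothing supernatural is
# assumed in the analysis itself -- we simply test the observed number of
# sixes against the fair-die null hypothesis, P(six) = 1/6.
from math import comb

def p_value_at_least(k, n, p=1/6):
    """Probability of seeing k or more sixes in n fair rolls
    (one-sided exact binomial test)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose the word is mumbled before 60 rolls and a six comes up 30 times.
# A fair die predicts about 10 sixes in 60 rolls, so 30 is a huge excess.
p = p_value_at_least(30, 60)
print(f"p = {p:.2e}")  # astronomically small: the effect is not chance
```

If that p-value held up under replication (the “grand tour of every casino”), you’d be forced to postulate some unknown force — but the investigation itself remains perfectly scientific throughout.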

I agree that we cannot conclude that there is no supernatural that is operating outside of our universe. We can conclude that there has been no consistent detectable supernatural phenomenon meddling within our universe.

Head and heart, atheists

Talk about sucking all the motivation out of me…I was all primed to write today about this Islamophobia nonsense that is still going around. It seems to be the latest bogus argument against atheism: why, atheists are just all bigots who hate Muslims, the complainers say, instead of actually addressing the fact that religion a) lacks a truthful foundation, b) lacks any method for investigating the accuracy of its claims, and c) uses that lack of evidence to excuse the most odious social behaviors. While there certainly are islamophobic individuals, to claim that this is the primary motivation for New Atheism is simply ridiculous and contrary to everything the major proponents (I refuse to call them “leaders”) of this movement have written.

And then Sam Harris wrote his response to the controversy.

I just give up. And not in a good way, mind…I think he shot himself in the foot again. He has made a set of arguments that completely ignore what the critics have been saying and don’t rebut much of anything at all.

First off, beginning by accusing all of your critics of being bigoted poopyheads for calling you a bigoted poopyhead…not a good move.

A general point about the mechanics of defamation: It is impossible to effectively defend oneself against unethical critics. If nothing else, the law of entropy is on their side, because it will always be easier to make a mess than to clean it up. It is, for instance, easier to call a person a “racist,” a “bigot,” a “misogynist,” etc. than it is for one’s target to prove that he isn’t any of these things. In fact, the very act of defending himself against such accusations quickly becomes debasing. Whether or not the original charges can be made to stick, the victim immediately seems thin-skinned and overly concerned about his reputation. And, rebutted or not, the original charges will be repeated in blogs and comment threads, and many readers will assume that where there’s smoke, there must be fire.

If calling Sam Harris a “racist” is a low blow and unfair and difficult to disprove, what about calling people “unethical”? I don’t think Glenn Greenwald is unethical at all; I think he has been a consistent and ethical proponent of liberal and progressive values throughout his career. He has not shown the kind of frothing derangement at confronting atheists that Chris Hedges has shown, for instance. Greenwald objects to things Harris has written, and explains why. Harris does seem thin-skinned. He has said a few things that many others disagree with, me included, and to get upset at principled disagreement on those matters reeks a bit of objecting to any criticism at all.

I don’t think Harris is islamophobic, but I disagree on other things, and for disagreeing with him on racial profiling and agreeing that the atheist movement is not perfect, I got labeled “odious”, “unscrupulous”, a “troll”, and responsible for distorting his views and damaging his reputation. The mechanics of defamation can work both ways, Dr Harris, and you seem to be very capable of it yourself, while simultaneously placing your affronted dignity on a pedestal and being outraged that anyone would question it. Defending your views would look less thin-skinned if you weren’t constantly prefacing your defense with that exasperated sigh that it is so unfair and demeaning that you have to do so.

It’s just more footshooting. And then, for further target practice on distal digits, the third paragraph is a beautifully written, lucid distillation of exactly what annoys many people about Harris. He’s got a real talent for this.

Such defamation is made all the easier if one writes and speaks on extremely controversial topics and with a philosopher’s penchant for describing the corner cases—the ticking time bomb, the perfect weapon, the magic wand, the mind-reading machine, etc.—in search of conceptual clarity. It literally becomes child’s play to find quotations that make the author look morally suspect, even depraved.

Aaargh. That’s the whole problem. Look, Spock is a caricature, not a paragon; retreating behind the fog of philosophical abstraction is precisely the kind of behavior that has given atheists a bad name. When talking about profiling people to improve airport security, forget about the fact that it is targeting human beings for special indignities. When talking about the possibility that torture might work sometimes, forget about the reality of human beings causing and receiving dehumanizing agony. When considering the possibility that Muslim fanatics might get nuclear weapons, argue that we might just be justified in vaporizing millions of human beings to prevent that possibility.

There’s a place for playing philosophical games when thinking about trolleys and vats and logic puzzles, but when it comes down to real world thinking, reducing hugely complex problems to simplified abstractions does not provide clarity at all, only confusion and false conclusions. Right now, this country is facing the consequences (well, a good portion of the country is trying to ignore the consequences) of this kind of robotic pseudophilosophical argument. We had people making rationalizations for all-out warfare against a country that we claimed posed a clear and present danger because of its supposed weapons of mass destruction, and that we argued was ruled by a brutal dictator who had to be stopped from doing more harm. On the basis of those widely promoted “corner cases”, we murdered hundreds of thousands of civilians, shattered a country’s infrastructure and opened it up to corporate exploitation, and drained our finances dry pouring more and more cash and blood into a brutal war.

You do not get to make these cold calculations while leaving out the human element — the fact that we atheists, as a people supposedly dedicated to reality and truth and respect for the potential of the human mind, can so callously dismiss personal experience and the lives of the people at the heart of these hypothetical scenarios and thought experiments is precisely the reason their author is so easily made to look “morally suspect, even depraved.”

Harris does a good job of bringing up the fuller context of some of the quotes that he feels have been excerpted to misinterpret him, but he seems incapable of recognizing that what he considers a justification merely compounds the problem. Somehow, the moral calculus only goes one way. We are allowed to contemplate (in a rarefied philosophical way, of course) bombing or torturing or isolating people who have a slim chance of contributing to harm to us, but somehow we never consider that perhaps the people on the other side are making the very same calculation, considering that they are amply justified in bombing or torturing or isolating those privileged Westerners, because we might harm them.

And sadly, they have better empirical evidence of real threat.

Now I’m not excusing terrorist actions. Quite the opposite: I reject them unambiguously and fault them for failing to appreciate the humanity of their opponents. And if I do that, I cannot fail to similarly reject such actions taken to protect my side. No excuse can justify nuking or torturing my people, so no excuse can justify nuking or torturing anyone else…especially considering that the United States has more blood on its hands than any other nation.

This is not the time to invent elaborate philosophical justifications for abhorrent actions — it is time to unhesitatingly reject them, to express our grief and shame and horror at these options. It is not enough to bloodlessly pretend it’s a philosopher’s penchant. We need to consider the human cost, and weigh that most heavily.

Harris’s ability to distance himself from everything and view people’s personal pain dispassionately, as he does in all of his responses, is what’s hurting him, and he doesn’t even seem to be able to recognize it. Even when I share his respect for philosophy and science, I cringe at his inability to express a proper appreciation of the humanity of his subjects. I don’t think he’s a robot, but when he dries up and goes all academic and philosophical, he gives an awfully good impression of one, and I think he makes a lot of his arguments from that arid ground of the abstract, rather than the heart of his humanity. I’d pass along a suggestion from another philosopher who was able to see the importance of the individual:

We have to touch people.

I am not alone!

I guess I’m not the only one bemused by the recent weird backlash among some scientists against philosophy. Michael Krämer also defends philosophy.

So then, should we physicists listen to philosophers?

An emphatic "No!", if philosophers want to impose their preconceptions of how science should be done. I do not subscribe to Feyerabend’s provocative claim that "anything goes" in science, but I believe that many things go, and certainly many things should be tried.

But then, "Yes!", we should listen, as philosophy can provide a critical assessment of our methods, in particular if we consider physics to be more than predicting numbers and collecting data, but rather an attempt to understand and explain the world. And even if philosophy might be of no direct help to science, it may be of help to scientists through its educational role, and sharpen our awareness of conceptional problems in our research.

Unfortunately, he also suffers from the physicist’s disease of writing as though physics were the only science in the world. Every word also applies to biology, chemistry, psychology, you name it…

We need a sociologist of science…or a philosopher

There’s another paper out debunking the ENCODE consortium’s absurd interpretation of their data. ENCODE, you may recall, published a rather controversial paper in which they claimed to have found that 80% of the human genome was ‘functional’ — for an extraordinarily loose definition of function — and further revealed that several of the project leaders were working with the peculiar assumption that 100% must be functional. It was a godawful mess, and compromised the value of a huge investment in big science.

Now W. Ford Doolittle has joined the ranks of many scientists who immediately leapt into the argument. He has published “Is junk DNA bunk? A critique of ENCODE” in PNAS.

Do data from the Encyclopedia Of DNA Elements (ENCODE) project render the notion of junk DNA obsolete? Here, I review older arguments for junk grounded in the C-value paradox and propose a thought experiment to challenge ENCODE’s ontology. Specifically, what would we expect for the number of functional elements (as ENCODE defines them) in genomes much larger than our own genome? If the number were to stay more or less constant, it would seem sensible to consider the rest of the DNA of larger genomes to be junk or, at least, assign it a different sort of role (structural rather than informational). If, however, the number of functional elements were to rise significantly with C-value then, (i) organisms with genomes larger than our genome are more complex phenotypically than we are, (ii) ENCODE’s definition of functional element identifies many sites that would not be considered functional or phenotype-determining by standard uses in biology, or (iii) the same phenotypic functions are often determined in a more diffuse fashion in larger-genomed organisms. Good cases can be made for propositions ii and iii. A larger theoretical framework, embracing informational and structural roles for DNA, neutral as well as adaptive causes of complexity, and selection as a multilevel phenomenon, is needed.

In the paper, he makes an argument similar to one T. Ryan Gregory has made many times before. There are organisms that have much larger genomes than humans; lungfish, for example, have 130 billion base pairs, compared to the 3 billion humans have. If the ENCODE consortium had studied lungfish instead, would they still be arguing that the organism had function for 104 billion bases (80% of 130 billion)? Or would they be suggesting that yes, lungfish were full of junk DNA?

If they claim that lungfish have 44 times as much functional sequence as we do, well, what is it doing? Does that imply that lungfish are far more phenotypically complex than we are? And if they grant that junk DNA exists in great abundance in some species, just not in ours, does that imply that we’re somehow sitting in the perfect sweet spot of genetic optimality? If that’s the case, what about species like fugu, that have genomes one eighth the size of ours?
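The back-of-the-envelope arithmetic behind that comparison can be sketched directly from the figures quoted above (3 billion base pairs for humans, roughly 130 billion for lungfish, and ENCODE’s 80% “functional” claim applied to both):

```python
# Rough arithmetic behind the lungfish comparison, using the
# approximate genome sizes quoted in the text.
human_bp = 3e9          # human genome, ~3 billion base pairs
lungfish_bp = 130e9     # lungfish genome, ~130 billion base pairs
encode_fraction = 0.8   # ENCODE's claimed "functional" fraction

human_functional = encode_fraction * human_bp        # ~2.4 billion bases
lungfish_functional = encode_fraction * lungfish_bp  # ~104 billion bases

ratio = lungfish_functional / human_functional       # ~43x more "function"

print(f"{lungfish_functional / 1e9:.0f} billion 'functional' bases in lungfish")
print(f"{ratio:.0f}x the human 'functional' total")
```

The ratio comes out around 43, which is what makes the question so pointed: taking the 80% figure at face value commits you to lungfish carrying more than forty times the functional sequence of a human.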

It’s really a devastating argument, but then, all of the arguments against ENCODE’s interpretations have been solid and knock the whole thing out of the park. It’s been solidly demonstrated that the conclusions of the ENCODE program were shit.


So why, Yale, why? The Winter edition of the Yale Medicine magazine features as a cover article Junk No More, an awful piece of PR fluff that announces in the first line “R.I.P., junk DNA” and goes on to tout the same nonsense that every paper published since the ENCODE announcement has refuted.

The consortium found biological activity in 80 percent of the genome and identified about 4 million sites that play a role in regulating genes. Some noncoding sections, as had long been known, regulate genes. Some noncoding regions bind regulatory proteins, while others code for strands of RNA that regulate gene expression. Yale scientists, who played a key role in this project, also found “fossils,” genes that date to our nonhuman ancestors and may still have a function. Mark B. Gerstein, Ph.D., the Albert L. Williams Professor of Biomedical Informatics and professor of molecular biophysics and biochemistry, and computer science, led a team that unraveled the network of connections between coding and noncoding sections of the genome.

Arguably the project’s greatest achievement is the repository of new information that will give scientists a stronger grasp of human biology and disease, and pave the way for novel medical treatments. Once verified for accuracy, the data sets generated by the project are posted on the Internet, available to anyone. Even before the project’s September announcement, more than 150 scientists not connected to ENCODE had used its data in their research.

“We’ve come a long way,” said Ewan Birney, Ph.D., of the European Bioinformatics Institute (EBI) in the United Kingdom, lead analysis coordinator for ENCODE. “By carefully piecing together a simply staggering variety of data, we’ve shown that the human genome is simply alive with switches, turning our genes on and off and controlling when and where proteins are produced. ENCODE has taken our knowledge of the genome to the next level, and all of that knowledge is being shared openly.”

Oh, Christ. Not only is it claiming that the 80% figure is for biological activity (it isn’t), but it trots out the usual university press relations crap about how the study is all about medicine. It wasn’t and isn’t. It’s just that dumbasses can only think of one way to explain biological research to the public, and that is to suggest that it will cure cancer.

As for Birney’s remarks, they are offensively ignorant. No, the ENCODE research did not show that the human genome is actively regulated. We’ve known that for fifty years.

That’s not the only ahistorical part of the article. They also claim that the idea of junk DNA has been discredited for years.

Some early press coverage credited ENCODE with discovering that so-called junk DNA has a function, but that was old news. The term had been floating around since the 1990s and suggested that the bulk of noncoding DNA serves no purpose; however, articles in scholarly journals had reported for decades that DNA in these “junk” regions does play a regulatory role. In a 2007 issue of Genome Research, Gerstein had suggested that the ENCODE project might prompt a new definition of what a gene is, based on “the discrepancy between our previous protein-centric view of the gene and one that is revealed by the extensive transcriptional activity of the genome.” Researchers had known for some time that the noncoding regions are alive with activity. ENCODE demonstrated just how much action there is and defined what is happening in 80 percent of the genome. That is not to say that 80 percent was found to have a regulatory function, only that some biochemical activity is going on. The space between genes was also found to contain sites where DNA transcription into RNA begins and areas that encode RNA transcripts that might have regulatory roles even though they are not translated into proteins.

I swear, I’m reading this article and finding it indistinguishable from the kind of bad science I’d see from ICR or Answers in Genesis.

I have to mention one other revelation from the article. There has been a tendency to throw a lot of the blame for the inane 80% number on Ewan Birney alone…he threw in that interpretation in the lead paper, but it wasn’t endorsed by every participant in the project. But look at this:

The day in September that the news embargo on the ENCODE project’s findings was lifted, Gerstein saw an article about the project in The New York Times on his smartphone. There was a problem. A graphic hadn’t been reproduced accurately. “I was just so panicked,” he recalled. “I was literally walking around Sterling Hall of Medicine between meetings talking with The Times on the phone.” He finally reached a graphics editor who fixed it.

So Gerstein was so concerned about accuracy that he panicked over a graphic in the popular press, but the big claim in the Birney paper, the one that would utterly undermine confidence in the whole body of work, did not perturb him? And now months later, he’s collaborating with the Yale PR department on a puff piece that blithely sails past all the objections people have raised? Remarkable.

This is what boggles my mind, and why I hope some sociologist of science is studying this whole process right now. It’s a revealing peek at the politics and culture of science. We have a body of very well funded, high ranking scientists working at prestigious institutions who are actively and obviously fitting the data to a set of unworkable theoretical presuppositions, and completely ignoring the rebuttals that are appearing at a rapid clip. The idea that the entirety of the genome is both functional and adaptive is untenable and unsupportable; we instead have hundreds of scientists who have been bamboozled into treating noise as evidence of function. It’s looking like N rays or polywater on a large and extremely richly budgeted level. And it’s going on right now.

If we can’t have a sociologist making an academic study of it all, can we at least have a science journalist writing a book about it? This stuff is fascinating.

I have my own explanation for what is going on. What I think we’re seeing is an emerging clash between scientists and technicians. I’ve seen a lot of biomedical grad students going through training in pushing buttons and running gels and sucking numerical data out of machines, and we’ve got the tools to generate so much data right now that we need people who can manage that. But it’s not science. It’s technology. There’s a difference.

A scientist has to be able to think about the data they’re generating, put it into a larger context, and ask the kinds of questions that probe deeper than a superficial analysis can deliver. A scientist has to be more broadly trained than the person who runs the gadgetry.

This might get me burned at the stake worse than sneering at ENCODE, but a good scientist has to be…a philosopher. They may not have formal training in philosophy, but the good ones have to be at least roughly intuitive natural philosophers (ooh, I’ve heard that phrase somewhere before). If I were designing a biology curriculum today, I’d want to make at least some basic introduction to the philosophy of science an essential and early part of the training.

I know, I’m going against the grain — there have been a lot of big name scientists who openly dismiss philosophy. Richard Feynman, for instance, said “Philosophy of science is about as useful to scientists as ornithology is to birds.” But Feynman was wrong, and ironically so. Reading Feynman is actually like reading philosophy — a strange kind of philosophy that squirms and wiggles trying to avoid the hated label, but it’s still philosophy.

I think the conflict arises because, like everything, 90% of philosophy is garbage, and scientists don’t want to be associated with a lot of the masturbatory nonsense some philosophers pump out. But let’s not lose sight of the fact that some science, like ENCODE, is nonsense, too — and the quantity of garbage is only going to rise if we don’t pay attention to understanding as much as we do accumulating data. We need the input of philosophy.

A quote from Ed Abbey, who died 24 years ago today

The geologic approach is certainly primary and fundamental, underlying the attitude and outlook that best support all others, including the insights of poetry and the wisdom of religion. Just as the earth itself forms the indispensable ground for the only kind of life we know, providing the sole sustenance of our minds and bodies, so does empirical truth constitute the foundation of higher truths. (If there is such a thing as higher truth.)

It seems to me that Keats was wrong when he asked, rhetorically, “Do not all charms fly … at the mere touch of cold philosophy?” The word “philosophy” standing, in his day, for what we now call “physical science.” But Keats was wrong, I say, because there is more charm in one “mere” fact, confirmed by test and observation, linked to other facts through coherent theory into a rational system, than in a whole brainful of fancy and fantasy. I see more poetry in a chunk of quartzite than in a make-believe wood nymph, more beauty in the revelations of a verifiable intellectual construction than in whole misty empires of obsolete mythology.

The moral I labor toward is that a landscape as splendid as that of the Colorado Plateau can best be understood and given human significance by poets who have their feet planted in concrete — concrete data — and by scientists whose heads and hearts have not lost the capacity for wonder. Any good poet, in our age at least, must begin with the scientific view of the world; and any scientist worth listening to must be something of a poet, must possess the ability to communicate to the rest of us his sense of love and wonder at what his work discovers.