Imagine a perfectly spherical sacred cow…

The BBC is reporting the imminent extinction of religion. This is an end result to be hoped for, which just makes me all the more critical, and I have to say up front that this is the work of mathematicians, engineers, and physicists modeling sociology. It’s interesting stuff that looks at the very biggest picture without addressing the details, and it could very well be entirely true, but I’m always going to be a little bit suspicious of academics crossing boundaries that much. Sociologists are not stupid people; I’d like to see more of them pick up on this mode of analysis, and then I’ll trust it more.

You can read the paper for yourself; it’s available on arXiv, and it’s not a piece of crackpot pseudoscience. It analyzes gross historical trends away from religious belief in diverse regions around the world and fits a reasonable curve to the pattern using an extremely simple model of group dynamics. The simplicity of the model is the troubling part (I’m a biologist; I don’t believe in simple any more), but the fact that the model works well for at least the selected regions is a little reassuring. Here’s the short summary of what they did:

Here we use a minimal model of competition for members between social groups to explain historical census data on the growth of religious non-affiliation in 85 regions around the world. According to the model, a single parameter quantifying the perceived utility of adhering to a religion determines whether the unaffiliated group will grow in a society. The model predicts that for societies in which the perceived utility of not adhering is greater than the utility of adhering, religion will be driven toward extinction.

The data look wonderfully clean, too.

[Figure: the census data from many regions, plotted against a rescaled time axis.]

(About that rescaled time axis: the data from different regions show different rates of the deconversion process, with timescales from decades to centuries; they all fit their model with different parameters for the perceived utility of religion. The rescaling shows that the model provides a good fit to all of the data, but you can’t use this to predict the date of the worldwide Atheist Rapture — it’ll happen at different times in different regions.)

The authors also express reasonable reservations. I was wondering about these questions, myself.

Our assumption that the perceived utility of a social group remains constant may be approximately true for long stretches of time, but there may also be abrupt changes in perceived utility, a possibility that is not included in the model. We speculate that for most of human history, the perceived utility of religion was high and of non-affiliation low. Religiously non-affiliated people persisted but in small numbers. With the birth of modern secular societies, the perceived utility of adherence to religion versus non-affiliation has changed significantly in numerous countries, such as those with census data shown in Fig. 1, and the United States, where non-affiliation is growing rapidly.

That is a real concern. Their mathematical models are built around a parameter called perceived utility, ux, which they extract from the overall data — it’s not something that can be measured directly in individuals or populations, but is derived from historical trends and then used to calculate future trends, which is a little bit circular. I’d be more confident in their prediction if perceived utility had some independent measure that could be used in the curve fitting.
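To make the structure concrete, here is a minimal sketch in Python of the kind of two-group competition model the paper describes. The functional form and parameter values below are my own illustration, not the authors’ exact equations, but they capture the central claim: a single perceived-utility parameter decides which way a society tips.

```python
# A toy two-group competition model: x is the fraction of the population that
# is religiously unaffiliated, u is the perceived utility of being unaffiliated
# (between 0 and 1), and people switch to a group at a rate that grows with
# that group's size and perceived utility. Illustrative only.

def simulate(u, x0=0.05, a=1.0, c=0.2, years=300, dt=0.1):
    """Euler-integrate dx/dt = c*(1-x)*x**a*u - c*x*(1-x)**a*(1-u)."""
    x = x0
    for _ in range(int(years / dt)):
        dxdt = c * (1 - x) * x**a * u - c * x * (1 - x)**a * (1 - u)
        x = min(max(x + dxdt * dt, 0.0), 1.0)
    return x

# If the perceived utility of non-affiliation is above 0.5, the unaffiliated
# fraction grows toward 1; below 0.5, it shrinks back toward 0.
print(simulate(u=0.6))  # close to 1
print(simulate(u=0.4))  # close to 0
```

In the real paper, the analogous parameters are fitted region by region from the census data themselves, which is exactly where that circularity worry comes in.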

And of course, as they note, it’s not at all certain that that perceived utility will remain constant — it can’t have, for one thing, or the process of deconversion would have started a long time ago, we’d be further along the curve, and we’d all be atheists now. And unfortunately, the work doesn’t address the interesting question of what caused the historical shift in the perceived utility of religion, and without that, we can’t know what kind of factors might cause it to shift back.

I’ll still hope the math is a good predictor of the fate of faith.

Support cancer research now!

I made this post a few years ago, and I’m updating it now because my family back home in the Seattle-Tacoma area has a tradition: every year they join the Relay for Life to raise money for cancer research, in honor of my sister-in-law, Karen Myers, who died of melanoma. That’s my family listed there, doing good. If anyone wants to chip in to help out, that would be nice — I’m planning to donate to my mother’s page, since I like her best, but they’re all nice people and it’s a great cause. Or if you’d prefer to donate to the one who’ll probably expend the most energy running around the track, Alex Hahn is the littlest ball of fire.


[Photo: Karen Myers.]

This is my sister-in-law, Karen Myers — mother to 3, shy but always cheerful, and with a wonderful laugh that you were sure to hear any time you were with her. You would have liked her if you’d known her…unfortunately, she was slowly eaten alive by an implacable melanoma several years ago. It doesn’t matter what kind of person you are; lots of good people — and you probably have known some yourself — are killed by cancer every year.

About 20 years ago, I was funded by a cancer training grant which required me to experience a fair amount of clinical training in oncology. It is not one of my happiest memories. What I saw were lots of dying people, in pain, with treatments that caused more pain — or were palliative because the patient was expected to die. Pediatric oncology was the worst, because they were dying children. I’m afraid my training convinced me to run screaming from anything clinical.

So last week, I met Beth Villavicencio, who told me she was a pediatric oncologist. The first words out of my mouth were something like, “That’s funny — you don’t look depressed or suicidal.” And she wasn’t. She looked awfully happy for someone who works with critically ill kids … so she turned me around 180°. She wasn’t miserable, because people bring dying kids to her and she saves them — she has a job where she is literally taking people who would be dying otherwise and she makes them healthy again with excellent success rates, which sounds like something that would make anyone cheerful.


21st century science publishing will be multilevel and multimedia

I have to call your attention to this article, Stalking the Fourth Domain in Metagenomic Data: Searching for, Discovering, and Interpreting Novel, Deep Branches in Marker Gene Phylogenetic Trees, just published in PLoS One. It’s cool in itself; it’s about the analysis of metagenomic data, which may have exposed a fourth major branch in the tree of life, beyond the bacteria, eukaryotes, and archaea…or it may have just exposed some very weird, highly derived viruses. This is work spawned from Craig Venter’s wonderfully fascinating project of just doing shotgun sequencing of sea water, processing all of the DNA from the crazy assortment of organisms present there, and sorting them out afterwards.

But something else that’s special about it is that the author, Jonathan Eisen, has bypassed his university’s press office and not written a formal press release at all. Instead, he has provided informal commentary on the paper on his own blog, which isn’t novel, except in its conscious effort to change the game (Eisen has also been important in open publishing, as in PLoS). This is awesome, and scientists ought to get a little nervous. The approach keeps the formality and structured writing of a standard peer-reviewed paper, which is good; we don’t want new media to violate the discipline of well-tested, successful formats. But it also adds another layer of effort to the work, in which the author breaks out from the conventional structure and talks about the work as he or she would in a seminar or in a meeting with other scientists. A paper provides the data and major interpretations, but it’s this kind of conversational interaction that can let you see the bigger picture.

I say scientists might want to be a little bit nervous about this, because I can imagine a day when this kind of presentation becomes de rigueur for everything you publish, just as it’s now taken for granted that you should be able to give a talk on any paper you’ve written. It’s a different skill set, too, and it’s going to require a different kind of talent to be able to address fellow scientists, the lay public, and science journalists. Those are important skills to have, and this kind of thing could end up making them better appreciated in the science community.

Are any of your grad students and post-docs blogging? You might want to think about getting them trained in this brave new world now, before it’s too late. And you might want to consider getting started yourself, if you aren’t already.

Will radiation hormesis protect us from exploding nuclear reactors?

That reputable scientist, Ann Coulter, recently wrote a genuinely irresponsible and dishonest column on radiation hormesis. She claims we shouldn’t worry about the damaged Japanese reactors because they’ll make the locals healthier!

With the terrible earthquake and resulting tsunami that have devastated Japan, the only good news is that anyone exposed to excess radiation from the nuclear power plants is now probably much less likely to get cancer.

This only seems counterintuitive because of media hysteria for the past 20 years trying to convince Americans that radiation at any dose is bad. There is, however, burgeoning evidence that excess radiation operates as a sort of cancer vaccine.

As The New York Times science section reported in 2001, an increasing number of scientists believe that at some level — much higher than the minimums set by the U.S. government — radiation is good for you.

But wait! If that isn’t enough stupid for you, she went on the O’Reilly show to argue about it. Yes! Coulter and O’Reilly, arguing over science. America really has become an idiocracy.

I only know about hormesis from my dabbling in teratology; a pharmacologist or toxicologist would be a far better source. But I know enough about hormesis to tell you that she’s wrong. She has taken a tiny grain of truth and mangled it to make an entirely fallacious argument.

Radiation is always harmful: it breaks DNA, for instance, and can produce free radicals that damage cells. You want to minimize exposure as much as possible, all right? However, your cells also have repair and protective mechanisms that they can switch on or up-regulate, and those can produce a positive effect. So: radiation is bad for you, cellular defense mechanisms are good for you.

Hormesis refers to a biphasic dose response curve. That is, when exposed to a toxic agent at very low doses, you may observe an initial reduction in deleterious effects; as the dose is increased, you begin to see a dose-dependent increase in the effects. The most likely mechanism is an upregulation of cellular defenses that overcompensates for the damage the agent is doing. This is real (I told you there’s a grain of truth to what she wrote), and it’s been observed in multiple situations. I can even give an example from my own work.

Alcohol is a teratogenic substance — it causes severe deformities in zebrafish embryos at high doses and prolonged exposure, on the order of several percent for several hours. I’ve done concentration series, where we give sets of embryos exposures at increasing concentrations, and we get a nice linear curve out of it: more alcohol leads to increasing frequency and severity of midline and branchial arch defects. With one exception: at low concentrations of about 0.5% alcohol, the treated embryos actually have reduced mortality rates relative to the controls, and no developmental anomalies.

If Ann Coulter got her hands on that work, she’d probably be arguing that pregnant women ought to run out and party all night.

We think there is probably a combination of factors going on. One is that alcohol is actually a fuel, so what they’re getting is a little extra dose of energy; it’s also deleterious to pathogens, so we’re probably killing off bacteria that might otherwise harm the embryos, and we’re killing those faster than we are killing healthy embryonic cells. It’s the same principle behind medieval beer and wine drinking — it was healthier than the water because the alcohol killed the germs.
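To picture the biphasic curve being described, here is a toy version in Python. The functional form and numbers are invented purely for illustration and aren’t fitted to any real toxicological data: direct damage rises linearly with dose, an inducible defense saturates, and their difference dips below zero at low doses before the damage term takes over.

```python
import numpy as np

def net_effect(dose, k=1.0, r=0.5, d0=0.2):
    """Toy hormetic dose-response: linear damage minus a saturating defense term."""
    damage = k * dose                       # direct damage, proportional to dose
    defense = r * (1 - np.exp(-dose / d0))  # up-regulated repair, saturates quickly
    return damage - defense

for d in np.linspace(0, 2, 9):
    print(f"dose={d:.2f}  net effect={net_effect(d):+.3f}")
# The slightly negative value at low dose is the hormetic dip; as the dose
# climbs, the response becomes linear and unambiguously harmful.
```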

However, the key thing to note about hormetic effects is that they only apply at low dosages. Low dosages tend to be where the damaging effects are weakest anyway, and where the data are also the poorest. The US government recommendations for radiation exposure are based on a linear no-threshold model, one with no hormetic reduction of effects at low doses, for a couple of reasons. One is methodological. The data we can get from high exposures to toxic agents tend to be much more robust and consistent, and we do see simple relationships there, like a ten-fold increase in dose producing a ten-fold increase in effect; at low doses, where the effects are much weaker, variability adds so much noise to the measurements that it may be difficult to get a repeatable and consistent relationship. So the strategy is to determine the relationships at high doses and extrapolate backwards.

Then, of course, the major reason recommendations are made on the simple linear model is that it is the most conservative model. The data are weaker at the low end; there is more variability from individual to individual; the safest bet is always to recommend lower exposures than are known to be harmful.
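Here is that extrapolation strategy as a few lines of code: fit a straight line through made-up high-dose data, force it through zero, and project it down to low doses. The numbers are invented for illustration; the point is only that the linear no-threshold approach assigns some risk to every nonzero dose rather than trusting noisy low-dose measurements to reveal a protective dip.

```python
import numpy as np

# Invented high-dose data, roughly linear
doses = np.array([10.0, 20.0, 40.0, 80.0])
effects = np.array([9.5, 21.0, 38.0, 82.0])

# Least-squares slope through the origin: no threshold, no hormetic dip
slope = np.sum(doses * effects) / np.sum(doses**2)

for low_dose in (0.5, 1.0, 2.0):
    print(f"dose={low_dose}: assumed effect ~ {slope * low_dose:.2f}")
# The extrapolation deliberately assumes harm at any nonzero dose, because that
# is the conservative choice when the low-dose data are too noisy to trust.
```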

In the low-dosage regime, these responses get complicated at the same time that the data get harder to collect. This is why it’s a bad idea to base public policy on the weakest information. I’ll quote a chunk from a review by Calabrese (2008) that describes why you have to be careful in interpreting these data.

In 2002, Calabrese and Baldwin published a paper entitled “Defining hormesis” in which they argued that hormesis is a dose-response relationship with specific quantitative and temporal characteristics. It was further argued that the concept of benefit or harm should be decoupled from that definition. To fail to do so has the potential of politicizing the scientific evaluation of the dose-response relationship, especially in the area of risk assessment. Calabrese and Baldwin also recognized that benefit or harm had the distinct potential to be seen from specific points of view. For example, in a highly heterogeneous population with considerable inter-individual variation, a beneficial dose for one subgroup may be a harmful dose for another subgroup. In addition, it is now known that low doses of antiviral, antibacterial, and antitumor drugs can enhance the growth of these potentially harmful agents (i.e., viruses), cells, and organisms while possibly harming the human patient receiving the drug. In such cases, a low concentration of these agents may be hormetic for the disease-causing organisms but harmful to people. In many assessments of immune responses, it was determined that approximately 80% of the reported hormetic responses that were assessed with respect to clinical implications were thought to be beneficial to humans. This suggested, however, that approximately 20% of the hormetic-like low-dose stimulatory responses may be potentially adverse. Most antianxiety drugs at low doses display hormetic dose-response relationships, thereby showing beneficial responses to animal models and human subjects. Some antianxiety drugs enhance anxiety in the low-dose stimulatory zone while decreasing anxiety at higher inhibitory doses. In these two cases, the hormetic stimulation is either decreasing or increasing anxiety, depending on the agent and the animal model. Thus, the concepts of beneficial or harmful are important to apply to dose-response relationships and need to be seen within a broad biological, clinical, and societal context. The dose-response relationship itself, however, should be seen in a manner that is distinct from these necessary and yet subsequent applications.

I know, the Calabrese quote may have been a little dense for most. Let me give you another real-world example with which I’m familiar, and you probably are, too.

Here in Minnesota in the winter we get very snowy, icy conditions. If I’m driving down the road and I sense a slippery patch, what I will immediately do is become more alert, slow down, and drive more carefully — I will effectively reduce my risk of an accident on that road because I detected ice. This does not in any way imply that ice reduces traffic accidents. Again, with the way Ann Coulter’s mind works, she’d argue that what we ought to do to encourage more responsible driving is to send trucks out before a storm to hose the roads down with water instead of salt.

Ann Coulter is blithely ignoring competent scientists’ informed recommendations to promote a dangerous complacency in the face of a radiation hazard. She’s using a childish, lazy interpretation of a complex phenomenon to tell people lies.


Calabrese EJ (2008) Hormesis: Why it is important to toxicology and toxicologists. Environmental Toxicology and Chemistry 27(7):1451-1474.

Brachiopods: another piece in the puzzle of eye evolution


About 600 million years ago, or a little more, there was a population of small wormlike creatures that were the forebears of all modern bilaterian animals. They were small, soft-bodied, and simple, not much more than a jellyfish in structure, and they lived by crawling sluglike over the soft muck of the sea bottom. We have no fossils of them, and no direct picture of their form, but we know a surprising amount about them because we can infer the nature of their genes.

These animals would have been the predecessors of flies and squid, cats and starfish, and what we can do is look at the genes that these diverse modern animals have; those that are held in common were all inherited together from that distant ancestor. So we know that flies and cats both have hearts that are initiated in early development by the same genes, nkx2.5 and tinman, and infer that our common ancestor had a heart induced by those genes…and that it was only a simple muscular tube. We know that modern animals all have a body plan demarcated by expression of Hox genes, containing muscles expressing myoD, so it’s reasonable to deduce that our last common ancestor had a muscular and longitudinally patterned body. And all of us have anterior eyes demarcated by early expression of pax6, as did our ancient many-times-great grandparent worm.

[Image: the streaming green symbols from The Matrix.]

We do not have fossils of these small, soft organisms, but that’s no obstacle to picturing them. You just have to see the world like a modern molecular or developmental biologist. One of the graphical conceits of the Matrix movies was that the hero could see the hidden mathematical structure of the world, which was visualized as green streams of symbols flowing over everything. We aspire to the same understanding of the structure of life, only what we see are patterns of genetic circuitry, shared modules that are whirring away throughout development to produce the forms we see with our eyes; and also, unfortunately, we currently only see these patterns spottily and murkily. There is no developmental biologist with the power of Neo yet, but give us a few decades.

There’s another thing we know about these ancient ancestors: they had two kinds of eyes, ciliary and rhabdomeric. Your eyes contain ciliary photoreceptors; they have a particular cellular structure, and they use a recognizable form of opsin. A squid has a distinctly different kind of photoreceptor, called rhabdomeric, with a different cell structure and a different form of opsin. We humans also have some rhabdomeric receptors tucked away in our retinas, while invertebrates have ciliary receptors as well, so we know the common ancestor had both.

Now this ancestral population eventually split into two great tribes: the protostomes, which include squid and flies, and the deuterostomes, which include cats and starfish. An obvious indication of the general state of that ancestor is what those four diverse animals have in common. It also tells us that while that ancestor had eyes, they were almost certainly very simple, and could have been nothing more than a patch of light-sensitive cells, or perhaps even single cells, as we see in some larval eyes.

What we think happened at this division is that both tribes took the primitive eyes and specialized them independently. Each group evolved under similar constraints: they needed directionally-sensitive eyes that could tell what direction a source of illumination was coming from (and these would eventually form true image-forming eyes), and they also needed sensors to detect general light levels — is it day or night, are we in the open or under a rock? Think of it like a camera system: there is a part that gets all the attention, the lens and image-forming chip, but there’s also a light meter that senses ambient light levels.

The two tribes made different choices, though. The protostomes pulled the rhabdomeric photoreceptor out of their toolbox, and used that to make the camera; they used the ciliary photoreceptor to make their light meter. The deuterostomes (actually, just us chordates) instead used the ciliary photoreceptor for their camera, and the rhabdomeric photoreceptor for the light meter. It’s the same ancestral toolkit, but we’ve just specialized in different ways.

At least, that’s the general model we’ve been exploring. A new discovery at the Kewalo Marine Laboratory, one of the premier labs for evo-devo research, has made the interpretation a little more complex.

That discovery is that brachiopod larvae, which are protostomes, have been found to have directionally sensitive eyes…which are ciliary. A protostome should have directionally sensitive eyes that are rhabdomeric. How interesting!

[Figure: Brightfield microscopy of a Terebratalia transversa larva, with red eye spots visible in the apical lobe (black arrows). (A) Dorsal view. (B) Lateral view.]

In addition to being ciliary in structure, these eyes express ciliary opsin. They are also true cerebral eyes, also expressing pax6 and having a nervous connection to the central nervous system.

Notice what is going on here: a protostome is building a camera, and unlike all the other protostomes we’ve observed, it’s pulled a ciliary photoreceptor out of its pocket to make it. This is a surprise, but it doesn’t upset any theories too much — it just means we need to explore a couple of alternative explanations. We don’t have answers to resolve these hypotheses yet — we need more data and experiments — but it’ll be fun to watch the work roll onward.

One explanation is illustrated in A, below. The initial animal state was to build directional, cerebral eyes using rhabdomeric photoreceptors. The vertebrates are oddballs who swapped in ciliary receptors instead, while these larval eyes in brachiopods are a major peculiarity: an evolutionary novelty that resembles a cerebral eye but is actually non-homologous. This seems unlikely to me; there are multiple elements of the eye circuitry at work in these eyes, and if they’re using the same gene circuitry, we ought to recognize them as homologous at the molecular level…the only one that counts.

The second explanation in B is that all of these cerebral eyes are homologous, but that the receptor type is more plastic than we thought: it’s relatively easy to switch on the ciliary module vs. the rhabdomeric module, so we would expect to see multiple flip-flops in the evolutionary record.

If we accept that it’s easy to switch receptor type, though, then why assume that the last common ancestor had a directional, cerebral eye that was rhabdomeric? It could have been ciliary, which is also a more parsimonious explanation, because it requires only one switch of types in the protostomes, shown in C.
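Just to make that parsimony arithmetic explicit, here is a toy count over a deliberately collapsed tree. Treating the three groups as independent lineages is a simplification of the real phylogeny, but it makes the one-switch-versus-two-switches comparison concrete.

```python
# Observed photoreceptor type in the cerebral/larval eyes of each group
tips = {
    "Terebratalia (brachiopod larva)": "ciliary",
    "Platynereis and other protostomes": "rhabdomeric",
    "vertebrates": "ciliary",
}

def switches_needed(ancestral_state):
    """Count lineages whose receptor type differs from the assumed ancestor."""
    return sum(state != ancestral_state for state in tips.values())

for ancestor in ("rhabdomeric", "ciliary"):
    print(f"ancestral {ancestor}: {switches_needed(ancestor)} switch(es) needed")
# ancestral rhabdomeric: 2 switches (Terebratalia and the vertebrates)
# ancestral ciliary: 1 switch (the Platynereis-type protostomes)
```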

[Figure: schematic of the three alternative hypotheses, A, B, and C, described in the caption below.]

Alternative hypotheses on the evolution of photoreceptor deployment in cerebral eyes. Schematic representation of three hypotheses accounting for the deployment of ciliary photoreceptors in the cerebral eyes of Terebratalia and vertebrates, versus rhabdomeric photoreceptors in Platynereis and other protostomes. (A) Deployment of rhabdomeric photoreceptors as the ancestral state in cerebral eyes, with the larval eyes of Terebratalia, containing ciliary photoreceptors, representing an evolutionary novelty. The deployment of ciliary photoreceptors is the result of a substitution (with ciliary photoreceptors having replaced rhabdomeric photoreceptors in the cerebral eyes) early in the chordate lineage. (B) Larval eyes in Terebratalia are homologous to the cerebral eyes in other protostomes, but ciliary photoreceptors have been substituted for rhabdomeric photoreceptors, as in the vertebrates. (C) Ciliary photoreceptors in cerebral eyes represent the ancestral condition, inherited by Terebratalia and vertebrates. Deployment of rhabdomeric photoreceptors in the cerebral eyes of Platynereis and other protostomes is the result of substitution events.

Whichever hypothesis pans out, though, the important message is that photoreceptor type is a more evolutionarily labile choice than previously thought. What I want to see is more research into photoreceptor development in more exotic invertebrates — that’s where we’ll learn more about our evolutionary history.


I have to mention a couple of other cool features of this paper. If you ever want to see a minimalist directional eye, here it is: the larval eye sensor of brachiopods consists of two cells, a lens cell that actually does the job of light detection, and a pigment cell that acts as a shade, preventing light from one direction from striking the lens cell. That’s all it takes.

[Figure: the two-cell larval eye, with a single lens cell and a single shading pigment cell.]

I lied! That isn’t a minimal directional eye at all: here it is.

[Figure: a brachiopod gastrula, with a dark blue region of ciliary opsin expression.]

This rather blew my mind. The brachiopod gastrula senses light. The figure above is of a very early stage in development, when the organism is little more than a couple of sheets of cells with no organs at all, only tissues in the process of forming up into rough structures. It definitely has no brain, no nervous tissue at all, and no eyes…and there it is, that dark blue smear is a region selectively expressing ciliary opsin as if it were a retina. Furthermore, when tested behaviorally (mind blown again…behavior, in a gastrula), populations in a light box show a statistical tendency to drift into the light. Presumably, light stimulation of the opsin is coupled to the activity of cilia used for motility in the outer epithelium of the embryo.

Amazing. It suggests how eyes evolved in multicellular organisms, as well — initially, it was just localized general expression of light-sensitive molecules coupled directly to motors in the skin, no brain required.


Passamaneck Y, Furchheim N, Hejnol A, Martindale MQ, Lüter C (2011) Ciliary photoreceptors in the cerebral eyes of a protostome larva. EvoDevo, 2:6.

I am getting a very poor impression of astrobiology

I received email from one of those astrobiologists, the people behind the Journal of Cosmology, in this case Carl H. Gibson. I was…amused.

Dear Professor Meyers:

I understand you have some problem with our interpretation of Richard Hoover’s article proposed for the Journal of Cosmology. I certainly hope you will write up your comments for publication in a peer review, along with the article.

Attached is an article that might interest you on the subject of astrobiology. Have you written anything in this area?

Regards,
Carl

Ah. He understands that I had some problem with Hoover’s article. I think if he takes a slightly closer look at what I wrote, he might be able to notice that I think the whole article was a creaky, broken cart loaded with rotting donkey bollocks. I thought it was perfectly clear, but I guess I have a thing or two to learn about expressing my opinions unflinchingly.

No, I haven’t published anything in the field of astrobiology. It’s not my area of interest at all, and I don’t seem to meet any of the qualifications, all of which involve being an engineer, a physicist, or a crackpot. I’m only a biologist.

I do have to thank Dr Gibson for the very interesting article he sent along. It was quite the silliest thing I’ve read in days … which is saying something, given the kind of stuff creationists like to throw over the transom. I had no idea the field was such a mucking ground for foolishness.

The paper is titled, “The origin of life from primordial planets”, by Carl H. Gibson, Rudolph E. Schild, and N. Chandra Wickramasinghe, and you can find it in the International Journal of Astrobiology 10 (2): 83-98 (2011), if you’re really interested. Almost all of it is physics and cosmology, and it’s way over my head, so that part could be absolutely brilliant, and these guys really could be shaking up the entire discipline of cosmology and I wouldn’t be aware of it. So let me just grant them that part of their story, although to be honest, the parts that I do understand make me really, really suspicious.

Anyway, they’re pushing a new cosmological model called HGD (hydro-gravitational-dynamics) in opposition to the standard ΛCDMHC model (that stands for dark energy cold-dark-matter-hierarchical clustering). They really like their acronyms, which made the paper a hard slog, but my impression is that they’re arguing that planets formed first out of turbulence in cosmic gases, congealing into dark clumps that were home to life first, and then colliding together to form stars. I have no way to tell if the physics is BS, other than that it isn’t any part of the standard models I’ve read in popular physics books, but the basic premise is that first masses condensed, then life evolved, then stars formed. Yeah, seriously.

The onset of prebiotic chemistry and the emergence of life templates as a culmination of such a process must await the condensation of water molecules and organics first into solid grains and thence into planetary cores. Assuming the collapsing proto-planet cloud keeps track with the background radiation temperature, this can be shown to happen between ~2-30 My after the plasma to neutral transition. With radioactive nuclides 26Al and 60Fe maintaining warm liquid interiors for tens of My, and with frequent exchanges of material taking place between planets, the entire Universe would essentially constitute a connected primordial soup.

Life would have an incomparably better chance to originate in such a cosmological setting than at any later time in the history of the Universe. Once a cosmological origin of life is achieved in the framework of our HGD cosmology, exponential self-replication and propagation continues, seeded by planets and comets expelled to close-by proto-galaxies.

That’s right. Life arose 14 billion years ago. They say it again in the abstract: “Life originated following the plasma-to-gas transition between 2 and 20 Myr after the big bang, while planetary core oceans were between critical and freezing temperatures, and interchanges of material between planets constituted essentially a cosmological primordial soup.” We’ve also got a diagram.

[Diagram from the paper.]

That is awesomely weird. So, somehow, life evolved under the bizarre physical conditions of the early universe, under conditions completely unlike anything on earth, survived the formation of stars, incredibly low population densities, extreme variations in temperature and radiation, and drifted through space for billions of years to finally settle on the relatively warm, wet, thick oceans of ancient Earth, and found itself right at home.

And this is somehow a better explanation than that life arose natively.

Why? All they’ve got to justify this nonsense are the long-discredited views of Hoyle and Wickramasinghe that 4½ billion years is not enough. And their alternative explanation is that the Big Bang produced a universe-spanning interconnected soup in which evolution occurred.

In view of the grotesquely small improbability of the origin of the first template for life (Hoyle & Wickramasinghe 1982) it is obvious that it would pay handsomely for abiogenesis to embrace the largest available cosmic setting. The requirement is for a connected set of cosmic domains where prebiology and steps towards a viable set of life templates could take place and evolve. In the present HGD model of cosmology the optimal setting for this is in events that follow the plasma-to-gas transition 300000 years after the big bang. A substantial fraction of the mass of the entire Universe at this stage will be in the form of frozen planets, enriched in heavy elements, and with radioactive heat sources maintaining much of their interior as liquid for some million years. The close proximity between such objects (mean separations typically 10-30 AU) will permit exchanges of intermediate templates and co-evolution that ultimately leads to the emergence of a fully fledged living system. No later stage in the evolution of the Universe would provide so ideal a setting for the de novo origination of life.

Never mind. I don’t think any serious biologist has any significant problems with the probability of life originating on this planet, but I think we’d all agree that the ancient planetary nebula was an even more hostile environment than the Hadean earth. I think their team needs some more competent biologists contributing — they may have the “astro” part down, but the “biology” part is looking laughable.

I do hope there is intelligent life in astrobiology, and that there are better qualified scientists who will take some time to criticize the cranks in their field.

NASA speaks out boldly on the ‘bacteria from space’ claims

That was sarcasm in the title, everyone. NASA has released a public statement in which they gingerly edge away from Hoover’s paper.

NASA is a scientific and technical agency committed to a culture of openness with the media and public. While we value the free exchange of ideas, data, and information as part of scientific and technical inquiry, NASA cannot stand behind or support a scientific claim unless it has been peer-reviewed or thoroughly examined by other qualified experts. This paper was submitted in 2007 to the International Journal of Astrobiology. However, the peer review process was not completed for that submission. NASA also was unaware of the recent submission of the paper to the Journal of Cosmology or of the paper’s subsequent publication. Additional questions should be directed to the author of the paper.

In other words, “What paper? We don’t know nothin’ about that paper.”

It’s disappointing cover-their-butts bureaucratese from an organization that, as a science and engineering institution, ought to have a rather more demanding attitude about rigor and evidence. Oh, well. It’s one small step for a bureaucracy; one giant leap…which a bureaucrat won’t take.

By the way, my phone has been busy over this nonsense today. I don’t quite know what it is, but for some reason the initial claim of “Life in space!” struck a chord with many people, and the fact that a number of scientists are quickly replying with “No, it isn’t” is making some people very, very angry. We’re also seeing a lot of that infuriated rejection of the rejection in the comments here.

I think many confuse their wish to see evidence of extraterrestrial life with the evidence for extraterrestrial life. Personally, I’d love to see the discovery of life that originated somewhere other than our world; that would provide a radically different insight into evolution. I know there has been evidence of organic molecules in space, and I suspect that life does exist on other planets (possibly even other planets in our solar system), but I’m not going to accept a claim of discovery without adequate evidence.

And I’m sorry, but Hoover’s paper is poorly written, sloppy work that uses a non-biologist’s impressions of complex textures in a mineral to imply morphological evidence for fossilized bacteria. You’d think NASA would know better: we had a similar phenomenon a few years ago, in which people claimed to see a “face on Mars,” a claim that NASA effectively debunked. This is the same thing. It’s a shame that NASA isn’t being as quick to dismiss bad science this time around.

Go to Dublin…for the science!

Since the World Atheist Conference is in Dublin this June, you should go just to test this scientific conclusion: the Guinness does taste better in Ireland. I think so, too. So here’s the experiment: buy a glass of Guinness in your airport bar, fly to Ireland, drink some more there. Attend the atheist conference to cleanse the palate, as it were. Drink more Guinness, get on the plane and fly home, and have another one.

Compare.

So now you have another reason to go. It’s an Experiment!