2001: A Space Odyssey

The American Film Institute recently ranked the top ten films in each of ten genres. All such ‘best of’ rankings are, of course, just for fun and meant to provoke vigorous debate about films that did not make the cut as well as the unworthy ones that did. They are not meant to be taken more seriously than that. I was puzzled, however, as to why comedies were not included as a separate genre, the closest category being the vaguer ‘romantic comedies.’ The omission of musicals as a genre was also puzzling. Maybe those lists will come out later.

I had only two major objections. I was shocked that Walt Disney’s Jungle Book did not even make it into the list of best animations, even though to my mind it is easily the best of that genre, and one of my favorite films in any genre. That favorite of film critics, Pulp Fiction, of course made the list in the gangster category, although I hated the film, with its gratuitous violence and racially offensive language. I vowed never to see a Quentin Tarantino film again after that.

It turns out that I have seen a lot of the top 100 films (63), a sign of a happily wasted life. I recall one year when I was about 16 when I kept a log of all the films I had seen that calendar year. I counted over one hundred, or on average one every three days, all in the movie theater. I was able to do this because the theater was within walking distance of my home and the manager was a friend of my father and gave us a pass to see films free. Since my parents did not stop me from this indulgence as long as I was keeping up with my schoolwork, I saw almost every film that was shown. I have to admit that I saw a whole lot of lousy films. Time seems much more precious to me now and so I am much more choosy about what films I watch.

I have seen all ten of the top animations listed by the AFI. The other genres that I have seen most of were westerns (8), mystery (8), and courtroom dramas (7), while the least was fantasy (4).

I have seen all of the #1 ranked films except for The Searchers in the western category, which I plan to see soon, and City Lights in the romantic comedy category. I have always been a fan of good westerns, many of which had strong stories and characters and promoted values of honor and justice.

While one can quibble with the top rankings in each genre, the one film whose #1 ranking will go unquestioned is 2001: A Space Odyssey in the science fiction category.

I recall seeing it in a wide-screen theater when it was first released in 1968 and it stunned me with its brilliance. My impression of it was so vivid that I did not want to see it again on the small screen using videotape or DVD. Instead I waited and waited for it to be re-released on the big screen, to capture again the awe of space that it inspired. There had been rumors of this being done in 2001 but that did not occur. I then thought that it might happen this year on its 40th anniversary but when that did not seem likely to happen, decided to give up and watch the DVD.

There is always danger in re-watching a film that one has fond memories of from the distant past, the fear that one will be disappointed. 2001 is not one of those films. Watching it again, even on a small screen, was a wonderfully rewarding experience. Stanley Kubrick and Arthur C. Clarke combined to make one of the truly great films of all time, something that lifted science fiction films from cliché-ridden, quasi-horror, gimmicky films with cartoon-like alien creatures into a true work of art.

What impressed me is how well the film stood up 40 years later. Not only did the science still remain credible, the special effects were also wonderful, which is amazing when you consider that Kubrick did not have the benefit of computer graphics, and all the visual effects had to be captured directly on film.

The film may not appeal to modern filmgoers, jaded by the action fantasies of films like Star Wars. In 2001, the plot is simple and there is no frantic action, no explosions, no shoot outs with laser guns, no light sabers, no love story, no sex, not even human conflict. 2001 played down these traditional film staples. In fact, all the actors seemed to be deliberately underplaying their roles, leaving the enigmatic computer HAL 9000 that runs the spaceship as the most interesting character. And yet, all these things that sound like negatives actually combine to make the film utterly engrossing.

Although 2001 grabbed the imagination of two young boys, George Lucas and Steven Spielberg, as to the tremendous possibilities of science fiction film making, their own films in this genre went off in different, and in my view inferior, directions.

2001 is a highly visual film, almost ballet-like with its minimal dialogue. The first half-hour is totally word-free, leading up to one of the most memorable visual transitions in the history of filmmaking. The last half-hour is also wordless. Kubrick does not rush scenes or have frequent jump cuts, exploiting the seemingly slow pacing and the ambient sounds of breathing to capture the silence and immensity of space. The attention to detail of how things work in space (how people can walk when weightless, how to simulate weak gravity on a spaceship, how to eat and drink, the difficulty of using toilets, etc.) gives the film a scientific credibility and timelessness that will ensure that it remains the top film for the next hundred years.

The film was not well received when it first came out. Its measured pacing bored some who were used to the action clichés of the older films in this genre and the famous enigmatic ending confused the general public as to what was going on. But science fiction fans had hours of fun debating what it all meant.

I also recently watched another science fiction film that I had never heard of previously: Colossus: The Forbin Project, which also deals with a computer that decides to take control, this time on Earth. The film was interesting mainly because, like 2001, it probes what might result if a computer becomes a truly intelligent, self-aware, self-learning device, and it raises questions about the nature of consciousness and whether computers will be able to create it. The excellent website Machines Like Us probes just these issues, and its editor was the one who tipped me off to the existence of this film.

Watching Colossus so soon after the re-watching of 2001 was perhaps a mistake. Although the ideas the former film explored were intriguing, the quality of the filmmaking was nowhere close to that of the latter. The execution of the idea needed the genius of a Kubrick to really do it justice.

If you have never seen 2001: A Space Odyssey, you have missed a treat. It is a landmark in filmmaking.

POST SCRIPT: How to avoid discussing the election

The difference between human and other animal communication

In his book The Language Instinct (1994) Steven Pinker pointed out two fundamental facts about human language that were used by linguist Noam Chomsky to develop his theory about how we learn language. The first is that each one of us is capable of producing brand new sentences never before uttered in the history of the universe. This means that:

[A] language cannot be a repertoire of responses; the brain must contain a recipe or program that can build an unlimited set of sentences out of a finite list of words. That program may be called a mental grammar (not to be confused with pedagogical or stylistic “grammars,” which are just guides to the etiquette of written prose.)

The second fundamental fact is that children develop these complex grammars rapidly and without formal instruction and grow up to give consistent interpretations to novel sentence constructions that they have never before encountered. Therefore, [Chomsky] argued, children must be innately equipped with a plan common to the grammars of all languages, a Universal Grammar, that tells them how to distill the syntactic patterns out of speech of their parents. (Pinker, p. 9)

Children have the ability to produce much greater language output than they receive as input but it is not done idiosyncratically. The language they produce follows the same generalized grammatical rules as others. This leads Chomsky to conclude that (quoted in Pinker, p. 10):

The language each person acquires is a rich and complex construction hopelessly underdetermined by the fragmentary evidence available [to the child]. Nevertheless individuals in a speech community have developed essentially the same language. This fact can be explained only on the assumption that these individuals employ highly restrictive principles that guide the construction of grammar.

The more we understand how human language works, the more we begin to realize how different human speech is from the communication systems of other animals.

Language is obviously as different from other animals’ communication systems as the elephant’s trunk is different from other animals’ nostrils. Nonhuman communication systems are based on one of three designs: a finite repertory of calls (one for warnings of predators, one for claims of territory, and so on), a continuous analog signal that registers the magnitude of some state (the livelier the dance of the bee, the richer the food source that it is telling its hivemates about), or a series of random variations on a theme (a birdsong repeated with a new twist each time: Charlie Parker with feathers). As we have seen, human language has a very different design. The discrete combinatorial system called “grammar” makes human language infinite (there is no limit to the number of complex words or sentences in a language), digital (this infinity is achieved by rearranging discrete elements in particular orders and combinations, not by varying some signal along a continuum like the mercury in a thermometer), and compositional (each of the finite combinations has a different meaning predictable from the meanings of its parts and the rules and principles arranging them). (Pinker, p. 342)
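Pinker’s “discrete combinatorial system” can be sketched with a toy recursive grammar (my own illustrative fragment, not an example from the book): a handful of words and four rules generate ever more sentences as deeper derivations are allowed, because a noun phrase can contain a prepositional phrase that itself contains another noun phrase.

```python
import itertools

# Toy grammar illustrating a discrete combinatorial system.
# Symbols with rules are nonterminals; anything else ("the", "dog", ...) is a word.
# Because NP can contain a PP that contains another NP, the rules are
# recursive and the set of derivable sentences is unbounded.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"]],
    "N":   [["dog"], ["cat"]],
    "V":   [["saw"]],
    "P":   [["near"]],
}

def expand(symbol, depth):
    """Yield every word sequence derivable from `symbol` in at most `depth` rule applications."""
    if symbol not in GRAMMAR:          # a terminal word
        yield [symbol]
        return
    if depth == 0:                     # out of derivation budget: stop expanding
        return
    for rule in GRAMMAR[symbol]:
        # Expand each symbol of the rule, then combine via cross product.
        parts = [list(expand(s, depth - 1)) for s in rule]
        for combo in itertools.product(*parts):
            yield [word for part in combo for word in part]

shallow = {" ".join(s) for s in expand("S", 4)}
deep = {" ".join(s) for s in expand("S", 6)}
print(len(shallow), len(deep))  # prints "4 36": deeper derivations yield strictly more sentences
```

Every new sentence (“the dog saw the cat near the dog”, and so on) is built by rearranging the same finite word list, and its meaning is predictable from its parts, which is exactly the infinite, digital, compositional character Pinker describes.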

This difference between human and nonhuman communication is also reflected in the roles that different parts of the brain play in language as opposed to other forms of vocalization.

Even the seat of human language in the brain is special. The vocal calls of primates are controlled not by their cerebral cortex but by phylogenetically older neural structures in the brain stem and limbic systems, structures that are heavily involved in emotion. Human vocalizations other than language, like sobbing, laughing, moaning, and shouting in pain, are also controlled subcortically. Subcortical structures even control the swearing that follows the arrival of a hammer on a thumb, that emerges as an involuntary tic in Tourette’s syndrome, and that can survive as Broca’s aphasic’s only speech. Genuine language . . . is seated in the cerebral cortex, primarily in the left perisylvian region. (Pinker, p. 342)

Rather than view the different forms of communication found in animals as a hierarchy, it is better to view them as adaptations that arose from the necessity to occupy certain evolutionary niches. Chimpanzees did not develop the language ability because they did not need to. Their lifestyles did not require the ability. Humans, on the other hand, even in the hunter-gatherer stage, would have benefited enormously from being able to share the kind of detailed information about plants and animals and the like, and thus there could have been an evolutionary pressure that drove the development of language.

Human language was related to the evolution of the physical apparatus that enabled complex sound production along with the associated brain adaptations, though the causal links between them are not fully understood. Did the brain increase in size to cope with rising language ability or did the increasing use of language drive brain development? We really don’t know yet.

The argument against a linguistic hierarchy in animals can be seen in the fact that different aspects of language can be found to be best developed in different animals.

The most receptive trainee for an artificial language with a syntax and semantics has been a parrot; the species with the best claim to recursive structure in its signaling has been the starling; the best vocal imitators are birds and dolphins; and when it comes to reading human intentions, chimps are bested by man’s best friend, Canis familiaris. (Pinker, PS20)

It seems clear that we are unlikely to ever fully communicate with other species the way we do with each other. But the inability of other animals to speak the way we do is no more a sign of their evolutionary backwardness than our nose’s lack of versatility compared to the elephant’s trunk, or our inability to use our hands to fly the way bats can, is a sign that we are evolutionarily inferior to them.

We just occupy different end points on the evolutionary bush.

POST SCRIPT: But isn’t everyone deeply interested in golf?

If you want yet more reasons why TV news is not worth watching . . .

Can animals talk?

One of the most interesting questions in language is whether animals can talk or at least be taught to talk. Clearly animals can communicate in some rudimentary ways, some more so than others. Some researchers are convinced that animals can talk and have spent considerable effort trying to teach them, but with very limited results. In the comments to an earlier post, Greg referred to the efforts by Sue Savage-Rumbaugh (and Duane Rumbaugh) to train the bonobo chimpanzee Kanzi to speak, and Lenen referred to the development of spontaneous language in children who had been kept in a dungeon. There have been other attempts with chimps and gorillas named Washoe, Koko, Lana, and Sarah.

One thing that is clear is that humans seem to have an instinctive ability to create and use language. By instinctive, I mean that evolution has produced in us the kinds of bodies and brains that make learning language easy, especially at a young age. It is argued that all humans are born possessing the neural wiring that contains the rules for a universal grammar. The five thousand different languages that exist today, although seeming to differ widely, all have an underlying grammatical similarity that is suggestive of this fact. For example, this grammar affects things like the subject-verb-object ordering in sentences. In English, we would say “I went home” (subject-verb-object) while in Tamil it would be “I home went” (subject-object-verb).

What is interesting is that of all the grammars that are theoretically possible, only a very limited set is actually found in existence. We do not find, for example, languages where people say “Home went I” (object-verb-subject). What early exposure to language does is turn certain switches on and off in the universal grammar wiring in our brains, so that we end up using the particular form of grammar of the community we grow up in. This suggests that language structures are restricted and not infinitely flexible, indicating a biological limitation.
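The “switches” metaphor can be sketched in code (purely my own illustration, not a real linguistic model): a single word-order parameter, fixed by early exposure, determines whether the same underlying subject-verb-object structure surfaces English-style or Tamil-style.

```python
# Toy sketch of the "parameter switch" idea: one setting, fixed by the
# language a child grows up hearing, selects the surface word order
# for the same underlying (subject, verb, object) structure.
def linearize(subject, verb, obj, verb_final):
    """Order a fixed subject-verb-object structure according to the parameter."""
    return " ".join([subject, obj, verb] if verb_final else [subject, verb, obj])

print(linearize("I", "went", "home", verb_final=False))  # English-style: I went home
print(linearize("I", "went", "home", verb_final=True))   # Tamil-style: I home went
```

The point of the sketch is that the space of outcomes is tightly constrained: the parameter admits only a few orders, not every conceivable permutation.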

The instinctive nature of language can be seen in a natural experiment that occurred in Nicaragua. There used to be no sign language at all in that country because the children were isolated from one another. When the Sandinistas took over in 1979, they created schools for the deaf. Their efforts to formally teach the children lip reading and speech failed dismally. But because the deaf children were now thrown together in the school buses and playgrounds, they spontaneously developed their own sign language, which grew more sophisticated and is now officially a language that follows the same underlying grammatical rules as other spoken and sign languages. (Steven Pinker, The Language Instinct, 1994, p. 24)

What about animals? Many of us, especially those of us who have pets, would love to think that animals can communicate. As a result, we are far more credulous than we should be of claims (reported in the media) by researchers that they have taught animals to speak. But others, like linguist Steven Pinker, are highly skeptical. When looked at closely, the more spectacular elements of the claims disappear, leaving just rudimentary communication using symbols. The idea that some chimps can be taught to identify and use some symbols or follow some simple spoken commands does not imply that they possess underlying language abilities comparable to humans. The suggestion that animals use sign ‘language’ mistakenly conflates the sophisticated and complex grammatical structures of American Sign Language and other sign languages with a few suggestive gestures.

The belief that animals can, or should be able to, communicate using language seems to stem from two sources. One lies in a mistaken image of evolution as a linear process in which existing life forms can be arranged from lower to higher and more evolved forms. One sees this in posters in which evolution is shown as a sequence: amoebas→ sponges→ jellyfish→ flatworms→ trout→ frogs→ lizards→ dinosaurs→ anteaters→ monkeys→ chimpanzees→ Homo sapiens. (Pinker, p. 352) In this model, humans are the most evolved and it makes sense to think that perhaps chimpanzees have a slightly less evolved linguistic ability than we do but that it can be nudged along with some human help. Some people are also convinced that to think that animals cannot speak is a sign of a deplorable species superiority on our part.

But that linear model of evolution is wrong. Evolution is a branching theory, more like a spreading bush. Starting from some primitive form, it branched out into other forms, and these in turn branched out into yet more forms and so on, until we had a vast number of branches at the periphery. All the species I listed in the previous paragraph are like the tips of the twigs on the canopy of the bush, except that some (like the dinosaurs) are now extinct. Although all existing species have evolved from some earlier and more primitive forms, none of the existing species is more evolved than any other. All existing species have the same evolutionary status. They are merely different.

In the bush image, it is perfectly reasonable to suppose that one branch (species) may possess a unique feature (speech) that is not possessed by the others, just like the elephant possesses a highly useful organ (the trunk) possessed by no other species. All that this signifies is that that feature evolved after that branch separated from the rest of the bush and hence is not shared by others. The fact that nonhuman animals cannot speak despite extensive efforts at tutoring them is not a sign that they are somehow inferior or less evolved than us.

Some efforts to teach animals language skills seem to stem from a sense of misguided solidarity. It is as if the more features we share with animals, the closer we feel we are to them and the better we are likely to treat them. It is undoubtedly true that the closer we identify with some other living thing, the more empathy we have for it. But the solution to that is to have empathy for all living creatures, and not try to convince ourselves that we are alike in some specific ways.

As Pinker says:

What an irony it is that the supposed attempt to bring Homo sapiens down a few notches in the natural order has taken the form of us humans hectoring another species into emulating our instinctive form of communication, or some artificial form we have invented, as if that were a measure of biological worth. The chimpanzees’ resistance is no shame to them; a human would surely do no better if trained to hoot and shriek like a chimp, a symmetrical project that makes about as much scientific sense. In fact, the idea that some species needs our intervention before its members can display a useful skill, like some bird that could not fly until given a human education, is far from humble! (p. 351)

While any animal lover would dearly love to think that we can talk with animals, we may have to reconcile ourselves to the fact that it just cannot happen, because they lack the physical and perhaps cognitive apparatus to do so.

Next: The differences between animal and human communication.

POST SCRIPT: Superstitions

One of the negative consequences of religious belief is that it leads to more general magical thinking, one form of which is superstition. Steve Benen lists all the superstitions that John McCain believes in.

It bothers me when political leaders are superstitious. Decision-makers should not be influenced by factors that have no bearing whatsoever on events.

When did language originate?

Trying to discover the origins of language is a fascinating scientific problem but the evidence is necessarily indirect. Clearly our bodies’ physical capacity to articulate sounds is a biological development. Language had to be preceded by the evolution of the physical organs responsible for vocalization. Those organs must have co-evolved with those parts of the brain that can process language. But this evolutionary history is hard to reconstruct since the voice organs and brains are made of soft tissue and are thus unlikely to fossilize. Even if we could get an accurate fix on when the actual physical ability to speak came into being, this could only be used to set a limit on the earliest time at which language could have occurred, but it tells us nothing of when it actually did.

Since humans have these language organs and our closest existing cousins the chimpanzees do not, and since our branch of mammals split off from chimpanzees about 5-7 million years (or about 350,000 generations) ago, it is theoretically possible for language to be that old and still be consistent with only humans being able to speak.

At the other end, the discovery of cave art in Europe, consisting of depictions of animals and humans in carved, painted, and sculpted forms by Cro-Magnon humans in the Upper Paleolithic era about 35,000 years ago, indicates complex social thinking suggestive of the presence of language, setting a limit on the latest time for the origin of language.

But 35,000 to 5-7 million years is a huge time interval and attempts have been made to get a more precise fix on the origin of language. Various approaches have been attempted. One avenue of exploration comes from linguistics: the study of languages themselves and how they evolved. Another is to look at the physiological development of the human body. A third method is to look at the development of lifestyles to discern levels of complexity that suggest the kinds of social organization that would require language. A fourth is to look at the use of tools, to see if there is sophistication and uniformity over a wide area suggesting that knowledge was being shared and transmitted to distant locales.

While these are all promising avenues of research, unfortunately the lines of evidence from these different approaches currently do not converge on a single time, suggesting that we still have a long way to go in determining when language might have arisen.

Starting with linguistics, it is known that the structure of languages is closely analogous to the biological tree of living organisms. Just as the fossil and DNA evidence all point to all living things being descended from a common ancestor, the approximately five thousand languages that currently exist exhibit grammar and vocabulary relationships strongly suggesting that they are all derived from a single common proto-language that existed long ago and that evolved and split into branches the way living organisms did. By tracing that linguistic tree back in time, we may be able to fix narrower bounds on the date of origin of that proto-language.

Steven Pinker argues that since modern humans Homo sapiens first appeared about 200,000 years ago and spread out of Africa about 100,000 years ago, and since all modern humans have identical language abilities along with a universal grammar, it seems likely that language appeared concurrently with the first appearance of modern humans. (Steven Pinker, The Language Instinct, 1994, p. 363, 364) Furthermore, there was more than a tripling of brain size (from 400cc to 1350cc) during the period between the first appearance of the genus Homo (in the form of Homo habilis) about two million years ago and the appearance of Homo sapiens, suggesting that the brain developed in that period partly in order to accommodate the new language centers. Pinker suggests that since Homo sapiens are us, it seems reasonable that language came into being as long ago as 200,000 years.

As for biological development, Richard Leakey explains what it is about the human body that enables speech. (The Origin of Humankind, 1994)

Humans are able to make a wide range of sounds because the larynx is situated low in the throat, thus creating a large sound-chamber, the pharynx, above the vocal cords . . . the expanded pharynx is the key to producing fully articulate speech . . . In all mammals except humans the larynx is high in the throat, which allows the animal to breathe and drink at the same time. As a corollary, the small pharyngeal cavity limits the range of sounds that can be produced. . . Although the low position of the larynx allows humans to produce a greater range of sounds, it also means that we cannot drink and breathe simultaneously. We exhibit the dubious liability for choking.

Human babies are born with the larynx high in the throat, like typical mammals, and can simultaneously breathe and drink, as they must during nursing. After about eighteen months, the larynx begins to migrate down the throat, reaching the adult position when the child is about fourteen years old. (p. 130)

The unique position of the larynx in human speech suggests that if we were able to identify when it got lowered to its present position, we might be able to determine when we first had the ability to speak. But the problem is that those parts of the body are made of soft tissues and do not fossilize easily. However, the shape of the bottom of the skull, called the basicranium, is arched in humans and essentially flat in other mammals, and this part of the skull is an indicator of how well its owner could articulate sounds. “The earliest time in the fossil record that you find a fully flexed basicranium is about 300,000 to 400,000 years ago, in what people call archaic Homo sapiens.” (Leakey, p. 132)

But of course that does not mean that language developed simultaneously with the basicranium. Leakey says that it is unlikely that language was fully developed among archaic Homo sapiens.

The brain is another indicator of possible language origins. The part of the brain known as Broca’s area is a raised lump near the left temple associated with language and the use of tools. Furthermore, the left hemisphere of the brain (which is associated with language) is larger than the right. So if we can find fossilized skulls that indicate the presence of either of these features, that would also indicate the onset of possible linguistic ability. A fossil nearly two million years old seems to have just such features. Combined with the discovery of tool-making around this time, Leakey thinks it is possible that it was with the advent of Homo habilis (the handyman) about two million years ago that language first started to appear, at least in a very crude form. (Leakey, p. 129)

Another strategy is to look at the various tools and other artifacts that humans created and see if there is an increase in sophistication and increased spread of similar designs, which would suggest the sharing of knowledge and ideas and thus speech. The more complex the social structures in which people lived, the greater the need for language. As for tools, although they started being made about two million years ago, the earliest kinds were opportunistic in nature. More conscious tool making began about 250,000 years ago but then stayed static for about 200,000 years. The kinds of ordering of tools that are really suggestive of language do not seem to occur until suddenly about 35,000 years ago, coinciding with the sudden spurt in cave art in the Upper Paleolithic period. (Leakey, p. 134)

So basically the situation is confused. While it is possible that language began to appear in some primitive form as early as two million years ago, it seems more likely that real language skills began about 200,000 years ago. Also it is not clear whether language evolved gradually since that time or whether it remained in a low and more-or-less static state before suddenly exploding about 35,000 years ago into the complex language structures that we now have.

Next: Can animals talk?

POST SCRIPT: Fred and Wilma? Who knew?

The most unforgettable act of the 1969 Woodstock festival was Joe Cocker’s rendering of the Beatles’ With a Little Help from My Friends, a gentle song sung by Ringo Starr, which Cocker turned into an over-the-top, weird, air-guitar-playing, frenzied, incoherent performance that looked like he was having some kind of seizure. Throughout it, you kept wondering what the hell he was singing, since the lyrics seemed to have only a passing resemblance to the original.

Some helpful soul has now provided captions for Cocker’s words. It all makes sense now. Or maybe not.

(Thanks to Jesus’s General.)

The power of language

One of the things that makes some people uneasy about the theory of evolution is its implication that humans are just one branch in the tree of life, connected to every other living thing through common ancestors, and thus not special in any mysterious way. It is surely tempting to think that we must be somehow unique. Look at the art and culture and science and technology we have produced and for which nothing comparable exists by any other species. How can we explain that if we are not possessed of some quality not present in other species?

One doesn’t have to look far to find one feature that distinguishes the human species from all its cousins in the evolutionary tree of life. It is language. Somehow, at some point, we developed the capacity to speak and communicate with each other through well-articulated sounds and that has had a profound impact on our subsequent development. Although the number of phonemes (units of sound) that humans can make (about fifty) is not vastly greater than the number available to apes (about a dozen), we can use them to generate an average vocabulary of about 100,000 words. “As a consequence, the capacity of Homo sapiens for rapid, detailed communication and richness of thought is unmatched in the world of nature.” (Richard Leakey, The Origin of Humankind, 1994, p. 122)

Without language, the knowledge of animals is restricted to what they are born with as a result of their evolutionary development (i.e., their instincts) and what they acquire during their own lifetimes. That is necessarily restricted and each generation essentially starts life at the same point in knowledge space as the previous one.

But with language, all that changes. Now knowledge can be passed on from generation to generation and we can learn from our ancestors. Knowledge becomes cumulative and the process accelerated with the discovery of writing about 6,000 years ago, resulting in the ability to store and retrieve knowledge over long times and long distances.

I have sometimes wondered why religious people, always on the lookout for a sign that humans are special in god’s eyes and possessed of some quality that could not be accounted for evolutionarily, have not seized on language as that which makes us uniquely human. Why don’t intelligent design advocates suggest that it was god’s intervention that enabled us to develop the ability to speak?

One advantage to religious people of using the introduction of language as a mysterious sign of god’s actions is that it is hard to pin down exactly when and how language started, and thus hard to explain scientifically, making it an even better choice for a religious explanation than the bacterial flagellum or even the origin of life. Language was a significant development in our evolutionary history, but how it came about is murky because spoken language leaves no trace.

Of course, the fact that we humans possess a unique feature does not necessarily imply that we are special. After all, elephants can also boast of a uniquely useful organ, the trunk, that can do truly amazing things. It is strong enough to uproot trees and stack them carefully in place. It is delicate enough that it can pick a thorn, draw characters on paper with a pencil, or pick up a pin. It is dexterous enough that it can uncork a bottle and unbolt a latch. It is sensitive enough to smell a python or food up to a mile away. It can be used as a siphon and a snorkel. And it can do many more things, both strong and delicate. (Steven Pinker, The Language Instinct, 1994, p. 340)

Why did only elephants evolve this extremely useful organ compared to which the human nose seems so inadequate? It presumably developed according to the laws of natural selection, just like everything else. But if elephants were religious, they might well be tempted to argue that having a trunk was a sign from god that they were special and made in god’s image, and thus that god must have a trunk too.

So uniqueness alone doesn’t imply that we are possessed of some spiritual essence. But even if the ability to speak does not confer on us a mystical power, the question of when and how humans developed this profound and incredibly useful ability is well worth studying.

Next: When did language originate?

POST SCRIPT: George Carlin on language

I had written this post on language last week but then learned that comedian George Carlin died yesterday at the age of 71. He pushed the boundaries of comedy and many of his riffs dealt with the hypocritical use of language. His famous routine “Seven words you can’t say on TV” ended up in 1973 as a case in the Supreme Court, which ruled that the government did have a right to limit the words used on broadcasts.

That routine is below. As to be expected, there is extensive and repeated use of the seven naughty words so don’t watch if such language offends you.

Bonus video: George Carlin was also an atheist who poked fun at the lack of logic underlying religious beliefs.

Cloning and stem cell research

(This series of posts reviews in detail Francis Collins’s book The Language of God: A Scientist Presents Evidence for Belief, originally published in 2006. The page numbers cited are from the large print edition published in 2007. The complete set of these posts will be archived here.)

In the Appendix of his book The Language of God: A Scientist Presents Evidence for Belief (2006), Francis Collins gives a very clear and brief exposition of the issues involved in stem cell research and cloning, which are not the same thing despite popular impressions.

A human being starts out as a single cell formed by the union of an egg and a sperm. The nucleus of this cell contains the contributions of DNA from each of the two parents and thus all the genetic instructions, while the region outside the nucleus, called the cytoplasm, contains the nutrients and signaling mechanisms that enable the cell to do whatever it is meant to do.

The single cell starts multiplying by copying itself, a process known as mitosis. In the very early stages, all the cells are identical and capable of eventually becoming any specialized cell, such as a liver cell or a blood cell. Such cells are called ‘pluripotent’ because of their ability to become any of the tissues that make up the body; it is these cells that are called embryonic stem cells and that lie at the center of the ethical debate.

Soon these embryonic cells begin to specialize and differentiate into cells that form the different organ tissues. They do this by switching particular genes in their DNA on and off. Some of these specialized cells, such as those found in limited amounts in bone marrow, become what are known as adult stem cells: while they still have the ability to differentiate further, they can do so only into a much more limited variety of adult tissues. Such stem cells are called ‘multipotent’.

The promise of stem cell research is that one can use a person’s own stem cells to regenerate tissues lost or damaged by all kinds of diseases. Since these cells are not perceived as foreign matter, this would not trigger the body’s immune mechanism that rejects foreign tissues, as occurs currently with transplants. At present, this immune response has to be suppressed with powerful drugs, leaving the patient vulnerable to other infections.

The ethical problem is that although adult stem cells can be obtained from an adult and used without harming that person, they have only very limited flexibility. Pluripotent cells are preferred, but at present harvesting them destroys the embryos from which they are taken, and this immediately raises the ethical issue of whether, by destroying an embryo, we are destroying a life.

Currently pluripotent stem cell lines are created during the process of in-vitro fertilization, by taking an egg from a woman, fertilizing it in a petri dish with sperm from a man, and growing the resulting cell in a solution containing the necessary nutrients for its growth. After about five days, what is called a ‘blastocyst’ is formed, consisting of about 70-100 cells: an outer wall of cells encompassing a hollow cavity, with an inner clump of about 30 cells (called the inner cell mass) at one end of the cavity. It is the inner cell mass that eventually turns into the tissues that make up the growing fetus, while the outer wall becomes the placenta.

In-vitro fertilization is done to assist childless couples. The selected blastocyst is implanted in the uterus of either the person who donated the egg (the biological mother) or a surrogate, and once it adheres to the wall of the uterus, it receives oxygen and other nutrients from the mother and develops as any other fetus.

The ethical dilemma arises because the process is not 100% certain, and thus many more fertilized eggs and blastocysts are created this way than are currently used to generate actual pregnancies, and this has resulted in hundreds of thousands of unused fertilized eggs. They are currently kept frozen.

Researchers suggest that these fertilized eggs be used (with the donors’ permission) to generate embryonic stem cell lines that can be used for research purposes. To do this, the inner cell mass is extracted from the blastocyst and transferred into a dish containing a culture that enables it to grow. When this is done, the blastocyst is effectively destroyed and cannot be used to create a human.

Opponents of embryonic stem cell research say that even a single fertilized egg cell is a human life and thus the blastocyst created this way should never be destroyed. Others argue that a blastocyst has none of the qualities that we associate with being human and thus that destroying it is not taking a life.

This dilemma created by scientific advances may be resolved by further scientific advances.

One possible compromise arises from the discovery of the process by which animals have been cloned, starting with the famous cloned sheep Dolly. This process is known as somatic cell nuclear transfer (SCNT). What happened with Dolly is that a single cell was taken from the udder of an adult sheep and its nucleus (containing all the genetic information) was extracted. Then an egg cell was taken and its nucleus removed and replaced with the nucleus that had been extracted from the udder cell.
One might have expected this to create a cell specialized for udders, since the nucleus came from an adult udder cell that should by then have become specialized for just that purpose. It was once thought that this process of specialization was irreversible, i.e., that once a pluripotent embryonic stem cell became an adult stem cell or an adult specialized cell, there was no going back to its unspecialized state.

What researchers found to their amazement was that when the udder cell nucleus was inserted into the egg cell that had had its nucleus removed, the nucleus seemed to effectively go back in time and become like the original embryonic cell that had eventually resulted in the sheep from which the udder cell was obtained. When this was then implanted in a sheep, it grew as if from a single fertilized egg and gave rise to a new sheep (Dolly) that had genes identical to those of the sheep from which the original udder cell was taken.

This process has now been repeated with other mammals like horses, cows, dogs, and cats. Although the Raelians made the spectacular claim that they had used this technique to clone a human being, that claim appears to have been a hoax.

As a result of this research, it looks like it should be possible to take a nucleus from (say) the skin cell of an adult human and insert it into an egg cell that has had its nucleus removed and thus create cells that have all the properties of embryonic stem cells. Thus it should be possible to create blastocysts in the laboratory without having them originate in the fusion of sperm and egg, the traditional way in which children are conceived. These stem cells would have DNA identical to those of the adult whose skin cell the nucleus was taken from, and not a fusion of mother and father DNA information, the way an embryo is normally formed.

Of course, if this cell is implanted in a uterus, one could potentially create a cloned human being but no one is suggesting that that be done. In fact, there is strong worldwide opposition to such an act. But if the cell is grown in a petri dish, then it could generate the equivalent of embryonic stem cells for both research and therapeutic purposes.

Would the process of SCNT be considered sufficiently different from the usual process of creating a fertilized egg to be considered not a potential human and thus overcome the ethical problems of stem cell research? That remains to be seen.

POST SCRIPT: Tough times

We know that the troubled economy is hurting many people. The Daily Show looks at how it is affecting the people of Beverly Hills.

Bioethical dilemmas

(This series of posts reviews in detail Francis Collins’s book The Language of God: A Scientist Presents Evidence for Belief, originally published in 2006. The page numbers cited are from the large print edition published in 2007. The complete set of these posts will be archived here.)

In the Appendix of Francis Collins’s book The Language of God: A Scientist Presents Evidence for Belief (2006), he tackles the difficult ethical issues raised by advances in science and medicine, especially in the field of molecular biology. His own major contributions to mapping the human genome have undoubtedly made him acutely conscious of these issues. Collins describes the science and the issues arising from it very clearly, and this Appendix is well worth reading.

Having mapped the entire human genome, scientists are now potentially able to identify the presence of genes that may predispose people to certain diseases or behaviors long before those conditions manifest themselves in observable ways. This ability has, of course, some obvious advantages in the prevention and treatment of diseases.

For example, breast cancer has a hereditary component that can be identified by the presence of a dangerous mutation in the gene BRCA1 on chromosome 17. This mutation, which also creates a greater risk for ovarian cancer, can be carried by fathers as well, even though they themselves may not have the disease. In those families in which breast cancer is prevalent, knowing who has the mutated gene and who hasn’t may influence how closely they are monitored and what treatments they might be given.

As time goes by, our genetic predisposition to more and more hereditary diseases will be revealed. But is this an unqualified good thing?

On the plus side, having this knowledge may enable those people at risk to take steps (diet, exercise, preventative treatment) that can reduce their risk of actually contracting the disease. After all, genes are usually not the only (or even the main) factor in causing disease and we often have some degree of control over the other risk factors for diseases such as diabetes or blood clotting.

We may also be able to treat more genetic diseases by actually changing an individual’s genes, although currently the only changes being made are to the genes in the somatic cells (the ones that make up our bodies) and not the ones in the ‘germ’ line cells (the ones that are passed on to children via the egg and sperm). At present, there is a scientific and medical consensus that influencing the genes of future generations by changing the germ line is not something we should do.

Furthermore, our bodies’ reaction to drugs is also often affected by our genes. That knowledge can be used to individualize treatment, to determine which drug should be given to which patient, and even to design drugs that take maximum advantage of an individual’s genetic makeup. This kind of personalized medicine lies in our future.

But there are negatives to this brave new world of treatment. Should everyone have their DNA mapped to identify potential risk factors? And who should have access to a person’s genetic information?

Some people may prefer not to know the likelihood of what diseases they are predisposed to, especially in those cases where nothing much can be done to avert the disease or what needs to be done would diminish by too much the quality of life of the individual. Furthermore, they may fear that this information could be used against them. If they have a predisposition for a major disease and this knowledge reaches the health insurance companies, the latter may charge them higher premiums or even decline to cover them at all. After all, the profit-making basis on which these companies run makes them want to only insure the pool of healthy people and deny as much coverage as possible to those who actually need it.

It works the other way too. If someone knows they have a potential health problem but the insurance companies don’t, they may choose health (and life) insurance policies that work to their advantage.

So genetic information can become a pawn in the chess game played between the individual and the health (and life) insurance agencies.

This is, by the way, another major flaw of the current employer-based private health insurance schemes in the US. If we had a single-payer, universal health care system as is the case in every other developed country, and even in many developing countries, this problem regarding genetic knowledge would not even arise. Everyone would be covered automatically irrespective of their history, the risk would be spread over the entire population, and the only question would be the extent to which the taxpayers wanted to fund the system in order to cover treatment. That would be a matter determined by public policy rather than private profit. There would still be ethical issues to be debated (such as on what basis to prioritize and allocate treatment) but the drive to minimize treatment to maximize private profit would be absent, and that is a huge plus.

There are other issues to consider. What if we find a gene whose bearers have a propensity to commit crimes or other forms of antisocial behavior? Would it be wrong to use this knowledge to preventively profile and incarcerate people? It has to be emphasized that our genes are almost never determinants of behavior; at best they shift probabilities slightly. But as I have written before, probability and statistics are not easy to understand, and the knowledge that someone has a slightly greater chance of committing a crime can, if publicly known, be a stigma that person can never shake, however upstanding and moral a person he or she tries to be.
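To make concrete why a ‘slightly greater chance’ carries so little predictive weight for any individual, here is a minimal back-of-the-envelope calculation. All the numbers in it (the base rate and the relative risk) are hypothetical, chosen only for illustration:

```python
# All figures below are hypothetical, chosen only to illustrate the point.
base_rate = 0.01        # assumed probability of committing a given crime
relative_risk = 1.5     # assumed risk multiplier for carriers of a marker gene

# Probability that a carrier of the marker commits the crime
p_crime_given_carrier = base_rate * relative_risk

print(f"P(crime | carrier)    = {p_crime_given_carrier:.1%}")      # 1.5%
print(f"P(no crime | carrier) = {1 - p_crime_given_carrier:.1%}")  # 98.5%
# Even with the elevated risk, 98.5% of carriers would never commit
# the crime: far too weak a signal to justify profiling anyone.
```

Changing the hypothetical numbers does not alter the conclusion: unless the relative risk is enormous, the overwhelming majority of carriers remain law-abiding, which is why treating such a marker as a predictor of an individual's behavior is both statistically unjustified and stigmatizing.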

There is also the question of what to do with people who want to use treatments that have been developed for therapeutic purposes in order to make themselves (or their children) bigger, taller, stronger, faster, better-looking, and even smarter (or so they think) so that they will have an advantage over others. That thought-provoking film Gattaca (1997) envisions a future where parents create many fertilized eggs, examine the DNA of each, and select only those which contain the most advantageous genetic combinations to implant in the uterus. Collins points out that while this is theoretically possible, in practice it cannot be used to select for more than two or three genes. Even then, there are no guarantees that environmental effects as the child is growing up may not swamp the effects of the carefully selected genes. (p. 354)

Collins argues, and I agree with him, that these are important ethical decisions that should not be left only to scientists but should involve the entire spectrum of society. He appeals to the Moral Law as general guidance for dealing with these issues (p. 320). In particular he advocates four ethical principles (formulated by T. L. Beauchamp and J. F. Childress in their book Principles of Biomedical Ethics, 1994) that we might all be able to agree on in making such decisions. They are:

  1. Respect for autonomy – the principle that a rational individual should be given freedom in personal decision making, without undue outside coercion.
  2. Justice – the requirement for fair, moral, and impartial treatment of all persons.
  3. Beneficence – the mandate to treat others in their best interest.
  4. Nonmaleficence – “First do no harm” (as in the Hippocratic Oath).

These are good guidelines, though many problems will undoubtedly arise when such general secular ethical principles collide with the demands of specific religious beliefs and cultural practices. When supposedly infallible religious texts become part of the discussion, it makes it almost impossible to seek underlying unifying moral and ethical principles on which to base judgments.

POST SCRIPT: Brace yourself

Matt Taibbi warns that this presidential election is going to be very rough.

The Language of God-9: An appeal to the scientifically minded

(This series of posts reviews in detail Francis Collins’s book The Language of God: A Scientist Presents Evidence for Belief, originally published in 2006. The page numbers cited are from the large print edition published in 2007. The complete set of these posts will be archived here.)

At the very end of his book, Collins appeals to those who may feel that science is incompatible with belief in god.

Have you been concerned that belief in God requires a descent into irrationality, a compromise of logic, or even intellectual suicide? It is hoped that the arguments presented within this book will provide at least a partial antidote to that view, and will convince you that of all the possible worldviews, atheism is the least rational. (p. 304)

I am afraid that this is a forlorn hope. If anything, this book with its mish-mash of faulty logic, ad hoc assumptions, contradictions, and question-begging rationalizations may actually achieve just the opposite. After all, if this is the best that an eminent scientist like Collins can come up with in defense of religion, then the situation is truly hopeless.

It may be that there are other scientists who can come up with better attempts at reconciling god with current scientific knowledge. Finding Darwin’s God by biologist Kenneth Miller tries to use the uncertainty principle of quantum mechanics to get around the question of how god can influence the course of events without being detected, but that argument has no credibility whatsoever. Also, Miller’s book does not have the breadth of Collins’s work. Whatever the faults of Collins’s book, and there are many, he has to be commended for facing up squarely to the major problems and trying to come to terms with them.

In reading Collins’s book, one finds a refreshing honesty and lack of guile. You get the sense that he knows he is grappling with very difficult issues of science and faith and genuinely believes what he writes. This is in contrast with much of the writing emerging from (say) the intelligent design creationism camp, which, while also sophisticated, strikes one as propagandistic, as if its authors understand the weakness of their case and are trying to cover it up.

Collins’s problem is just that his solutions to the problems are so inadequate. But even here, the fault cannot be laid entirely at his feet. It is partially due to society at large, which has given belief in god a respectability that has persuaded even people who should know better that it must have a rational basis, even though all the evidence is against it. Having taken the step of deciding to believe in god, Collins simply cannot avoid slowly sinking into the sea of contradictions that eventually engulfs him.

Although I have tried to review Collins’s book fairly, some readers may think I have been too harsh. If so, they are not going to like Sam Harris’s review at all. He gives his review the title of The Language of Ignorance and says:

Francis Collins—physical chemist, medical geneticist and head of the Human Genome Project—has written a book entitled “The Language of God.” In it, he attempts to demonstrate that there is “a consistent and profoundly satisfying harmony” between 21st-century science and evangelical Christianity. To say that he fails at his task does not quite get at the inadequacy of his efforts. He fails the way a surgeon would fail if he attempted to operate using only his toes. His failure is predictable, spectacular and vile. “The Language of God” reads like a hoax text, and the knowledge that it is not a hoax should be disturbing to anyone who cares about the future of intellectual and political discourse in the United States.
. . .
If one wonders how beguiled, self-deceived and carefree in the service of fallacy a scientist can be in the United States in the 21st century, “The Language of God” provides the answer. The only thing that mitigates the harm this book will do to the stature of science in the United States is that it will be mostly read by people for whom science has little stature already. Viewed from abroad, “The Language of God” will be seen as another reason to wonder about the fate of American society. Indeed, it is rare that one sees the thumbprint of historical contingency so visible on the lens of intellectual discourse. This is an American book, attesting to American ignorance, written for Americans who believe that ignorance is stronger than death. Reading it should provoke feelings of collective guilt in any sensitive secularist. We should be ashamed that this book was written in our own time.

Collins’s hope expressed towards the end of the book that scientists who read it will be persuaded that “of all the possible worldviews, atheism is the least rational” is a statement revealing wishful thinking on a massive scale. My own feeling is that anyone who reads his book without suspending their powers of logic and reasoning will arrive at exactly the opposite conclusion.

Although I have been critical of Collins’s attempts at arguing for the existence of god, there is no question that when dealing just with science he writes and argues well. In fact, the Appendix of his book, titled The Moral Practice of Science and Medicine: Bioethics, is an excellent primer on some of the critical ethical issues facing us today as a result of the rapid advances in the science in which he has played such an important role.

I will write about them in the next two posts.

POST SCRIPT: The Two Johns discuss Bush’s policies in the Middle East

The Language of God-8: The problem of free will, omnipotence, and omniscience

(This series of posts reviews in detail Francis Collins’s book The Language of God: A Scientist Presents Evidence for Belief, originally published in 2006. The page numbers cited are from the large print edition published in 2007. The complete set of these posts will be archived here.)

The one new (to me at least) and interesting argument in The Language of God was the attempt by Francis Collins to reconcile the idea of free will with god’s omnipotence and omniscience. This knotty problem is caused by religious people wanting to hold on to three beliefs simultaneously: (1) We have free will. (2) God is omnipotent (all-powerful). (3) God is omniscient (knows everything in the past, present, and future).

Can all three things be simultaneously true? In the absence of a comparison with data, the only way that one can judge whether a proposition is false by reason alone is if it leads to a logical contradiction. Most people would immediately see that these three assumptions lead to irreconcilable contradictions and that one has to relinquish at least one of them. But Collins, like a lot of religious people, cannot bring himself to do that. He wants to believe in the traditional properties of god.

He also has the problem that although evolution by natural selection is not purely a chance-driven process, chance does play a role in one part of the process, that which causes mutations and variety. Chance can also play a role in the kinds of events that can change the environment in which an organism finds itself and thus change the way that the non-random natural selection process operates. For example, the asteroid collision that occurred about 65 million years ago and wiped out the dinosaurs profoundly affected the subsequent evolution process because it created opportunities for other species to emerge that might otherwise have been destroyed by dinosaurs.

This element of chance prevents religious people from simply assuming that god created the universe with its laws and then let it run its course, because there is no guarantee that chance events like that asteroid collision would have occurred. How, then, can one guarantee that humans would emerge without god intervening? The idea that humans emerged because of that chance collision does not cause atheists any problems. We are just thankful to have had that lucky break. But for religious people, humans were the goal of creation and their appearance cannot be left to chance. So what to do?

In item #4 of his basic tenets of BioLogos, Collins says that god never intervenes in the evolutionary process once he sets it in motion along with its associated laws. But he knows that there are contingent factors in evolution. So how can he ensure that humans must eventually appear? Again, to his credit, he does not duck the question or try to pretend it is not there.

Collins tries to deal with it by greatly expanding his concept of god:

The solution is actually readily to hand once one ceases to apply human limitations to God. If God is outside of nature, then He is outside of space and time. In that context, God could in the moment of creation of the universe also know every detail of the future. That could include the formation of stars, planets, and galaxies, all of the chemistry, physics, geology, and biology that led to the formation of life on earth, and the moment of your reading this book – and beyond. In that context, evolution could appear to us to be driven by chance, but from God’s perspective the outcome would be entirely specified. Thus, God could be completely and intimately involved in the creation of all species, while from our perspective, limited as it is by the tyranny of linear time, this would appear a random and undirected process. (p. 272)

This is a truly remarkable passage, essentially saying that everything that would eventually occur was known by god at the time of creation, although to us it may seem like we have random events.

He had foreshadowed this extraordinary claim in an earlier part of the book (p. 113, 114) where he laid out his claims step-by-step.

  • If God exists, He is supernatural.
  • If He is supernatural, then He is not limited by natural laws.
  • If He is not limited by natural laws, there is no reason He should be limited by time.
  • If He is not limited by time, then He is in the past, the present, and the future.

The consequence of those conclusions would include:

  • He could exist before the Big Bang and He could exist after the universe fades away, if it ever does.
  • He could know the precise outcome of the formation of the universe even before it started.
  • He could have foreknowledge of a planet near the outer rim of an average spiral galaxy that would have just the right characteristics to allow life.
  • He could have foreknowledge that the planet would lead to the development of sentient creatures, through the mechanism of evolution by natural selection.
  • He could even know in advance the thoughts and actions of those creatures, even though they themselves have free will.

It seems to me that in this passage, Collins has given up on free will altogether and reverted to a strict determinism, although free will was invoked by him in defense of suffering caused by people and free will is an essential component of the concept of sin. After all, sin has no meaning if we are all just automatons playing out our pre-ordained roles in a drama authored and directed by god.

But as is usually the case, trying to adjust god’s qualities to take care of one problem immediately creates new problems elsewhere. John Allen Paulos (Irreligion: A mathematician explains why the arguments for god just don’t add up, 2008), points out one:

[E]fforts by some to put God, the putative first cause, completely outside of time and space give up entirely on the notion of cause, which is defined in terms of time. After all, A causes B only if A comes before B, and the first cause comes – surprise – first, before its consequences. (Placing God outside of space and time would also preclude any sort of later divine intervention in worldly affairs.) (Paulos, p. 5-6)

Paulos also points out that Collins’s efforts to have both omniscience and omnipotence run into another well-known contradiction.

Being omniscient, God knows everything that will happen: He can predict the future trajectory of every snowflake, the sprouting of every blade of grass, and the deeds of every human being, as well as all of His own actions. But being omnipotent, He can act in any way and do anything He wants, including behaving in ways different from those He’d predicted, making his expectations uncertain and fallible. He thus can’t be both omnipotent and omniscient. (Paulos, p. 41)

I think that these are insurmountable obstacles for any religious believer to overcome. Francis Collins tackles them gamely but is defeated by them.

POST SCRIPT: Man Crushes

James Wolcott analyzes this phenomenon that exists amongst male journalists and politicians.

The Language of God-7: The problem of theodicy

(This series of posts reviews in detail Francis Collins’s book The Language of God: A Scientist Presents Evidence for Belief, originally published in 2006. The page numbers cited are from the large print edition published in 2007. The complete set of these posts will be archived here.)

Any defense of god has to confront a tough question: Why would a benevolent and omnipotent god allow suffering? The Greek philosopher Epicurus (341-270 BCE) posed the essential and, to my mind, ultimate contradiction that believers in god face: How to explain the existence of evil.

Is god willing to prevent evil but not able? Then he is not omnipotent.
Is he able but not willing? Then he is malevolent.
Is god both able and willing? Then whence cometh evil?
Is he neither able nor willing? Then why call him god?

Collins has nothing really new to say about this age-old problem, but to his credit he does not avoid it. On the question of suffering that people inflict on one another, he blames free will.

[We] have somehow been given free will, the ability to do as we please. We use this ability frequently to disobey the Moral Law. And when we do so, we shouldn’t then blame God for the consequences. (p. 64)

Collins seems to give a curious excuse for the evil caused by religious people, the very people who should be acutely able to distinguish between right and wrong.

In some unusual cultures the [Moral Law] takes on surprising trappings – consider witch burning in seventeenth century America. Yet when surveyed closely, these apparent aberrations can be seen to arise from strongly held but misguided conclusions about who or what is good or evil. If you firmly believed that a witch is the personification of evil on earth, an apostle of the devil himself, would it not then seem justified to take such drastic actions? (p. 39)

He also points to the suffering caused by non-religious people throughout history, as if that explained anything. I hear this argument often and always find it an odd one for religious people to make, even accepting for the moment the dubious proposition that throughout the course of history nonbelievers have caused more suffering than religious people. Is it really considered an argument in favor of a benevolent and omnipotent god that his followers have caused less suffering than non-believers?

On the more difficult question of suffering caused by natural disasters, which god presumably has the power to avert and in which free will plays no part, Collins gives a confused answer. He suggests that such events occur due to ‘natural’ laws and causes, and that for god to prevent them would require repeated interventions in contravention of these laws. This, he says, would for some reason be bad.

Science reveals that the universe, our own planet, and life itself are engaged in an evolutionary process. The consequences of that can include the unpredictability of the weather, the slippage of a tectonic plate, or the misspelling of a cancer gene in the normal process of cell division. If at the beginning of time God chose to use these forces to create human beings, then the inevitability of these other painful consequences was also assured. Frequent miraculous interventions would be at least as chaotic in the physical realm and would be interfering with human acts of free will. (p. 65-68)

The notion that people prefer suffering to the ‘chaos’ caused by repeated intervention by god in the world is a specious argument. If parents had a child who was dying of cancer, I bet that they would want more than anything for god to intervene and cure her, and wouldn’t give a damn if that caused ‘chaos’ for anyone else, including those scientists doing cancer research. In fact, religious people are always praying for god to intervene in such ways. That is when their need for god is greatest. If the people god supposedly created and whom he supposedly loves deeply want god to intervene to do a manifestly good thing and don’t care about chaos, why does god care? Or if he really wants natural laws to work but also cares about curing people of cancer, why doesn’t he whisper in Collins’s or other scientists’ ears the mechanism he used to cause cancer cells to emerge and how they can cure it?

Recognizing that saying what is effectively “Hey, stuff happens!” is weak consolation for massive and widespread suffering due to natural disasters or the actions of people, Collins inevitably retreats to a reliable refuge and plays that old get-out-of-jail-free card, the ‘mysterious ways clause’.

[If] God is loving and wishes for the best of us, then perhaps His plan is not the same as our plan . . . We may never fully understand the reasons for these painful experiences, but we can begin to accept the idea that there may be such reasons. (p. 65-68)
. . .
Recognize that a great deal of suffering is brought upon us by our own actions or those of others, and that in a world where humans practice free will, it is inevitable. Understand, also, that if God is real, His purposes will often not be the same as ours. Hard though it is to accept, a complete absence of suffering may not be in the best interest of our spiritual growth. (p. 305)

In other words, suffering might be good for us. But while pleading ignorance of god’s intent when it comes to suffering, Collins, like all religious believers, seems to have extraordinary knowledge of god’s character and nature when it works to his advantage, such as when he knows which act is a miracle of god and which isn’t, or how god has chosen to act. For example, when arguing against young Earth creationist ideas, he says “Is this consistent with everything else we know about God from the Bible, from the Moral Law, and from every other source – namely, that He is loving, logical, and consistent?” (p. 237)

Eventually this is what all believers in god end up doing: Defining god in such a way that it suits their own personal emotional needs, adding ad hoc assumptions to deal with any and all problems created by their definition, and invoking the mysterious ways clause as a last resort when even the ad hoc additions aren’t sufficient.

Francis Collins, for all his sophistication and scientific expertise, is no different.

POST SCRIPT: Tim Russert

It should be no surprise that his fellow Villagers are praising the late Tim Russert as a great journalist. He was, after all, one of them, serving their interests faithfully. But while I am sorry that he died suddenly at an early age, Jonathan Schwarz captures my feelings exactly about how people like Russert endlessly drive their preferred narrative, even when it is contradicted by facts.

How Tim Russert Planted The Seeds For Iraq War

December 19, 1999: With Al Gore as guest, Tim Russert says on Meet the Press: “One year ago Saddam Hussein threw out all the inspectors who could find his chemical or nuclear capability.” Russert asks Gore what he’s going to do about this.

Soon afterward: Sam Husseini leaves a message on Russert’s answering machine, and speaks to two of his assistants, telling them the inspectors were withdrawn by the UN at the request of the United States.

January 2, 2000: With Madeleine Albright as guest, Tim Russert repeats the error on Meet the Press: “One year ago, the inspectors were told, ‘Get out,’ by Saddam Hussein.” Russert asks Albright what she’s going to do about this.

January 21, 2000: Sam Husseini writes a letter to Russert, again laying out the facts, and requests a correction.

January 22, 2000-March 19, 2003: Russert never corrects his error.

March 19, 2003-present: Hundreds of thousands of people die in Iraq War. Russert dies, not in Iraq War. Official Washington weeps copious tears for Russert and his Extraordinary Journalistic Standards.

Notice that even if Husseini was not considered important enough to be listened to, it looks as if none of the many, many Village journalists who knew Russert bothered to tell him the truth about the inspectors either. They all live together in their Village and believe their Village myths, and then foist them on us. It was because of this relentless driving of the White House’s preferred war narrative that so many people, even now, believe so many false things about the Iraq war.

David North and David Walsh provide a much better review of Russert and his career than the hagiography that went on over the weekend.