Atheism and Agnosticism

In an interview, Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy, who called himself a “radical atheist,” explains why he uses that term (thanks to onegoodmove):

I think I use the term radical rather loosely, just for emphasis. If you describe yourself as “Atheist,” some people will say, “Don’t you mean ‘Agnostic’?” I have to reply that I really do mean Atheist. I really do not believe that there is a god – in fact I am convinced that there is not a god (a subtle difference). I see not a shred of evidence to suggest that there is one. It’s easier to say that I am a radical Atheist, just to signal that I really mean it, have thought about it a great deal, and that it’s an opinion I hold seriously…

People will then often say “But surely it’s better to remain an Agnostic just in case?” This, to me, suggests such a level of silliness and muddle that I usually edge out of the conversation rather than get sucked into it. (If it turns out that I’ve been wrong all along, and there is in fact a god, and if it further turned out that this kind of legalistic, cross-your-fingers-behind-your-back, Clintonian hair-splitting impressed him, then I think I would choose not to worship him anyway.) . . .

And making the move from Agnosticism to Atheism takes, I think, much more commitment to intellectual effort than most people are ready to put in. (italics in original)

I think Adams is exactly right. When I tell people that I am an atheist, they too tend to suggest that surely I must really mean that I am an agnostic. (See here for an earlier discussion of the distinction between the two terms.) After all, how can I be sure that there is no god? In that purely logical sense they are right, of course. You cannot prove a negative, so there is always the chance not only that a god exists but also that, if you take radical clerics Pat Robertson and Jerry Falwell seriously, he has a petty, spiteful, vengeful, and cruel personality.

When I say that I am an atheist, I am not making that assertion based on logical or evidentiary proofs of non-existence. It is that I have been convinced that the case for no god is far stronger than the case for god. It is the same reasoning that convinces me that quantum mechanics is the theory to use for understanding sub-atomic phenomena, or that natural selection is the theory to be preferred for understanding the diversity of life. There is always the possibility that these theories are ‘wrong’ in some sense and will be superseded by other theories, but those theories will have to have convincing evidence in their favor.

If, on the other hand, I ask myself what evidence there is for the existence of a god, I come up empty. All I have are the assurances of clergy and assertions in certain books. I have no personal experience of it and there is no scientific evidence for it.

Of course, as long-time readers of this blog are aware, I was quite religious for most of my life, even becoming an ordained lay preacher of the Methodist Church. How could I have switched? It turns out that my experience is remarkably similar to that of Adams, who describes why he switched from Christianity to atheism.

As a teenager I was a committed Christian. It was in my background. I used to work for the school chapel in fact. Then one day when I was about eighteen I was walking down the street when I heard a street evangelist and, dutifully, stopped to listen. As I listened it began to be borne in on me that he was talking complete nonsense, and that I had better have a bit of a think about it.

I’ve put that a bit glibly. When I say I realized he was talking nonsense, what I mean is this. In the years I’d spent learning History, Physics, Latin, Math, I’d learnt (the hard way) something about standards of argument, standards of proof, standards of logic, etc. In fact we had just been learning how to spot the different types of logical fallacy, and it suddenly became apparent to me that these standards simply didn’t seem to apply in religious matters. In religious education we were asked to listen respectfully to arguments which, if they had been put forward in support of a view of, say, why the Corn Laws came to be abolished when they were, would have been laughed at as silly and childish and – in terms of logic and proof – just plain wrong. Why was this?
. . .
I was already familiar with and (I’m afraid) accepting of, the view that you couldn’t apply the logic of physics to religion, that they were dealing with different types of ‘truth’. (I now think this is baloney, but to continue…) What astonished me, however, was the realization that the arguments in favor of religious ideas were so feeble and silly next to the robust arguments of something as interpretative and opinionated as history. In fact they were embarrassingly childish. They were never subject to the kind of outright challenge which was the normal stock in trade of any other area of intellectual endeavor whatsoever. Why not? Because they wouldn’t stand up to it.
. . .
Sometime around my early thirties I stumbled upon evolutionary biology, particularly in the form of Richard Dawkins’s books The Selfish Gene and then The Blind Watchmaker, and suddenly (on, I think, the second reading of The Selfish Gene) it all fell into place. It was a concept of such stunning simplicity, but it gave rise, naturally, to all of the infinite and baffling complexity of life. The awe it inspired in me made the awe that people talk about in respect of religious experience seem, frankly, silly beside it. I’d take the awe of understanding over the awe of ignorance any day.

What Adams is describing is the conversion experience that I described earlier, where suddenly switching your perspective seems to make everything fall into place and make sense.

Like Adams, I realized that I was applying completely different standards to religious beliefs than to every other aspect of my life, and I could not explain why I should do so. Once I jettisoned the need for that kind of distinction, atheism just naturally emerged as the preferred explanation. Belief in a god required much more explaining away of inconvenient facts than not believing in a god.

POST SCRIPT: The Gospel According to Judas

There was a time in my life when I would have been all a-twitter over the discovery of a new manuscript that sheds a dramatically different light on the standard Gospel story of Jesus and Judas. I would have wondered how it affected my view of Jesus and god and my faith.

Now this kind of news strikes me as an interesting curiosity, but one that does not affect my life or thinking at all. Strange.

On writing-3: Why do people plagiarize?

(See part 1 and part 2 in the series.)

Just last week, it was reported that twenty-one Ohio University engineering graduates had plagiarized their master’s theses. Why would they do that?

I think it is rare that people deliberately set out to use other people’s words and ideas while hiding the source. Timothy Noah, in his Chatterbox column in Slate, has a good article pointing to Harvard’s guidelines for students, which state that unintentional plagiarism is a frequent culprit:

Most often . . . the plagiarist has started out with good intentions but hasn’t left enough time to do the reading and thinking that the assignment requires, has become desperate, and just wants the whole thing done with. At this point, in one common scenario, the student gets careless while taking notes on a source or incorporating notes into a draft, so the source’s words and ideas blur into those of the student.

But lack of intent is not a valid defense against the charge of plagiarism. That has not prevented even eminent scholars like Doris Kearns Goodwin from trying to invoke it. But as Noah writes, the statement on plagiarism from the American Historical Association (AHA) and the Organization of American Historians (OAH) is quite clear on this point:
[Read more…]

The politics of V for Vendetta (no spoilers)

I believe that V for Vendetta will go down in film history as a classic of political cinema. Just as Dr. Strangelove captured the zeitgeist of the Cold War, this film does the same for the perpetual war on terrorism.

The claim that this film is so significant may sound a little strange, considering that the film’s premise is based on a comic book series written two decades ago and set in a futuristic Britain. Let me explain why I think that this is something well worth seeing.
[Read more…]

Don’t miss V for Vendetta!

I don’t normally post on the weekends but last night I saw the film V for Vendetta and it blew me away. It is a brilliant political thriller with disturbing parallels to what is currently going on in the US. It kept me completely absorbed.

I’ll write more about it next week but this is just to urge people to see it before it ends its run.

Tiktaalik bridges the gap

As you can imagine, the world of science has been abuzz with the news yesterday of the release of a paper in the prestigious science journal Nature heralding the discovery of a 375-million-year-old transitional fossil between fish and land animals, which has been named Tiktaalik. The fossil seems to provide evidence for the key evolutionary idea that land animals evolved from fish.
[Read more…]

Precision in language

Some time ago, a commenter to this blog sent me a private email expressing this view:

Have you ever noticed people say “Do you believe in evolution?” just as you would ask “Do you believe in God?” as if both schools of thought have equal footing? I respect others’ religious beliefs as I realize I cannot disprove God just as anyone cannot prove His existence, but given the amount of evidence for evolution, shouldn’t we insist on asking “Do you accept evolution?”

It may just be semantics, but I feel that the latter wording carries an implied affirmation just as “Do you accept that 2+2=4?” carries a different meaning than “Do you believe 2+2=4?”

I guess the point I’m trying to make is that by stating something as a belief, it opens the debate to the possibility that something is untrue. While this may be fine for discussions of religion, shouldn’t the scientific community be more insistent that a theory well supported by physical evidence, such as evolution, is not up for debate?

It’s a good point. To be fair, scientists themselves are partly responsible for this confusion because we also say that we “believe” in this or that scientific theory, and one cannot blame the general public for picking up on that terminology. What is important to realize, though, is that the word ‘believe’ is being used by scientists in a different sense from the way it is used in religion.

The late and deeply lamented Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy, who called himself a “radical atheist,” puts it nicely (thanks to onegoodmove):

First of all I do not believe-that-there-is-not-a-god. I don’t see what belief has got to do with it. I believe or don’t believe my four-year old daughter when she tells me that she didn’t make that mess on the floor. I believe in justice and fair play (though I don’t know exactly how we achieve them, other than by continually trying against all possible odds of success). I also believe that England should enter the European Monetary Union. I am not remotely enough of an economist to argue the issue vigorously with someone who is, but what little I do know, reinforced with a hefty dollop of gut feeling, strongly suggests to me that it’s the right course. I could very easily turn out to be wrong, and I know that. These seem to me to be legitimate uses for the word believe. As a carapace for the protection of irrational notions from legitimate questions, however, I think that the word has a lot of mischief to answer for. So, I do not believe-that-there-is-no-god. I am, however, convinced that there is no god, which is a totally different stance. . .

There is such a thing as the burden of proof, and in the case of god, as in the case of the composition of the moon, this has shifted radically. God used to be the best explanation we’d got, and we’ve now got vastly better ones. God is no longer an explanation of anything, but has instead become something that would itself need an insurmountable amount of explaining…

Well, in history, even though the understanding of events, of cause and effect, is a matter of interpretation, and even though interpretation is in many ways a matter of opinion, nevertheless those opinions and interpretations are honed to within an inch of their lives in the withering crossfire of argument and counterargument, and those that are still standing are then subjected to a whole new round of challenges of fact and logic from the next generation of historians – and so on. All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others.

When someone says that they believe in god, they mean that they believe something in the absence of, or even counter to, the evidence, reason, and logic. When scientists say they believe a particular theory, they mean that they believe it because of the evidence, reason, and logic, and the more evidence there is, and the better the reasoning behind it, the more strongly they believe it. Scientists use the word ‘belief’ the way Adams says, as a kind of synonym for ‘convinced,’ because we know that no scientific theory can be proven with 100% certainty, and so we have to accept things even in the face of this remaining doubt. But the word ‘believe’ definitely does not carry the same meaning in the two contexts.

This can generate the kind of confusion the commenter warns about, but what can we do about it? One option is, as was suggested, to use different words, with scientists avoiding the word ‘believe.’ I would have agreed with this some years ago, but I am becoming increasingly doubtful that we can control the way that words are used.

For example, there was a time when I used to be on a crusade against the erroneous use of the word ‘unique’. The Oxford English Dictionary is pretty clear about what this word means:

  • Of which there is only one; one and no other; single, sole, solitary.
  • That is or forms the only one of its kind; having no like or equal; standing alone in comparison with others, freq. by reason of superior excellence; unequalled, unparalleled, unrivalled.
  • Formed or consisting of one or a single thing
  • A thing of which there is only one example, copy, or specimen; esp., in early use, a coin or medal of this class.
  • A thing, fact, or circumstance which by reason of exceptional or special qualities stands alone and is without equal or parallel in its kind.

It means, in short, one of a kind, so something is either unique or it is not. There are no in-betweens. And yet you often find people saying things like “quite unique” or “very unique” or “almost unique.” I used to try to correct this but have given up. Clearly, people in general think that unique means something like “rare,” and I don’t know that we can ever change this even if we all become annoying pedants, correcting people all the time, avoided at parties because of our pursuit of linguistic purity.

Some battles, such as the one over the word unique, are, I believe, lost for good, and I expect the OED to add the new meaning of ‘rare’ some time in the near future. It is a pity, because we would then be left with no word with the unique meaning of ‘unique’, but there we are. We would have to say something like ‘absolutely unique’ to convey the meaning once reserved for just ‘unique.’

In science too we often use words with precise operational meanings, while the same words are used in everyday language with much looser meanings. For example, in physics the word ‘velocity’ is defined operationally by the following situation: an object moves along a ruler and, at two points along its motion, you take ruler readings and clock readings, where the clocks are located at the points where the ruler readings are taken and have been previously synchronized. The velocity of the moving object is then the number you get when you take the difference between the two ruler readings and divide it by the difference between the two clock readings.
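To make that operational recipe concrete, here is a minimal sketch in Python; the function name and the numbers in the example are purely illustrative and not part of any standard physics convention or library.

    def velocity(x1, t1, x2, t2):
        """Average velocity from two ruler readings x1, x2 (meters) and the
        readings t1, t2 (seconds) of synchronized clocks at those two points."""
        return (x2 - x1) / (t2 - t1)

    # The object passes the 2.0 m mark when the clock there reads 1.0 s,
    # and the 8.0 m mark when the clock there reads 4.0 s.
    print(velocity(x1=2.0, t1=1.0, x2=8.0, t2=4.0))  # prints 2.0, i.e. 2.0 m/s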

Most people (especially sports commentators) have no idea of this precise meaning when they use the word velocity in everyday language, and often use the word synonymously with speed or, even worse, acceleration, although those concepts have different operational meanings. Even students who have taken physics courses find it hard to use the word in its strict operational sense.

Take, for another example, the word ‘theory’. By now, as a result of the intelligent design creationism (IDC) controversy, everyone should be aware that the way this word is used by scientists is quite different from its everyday use. In science, a theory is a powerful explanatory construct. Science depends crucially on its theories because they are the things that give it its predictive power. “There is nothing so practical as a good theory,” as Kurt Lewin famously said. But in everyday language, the word theory is used to mean ‘not factual,’ something that can be false or ignored.

I don’t think that we can solve this problem by putting constraints on how words can be used. English is a wonderful language precisely because it grows and evolves and trying to fix the meanings of words too rigidly would perhaps be stultifying. I now think that we need to change our tactics.

I think that once the meanings of words enter mainstream consciousness we will not be successful in trying to restrict their meanings beyond their generally accepted usage. What we can do is to make people aware that all words have varying meanings depending on the context, and that scientific and other academic contexts tend to require very precise meanings in order to minimize ambiguity.

Heidi Cool has a nice entry where she talks about the importance of being aware of when you are using specialized vocabulary, and the need to know your audience when speaking or writing, so that some of the pitfalls arising from the imprecise use of words can be avoided.

We have to realize though that despite our best efforts, we can never be sure that the meaning that we intend to convey by our words is the same as the meaning constructed in the minds of the reader or listener. Words always contain an inherent ambiguity that allows the ideas expressed by them to be interpreted differently.

I used to be surprised when people read the stuff I wrote and got a different meaning than I had intended. No longer. I now realize that there is always some residual ambiguity in words that cannot be overcome. While we can and should strive for maximum precision, we can never be totally unambiguous.

I agree with philosopher Karl Popper when he said, “It is impossible to speak in such a way that you cannot be misunderstood.” The best we can hope for is to have some sort of negotiated consensus on the meanings of ideas.

On writing-2: Why do we cite other people’s work?

In the previous post on this topic, I discussed the plagiarism case of Ben Domenech, who had lifted entire chunks of other people’s writings and had passed them off as his own.

How could he have done such a thing? After all, all high school and college students get the standard lecture on plagiarism and why it is bad. And even though Domenech was home-schooled, it seems unlikely that he thought this was acceptable practice. When he was confronted with his plagiarism, his defense was not one of surprise that it was considered wrong, but merely that he had been ‘young’ when he did it, that he had got permission from the author to use their words, or that the offending words had been inserted by his editors.

The cautionary lectures that students receive about plagiarism are usually cast in a moralistic way, that plagiarism is a form of stealing, that taking someone else’s words or ideas without proper attribution is as morally reprehensible as taking their money.

What is often overlooked in this kind of approach is that there are many other reasons why writers and academics cite other people’s works when appropriate. By focusing too much on the stealing aspect, we tend not to give students an important insight into how scholarship and research work.

Russ Hunt at St. Thomas University argues that writers cite others for a whole complex of reasons that have little to do with avoiding charges of plagiarism:

[P]ublished scholarly literature is full of examples of writers using the texts, words and ideas of others to serve their own immediate purposes. Here’s an example of the way two researchers opened their discussion of the context of their work in 1984:

To say that listeners attempt to construct points is not, however, to make clear just what sort of thing a ‘point’ actually is. Despite recent interest in the pragmatics of oral stories (Polanyi 1979, 1982; Robinson 1981), conversations (Schank et al. 1982), and narrative discourse generally (Prince 1983), definitions of point are hard to come by. Those that do exist are usually couched in negative terms: apparently it is easier to indicate what a point is not than to be clear about what it is. Perhaps the most memorable (negative) definition of point was that of Labov (1972: 366), who observed that a narrative without one is met with the “withering” rejoinder, “So what?” (Vipond & Hunt, 1984)

It is clear here that the motives of the writers do not include prevention of charges of plagiarism; moreover, it’s equally clear that they are not. . .attempting to “cite every piece of information that is not a) the result of your own research, or b) common knowledge.” What they are doing is more complex. The bouquet of citations offered in this paragraph is informing the reader that the writers know, and are comfortable with, the literature their article is addressing; they are moving to place their argument in an already existing written conversation about the pragmatics of stories; they are advertising to the readers of their article, likely to be interested in psychology or literature, that there is an area of inquiry — the sociology of discourse — that is relevant to studies in the psychology of literature; and they are establishing a tone of comfortable authority in that conversation by the acknowledgement of Labov’s contribution and by using his language –“withering” is picked out of Labov’s article because it is often cited as conveying the power of pointlessness to humiliate (I believe I speak with some authority for the authors’ motives, since I was one of them).

Scholars — writers generally — use citations for many things: they establish their own bona fides and currency, they advertise their alliances, they bring work to the attention of their reader, they assert ties of collegiality, they exemplify contending positions or define nuances of difference among competing theories or ideas. They do not use them to defend themselves against potential allegations of plagiarism.

The clearest difference between the way undergraduate students, writing essays, cite and quote and the way scholars do it in public is this: typically, the scholars are achieving something positive; the students are avoiding something negative. (my italics)

I think that Hunt has hit exactly the right note.

When you cite the works of others, you are strengthening your own argument because you are making them (and their allies) into your allies, and people who challenge what you say have to take on this entire army. When you cite reputable sources or credible authorities for facts or ideas, you become more credible because you are no longer alone and thus not easily dismissed, even if you personally are not famous or a recognized authority.

To be continued. . .

POST SCRIPT: It’s now Daylight Saving Time. Do you know where your spiritual plane is?

It seems like idiotic statements attributing natural events to supernatural causes are not restricted to Christian radical clerics like Pat Robertson. Some Sri Lankan Buddhist clergy are challenging him for the title of Religious Doofus.

Since Sri Lanka sits very close to the equator, the length of the day is the same all year round, so it does not need the ‘spring-forward-fall-back’ twice-yearly adjustment used in the US. Sri Lankan time used to be 5.5 hours ahead of Universal Time (UT), but in 1996 the government made a one-time shift to 6.5 hours in order to have sunset arrive later and save energy. The influential Buddhist clergy were not happy with the change, however, and the clocks were later adjusted again, as a compromise, to 6.0 hours ahead of UT. Now the government is thinking of going back to the original 5.5 hours.

Some of the country’s Buddhist clergy are rejoicing at the prospect of a change because they say Sri Lanka’s “old” time fitted better with their rituals.

They believe a decade living in the “wrong” time has upset the country’s natural order with terrible effect.

The Venerable Gnanawimala says the change moved the country to a spiritual plane 500 miles east of where it should be.

“After this change I feel that many troubles have been caused to Sri Lanka. Tsunamis and other natural disasters have been taking place,” he says.

This is what happens when you mix religion and the state. You now have to worry about what your actions are doing to the longitudinal coordinates of your nation’s spiritual plane.

No more daft women!

Evan Hunter, who was the screenwriter on Alfred Hitchcock’s 1963 film The Birds, recalled an incident that occurred when he was discussing the screenplay with the director.

I don’t know if you recall the movie. There’s a scene where after this massive bird attack on the house Mitch, the male character, is asleep in a chair and Melanie hears something. She takes a flashlight and she goes up to investigate, and this leads to the big scene in the attic where all the birds attack her. I was telling [Hitchcock] about this scene and he was listening very intently, and then he said, “Let me see if I understand this correctly. There has been a massive attack on the house and they have boarded it up and Mitch is asleep and she hears a sound and she goes to investigate?” I said, “Well, yes,” and he said, “Is she daft? Why doesn’t she wake him up?”

I remembered this story when I was watching the film The Interpreter with Nicole Kidman and Sean Penn. The Kidman character accidentally overhears something at the UN that puts her life at risk. After she complains to government agent Penn that no one seems to be bothered about protecting her from harm, Penn puts her on round-the-clock surveillance. So then what does Kidman do? She sneaks around, giving the slip to the very people assigned to protect her, and refuses to tell Penn where she went, to whom she spoke, and about what, putting herself and other people at risk – some of whom even die because of her actions. Hitchcock would have said, “Is she daft?”

This is one of my pet peeves about films, where the female character insists on doing something incredibly stupid that puts her and other people at peril. Surely in this day and age we have gone beyond the stale plot device of otherwise smart women behaving stupidly in order to create drama? Surely writers have more imagination than that? Do directors really think that viewers won’t notice how absurd that is?

According to Hunter, Hitchcock was always exploring the motivations of characters, trying to make their actions plausible. Hunter says:

[Hitchcock] would ask surprising questions. I would be in the middle of telling the story so far and he would say, “Has she called her father yet?” I’d say, “What?” “The girl, has she called her father?” And I’d say, “No.” “Well, she’s been away from San Francisco overnight. Does he know where she is? Has she called to tell him she’s staying in this town?” I said, “No.” And he said, “Don’t you think she should call him?” I said, “Yes.” “You know it’s not a difficult thing to have a person pick up the phone.” Questions like that.

(Incidentally, the above link has three screenwriters – Arthur Laurents, who wrote Rope (1948), Joseph Stefano, who wrote Psycho (1960), and Evan Hunter – reminiscing about working with Hitchcock. It is a fascinating glimpse behind the scenes of how a great director envisages and sets about creating films. The last quote actually reads in the original: “Yes, you know it’s not a difficult thing to have a person pick up the phone.” I changed it because my version makes more sense, and the original is a verbatim transcript of a panel discussion, in which such punctuation errors can easily occur.)

More generally, I hate it when characters in films and books behave in ways that are unbelievable. The problem is not with an implausible premise, which is often necessary to create a central core for the story. I can even accept the violation of a few laws of physics. For example, I can accept the premise of Superman that a baby with super powers (but susceptible to kryptonite) arrives on Earth from another planet and is adopted by a family and needs to keep his identity secret. I can accept of Batman that a millionaire like Bruce Wayne adopts a secret identity in order to fight crime.

What I cannot stand is when they and the other people act implausibly, when the stories built on this premise have logical holes that you can drive a Batmobile through. The Batmobile, for example, is a flashy vehicle, to say the least, easily picked out in traffic. And yet nobody in Gotham thinks of following it back to the Batcave to see who this mysterious hero is. Is the entire population of that city daft?

And how exactly is the Bat-Signal that the Police Commissioner lights up the sky with supposed to work? You don’t need a physics degree to realize that shining a light, however bright, into the sky is not going to create a sharp image there. And what if it’s daytime? And if there are no clouds? (It’s been a long time since I read these comics. Maybe the later editions fixed these problems. But even as a child these things annoyed me.)

And don’t get me started on Spiderman going in and out of his apartment window in a building in the middle of a big city in broad daylight without anyone noticing.

As a fan of films, it really bugs me when filmmakers don’t take the trouble to write plots that make sense, and have characters who don’t behave the way that you would expect normal people to behave. How hard can it be to ensure this, especially when you have the budget to hire writers to create believable characters and a plausible storyline?

If any directors are reading this, I am willing to offer my services to identify and fix plot holes.

So please, no more daft women! No more ditzy damsels in distress! No more Perils of Pauline!

POST SCRIPT: CSA: Confederate States of America

I saw this film last week (see the post script to an earlier posting), just before it ended its very short run in Cleveland. It looks at what history would have been like if the South had won the Civil War. Imagine, if you will, an America very much like what we have now except that owning black slaves is as commonplace as owning a dishwasher.

What was troubling is that although this is an imagined alternate history presented in a faux documentary format, much of it is plausible based on what we have now. What was most disturbing for me was seeing in the film racist images and acts that I thought were the screenwriter’s over-the-top imaginings of what might have happened in this alternate history, and then finding out that they actually happened in the real history.

Although the film is a clever satire in the style of This is Spinal Tap, I could not really laugh because the topic itself is so appalling. It is easy to laugh at the preening and pretensions of a rock band. It is hard to laugh at people in shackles.

But the film was well worth seeing, disturbing though it was.

On writing-1: Plagiarism at the Washington Post

If you blinked a couple of weeks ago, you might have missed the meteor that was the rise and fall of the career of Ben Domenech as a blogger for WashingtonPost.com.

This online version of the newspaper is apparently managed independently of the print edition and has its own executive editor, Jim Brady. For reasons that are not wholly clear, Brady decided that he needed to hire a “conservative” blogger for the website.

The problem with this rationale for the hiring was that no “liberal” counterpart blogger existed at the paper. They did have a popular blogger in Dan Froomkin, someone with a journalistic background, who wrote about politics for the Post and who had on occasion been critical of the Bush White House. As I have written earlier, Glenn Greenwald has pointed out that anything but unwavering loyalty to Bush has become the basis for identifying someone as liberal, and maybe Brady had internalized this critique, prompting him to hire someone who could be counted upon to support Bush in all his actions.

For reasons that are even more obscure, rather than choose someone who had serious journalistic credentials for this new column, Brady selected the untested 24-year-old Ben Domenech. It is true that Domenech was something of a boy wonder, at least on paper. He had been home-schooled by his affluent and well-connected Republican family. He then went to William and Mary and wrote for their student newspaper The Flat Hat. He dropped out of college before graduating and co-founded a conservative website called Redstate, where he wrote under the pseudonym Augustine.

His father was a Bush political appointee and his new online column for the Washington Post (called Red America) said in its inaugural posting on March 21 that young Ben “was sworn in as the youngest political appointee of President George W. Bush. Following a year as a speechwriter for HHS Secretary Tommy Thompson and two as the chief speechwriter for Texas Senator John Cornyn, Ben is now a book editor for Regnery Publishing, where he has edited multiple bestsellers and books by Michelle Malkin, Ramesh Ponnuru, and Hugh Hewitt.”

Not bad for a 24-year-old without a college degree. And his bio lists even more accomplishments. But getting his own column at WashingtonPost.com was the peak. Soon after that, things started going downhill very rapidly.

His decline began when bloggers looked into his writings and found that, as Augustine, he had written a column on the day of Coretta Scott King’s funeral calling her a Communist. This annoyed a lot of people, who then started looking more closely at his other writings. It was then that someone discovered that he had plagiarized. And the plagiarism was not subtle. Take for example this excerpt from his review of the film Bringing out the Dead:

Instead of allowing for the incredible nuances that Cage always brings to his performances, the character of Frank sews it all up for him.

But there are those moments that allow Cage to do what he does best. When he’s trying to revive Mary’s father, the man’s family fanned out around him in the living room in frozen semi-circle, he blurts out, “Do you have any music?”

Now compare it with an earlier review posted on Salon.com:

Instead of allowing for the incredible nuance that Cage always brings to his performances, the character of Frank sews it all up for him. . . But there are those moments that allow Cage to do what he does best. When he’s trying to revive Mary’s father, the man’s family fanned out around him in the living room in frozen semi-circle, he blurts out, “Do you have any music?”

Or this sampling from P. J. O’Rourke’s book Modern Manners, which also found its way into Domenech’s columns:

O’Rourke, p.176: Office Christmas parties • Wine-tasting parties • Book-publishing parties • Parties with themes, such as “Las Vegas Nite” or “Waikiki Whoopee” • Parties at which anyone is wearing a blue velvet tuxedo jacket

BenDom: Christmas parties. Wine tasting parties. Book publishing parties. Parties with themes, such as “Las Vegas Nite” or “Waikiki Whoopee.” Parties at which anyone is wearing a blue velvet tuxedo jacket.

O’Rourke: It’s not a real party if it doesn’t end in an orgy or a food fight. • All your friends should still be there when you come to in the morning.

BenDom: It’s not a real party if it doesn’t end in an orgy or a food fight. All your friends should still be there when you come to in the morning.

These are not the kinds of accidental plagiarism that anyone can fall prey to, where a turn of phrase that appealed to you when you read it long ago comes out when you are writing, without your remembering that you got it from someone else. These examples are undoubtedly deliberate cut-and-paste jobs.

Once the charges of plagiarism were seen to have some credibility, many people went to Google and the floodgates were opened, Kaloogian-style, with bloggers all over poring over his writings. Within the space of three days a torrent of further examples of plagiarism poured out. These new allegations dated back to his writings at his college newspaper and then later for National Review Online, and Domenech was found to have lifted material from Salon and even from National Review Online, the latter being the same publication for which he was writing, which adds the sin of ingratitude to the dishonesty.

On March 24, just three days after starting his Washington Post column, Ben Domenech resigned under pressure. Soon after, he also resigned as book editor at Regnery.

What can we learn from this? One lesson, seemingly, is that people can get away with plagiarism for a short while, especially if they are writing in obscurity for little-known publications. While he was writing for his college newspaper and even for his own website, no one cared to look closely into his work. Even his future employers at WashingtonPost.com did not seem to have checked him out carefully. Apparently his well-connected family and sterling Bush loyalty were enough to satisfy them that he was a good addition to their masthead.

But as soon as a writer becomes high profile, the chances are very high these days that any plagiarism will come to light.

At one level, this is a familiar cautionary tale, reminding everyone to cite other people’s work when using it. For us in the academic world, where plagiarism is a big no-no, the reasons for citing are not just that there are high penalties if you get caught not doing it. The more important reasons arise from the very nature of scholarly academic activity, which I shall look at in a future posting.

To be continued. . .

Changing notions of death-4: Implications for animals

(See part 1, part 2 and part 3 of this series.)

If asked, any one of us would say that we value life, that we consider it precious and not to be taken lightly. While the phrase “valuing the culture of life” seems to have been co-opted by those who are specifically opposed to abortion, the general idea that it encapsulates, that life should not be taken casually or at all, is one that all of us would subscribe to.

But of course there are contradictions. People who say they value life often see no problem with supporting the death penalty. Another hypocrisy lies with those who support killing in wars, even of civilians, and even in large numbers. We try to rationalize this by saying that civilians are killed inadvertently, but that is a false argument. Civilians are inevitably killed in wars, often deliberately, and we often do nothing to condemn it when it is done by ‘our side.’ To support wars is to support killing and absolve killers, however much we try to sugar-coat this unpleasant fact. As Voltaire said, “It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets.”

In his lecture, Peter Singer pointed out that killing and eating animals while opposing the withdrawal of life support from those in a persistent vegetative state poses an ethical problem for people who say that they value a “culture of life.”

He gave as an example the fact that while the 3,000 or so victims of September 11, 2001 were deeply mourned, no one mourned the millions of chickens that were killed on that same day, and on every day before and since. Why do we not mourn them the same way?

If we define death as being heart dead or brain dead, then the chickens are as alive as any of us. Even when we lower the bar to thinking of someone in a persistent vegetative state as being ‘effectively dead’, that still does not get us off the hook since, as Singer argued, chickens and other animals have higher levels of consciousness than people in a persistent vegetative state. Free-range chickens seem to show signs of happiness, curiosity, anxiety, fear, and the sense of self-awareness that, if present in humans, would definitely bar us from killing them. If that is the case, and if we oppose the withdrawal of life support systems even from those in a persistent vegetative state, then how can we justify killing chickens, or any other animal for that matter?

He posed the question of why the killing of human beings is deplored but that of chickens is not. He said that appealing to species chauvinism (“We are human, and so are justified in valuing human life over non-human animal life.”) was not really an ethically justifiable defense, though many people used it.

After all, if we allowed that particular chauvinist line of defense, where do we draw the line? What if I say that because I am male, I am justified in thinking that the lives of women are worth less than those of men? We would reject that line of argument as rank sexism. What if I say that because I am brown-skinned, I am justified in treating non-brown people as inferior? We would reject that argument as rank racism. So why should we think that the argument “I am human, so I am justified in valuing human life over animal life” is acceptable?

Singer’s point was that as soon as we shift our definition of death from the complete lack of heart or brain function to judgments about the nature or level of the consciousness involved, we have gone into ethically tricky territory for those non-vegetarians who argue that, because of belief in a “culture of life,” human beings must be kept alive at all costs. You cannot argue that people in a persistent vegetative state should be kept on life support while also arguing that perfectly healthy animals can be killed.

People of certain religious traditions (like Christians, Jews, and Muslims) can perhaps find justification for this discrepant behavior by appealing to their religious beliefs, which include species chauvinism as part of their doctrines. In the view of these religions, humans are specially favored by god and thus fundamentally different from, and superior to, other animals, so valuing human life and disregarding non-human animal life is allowable. It is noteworthy that Buddhism and Hinduism do not assert such a species-chauvinistic attitude. They seem to treat human and non-human animals on an equal footing, and vegetarianism is advocated by both religions.

But if we leave out religious sanction and argue on strictly ethical grounds, it becomes hard to justify opposing the withdrawal of life support systems from people who are in a persistent vegetative state, on the grounds that such people are still ‘alive’, and square that with the killing of healthy animals for food, as we routinely do.

Singer made a cogent argument that none of us can really ethically justify the killing of animals for food, when it is not necessary for survival. Singer himself is a vegetarian.

I am not sure whether Singer was able to resolve some of the ethical issues of what constitutes death by the end of his talk, since I had to leave before it ended. But his ideas were very thought-provoking.

POST SCRIPT: Juggling

Good jugglers are amazing. For a fine example of this art, go here and then click on “Watch Chris Bliss.”