Methodological naturalism

If our car developed a strange and disturbing noise, we would take it to a mechanic to diagnose the problem. If, after trying out just one or two ideas and failing, the mechanic threw up her hands and said that she gave up because the cause must be something mysterious and inexplicable, we would very likely switch to another mechanic.

We would do the same thing with a plumber who gave up on trying to find the source of a leak or a doctor who gave up trying to find the cause of an acute pain after merely ruling out gas and muscle pulls.

We want each of these people to keep investigating, to try to find the reason for the problem and not give up until they have solved it. If any one of them told us that the cause was some supernatural power, we would quickly dump that person and find a new one, even if we ourselves were religious and preferred to have religious people as our doctors and plumbers and mechanics.

The name game

When I first started getting interested in the so-called ‘intelligent design creationist’ (IDC) movement, I noticed that they were very careful about terminology and insisted on using specific terms.

For example, IDC people would divide science into two categories that they called ‘empirical science’ and ‘origins science.’ Empirical science was defined by them as the kind of science where you could do experiments in laboratories or in the field. Origins science dealt with the origins of things, events that had happened long ago. So theories of cosmology, astronomy and, most importantly (for them), the evolution of life came under the heading of ‘origins science.’

They were also very insistent on avoiding the use of the terms ‘creationist’ and ‘God’ and pushed for the use of the term ‘design’ which they used to mean things that were not randomly created. ‘Intelligent design’ was used by them to denote design by a human-like intelligence and not by (say) a computer program.

Since I came from the scientific world, where what something is called is not important and it is the operational definition that matters, I was initially willing to go along with their terminology. The problem was that I discovered that, rather than the names serving as harmless labels for underlying operational definitions, as is the case in science, in the case of intelligent design no operational definition was forthcoming. Instead, the names themselves were used as arguments, so that conceding to them the choice of names meant conceding a substantial portion of the argument.

Let me illustrate with some examples. Since they did not use the name God in their literature, they could claim that theirs was not a religious theory (“See, nowhere do we use ‘God’ in our work”). Also, since they did not use the name ‘creationist,’ they could dissociate themselves from the young-Earth creationist (YEC) movement and the old-Earth creationist (OEC) movement, both of which explicitly mentioned god in their literature and had already been struck down by the courts as being religious in nature and thus inappropriate for inclusion in science classes. The YEC and OEC movements were also embarrassing to the IDC people in that they interpreted the Bible literally (to differing degrees) and thus alienated a lot of potential allies.

This attention to words and language has been part of a carefully thought-out strategy. In testimony in the Dover, PA case, it was shown that in the book Of Pandas and People, which students were explicitly told to read as an ‘antidote’ to evolution, early drafts used the word ‘creationism’ but later versions replaced it with ‘intelligent design.’ This substitution enables the intelligent design people to claim that their theory does not involve god, and they can sustain that claim only because they avoid providing an operational definition of intelligent design or of an intelligent designer. If they provided one, it would be hard to see how that operational definition was not functionally equivalent to an operational definition of god.

Robert T. Pennock in his book Tower of Babel points out that all these theories are variations of creationism, and he creates a classification scheme that lists them as YEC, OEC, and IDC (for intelligent design creationism). This is the terminology that I have adopted and will use henceforth so that the relationship of intelligent design to creationism is kept explicit, and IDC people cannot hide their creationist links.

The use of the empirical/origins science distinction is another example of this verbal sleight of hand. By dividing science in this way, and by putting evolution into the origins science category, they then try to imply that evolution is not an empirical theory! Since the word ‘empirical’ implies data-driven and subject to the normal rules of scientific investigation, casting evolution as ‘origins science’ is part of an attempt by IDC people to drive a wedge between evolution and other theories of science and make it seem less ‘scientific.’

The IDC people also assert that the way we evaluate theories is different for the two categories. They assert that ‘empirical science’ can be tested experimentally but that ‘origins science’ cannot. This assertion allows them to claim that competing theories of ‘origins science’ should be evaluated by seeing which theory ‘explains’ things better.

I have already shown that using ‘better’ explanations as a yardstick for measuring the quality of theories leads one down a bizarre path where the ‘best’ explanation could well be the Raelian theory (or ET-IDC using Pennock’s classification scheme). But it is important to see that the reason the IDC people can even make such a claim is their artful attempt to divide science into ‘empirical’ and ‘origins.’

The fact is that all science is empirical. All scientific theories ultimately relate to data and predictions. If one wants to make distinctions, one can say that there are historical sciences (evolution, cosmology, astronomy) that deal with one-time events, and non-historical sciences where controlled experiments can be done in laboratories. But both are empirical. It is just that in the historical sciences, the data already exists and we have to look for it rather than create it.

But IDC people don’t like to concede that all science is empirical, since that would mean they would have to provide data and make predictions for their own theory just like any other empirical theory, and they have been unable to do so. This is why it is important that the scientific community not concede them the right to categorize the different kinds of science in the way they wish, because doing so enables them to use words to avoid the hard questions.

The different use of terminology in scientific and political debates

I would like to revisit the question addressed earlier of why scientists are at a disadvantage when they try to debate in political forums, like those involving so-called intelligent design creationism. This time the focus is on how terminology is introduced and used.

Scientists often need to introduce new terms into the vocabulary to accommodate a new concept, or seek to use a familiar everyday term or phrase with a more precise technical meaning.

The scientist who introduces a new concept usually has the freedom to name it, and most of the time the community of scientists will go along with the name. The reasons for the name vary and can sometimes have whimsical origins. The physics term ‘quark’ for subnuclear particles, for instance, was taken from the line “three quarks for Muster Mark” in James Joyce’s Finnegans Wake, and was invoked because it was thought at the time that there were only three such particles making up the proton and the neutron. The proton consisted of two ‘up’ quarks and one ‘down’ quark, while the neutron consisted of one ‘up’ quark and two ‘down’ quarks. But then other particles were discovered that had unusual properties; these were dubbed ‘strange’ particles, and so a third type of quark, the ‘strange’ quark, was postulated to explain their properties.

Later a fourth type of quark was required and this was called the ‘charm’ quark. Not all terminology sticks, however. When a fifth and a sixth type of quark were proposed, initial attempts to name them ‘truth’ and ‘beauty’ seemed to most physicists to have crossed the line of acceptable whimsicality, and the names of those two quarks settled into the more mundane ‘top’ and ‘bottom.’

Although there are a variety of reasons for the names scientists select for new concepts, the success or failure of the ideas associated with a concept does not hinge on the choice of the name. This is because science concepts are more than names; they also have ‘operational definitions,’ and it is these definitions that are important. Many non-scientists do not understand the importance that scientists attach to operational definitions.

For example, if you ask a non-physicist to define ‘mass’, you will usually get some variation of ‘it is the amount of matter present in an object.’ This intuitive definition of mass may give a serviceable understanding of the concept that is adequate for general use but it is too vague for scientific purposes. It could, after all, just as well serve as a definition of volume. A definition that is so flexible that it can apply to two distinct concepts has no scientific value.

But an operational definition of mass is much more precise and usually involves describing a series of operations that enable one to measure the quantity. For mass, it might involve something like: “Take an equal arm balance and balance the arms with nothing on the pans. Then place the object on one pan and place standardized units of mass on the other pan until balance is achieved again. The number of standardized units required for this purpose is the mass of the object.”

For volume, the operational definition might be: “Take a calibrated measuring cylinder with water up to a certain level and note the level. Then immerse the object in the water and measure the new level of the water. The difference in the two level readings is the volume of the object.” We thus see that, unlike the case with intuitive definitions, there is a clear difference between the operational definitions of mass and volume.

It is possible for a concept to have more than one operational definition. For example, the mass of an object could also be defined operationally by placing the object on a triple beam balance, moving the weights around until balance is achieved, and then taking the reading.

It does not matter if a concept has more than one operational definition. In fact that is usually the case. The point is that the different operational definitions of mass can be shown to be functionally equivalent, so that you can use any one of them. If you actually want the mass of an object, all the various operational definitions would result in the same numerical value, so that mass is an unambiguous physical concept.
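To make the idea of functionally equivalent operational definitions concrete, here is a minimal sketch in Python. It is a toy simulation, not a real measurement: the object, its mass, the instrument resolutions, and the agreement tolerance are all invented for illustration. The point is simply that two different measurement procedures applied to the same object should return the same value to within their combined uncertainty.

```python
import random

TRUE_MASS_G = 127.3  # hypothetical object; in a real measurement this value is unknown

def mass_by_equal_arm_balance(smallest_standard_g=0.1):
    """Operational definition 1: add standardized masses to the other pan until balance.
    Modeled here as the true mass rounded to the smallest standard unit available."""
    return round(TRUE_MASS_G / smallest_standard_g) * smallest_standard_g

def mass_by_triple_beam_balance(reading_error_g=0.05):
    """Operational definition 2: slide calibrated riders until the beam balances.
    Modeled here as the true mass plus a small random reading error."""
    return TRUE_MASS_G + random.uniform(-reading_error_g, reading_error_g)

m1 = mass_by_equal_arm_balance()
m2 = mass_by_triple_beam_balance()
print(f"equal-arm balance: {m1:.2f} g, triple-beam balance: {m2:.2f} g")

# Functional equivalence: the two procedures agree within their combined uncertainty,
# so either one can serve as 'the' operational definition of mass.
assert abs(m1 - m2) <= 0.1 + 0.05
```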

Such operational definitions enable scientists to avoid confusion and quickly agree on what names like mass and volume mean. The names themselves tend to be value neutral and by themselves do not advance an argument. Scientists tend to not challenge the ways things get named because it is the underlying operational definition that is crucial to scientific arguments. Scientists are quite content to go along with whatever names others give to concepts, because they rightly see the name as irrelevant to the merits of the debate.

This is quite different from what goes on in the political arena. There, what you call something can be a crucial factor in whether the argument is won or lost. Take, for example, what was known as the ‘estate tax.’ This is a tax on the estates of very wealthy people after they die. It affected only a tiny minority of people and was uncontroversial for a long time. The term ‘estate tax’ is fairly descriptive because we associate the word ‘estate’ with the wealth passed on by rich people.

But there were interest groups who wanted to repeal this tax and one of the ways they achieved this goal was by renaming the tax as a ‘death tax,’ which seemed to imply that you were being taxed for dying. By getting this new terminology accepted in the debate to replace the old term, they have succeeded in getting quite considerable popular support for the removal of a very egalitarian tax, even though few of the people supporting the repeal would have estates large enough to worry about paying the tax.

Similarly, the Bush administration at one time tried to get the media to use the term ‘homicide bombers’ instead of ‘suicide bombers.’ Perhaps they were thinking that ‘suicide bomber’ would remind people that the people doing this were making a great personal sacrifice, and that this raised awkward questions about the depth of their determination to remove US troops from their country and the reasons behind that determination. But that effort at renaming went nowhere because the old name was an accurate description of the person, while the new name was seen as redundant and conveying less information.

In political battles, winning the name game is half the battle, because accepting the name preferred by your opponent often means tacitly conceding the high ground of the argument and playing defense. So the habit scientists have of conceding the name and working with whatever name others come up with is not a good strategy when they enter the political arena. But it is not clear that all scientists have realized this and know when to shift gears.

In the next posting, I will examine how IDC advocates have used this casual approach to names to get an edge in the public relations wars, and how scientists should fight back.

Why scientists are good at arguing and bad at debating – 2

In an earlier posting on this topic, I argued that one reason scientists fare poorly in public political-type debates or on TV talk shows is that the style of argumentation they encounter in those venues is very different from the style they have become expert in through their academic discourse. If you are not prepared for this different style, and do not take steps to counter it, you can get blind-sided and come off looking poorly. This is why, even though the scientific case against so-called ‘intelligent design’ (ID) is so strong as to justify the phrase ‘slam dunk,’ the popular perception does not match it: scientists who debate ID proponents often do not realize that they are no longer debating according to the rules of scientific argumentation.

Misuse of scientific arguments

When I was in my first or second year of college, a friend of mine who belonged to a fundamentalist Christian church in Sri Lanka said that he had heard of a convincing scientific proof against the theory of evolution. He said the proof centered on the concept of entropy. I had already heard of the term entropy at that time, but I definitely did not understand the concept, since I had not as yet studied thermodynamics in any detail.

Anyway, my friend told me that there was this law of physics that said that the total entropy of a system always had to increase. He also said that the entropy of a system was inversely related to the amount of order and complexity in the system, so that the greater the order, the lower the entropy. Since I did not have any reason (or desire) to challenge my friend, I accepted those premises.

Then came the killer conclusion. Since it was manifestly clear that the theory of evolution implied increasing order (under the theory, biological systems were becoming more diversified, complex, and organized from their highly disordered primeval-soup beginnings), the entropy of the Earth must be decreasing. This violated the law of increasing entropy. Hence evolution must be false.

It was a pretty good argument, I thought at that time. But in a year or two, as I learned more about entropy, that argument fell apart. The catch is that the law of increasing entropy (also known as the second law of thermodynamics) applies to closed, isolated systems only, i.e., systems that have no interaction with any other system. The only really isolated system we have is the entire universe and the law is believed to apply strictly to it.

For any other system, we have to make sure that it is isolated (at least to a good approximation) before we apply the law to it, and this is where my friend’s argument breaks down. The Earth is definitely not a closed system. It continuously absorbs and radiates energy. In particular, it gains energy from the Sun and radiates energy into empty space, and it is this exchange of energy that is the engine of biological growth.

So nothing can be inferred from the entropy of the Earth alone. You have to consider the entire system of the Sun, the Earth, and the rest of the universe, and when you do you find a net increase in the entropy of the entire closed system. So the second law of thermodynamics is not violated.
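A standard back-of-the-envelope calculation makes this concrete (the round-number temperatures here are my own illustrative values, not figures from the original argument). When an amount of heat Q leaves the Sun’s surface at roughly 5800 K and is eventually absorbed and re-radiated by the Earth at roughly 290 K, the total change in entropy is

\[
\Delta S_{\text{total}} \approx -\frac{Q}{T_{\text{Sun}}} + \frac{Q}{T_{\text{Earth}}}
= Q\left(\frac{1}{290\,\text{K}} - \frac{1}{5800\,\text{K}}\right) > 0,
\]

so the entropy of the combined system goes up even when some local process on Earth, such as a growing organism, lowers its own entropy by a smaller amount.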

You can have decreased entropy in one part of a system provided the entropy increases by more than that amount in another part. As an analogy, consider a sock drawer in which you have black and brown socks randomly mixed together. This is a state of low order and hence high entropy. If I now sort the socks so that all the black socks are on one side of the drawer and all the brown on the other side, then the sock drawer has gone from a lower to a higher state of order, and hence from a higher to a lower state of entropy. Is this a violation of the second law? No, because that reading ignores the fact that I was part of the system. I had to use up energy to sort the socks, and in that process my entropy increased by more than the decrease in entropy of the sock drawer, so that there was a net increase in entropy of the combined system (sock drawer + me). Strictly speaking, I was also in contact with the rest of the room, since I was absorbing and radiating energy, breathing, etc., so to get an even better approximation to a closed system you would have to take the entropy of the room into account as well.

This is why physicists believe that after the Sun eventually burns up all its nuclear fuel and ceases to shine, the Earth will inevitably fall into disorder, assuming that we haven’t destroyed the planet ourselves by then. (As an aside, Robert T. Pennock in his book Tower of Babel says that some creationists believe that God created the second law, with its increasing disorder, as part of his punishment for Adam and Eve’s fall from grace.)

Once I understood better what entropy was all about, that was the end of the entropy argument against evolution, at least as far as I was concerned. Scientists outside physics also generally caught on to the fact that the entropy argument was being used fraudulently against evolution and were able to debunk it whenever it came up, so that nowadays one rarely hears it. One still occasionally comes across the argument used in this fallacious manner, however, and it may still have power over the scientifically naive.

But even if the entropy argument itself has largely disappeared, other ‘scientific proofs’ against evolution and for the existence of god have arisen in the wake of so-called intelligent design (ID) and I will look at those arguments in future postings.

Science and trust – 3: The Sokal affair

In 1996, NYU physicist Alan Sokal published an article titled Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity in the journal Social Text, a publication that deals with the sociology of science. The same day that the journal appeared, Sokal published another article in the magazine Lingua Franca (which stopped publishing in 2001) exposing the Social Text piece as a hoax. He said that he had mimicked the dense and obscure style of some branches of the arts and humanities (especially the post-modernist philosophers and the area known as cultural studies), loaded the paper with citations to well-known people in the field, and asserted conclusions he thought would be pleasing to the editors.

A nice Wikipedia article on this hoax explains Sokal’s rationale for it and the response by the embarrassed editors of Social Text:

In their defense, the editors of Social Text stated that they believed that the article “was the earnest attempt of a professional scientist to seek some kind of affirmation from postmodern philosophy for developments in his field” and that “its status as parody does not alter substantially our interest in the piece itself as a symptomatic document.” They charged Sokal with unethical behavior and suggested they only published the article as it was because Sokal refused to make changes they suggested and it was of relevance to a special issue they happened to be preparing.

Sokal argued that this was the whole point: the journal published articles not on the basis of whether they were correct or made sense, but simply because of who wrote them and how they sounded. [He said] “Sociology of science, at its best, has done much to clarify these issues. But sloppy sociology, like sloppy science, is useless or even counterproductive.”… The controversy also had implications for peer review. Social Text had dispensed with peer review, hoping that this would promote more original, less conventional research, and trusted authors of prospective articles to guarantee the academic integrity of their work. Social Text’s editors argue that, in this context, Sokal’s work constituted a deliberate fraud and betrayal of that trust.

To my mind, this episode does not reflect well on any of the parties involved. First, if the editors of Social Text decided to dispense with peer review for the (perfectly acceptable) reasons given, then they should have had on their editorial board a group of people diverse enough to make judgments about submissions. They clearly did not in this case. Either the editors did not have the competence to judge the quality of the paper or they did not give it enough scrutiny.

It is also the case that in academia there is an undesirable element of ‘physics envy,’ and the editors were clearly thrilled that a real physicist from a reputable department was publishing in their social science journal, presumably giving it greater credibility. It was probably this that enabled Sokal to persuade them to publish his paper despite some initial reservations they had about it.

On the other hand, it was not good of Sokal to take advantage of the absence of peer review to get his article published. The elimination of peer review imposes a greater obligation on authors to be self-critical and scrupulous and not to take advantage of such journals, because the editors are deliberately making themselves more vulnerable.

It is said that if you are invited into the home of a friend and steal a small amount of money that is lying around, you are committing a worse moral offense than if you break into your friend’s safe and steal a very much larger amount, because it is not the magnitude of the amount stolen that is the measure of the offense but the degree to which trust has been violated.

If Sokal had not exposed his own hoax, what would most likely have happened is that the article would either have been ignored (since it had no content, most readers would simply have been baffled by it) or, at some later time, have been exposed as a fraud by a more discerning reader. It would not have done any harm to the field itself, just like most scientific errors or fraud.

So what did the Sokal hoax accomplish? Unlike ‘hoaxes’ that are part of a research study of the processes of research and publication (see my earlier post for examples of this), the main result of this one was to make the editors of Social Text look foolish and incompetent. There was no other benefit that I can see. Sokal himself is aware of the ethical issues involved, because he says: “Of course, I’m not oblivious to the ethical issues involved in my rather unorthodox experiment. Professional communities operate largely on trust; deception undercuts that trust” and tries to explain why it was justified.

I don’t think that his reasons were enough to justify playing the trick. I believe that trust among researchers is a valuable quality and I would hate to see researchers squandering it.

POST SCRIPT: Tracy Kidder to speak at Case

Tracy Kidder, the author of the biography Mountains Beyond Mountains: The Quest of Dr. Paul Farmer, a Man Who Would Cure the World, which I wrote about earlier, is the speaker at the Fall Convocation on Thursday, September 1 at 4:30 pm in Severance Hall.

The event is free and open to the public but prior registration is required. For more information and registration, go here.

Science and trust – 2

As I discussed in an earlier posting, trust plays an important role in science. It is hard to imagine science functioning as well as it does if everyone started being suspicious of each other. I have seen disturbing signs of this recently in the field of medicine. Increasingly, academic research on new drugs is being funded by private pharmaceutical companies that have a vested interest in the results coming out in favor of whatever drugs they are trying to market. Because they control the flow of money, they can exert subtle and not-so-subtle pressure on researchers to manipulate the results. This can raise suspicions about the credibility of the scientists who do this kind of sponsored research.

Science and trust

My first scientific paper involved correcting an error made by others in an earlier paper published on the same topic. The error was a very simple one (a plus sign had been replaced by a minus sign) but had been buried in a complicated calculation that made it hard to detect. However, the consequences of the error were quite significant and had caused some puzzlement amongst the physicists in that subfield.

Ironically, some years later I too made a sign error in a published paper and my error was pointed out by someone else.

This kind of mistake and correction happens in science. Scientists are generally cautious and careful (otherwise they cease to be taken seriously by their peers) but are not infallible. And when they make a mistake, they are corrected by their peers, either in print or in private, and they move on. It is almost invariably assumed that the error was an honest mistake, not an attempt to cheat. Scientists trust each other.

In fact, the whole enterprise of science is based on trust and could not function otherwise. This does not mean that there are no checks in the process but those checks are not designed to catch fraud.

The process of peer review is one such measure. In this process, once the editors of a journal receive a submission, they send it out to (usually) two or more scientists who work in the same field to review the paper and recommend one of three actions to the editors – accept, reject, or make revisions.

I have had my papers reviewed by anonymous peers and have reviewed the papers of others. The point of the review is to check for clarity and completeness and proper methodology. The reviewer does not usually try to reproduce the paper’s results but instead tries to get a feel for whether the paper’s conclusions make sense and are consistent with other information. The reviewer assumes that the authors are honest, that the data given is correct, and that the calculations the authors say they made using the data have been done with due care.

So how do errors and fraud get caught? This usually happens when another scientist wants to build on previously published work and extend it or take it in a new direction. That scientist usually begins by trying to reproduce the results of the earlier work, and it is because of this that errors usually get detected. This is why reviewers try to make sure that all the information necessary to reproduce the results is present in a paper, even if they do not actually check the results themselves, so that future work can be built on it. (This is how the two errors that I was personally involved in got detected.) Clearly the chances of errors being detected are greater if the original work has major significance, since then many people want to take advantage of that work and try to reproduce the results.

An example of this process at work occurred just this month with the important issue of global warming. While there is an emerging scientific consensus that it is occurring, there are disagreements over details. As the website What’s New reports: “One detail was records that were interpreted by a group at the U. Alabama in Huntsville as showing that the troposphere had not warmed in two decades and the tropics had cooled. However, three papers in Science this week report errors in the Alabama-Huntsville calculations. It seems that warming of the troposphere agrees with surface measurements and recent computer predictions. The group at Alabama-Huntsville concedes the error, but says the effect is not that large. That’s the way it’s supposed to work.”

If no one else cares about the work or even knows of it, errors can remain undetected. Since trust is assumed, it is possible for an unscrupulous author to abuse that trust, falsify and fabricate data and results, and get the work published. But for it to remain undetected over an extended period of time usually means that the work was not considered of much use to begin with and was ignored by the scientific community.

Another way in which trust manifests itself in science is that unless there is some reason to suspect otherwise, scientists assume that whatever gets published in a journal (especially one that is peer-reviewed) is correct, even if they do not know the authors personally or even know the field. So scientists quote each other’s work freely, and often base their own papers on the work of others without knowing for sure whether that work is correct or not.

This might seem to be a risky thing to do but it is this very interconnected nature of science that keeps the system functioning. If at some point a result shows up that is plainly wrong or does not make sense, people can sometimes trace through the network of connections and find the original error that triggered the problem. Thus even errors that have remained undetected for a long time can suddenly surface because of research done in a seemingly distant area.

Given this feeling of openness and trust, it is possible to manipulate the system and get fraudulent results published. This can be for bad reasons, such as deliberate fraud for personal gain (say because the authors are trying to pad their resumes or are hoping for fame and not to get caught). These are clearly wrong. But there are reasons for faking results that, at least on the surface, may seem good, and these raise ethical issues that I will examine in the next few postings.

Should all scientists try to accommodate religion?

Within the scientific community, there are two groups: those who are religious and hold to the minimal scientific requirement of methodological naturalism, and those who go beyond that and are also philosophical naturalists, and thus atheists/agnostics or, more generally, “shafars.” (For definitions of the two kinds of naturalism, see here.)

As I have said earlier, as far as the scientific community goes, no one really cares whether their colleagues are religious or not when it comes to evaluating their science. But clearly this question matters when science spills into the political-religious arena, as is the case with the teaching of so-called intelligent design (ID).

Scientists’ Achilles heel

I was reading an article the other day about how, during World War II, the US government assembled a team of anthropologists to investigate whether there were any fundamental differences between the Japanese “race” and white people that could be exploited to wage biological warfare that would harm only them.

The anthropologists found no differences and that particular war plan was abandoned. This is consistent with our modern scientific consensus that “race” has no biological markers and only makes sense as a social and cultural construct.

But the interesting point is that the anthropologists were told to not consider the ethical implications of their work, and that ethical issues would be taken into account by others when decisions on implementing the biological weapons were made. And presumably, the anthropologists went along with that.

This is the Achilles heel of science, the fact that so much of our work can be easily twisted to serve ends that we might not approve of. And yet we do it anyway. The allure of science is such that it draws in people to work on problems that could, with a few slight modifications, be used to harm innocent people.

Physicists are perhaps the most culpable. After all, we have been responsible for the invention and development of atomic weapons that, in the case of Hiroshima and Nagasaki, resulted in the deaths of a quarter of a million people. And when one counts the deaths from more conventional weapons that physicists have helped bring into being, the numbers probably run into the tens, if not hundreds, of millions.

(Some physicists have refused to go along with this. Physics professor Charles Schwartz at the University of California, Berkeley felt that federally funded university science, especially physics, was so closely tied to the Pentagon that he refused to ask for grants and started advising physics students on how to avoid getting sucked into making the Faustian bargain with the military machine. This seriously hampered his career but he stuck to it.)

How can we physicists do our research and still sleep at night, knowing the purposes for which it might be used? I think we do the same thing that the anthropologists did. We avoid thinking about the ethics of our actions and hope that others will take ethics into account in due course at the appropriate time. We hope that policy makers will not take advantage of the science we develop and put it to evil purposes, although time and again that hope has proven to be ill-founded. Or we persuade ourselves that while we may be doing something evil, we do it in the cause of preventing an even greater evil. Or we say that on balance science does more good than evil and has saved millions of lives in other ways. (A few of us may actually believe that developing weapons is a good thing and suffer no angst at all.)

All these things are true and they do provide some consolation. But they never quite wash away all the blood on our hands and I think that we physicists justifiably bear a burden of guilt that academics in other disciplines such as (say) history or English or music do not.

In his memoir A Mathematician’s Apology, written in 1940, G. H. Hardy takes pride in working on pure mathematics because he felt that it was “useless.” By this, he did not mean that it was of no value (he loved the beauty of the subject) but that, unlike applied mathematics, his field could not be used for evil purposes, that it had no applications at all to the outside world. But time has proved him wrong, and mathematical results that might have been considered too esoteric to have any real usefulness then are now being used in all sorts of areas.

It is probably safe to say that there is no area of science or mathematics that is immune from potential misuse. Apart from avoiding science altogether, perhaps our only option is to simultaneously work to prevent governments from using our work for destructive purposes.

POST SCRIPT

The Knight Ridder newspapers say that President Bush has endorsed the teaching of “Intelligent Design” in schools. This should not be too much of a surprise. He has said similar things in the past.