Trying to assuage guilt

On my return from Sri Lanka last week, I read the back issues of the Cleveland newspapers and found that a big story was the vicious beating of a middle-aged white man by a group of six black teenagers who had accosted him while he was on a walk in his neighborhood. The man was saved from possible death by a resident (a faculty member at Case) who observed the assault from his front window and raised the alarm, causing the attackers to flee.

The neighborhood happens to be in the same community of Shaker Heights that I live in, an inner-ring suburb of Cleveland. This neighborhood is a rarity in the US, one that is ethnically integrated and has been so since the era of civil rights legislation. The youths lived a few miles away in the city of Cleveland.

The assault was cowardly and deplorable and was condemned by everyone. But what really caused a furor was a Plain Dealer newspaper column on Sunday, January 6, 2008 written by local media personality Dick Feagler in which he argued that the message of such events was clear: integrated neighborhoods were an impossible concept in practice and white people who could afford to should simply move out of places like Shaker Heights and into almost exclusively white communities, where they would be safe from such attacks. He said that such an attitude should be called ‘realism’, not ‘racism.’

Condemnation of his column has been swift and widespread both from his fellow columnists in the Plain Dealer and from the general public, including the victim, the person who raised the alarm, and neighborhood community leaders.

While the attack itself was indisputably awful, the reaction of people to such incidents is a kind of Rorschach test, revealing a lot about them.

Feagler’s reaction was a classic example of someone using external events to assuage their own guilt. To understand it one must know that Feagler is an old-style journalist who models himself on legendary columnists Mike Royko in Chicago and Jimmy Breslin in New York. They were people from gritty, urban, working class backgrounds, hard-drinking and smoking, who would frequent the bars and other nightspots of their town and be friendly and familiar with the people of the streets, such as construction and other blue-collar workers, beat cops, petty criminals, pimps, and hustlers. From this wealth of diversity, they would draw the stories and language that filled their columns and gave their newspaper readers a glimpse into a rich world that lay just beneath the surface.

Feagler is a Royko/Breslin wannabe who proudly recounts his childhood experiences growing up in a working class Cleveland neighborhood and still tries to portray himself as a Clevelander. Many of his columns are complaints about the decline of the city and the schools from the time when he was young. But his problem is that he long ago moved away from the working class neighborhoods of his upbringing and into the well-to-do and predominantly white suburb of Bay Village, which is far from Cleveland, both literally and figuratively. He thus faces the dilemma of all those who like to portray themselves as just regular, working class folks, men of the people, at home with all classes and ethnicities, but who have chosen to live in places that have little such diversity and are more like enclaves for the wealthy. Such people feel a sense of guilt at abandoning the places they grew up in. Running away from a situation rather than staying and trying to improve it can feel like cowardice, and that is hard to live with.

For many such people, the only way to salvage their self-image is to argue that they were forced into this action and that it was eminently sensible to move. When others take the same action that you did, you feel a little more vindicated. So Feagler’s call for other white people to leave integrated areas is really a plea for others to not judge him harshly for having left.

Although I disagree strongly with what Feagler said, I think I understand what is driving it, because I live with feelings of guilt and cowardice similar to his. I had always wanted to live and work in Sri Lanka, amongst the family and friends that I had grown up with, to try and improve the conditions in that country. But in 1983, following an attack on an army truck, a vicious anti-Tamil pogrom was unleashed by the then government of Sri Lanka, in which unchecked mobs rampaged through the streets, killing Tamils and setting fire to their homes and buildings, while the government's police and security forces stood idly by, sometimes even egging the mobs on. I saw these things first hand, and they forced my family and me (because we were Tamils) to go into hiding for about a week to escape possible death.

I was furious at the government for abandoning its basic function of maintaining order and instead handing power over to its mobs and goons. (When I saw the film Hotel Rwanda I relived the awful sensation of what it feels like when you are completely powerless and unprotected from mobs who had been your neighbors the previous day, and when even the government is against you, although what happened in Rwanda was a massacre on a far larger scale than occurred in Sri Lanka.) I felt that such a society was not one in which I could bring up my older daughter, who was then just three months old. So in sorrow and anger I emigrated to the US and have stayed here ever since, returning to Sri Lanka only for brief visits. The ethnic violence in Sri Lanka continues to this day, ebbing and (currently) flowing.

But when you leave a bad situation, you are essentially weakening the side that is trying to make the situation better, and strengthening the hand of the bad elements. Those who leave may try to justify it as ‘realism’ but there is undoubtedly an element of cowardice involved. The guilt I felt for essentially running away from a problem and leaving others to deal with it rather than facing up to it personally has never gone away, although I have come to terms with it. I still wonder if I did the right thing, although the people I know in Sri Lanka keep saying that I was smart to move away. I did notice that when others I knew also took the step of leaving the country, it seemed to justify my decision, making it seem to me like I did the right thing. When others stayed or even returned to Sri Lanka, it made my decision seem wrong.

For people who leave a bad situation, there is a temptation to eagerly highlight reports of horrible events because it seems to retrospectively justify their decision to leave. In my case, it is easy to resist that temptation because I still have close family and friends living in Sri Lanka, and every incident of violence there triggers alarm about their safety. But if you have no remaining real links to the place you left behind, it is tempting to view events through the lens of your own emotional needs and focus only on the bad things that happen.

I think that this is what is driving Feagler’s views. Every person who continues to live in integrated neighborhoods is a silent living rebuke to his decision to move away from them into an ethnically and economically exclusive enclave, while every person who moves away is a vindication of his own decision. Every sign that Cleveland is getting better implies that he made a mistake in moving. Every sign of its decline shows his prescience in abandoning the city.

Feagler is old enough that he should have the self-awareness to realize that his advice to other white people to move out of integrated communities is largely self-serving. He has drawn the wrong lesson from this deplorable event. It is not about him and his need to assuage his own guilt.

The vicious beating was a terrible thing to have happened, but it was an isolated incident, and the youths involved were not even from the neighborhood where the assault took place, so it was not a reflection on ethnic relations within the community. Unfortunately, such things can, and do, happen everywhere.

POST SCRIPT: Jesus Christ Superstar

If you haven't seen this 1973 rock musical, you have missed a treat. This is Andrew Lloyd Webber's best work, raised to a high level by the superb lyrics of Tim Rice and the magnificent performance of Carl Anderson, who sang the part of Judas. The film exploded into life whenever Anderson appeared, and he stole every scene he was in. Anderson died in 2004 of leukemia.

Here is Anderson in the opening sequence:

And here he is singing the title song:

Jury service and jury nullification-2

The fourth time I was empanelled was for a criminal case involving charges of felonious assault where the defense said that it would argue self-defense. Once again, there was an oral voir dire, which included questions about whether we had ever been involved in any physical altercation.

It was during the voir dire that I ran into a problem. One of the prosecuting counsel asked if the jurors would be willing to convict a person on the facts of the case even if they felt the law under which the person was being prosecuted was unjust. It was clear that he expected you to answer ‘yes’ to this question. We have all seen at least some courtroom dramas where the judge instructs the jury on the law to be applied and the jury is asked to judge based only on the facts of the case, and not to judge the validity of the law itself.

What is not well known is that the jury has the right, in criminal cases, to acquit the accused even if he or she is clearly guilty on the facts, if the jury feels that the law under which the accused is being prosecuted is unjust. This practice is known as jury nullification and I have written about it before. (See here and here.) In the past, juries have nullified laws and brought in acquittals in cases involving freedom of assembly, freedom of the press, harboring fugitive slaves, and so on, and their repeated refusal to convict has led to the repeal of those unjust laws and given us some of the basic freedoms we now take for granted. But despite this fundamental right that juries possess, courts do not inform juries of it and are actively hostile to doing so.

I was placed in a quandary by the prosecutor’s question. What I knew about the case at hand was such that it seemed highly unlikely that it would involve an unjust law. But since I knew about jury nullification, I could not in good conscience agree to a blanket statement that I would convict even if I felt the law to be unjust. But if I said in open court that I could not convict based on an unjust law, then I would have to explain the whole business of jury nullification. While that might have been educational for my fellow jurors, it might have prejudiced the entire jury panel and thrown a spanner in the works for a case that did not involve a high principle. So I asked the judge if I might talk to him privately. This was allowed and the judge, all the counsel, the court recorder, and I moved to his chambers next door where I explained my problem. We then went back to the courtroom where the prosecutor asked me a few more questions. Then she dismissed me from the panel.

I expected this to happen. Prosecutors do not like jury nullification because it works only one way, and that is against them, since it only gives jurors the right to acquit on the basis of an unjust law.

This is the problem currently with jury nullification. It is a right of juries that is not only not publicized but actively hidden from jurors by the court system. If someone is aware of it and says so, he or she is likely to be struck from the pool of potential jurors in criminal cases.

There are ways to get around this, based apparently on the fact that the oath or affirmation one takes during voir dire is not enforceable, so one can say that one will convict on the basis of the law even if one has no intention of doing so. Whether one takes this route has to be up to the individual. But I am uncomfortable doing this, especially in a case where there is no high principle involved. When one swears or affirms that one is going to tell the truth, one is obliged to follow the spirit as well as the letter of the law. Politicians use the careful parsing of statements to lie to us and make us think they mean one thing while intending to do another. But it is this kind of behavior that leads to bad governments and gets us into wars. There is no reason for ordinary people to copy that kind of disgraceful behavior.

I much prefer that the issue of jury nullification become public knowledge so that juries routinely know about it, even without being told by the judge, the way we currently know about our Miranda rights. The best way to do this might be for popular courtroom dramas on TV and film to deal with it frequently. For example, I think the William Penn trial would make a terrific film, one that would put the focus on jury deliberations even better than the classic Twelve Angry Men, both for its dramatic content and for its educational value.

As things stand now, it looks like I will never be able to serve on a criminal jury because of my knowledge of jury nullification.

It is important that everyone know about jury nullification because we have entered an era in which there are increasing violations of rights we have long taken for granted. Laws are being passed that take away many cherished and hard-won rights, such as habeas corpus. We are already having trials in which ordinary people are being subjected to harsh treatment and even torture and tried under draconian and unjust laws, all in the name of security and fighting the so-called war on terror, but in reality to serve the power needs of the state.

If we are called as jurors for such trials, we have to be willing to uphold our constitutional right to acquit people who are accused of crimes under unjust laws, when in reality what they may have been doing is standing up for fundamental rights. If we are asked to convict someone on the basis of evidence that has been obtained under torture, we should be willing to acquit, simply on the grounds that using torture to acquire evidence is cruel and unjust and any information gleaned from such practices is inherently suspect.

POST SCRIPT: Catholic priests caught in the lingerie section

I came across this funny clip from the 1990s British TV comedy series Father Ted.

Jury service and jury nullification-1

By coincidence, while writing and posting my series on the law and religion in public schools, I was also called for jury service and spent the better part of the week of November 5, 2007 in the Cuyahoga County Common Pleas Court in downtown Cleveland.

I feel strongly that the jury system is one of the greatest inventions of modern society and has been a foundation for democracy and for creating and preserving freedoms. So I regard serving on a jury as a privilege and do not resent doing my time, even though it involves some minor inconveniences and disruptions to work and home routines.

This was the third time I have been called for jury duty, but I have yet to actually sit in on a case. For those not familiar with how it works, at least in Cuyahoga County where I live, when you are called for jury duty at the Common Pleas Court, it is not for a particular case but to be part of a large pool of jurors that serves many courts. So much of the time is spent waiting until your name is randomly called, which happens when a case cannot be settled and has to go to trial.

There are forty courtrooms in the building so the pool of potential jurors is quite large. All the jurors wait in a large room until such time as they are called to serve on a panel in a particular trial. The court system treats the jurors well. The jury pool room is well-lighted and spacious, has comfortable chairs, carrels with electrical outlets for people to use computers (but no internet access), plenty of newspapers and magazines to read, jigsaw puzzles, three TVs tuned to different channels in different corners of the room, vending machines and a nearby reasonably-priced cafeteria and, most importantly, a quiet room for those who simply want silence in order to read or take a nap. The court employees who run the operation are courteous, friendly, and helpful and the whole system works very smoothly.

Furthermore, I have been impressed with my fellow jury panel members. They come from all walks of life, occupations, and backgrounds, and although there is a lot of joking and kidding around in the jury pool room about what they would prefer to be doing, they all seem to have a sense of duty and seriousness about what they have been called to do. I always feel good about the experience and would not hesitate to put my own fate in the hands of a jury if I were ever put on trial.

I have been called for four jury panels so far. They usually call a panel of eighteen potential jurors for a civil trial (from which eight jurors and two alternates are finally selected) and twenty-two for a criminal trial (from which twelve jurors and two alternates are selected). Civil trials require only a three-fourths majority (i.e., at least six of the eight votes) for a verdict, while criminal trials require a unanimous verdict.

Once a panel is randomly selected from the large pool in the room, we first assemble in the jury deliberation room for that particular court, and the bailiff tells us the order in which to line up to enter the court and where to sit once we get there. As we march in, the judge, the attorneys, and the litigants are already present and standing, and once we are all in our places, the judge tells us to sit. The courts have a sense of friendly formality and dignity, with the judge in his robes and the counsel in suits.

Then the voir dire ("to speak the truth") process begins with the judge asking us to swear or affirm (the latter for the benefit of us atheists) an oath to tell the truth. He then tells us very briefly what the case is about and how long he expects it to last, and then he and the two attorneys ask each juror a lot of probing questions about our lives (such as where we work and what we do, what our hobbies are, how many children we have and what they do) plus questions about any life experiences or opinions we may have that are relevant to the particular case we are about to judge. For example, in one civil case involving an employee being fired, we were asked if any of us had ever had problems with our employers or been fired or sued. In an assault case, we were asked if we had ever been assaulted. This voir dire process can take quite a while, and the rest of the jury panel listens while each potential juror is questioned. On the basis of the answers, jurors can be dismissed either for cause (because, say, they know someone involved with the case) or for no stated reason. The latter can be done by the attorneys for either side, but each has only a limited number of such peremptory challenges at their disposal.

In my very first panel, about eight years ago, the two sides made a deal and the case was settled just before the voir dire process even began. The second panel I was called for, about four years ago, was for a murder trial. An exceptionally large panel was called (about 40 people), suggesting that the judge felt it was going to be difficult to select an impartial jury. The voir dire in that case was a very detailed written questionnaire that ran to over twenty pages. I was dismissed from that panel. I had requested to be excused because the judge had said that the case would last at least three weeks, and it was the week before the semester began, which made it awkward for me. The fact that I had stated that I opposed the death penalty may also have contributed to my dismissal.

The third time I was empanelled (which was last November) involved a civil case, a contract dispute involving an employee who had been fired. I did not request to be excused but after the oral voir dire, I was the first person to be dismissed, by the attorney for the employer. No reasons need be given for such peremptory dismissals so I have no idea what reasons he might have had.

It was in the fourth case (also last November) that I ran into a problem because of my knowledge of the legal system and jury nullification. I will write about that in the next post.

POST SCRIPT: Textbook disclaimers

Some readers will recall how in Cobb County, GA the school board inserted stickers saying, “This textbook contains material on evolution. Evolution is a theory, not a fact, regarding the origin of living things. This material should be approached with an open mind, studied carefully, and critically considered” into the biology textbooks. This was ruled unconstitutional.

But why do advocates of such disclaimers limit themselves only to biology? Here are some other textbook disclaimer stickers that can be used. Here’s one suitable for a physics textbook:

This textbook asserts that gravity exists. Gravity is a theory, not a fact, regarding a force that cannot be directly seen. This material should be approached with an open mind, studied carefully, and critically considered.

And here’s a disclaimer that is suitable for almost any textbook:

This book teaches kids the difference between facts and myths. Because this erodes belief in Santa Claus, the Easter Bunny, and, well, other things, parents should homeschool their kids until the age of 27.

Why rush election results?

Ever since election day on November 6, 2007, news reports in Cleveland have been obsessing over the fact that the results were delayed by a couple of hours due to a systems crash that required the backup to kick in.

I am puzzled by this obsession with speed in elections. Why is there such a rush to get election results out so quickly? This drive for speed seems particularly paradoxical in the US where election campaigns are dragged out longer than in any other country and where there is a long time interval between voting day and the newly elected person actually taking office. New office holders typically take over at the beginning of the following year, allowing for a two-month transition period. The new president does not take the oath of office until January 20.

Because election day is fixed by law far in advance, politicians can plan their campaigns years ahead. The campaign for the presidency in 2008 began nearly two years early, and before we have had even a single primary election, candidates and voters are already experiencing election fatigue. And yet, as soon as the polls close, there is a desperate stampede to get the election results declared as soon as possible. Even though the actual results themselves are usually released within twelve hours of the polls closing, the media cannot wait even for that and set up elaborate exit polling systems so that they can call the results almost immediately after (or even before) the polls close.

The election debacle of 2000 showed what happens when people are in such a rush to declare the winner. But exactly the wrong lesson seems to have been drawn from that debacle. We are now moving toward even more high-tech, computer-based voting systems that will presumably deliver the actual results even sooner.

I think this is the wrong way to go. We should go back to a completely paper ballot system, where people mark an X in a box next to their favored candidate. Then we should have human beings count and, if necessary, recount the votes. The counters should be given plenty of time, a week, two weeks, even a month, to do a thorough job and the rest of us should simply go about our normal business until they are ready to declare the winners. As I said before, nothing at all hinges on a quick release of the results.

In fact, in our desire for speed, we are decreasing public confidence in the credibility of elections. After all, anyone remotely familiar with computers knows that while their arithmetic powers are far superior to those of humans, they are highly susceptible to hacking and thus to fraud. What is worse, computer fraud is hard to detect and can be almost invisible except to very expert eyes looking closely. I have far more faith in humans counting paper ballots to produce an honest result than I do in computers.

Of course, no system is perfect. Paper ballot elections can be manipulated too. One can have ballot box stuffing, stolen ballot boxes, and counting errors. But to do those things on a scale large enough to sway the results requires the collusion of a lot of people at a very low level, and such conspiracies are hard to keep secret and fairly easy to detect. Electronic fraud requires just a few sophisticated people working at a high level of expertise.

I am not a Luddite who wants to go back to the old days for misplaced romantic reasons. I think elections are far too important to have anything but the best system. And in this particular case, the best system just happens to be one of the oldest.

POST SCRIPT: Paper ballots in Northeast Ohio?

I wrote the above post a couple of months ago when I was called for jury duty during the week of November 5, 2007 just after election day and was hanging around in the waiting room. I did not post it immediately due to the long ‘evolution and the law’ series that was running at that time. Hence I was pleasantly surprised to see a front page news headline in the Plain Dealer of December 15, 2007 that said that Ohio’s secretary of state is pushing for paper ballots for our area because of all the trouble we have had with the electronic systems. On my return to the US last week from a trip to Sri Lanka, I read that the push for paper ballots in Ohio is gaining ground.

I think this is a good idea. Paper ballots are better, and this might be a chance to demonstrate that fact to the nation.

What is science?

(I am taking a break from original posts due to the holidays and because of travel after that. Until I return, here are some old posts, updated and edited, for those who might have missed them the first time around. New posts should appear starting Monday, January 14, 2008.)

Because of my interest in the history and philosophy of science I am sometimes called upon to answer the question “what is science?” Most people think that the answer should be fairly straightforward. This is because science is such an integral part of our lives that everyone feels that they intuitively know what it is and think that the problem of defining science is purely one of finding the right combination of words that captures their intuitive sense.

But as I said in my previous posting, strictly defining things means having demarcation criteria, which involves developing a set of necessary and sufficient conditions, and this is extremely hard to do even for seemingly simple things like (say) defining what a dog is. So it should not be surprising that it is even harder to do for an abstract idea like science.

But just as a small child is able, based on its experience with pets, to distinguish between a dog and a cat without any need for formal demarcation criteria, so can scientists intuitively sense what is science and what is not science, based on the practice of their profession, without any need for a formal definition. So scientists do not, in the normal course of their work, pay much attention to whether they have a formal definition of science or not. If forced to define science (say for the purpose of writing textbooks) they tend to make up some kind of definition that sort of fits with their experience, but such ad-hoc formulations lack the kind of formal rigor that is strictly required of a philosophically sound demarcation criterion.

The absence of an agreed-upon formal definition of science has not hindered science from progressing rapidly and efficiently. Science marches on, blithely unconcerned about its lack of self-definition. People start worrying about definitions of science mainly in the context of political battles, such as those involving so-called intelligent design creationism (or IDC), because advocates of IDC have been using this lack of a formal definition to try to define science in such a way that their pet idea would be included as science, and thus taught in schools as part of the science curriculum and as an alternative to evolution.

Having a clear-cut demarcation criterion that defines science and is accepted by all would settle this question once and for all. But finding this demarcation criterion for science has proven to be remarkably difficult.

To set about trying to find such criteria, we do what we usually do in such cases: we look at all the knowledge that is commonly accepted as science by everyone and look for similarities among those areas. For example, I think everyone would agree that the subjects that come under the headings of astronomy, geology, physics, chemistry, and biology, and which are studied by departments in reputable universities, all come under the heading of science. So any definition of science that excluded any of these areas would be clearly inadequate, just as any definition of 'dog' that excluded a commonly accepted breed would be dismissed as inadequate.

This is the kind of thing we do when trying to define other things, like art (say). Any definition of art that excluded (say) paintings hanging in reputable museums would be considered an inadequate definition.

When we look back at the history of the topics studied by people in those named disciplines and which are commonly accepted as science, two characteristics stand out. The first thing that we realize is that for a theory to be considered scientific it does not have to be true. Newtonian physics is commonly accepted to be scientific, although it is not considered to be universally true anymore. The phlogiston theory of combustion is considered to be scientific though it has long since been overthrown by the oxygen theory. And so on. In fact, since all knowledge is considered to be fallible and liable to change, truth is, in some sense, irrelevant to the question of whether something is scientific or not, because absolute truth cannot be established.

(A caveat: Not all scientists will agree with me on this last point. Some scientists feel that once a theory is shown to be incorrect, it ceases to be part of science, although it remains a part of science history. Some physicists also feel that many of the current theories of (say) sub-atomic particles are unlikely to be ever overthrown and are thus true in some absolute sense. I am not convinced of this. The history of science teaches us that even theories that were considered rock-solid and lasted millennia (such as the geocentric universe) eventually were overthrown.)

But there is a clear pattern that emerges about scientific theories. All the theories that are considered to be science are (1) naturalistic and (2) predictive.

By naturalistic I mean methodological naturalism and not philosophical naturalism. The latter, I argued in an earlier posting where these terms were defined, is irrelevant to science.

By predictive, I mean that all theories that are considered part of science have some explicit mechanism or structure that enables the users of those theories to make predictions, that is, to say what one should see if one did some experiment or looked in some place under certain conditions.

Note that these two conditions are just necessary conditions and by themselves are not sufficient. (See the previous posting for what those conditions mean.) They can only classify things into "may be science" (if something meets both conditions) or "not science" (if it fails to meet one or both of the conditions). So these two conditions together do not make up a satisfactory demarcation criterion. For example, the theory that if a football quarterback throws a lot of interceptions his team is likely to lose meets both the naturalistic and predictive conditions, but it is not considered part of science.
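To make this classification logic concrete, here is a minimal sketch in Python (my own illustration, not part of the original post; the flags and the example theories are hypothetical stand-ins). It shows how two merely necessary conditions can only sort ideas into "not science" and "may be science":

```python
# A minimal sketch (an assumed illustration, not from the original post).
# Two necessary conditions for counting as science: naturalistic and predictive.
# Meeting both yields only "may be science"; failing either yields "not science".

def classify_theory(is_naturalistic: bool, is_predictive: bool) -> str:
    if is_naturalistic and is_predictive:
        return "may be science"   # necessary conditions met, but not sufficient
    return "not science"          # fails at least one necessary condition

print(classify_theory(True, True))    # Newtonian physics -> may be science
print(classify_theory(False, True))   # a theory invoking supernatural causes -> not science
print(classify_theory(True, True))    # the quarterback-interception "theory" -> also may be science,
                                      # which is why these two conditions alone cannot be sufficient
```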

But even though we do not have a rigorous demarcation criterion for science, the existence of just necessary conditions still has interesting implications.

Necessary and sufficient conditions

(I am taking a break from original posts due to the holidays and because of travel after that. Until I return, here are some old posts, updated and edited, for those who might have missed them the first time around. New posts should appear starting Monday, January 14, 2008.)

Finding definitions that clearly specify whether an object belongs in a given category or not has long been recognized as a knotty philosophical problem. Ideally, a good definition requires both necessary and sufficient conditions, but these are not easy to come by.

A necessary condition is one that must be met if the object is to be considered even eligible for inclusion in the category. If an object meets this condition, then it is possible that it belongs in the category, but not certain. If it does not meet the condition, then we can definitely say that it does not belong. So necessary conditions for something can only classify objects into “maybe belongs” or “definitely does not belong.”

For example, let us try to define a dog. We might say that a necessary condition for some object to be considered as a possible dog is that it be a mammal. So if we know that something is a mammal, it might be a dog or it might be another kind of mammal, say a cat. But if something is not a mammal, then we know for sure it is not a dog.

A sufficient condition, on the other hand, acts differently. If an object meets the sufficient condition, then it definitely belongs. If it does not meet the sufficient condition, then it may or may not belong. So the sufficient condition can be used to classify things into “definitely belongs” or “maybe belongs.”

So for the dog case, if an animal has papers certified by the American Kennel Club, then we can definitely say it is a dog. But if something does not have such papers, it may still be a dog (say a mixed breed) or it may not be a dog (it may be a table).
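The dog example can be captured in a minimal sketch in Python (my own illustration, not part of the original post; the predicates is_mammal and has_akc_papers are hypothetical stand-ins). A necessary condition settles the "definitely not" cases, a sufficient condition settles the "definitely" cases, and everything else remains a "maybe":

```python
# A minimal sketch (an assumed illustration, not from the original post)
# of how one necessary and one sufficient condition classify a candidate.

def classify(is_mammal: bool, has_akc_papers: bool) -> str:
    # Necessary condition: every dog is a mammal.
    # Failing it settles the matter: definitely not a dog.
    if not is_mammal:
        return "definitely not a dog"
    # Sufficient condition: certified registration papers guarantee dog-hood.
    # Passing it also settles the matter: definitely a dog.
    if has_akc_papers:
        return "definitely a dog"
    # Passes the necessary test but not the sufficient one: still undecided.
    return "maybe a dog (could also be a cat or some other mammal)"

print(classify(is_mammal=False, has_akc_papers=False))  # definitely not a dog
print(classify(is_mammal=True, has_akc_papers=True))    # definitely a dog
print(classify(is_mammal=True, has_akc_papers=False))   # maybe a dog
```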

A satisfactory demarcation criterion would have both necessary and sufficient conditions, because only then can we say of any given object that it either definitely belongs or definitely does not belong. Usually these criteria take the form of a set of individually necessary conditions that, taken together, are sufficient. That is, each condition by itself is not sufficient, but if all of them are met, together they become sufficient.

It is not easy to find such conditions, even for such a seemingly simple category as dogs, and that is the problem. So for the dog, we might try to define it by saying that it is a mammal, with four legs, that barks, etc. But people who are determined to challenge the criteria could find problems. (What exactly defines a mammal? What is the difference between an arm and a leg? What constitutes a bark? And so on. We can end up in an infinite regress of definitions.)

This is why philosophers like to say that we make such identifications ("this is a dog, that is a cat") based on an intuitive grasp of the idea of "similarity classes," things that share similarities that may not be rigidly definable. So even a little child can arrive at a pretty good idea of what a dog is without formulating a strict definition, by encountering several dogs and learning to distinguish dog-like qualities from non-dog-like qualities. This is not completely foolproof. Once in a while we may come across a strange looking animal, some exotic breed that baffles us. But most of the time it is clear. We almost never mistake a cat for a dog, even though they share many characteristics, such as being small, four-legged, tailed mammals kept as domestic pets.

Anyway, back to science: a satisfactory demarcation would require that we find both necessary and sufficient criteria that can be used to define science, and then use those conditions to separate ideas into science and non-science. Do such criteria exist? To answer that question we need to look at the history of science and see what common features are shared by those bodies of knowledge we confidently call science.

This will be discussed in the next posting.

Improving the quality of our snap judgments

(I am taking a break from original posts due to the holidays and because of travel after that. Until I return, here are some old posts, updated and edited, for those who might have missed them the first time around. New posts should appear starting Monday, January 14, 2008.)

In a previous post, I mentioned that my Race IAT results indicated that I had no automatic preference for black or white people. This surprised me, frankly. Although I am intellectually committed to thinking of people as equal, I am still subjected to the same kinds of images and stereotypes as everyone else in society, so I expected to have at least a small automatic preference for white people. But the section of Malcolm Gladwell's book Blink on 'priming' experiments might explain the null result.

The priming experiments were done by psychologist John Bargh. What he did was give two randomly selected groups of undergraduate students a small test involving words. The results of the word test itself were not relevant. What was relevant was that the first set of students encountered words like "aggressively", "bold", "rude", "bother", etc. in their test, while the second set encountered words like "respect", "considerate", "patiently", "polite", etc.

After they had done the word test, the students were asked to go down the hall to the person running the experiment to get their next assignment. This was the real experiment because it had been arranged to have a confederate blocking the doorway, carrying on an inane and seemingly endless conversation with the experimenter. The experiment was designed to see if the set of students who had been unknowingly ‘primed’ with aggressive words would take longer to interrupt this conversation than those who had been primed with polite words. Bargh expected to see a difference, but expected that difference to be measured in milliseconds. He said “I mean, these are New Yorkers. They aren’t going to just stand there. We thought maybe a few seconds, or a minute at most.”

What he found was that the people primed to be rude eventually interrupted after an average of five minutes, but 82% of the people primed to be polite did not interrupt at all, even after ten minutes, which was the cut-off time that had been pre-set for the experiment because the researchers thought no one would ever wait that long.

What these and other priming experiments suggest is that the kinds of experiences we have carry their effects subconsciously over to the next events, at least for some time.

This may explain my negative result because for some time now I have been studying the achievement gap between black and white students in the US. The more I looked at it, the more I became convinced that the concept of race is biologically indefensible, that it cannot be the cause of the gap, and that the reasons for the gap have to be looked for elsewhere.

Since my book on the subject called The Achievement Gap in US Education: Canaries in the Mine came out in June 2005, I had been thinking a lot about these ideas at the same time as I took the test, and so I was probably ‘primed’ to think that there is no fundamental difference between the races, and hence my null result on the Race IAT test.

This ties in with other research that I quote in my book dealing with the role that teacher expectations of students play in student achievement. Teacher expectations are an important factor, but a lot of the efforts to improve teacher expectations of low-achieving students have been along the lines of "All children can learn!" sloganeering. But having teachers just say this, or plastering it on school walls, may not help much if they are not convinced of its truth. If people are conscious that they are being primed, then the priming effect disappears.

What is needed for teachers to improve their overall expectations of students is for them to have opportunities to actually see for themselves traditionally underachieving students excelling. If they can have such experiences, then the inevitable snap judgments they make about students, which can have an effect on student performance, may be more equitable than they are now.

I have long been in favor of diversity in our educational environments but my reasons were more social, because I felt that we all benefit from learning with, and from, those whose backgrounds and experiences differ from our own. But it seems that there is an added bonus as well. When we have a broader base of experience on which to base our judgments, our snap judgments tend to be better.

Snap judgments and prejudices

(I am taking a break from original posts due to the holidays and because of travel after that. Until I return, here are some old posts, updated and edited, for those who might have missed them the first time around. New posts should appear starting Monday, January 14, 2008.)

In an earlier post, I described Malcolm Gladwell’s book Blink about the way we instinctively make judgments about people. The way we make snap judgments is by ‘thin-slicing’ events. We take in a small slice of the phenomena we observe and associate the information in those slices with other measures. People who make good snap judgments are those people who associate the thin-slice information with valid predictors of behavior. People who make poor or prejudicial judgments are those people who associate the thin-slice information with poor predictors.

Think about what you observe about a person immediately as that person walks into your view. Gender, ethnicity, height, weight, color, gait, dress, hair, demeanor, eyes, looks, physique, gestures, voice, the list just goes on. We sweep up all these impressions in a flash. And based on them, whether we want to or not, we make a judgment about the person. Different people will weigh different elements in the mix differently.

If someone comes into my office wearing a suit, my initial impression of the person is different than if she had come in wearing jeans. (If you were mildly surprised by my using the pronoun ‘she’ towards the end of the last sentence, it is because, like me, you implicitly associate suits with male attire, so that the first part of the sentence made you conjure up a mental image of a man.)

A personal example of snap judgments occurs when I read Physics Today, which I get every month. The obituary notices in the magazine have a standard form. There is a head-shot of the person, with the name as the header, and one or two column inches describing the person.

Almost all of the obituaries are of old white men, not surprising for physicists of the generation that is now passing away. I found myself looking at the photo and immediately identifying whether the person was of English nationality or not. And I was right a surprising number of times. I was not reasoning it through in any conscious way. As soon as the picture came into view, I'd find myself thinking "English" or "not English". I don't know the basis of my judgments. But as I said, I was right surprisingly often.

Gladwell describes a very successful car salesman who over the years has realized that gender, ethnicity, clothes, etc. are not good predictors of whether a person is likely to buy a car. Someone whom his fellow salespeople might ignore or dismiss because he looks like a rustic farmer, this salesman takes seriously. And because this salesman has been able to shape his intuition to ignore superficial or irrelevant things, his senses are better attuned to pick up on those cues that really matter.

Some of the strongest associations we make are those based on ethnicity, gender, and age. We immediately associate those qualities with generalizations associated with those groupings.

People are not always comfortable talking about their attitudes on race, gender, and other controversial topics. This is why surveys on such topics are unreliable, because people can ‘psyche out’ the tests, answering in the way they think they are expected to, the ‘correct’ way, rather than what they actually feel. This is why opinion polls on such matters, or in elections where the candidates are of different races or ethnicities, are hard to rely on.

There is a website, developed by researchers at Harvard University, that recognizes this problem. They have designed a survey instrument that tries to overcome this feature by essentially (as far as I can tell) measuring the time taken to answer their questions. In other words, they are measuring the time taken for you to psyche out the test. Since we have much less control over this, the researchers believe that this survey gives a better result. They claim that you cannot change your score by simply taking the test over and over again and becoming familiar with it.

If you want to check it out for yourself, go to the test site, click on “Demonstration”, then on “Go to Demonstration Tests”, then on “I wish to proceed”. This takes you to a list of Implicit Association Tests (or IAT) and you can choose which kinds of associations you wish to check that you make.

I took the Race IAT because that was what was discussed in Gladwell’s book, and it took me less than five minutes to complete. This test looks at the role that race plays in making associations. In particular it looks at whether we instinctively associate black/white people with good/bad qualities.

It turns out that more than 80% of people who have taken this test have pro-white associations, meaning that they tend to associate good qualities with white people and bad qualities with black people. This does not mean that such people are racists. They may well be very opposed to any kind of racist thinking or policies. What these tests are measuring are unconscious associations that we pick up (from the media, the people we know, our community, etc.) without being aware of them, that we have little control over.

Gladwell himself says that the test “always leaves me feeling a bit creepy.” He found himself being rated as having a moderate automatic preference for whites although he labels himself half black because his mother is Jamaican.

I can see why this kind of test is unnerving. It may shake our image of ourselves and reveal to us the presence of prejudices that we wish we did not have. But if we are unconsciously making associations of whatever kind, isn’t it better to know this so that we can take steps to correct for them if necessary? The successful car salesman became so because he realized that people in his profession made a lot of the unconscious associations that were not valid and had to be rejected. And he used that knowledge in ways that benefited him and his customers.

Although you cannot change your Race IAT scores by simply redoing the test, there are other things that can change your score. When I took the Race IAT, the results indicated that I have no automatic preference for blacks or whites. In a later posting, I will talk about the effects that ‘priming’ might have on the test results, and how that might have affected my results.

POST SCRIPT: Saying Iraq and Iran

I noticed that President Bush pronounces Iran the same way that I do (“E-rahn”) but pronounces Iraq as “Eye-rack” (instead of “E-rahk”), which really grates on me. He is not the only one who does this.

I don’t know how the people who live in those two countries pronounce the names but it seems reasonable to me to pronounce the two names similarly except for the last letter. Merriam-Webster’s online dictionary, which provides audio as well, agrees with me on this.

Snap judgments

(I am taking a break from original posts due to the holidays and because of travel after that. Until I return, here are some old posts, updated and edited, for those who might have missed them the first time around. New posts should appear starting Monday, January 14, 2008.)

I just finished reading Malcolm Gladwell’s book Blink. It deals with how we all make snap judgments about people and things, sometimes within a couple of seconds or less. Gladwell reports on a whole slew of studies that suggest that we have the ability to ‘thin-slice’ events, to make major conclusions from just a narrow window of observations.

I first read about this as applied to teaching in an essay by Gladwell that appeared in the New Yorker (May 29, 2000), where he described research by psychologists Nalini Ambady and Robert Rosenthal. They found that observers shown silent video clips of teachers in action, observers who had never met the teachers before, made judgments of teacher effectiveness that correlated strongly with the evaluations of students who had taken an entire course with that teacher. (Source: Half a Minute: Predicting Teacher Evaluations From Thin Slices of Nonverbal Behavior and Physical Attractiveness, Journal of Personality and Social Psychology, 1993, vol. 64, No. 3, 431-441.)

This result is enough to give any teacher the heebie-jeebies. The thought that students have formed stable and robust judgments about you before you have even opened your mouth on the very first day of the very first class is unnerving. It seems so unfair that you are being judged before you can even begin to prove yourself. But, for good or bad, this seems to be supported by other studies, such as those done by Robert Boice in his book Advice for New Faculty Members.

The implication for this is that the cliché “You never get a second chance to make a first impression” is all too true. And what Gladwell’s New Yorker article and book seem to suggest is that this kind of thin-slicing is something that all of us do all the time. But not all of us do it well. Some people use thin-slicing to arrive at conclusions that are valid, others to arrive at completely erroneous judgments.

Those who do it well tend to be people who have considerable experience in that particular area. They have distilled that experience into some key variables that they then use to size up the situation at a glance, often without even consciously being aware of how they do it.

Seen in this way, the seemingly uncanny ability of people to identify at a glance who the good and bad teachers are might not seem that surprising. Most people have had lots of experience with many teachers in their lives, and along the way have unconsciously picked up subtle non-verbal cues that they use to correlate with good and bad teaching. They use these markers as predictors and seem to be quite good at it.

I was self-consciously reflecting on this last week when I ran two mock seminars for visiting high-school seniors as part of "Experience Case" days. The idea was to have a seminar class for these students so that they could see what a seminar would be like if they chose to matriculate here. I found that just by glancing around the room at the assembled students at the beginning, I could tell who was likely to be an active participant in the seminar and who was not.

It was easy for me to make these predictions and I was pretty confident that I would be proven right, and I usually was. But how did I do it? Hard to tell. But I have taught for many years and encountered thousands of students and this wealth of experience undoubtedly played a role in my ability to make snap judgments. If pressed to explain my judgments I might say that it was the way the students sat, their body language, the way they made eye contact, the expression on their faces, and other things like that.

But while I am confident about my ability to predict the students’ subsequent behavior in the seminar, I am not nearly as confident in the validity of the reasons I give. And this is consistent with what Gladwell reports in his book. Many of the experts who made good judgments did not know how they arrived at their conclusions or, when they did give reasons, the reasons could not stand up to close scrutiny.

He gives the example of veteran tennis pro and coach Vic Braden. Braden found that when watching tennis players about to make their second serve, he could predict with uncanny accuracy (close to 100%) when they would double fault. This is amazing because he was watching top players (who very rarely double fault) perform on television, and many of the players were people he had never seen play before. But what drove Braden crazy was that he could not say how he made his predictions. He just knew in a flash of insight that they would, and no amount of watching slow-motion replays enabled him to pinpoint the reasons.

But Gladwell points out that we use thin-slicing techniques even in situations where we do not have much experience or expertise, and these judgments can lead us astray. In later postings, I will describe the kinds of situations where snap judgments are likely to lead us to shaky conclusions and where we should be alert.

POST SCRIPT: Charlie Wilson’s War

The film with the above name tries to make a comedy out of the role that the US played in creating the Taleban in Afghanistan. Stanley Heller points out that this was no laughing matter for the million Afghans who died as a result of the geostrategic games played by the Soviet Union and the Carter-Reagan governments.

Atheism and Agnosticism

(I am taking a break from original posts due to the holidays and because of travel after that. Until I return, here are some old posts, updated and edited, for those who might have missed them the first time around. New posts should appear starting Monday, January 14, 2008.)

In an interview, Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy, who called himself a “radical atheist,” explains why he uses that term (thanks to onegoodmove):

I think I use the term radical rather loosely, just for emphasis. If you describe yourself as “Atheist,” some people will say, “Don’t you mean ‘Agnostic’?” I have to reply that I really do mean Atheist. I really do not believe that there is a god – in fact I am convinced that there is not a god (a subtle difference). I see not a shred of evidence to suggest that there is one. It’s easier to say that I am a radical Atheist, just to signal that I really mean it, have thought about it a great deal, and that it’s an opinion I hold seriously…

People will then often say “But surely it’s better to remain an Agnostic just in case?” This, to me, suggests such a level of silliness and muddle that I usually edge out of the conversation rather than get sucked into it. (If it turns out that I’ve been wrong all along, and there is in fact a god, and if it further turned out that this kind of legalistic, cross-your-fingers-behind-your-back, Clintonian hair-splitting impressed him, then I think I would chose not to worship him anyway.) . . .

And making the move from Agnosticism to Atheism takes, I think, much more commitment to intellectual effort than most people are ready to put in. (italics in original)

I think Adams is exactly right. When I tell people that I am an atheist, they also tend to suggest that surely I must really mean that I am an agnostic. (See here for an earlier discussion of the distinction between the two terms.) After all, how can I be sure that there is no god? In that purely logical sense they are right, of course. You cannot prove a negative, so there is always the chance not only that a god exists but that, if you take radical clerics Pat Robertson and Jerry Falwell seriously, he has a petty, spiteful, vengeful, and cruel personality.

When I say that I am an atheist, I am not making that assertion based on logical or evidentiary proofs of non-existence. It is that I have been convinced that the case for no god is far stronger than the case for god. It is the same reasoning that makes me convinced that quantum mechanics is the theory to use for understanding sub-atomic phenomena, or that natural selection is the theory to be preferred for understanding the diversity of life. There is always the possibility that these theories are 'wrong' in some sense and will be superseded by other theories, but those theories will have to have convincing evidence in their favor.

If, on the other hand, I ask myself what evidence there is for the existence of a god, I come up empty. All I have are the assurances of clergy and assertions in certain books. I have no personal experience of it and there is no scientific evidence for it.

Of course, as long time readers of this blog are aware, I used to be quite religious for most of my life, even an ordained lay preacher of the Methodist Church. How could I have switched? It turns out that my experience is remarkably similar to that of Adams, who describes why he switched from Christianity to atheism.

As a teenager I was a committed Christian. It was in my background. I used to work for the school chapel in fact. Then one day when I was about eighteen I was walking down the street when I heard a street evangelist and, dutifully, stopped to listen. As I listened it began to be borne in on me that he was talking complete nonsense, and that I had better have a bit of a think about it.

I've put that a bit glibly. When I say I realized he was talking nonsense, what I mean is this. In the years I'd spent learning History, Physics, Latin, Math, I'd learnt (the hard way) something about standards of argument, standards of proof, standards of logic, etc. In fact we had just been learning how to spot the different types of logical fallacy, and it suddenly became apparent to me that these standards simply didn't seem to apply in religious matters. In religious education we were asked to listen respectfully to arguments which, if they had been put forward in support of a view of, say, why the Corn Laws came to be abolished when they were, would have been laughed at as silly and childish and – in terms of logic and proof – just plain wrong. Why was this?
. . .
I was already familiar with and (I’m afraid) accepting of, the view that you couldn’t apply the logic of physics to religion, that they were dealing with different types of ‘truth’. (I now think this is baloney, but to continue…) What astonished me, however, was the realization that the arguments in favor of religious ideas were so feeble and silly next to the robust arguments of something as interpretative and opinionated as history. In fact they were embarrassingly childish. They were never subject to the kind of outright challenge which was the normal stock in trade of any other area of intellectual endeavor whatsoever. Why not? Because they wouldn’t stand up to it.
. . .
Sometime around my early thirties I stumbled upon evolutionary biology, particularly in the form of Richard Dawkins’s books The Selfish Gene and then The Blind Watchmaker and suddenly (on, I think the second reading of The Selfish Gene) it all fell into place. It was a concept of such stunning simplicity, but it gave rise, naturally, to all of the infinite and baffling complexity of life. The awe it inspired in me made the awe that people talk about in respect of religious experience seem, frankly, silly beside it. I’d take the awe of understanding over the awe of ignorance any day.

What Adams is describing is the conversion experience that I described earlier, when suddenly switching your perspective seems to make everything fall into place and make sense.

For me, like Adams, I realized that I was applying completely different standards for religious beliefs than I was for every other aspect of my life. And I could not explain why I should do so. Once I jettisoned the need for that kind of distinction, atheism just naturally emerged as the preferred explanation. Belief in a god required much more explaining away of inconvenient facts than not believing in a god.

POST SCRIPT: The Noah’s Ark horror

One of the great triumphs of Judeo-Christian propaganda is getting their followers to overlook the fact that the Biblical Noah story, which many of them believe to be true, would be the worst act of genocide ever, and committed by god to boot. Hellbound Alleee tries to correct this.