Safe Zones

As you enter my office, directly across from the door is a bulletin board and on it is a little sticker. It has the words ‘SAFE ZONE’ in large purple letters over an inverted pink triangle background.

It was given to me by the Spectrum group at Case which, according to its website, seeks to “provide an environment where GLBTQQIA (gay, lesbian, bisexual, transgender, queer, questioning, intersexed, and allied) persons can socialize, learn, and grow.”

The sticker on my bulletin board is meant to be a signal that a student who fits into any of those categories can let me know without fearing any adverse or hostile reaction from me.

I have to say that I feel a little sad whenever my eye falls on that sticker. Have we come to this, that we have to publicly announce zones of safety for people for no other reason than their sexual orientation? Shouldn’t that be something that is taken for granted? The fact that it is not is a sign of how far we are from creating a tolerant society.

I have never quite been able to understand why some people get so upset by other people’s private lives. Yes, I can understand that, because of your own religious beliefs or culture or upbringing or whatever, there are certain things that you personally might not approve of. But you are always free not to do them. Why, though, should the private lives of other consenting adults, even total strangers, matter to you?

And yet, it seems that many people are concerned about just such things. To me, one of the more disturbing features of last November’s election was the adoption of so many anti-gay measures across the nation. In Ohio, Issue 1, which sought to prohibit gay couples from getting some of the benefits that married heterosexual couples take for granted, was adopted by 62% to 38%, an alarmingly large margin.

It seems pretty clear that there are at least two groups who currently run the risk of open discrimination – non-heterosexuals and Arabs/Muslims. It seems to be perfectly acceptable to say disparaging things against either of these two groups without being shamed or called to account.

When it comes to Arabs, for example, Third-Tier Pundit™ Hall of Famer Ann Coulter recently referred in her column to veteran journalist Helen Thomas as “that old Arab.” James Wolcott speculates as to the outrage that would ensue if that kind of language were applied to other groups. And the approval by Coulter’s fellow traveler on the Third-Tier Pundit™ circuit, Michelle Malkin, of the internment of ethnic Japanese during World War II, along with her advocacy of racial, religious, and nationality profiling now, is another example of this appalling tendency to single out specific groups for discriminatory treatment.

Back to the issue of ‘safe zones’, I am not naïve. I know that people who are not ‘straight’ run the risk of being discriminated against, or much worse, in the broader society and that they are justified in being cautious about who knows about them. But it is a little disheartening that even in a university there is this fear of intolerance. A university should be different, even though it is populated by the same kinds of people as elsewhere, because in the university there exists something that does not exist outside in any organized way and which should act as a uniting force that overcomes the friction and divergence that can be caused by differences.

This unifying force is the love of learning and a respect for academic values that universities are built upon. If we immerse ourselves in that shared love of learning, then we will find that people who are sometimes very different from us can be the very sources of our own intellectual, spiritual, and moral growth.

In a university you will find people who are different in many ways, not just in terms of their sexual orientation. It is such individual differences that make life so interesting and enjoyable, and these same qualities have been the fuel for some of the most creative people who ever lived. Our society, and our universities, should find room for all these people and not seek to strip them of their distinctiveness and make them conform to some idealized ‘norm.’

In other words, we need to make the whole university a safe zone for everyone.

What do creationist/ID advocates want-III?

It is time to tackle head-on the notion of what is meant by the “materialism” that the creationist/ID camp finds so distasteful. (See part I and part II for the background.)

The word materialism is used synonymously with “naturalism,” and perhaps the clearest formulation of what it means can be found in the writings of paleontologist George Gaylord Simpson, who said in Tempo and Mode in Evolution (p. 76):

The progress of knowledge rigidly requires that no non-physical postulate ever be admitted in connection with the study of physical phenomena. We do not know what is and what is not explicable in physical terms, and the researcher who is seeking explanations must seek physical explanations only. (Emphasis added)

Simpson was not an atheist (as far as I can tell) but he is saying something that all scientists take for granted, that when you seek a scientific explanation for something, you look for something that has natural causes, and you do not countenance the miraculous or the inscrutable. This process is properly called “methodological naturalism”, to be contrasted with “philosophical naturalism.”

Despite the polysyllabic terminology, the ideas are easy to understand. For example, if you hear a strange noise in the next room, you might wonder if it is a radiator or the wind or a mouse or an intruder and you investigate each possible cause, looking for evidence. For each question that you pose, the answer is sought in natural causes. You would be unlikely to say “The noise in the next room is caused by God knocking over stuff.” In general, people don’t invoke God to explain the everyday phenomena of our lives, even though they might be quite religious.

Methodological naturalism is just that same idea. Scientists look for natural explanations to the phenomena they encounter because that is the way science works. Such an approach allows you to systematically investigate open questions and not shut off avenues of research. Any scientist who said that an experimental result was due to God intervening in the lab would be looked at askance, because that scientist would be violating one of the fundamental rules of operation. There is no question in science that is closed to further investigation of deeper natural causes.

Non-scientists sometimes do not understand how hard and frustrating much of scientific research is. People work for years and even decades banging their heads against brick walls, trying to solve some tough problem. What keeps them going? What makes them persevere? It is the practice of methodological naturalism, the belief that a discoverable explanation must exist and that it is only their ingenuity and skill that is preventing them from finding the solution. Unsolved problems are seen as challenges to the skills of the individual scientist and the scientific community, not as manifestations of God’s workings.

This is what, for example, causes medical researchers to work for years to find causes (and thus possibly cures) for rare and obscure diseases. Part of the reason is the desire to be helpful, part of it is due to personal ambition and career advancement, but an important part is also the belief that a solution exists that lies within their grasp.

It is because of this willingness to persevere in the face of enormous difficulty that science has been able to make the breakthroughs it has. If, at the early signs of difficulty in solving a problem, scientists threw up their hands and said “Well, looks like God is behind this one. Let’s give up and move on to something else,” then the great discoveries of science that we associate with Newton, Darwin, Einstein, Planck, Heisenberg, etc. would never have occurred.

For example, the motion of the perigee of the moon was a well-known unsolved problem for over sixty years after the introduction of Newtonian physics. It constituted a serious problem that resisted solution for a longer time than the problems in evolution pointed to by creationist/ID advocates. Yet no supernatural explanation was invoked, eventually the problem was solved, and the result was seen as a triumph for Newtonian theory.

So when creationist/ID advocates advocate the abandonment of methodological naturalism, they are not trying to ease just Darwin out of the picture. They are throwing out the operational basis of the entire scientific enterprise.

Philosophical naturalism, as contrasted with methodological naturalism, is the belief that the natural world is all there is, that there is nothing more. Some scientists undoubtedly choose to be philosophical naturalists (and thus atheists) because they see no need to have God in their philosophical framework, but as I said in an earlier posting, others reject that option and stay religious. But this is purely a personal choice made by individual scientists and it has no impact on how they do science, which only involves using methodological naturalism. There is no requirement in science that one must be a philosophical naturalist.

The question of philosophical naturalism is, frankly, irrelevant to working scientists. Scientists don’t really care if their colleagues are religious or not. I have been around scientists all my life. But apart from my close friends, I have no idea what their religious beliefs are, and even then I have only a vague idea of what they actually believe. I know that some are religious and others are not. It just does not matter to us. Whether a scientist is a philosophical naturalist or not does not affect how his or her work is received by the community.

But what the creationist/ID advocates want, according to their stated goal that “If things are to improve, materialism needs to be defeated and God has to be accepted as the creator of nature and human beings,” is to enforce the requirement that scientists reject both philosophical and methodological naturalism. They are essentially forcing two things on everyone:

  • Requiring people to adopt the creationist/ID religious worldview as their own.
  • Requiring scientists to reject methodological naturalism as a rule of operation for science.

In other words, creationist/ID advocates are not asking us to reject only Darwin or to turn the clock back to the time just prior to Darwin; they want us to go all the way back to before Copernicus, and reject the very methods of science that have enabled it to be so successful. They want us to go back to a time of rampant and unchecked superstition.

This is probably not a good idea…

The strange story of David Horowitz and the “Bush-as-war-criminal” essay

I apologize for the length of this post but I felt a responsibility (especially since I had a role in creating this rolling snowball) to provide a fairly comprehensive update on the convoluted, strange, and suddenly fast-moving, saga of David Horowitz, the organization he founded called Students for Academic Freedom (SAF), and the college professor who allegedly asked his class to write a mid-term essay on “Why George Bush is a war criminal,” and then gave an F grade to a student who had been offended by the assignment and had instead turned in one on “Why Saddam Hussein is a war criminal.”
[Read more…]

What do creationist/ID advocates want-II?

We saw in an earlier posting that a key idea of the creationists is that it was the arrival of Darwin, Marx, and Freud that led to the undermining of Western civilization.

The basis for this extraordinary charge is the claim that it was these three that ushered in the age of materialism. These three people make convenient targets because, although they were all serious scientific and social scholars, they have all been successfully tarred as purveyors of ideas that have been portrayed as unpleasant or even evil (Darwin for saying that we share a common ancestor with apes, Marx with communism, Freud with sexuality).
[Read more…]

Universities as a reality-based community

In a previous posting I described the disturbing phenomenon that so many Americans seemed to be living in a reality-free world. I argued that this was because they were being systematically misled by people who should, and do, know better.

Further support for my somewhat cynical view comes from an article by former Wall Street Journal reporter Ron Suskind that appeared in the October 17, 2004 New York Times Magazine and that deserves to be better known because of the light it sheds on the extent to which the current administration is ideologically driven. His article has this chilling anecdote:

“In the summer of 2002, after I had written an article in Esquire that the White House didn’t like about Bush’s former communications director, Karen Hughes, I had a meeting with a senior adviser to Bush. He expressed the White House’s displeasure, and then he told me something that at the time I didn’t fully comprehend – but which I now believe gets to the very heart of the Bush presidency.

“The aide said that guys like me were ‘in what we call the reality-based community,’ which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality.’ I nodded and murmured something about enlightenment principles and empiricism. He cut me off. ‘That’s not the way the world really works anymore,’ he continued. ‘We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality – judiciously, as you will – we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors . . . and you, all of you, will be left to just study what we do.'”

What you have on display here is a world-view that is so arrogant that it believes that it has the power to create its own realities.

It is not unusual in the heyday of empires for their leaders to have the feeling that they alone can direct the course of events, that they can overcome the realities they face, and that nothing can stop them from achieving their goals, whatever they may be.

What is perhaps extreme in this case is that this arrogance seems to be causing the leaders to ignore the actual realities and to think that they can create their own version of it. In other words, they believe that what they want to believe actually exists. Now, in some ways, it is always possible to do this. Reality is a complex business, composed of many disparate elements, and it is always possible to pick out those elements that support one’s fantasy, ignore the rest, and act accordingly.

But what is happening here is deeper and more disturbing. What this administration is doing is trying to make reality irrelevant by creating an alternate “reality.” They do this by quickly and repeatedly and strongly saying the things that they wish the public to believe are true and depending on the media or the Democratic Party to not call them on the lack of support for the assertions. As a result, after a short time, the administration’s assertions enter the public consciousness, become the new “reality”, and thus become the basis for vacuous ‘policy debates’ that have nothing to do with the actual situation.

We saw this happen in the run-up to the war with Iraq and we are seeing it again with the recent killing in Lebanon of Rafik Hariri. Using a combination of innuendo and bombast, the administration has managed to make people think that Syria is the culprit even though, until today, no evidence in support of this claim has been presented and there even exists some counterevidence. On the other hand, Robert Fisk reports today that the UN investigation team is due to make a report that will allege that there may have been a cover-up of the investigation by Lebanese and Syrian authorities, so that the situation is still murky.

What most reality-based people realize is that while forcing your own version of reality on events can win you short-term political victories, it is a prescription for long-term disaster because eventually the contradiction between the ‘virtual reality’ and reality becomes too stark for your actions to remain viable. The “judicious study of discernible reality,” sneered at by the senior Bush adviser, is the way to arrive at reasoned judgments that have a chance of producing policies that make sense.

In many ways, universities have to be reality-based. The work of universities rests on empirical bases, on data, on evidence. This does not mean that they restrict themselves to describing just what is. Speculative ideas are the life-blood of academia because that is how new knowledge is created. Making bold speculations and pushing the limits of theories is part of the job of universities.

But such efforts must always rest on an empirical basis because otherwise they cease to be credible. You can build on reality, but you can’t totally depart from it. Academics know that their credibility rests on their ability to balance speculation and theorizing with empirical data. For example, a physicist who proposes theories that do not have a basis in data would be ridiculed.

But no such constraints seem to restrain the current political leaders. At one time, the media might have played the role of injecting reality into the public discussion, by comparing official statements with the facts on the ground and providing historical context. But now that the press has largely abdicated that role (see the previous postings on The questions not asked part I and part II) in favor of either acting as a mouthpiece for the fantasies of political leaders or debating tactical points while not questioning the core fantasies, it is up to the universities to fill that void.

This is why efforts like Ohio’s Senate Bill 24 that seek to restrict what university instructors can and cannot say are so dangerous. They seek to bring universities also under political control, to suffocate one of the few remaining viable reality-based institutions. While opponents decry universities as being too “liberal,” what really makes universities “dangerous” is that they are fundamentally reality-based institutions that cannot be easily co-opted into accepting fantasies as reality.

It seems ironic that universities, long derided as ivory towers occupied by pointy-headed intellectuals out of touch with the “real world”, may in fact need to be the force that brings reality back into public life.

POST SCRIPT 1

On Thursday, May 17, in the Guilford Parlor from 11:30am to 1:00pm, there will be a forum on Ohio’s Senate Bill 24 (the so-called academic bill of rights). I will be on the panel along with Professor Mel Durschlag (Law), Professor Jonathan Sadowsky (History), and Professor Joe White (Political Science).

POST SCRIPT 2

Update on a previous posting:

I received a call yesterday (March 14) from a person associated with Students for Academic Freedom informing me that my op-ed had triggered the release of more information on their website, where more details are given.

Although the student referred to had not in fact given this testimony at the Colorado Senate hearings as had been alleged earlier, the level of detail (which had not been released until now) provided on the SAF website is sufficient to remove this story from the category of urban legends since it does give some names and places and dates. But a judgment on whether this constitutes academic bullying will have to await more details on what actually transpired between professor and student. My contact at SAF says that the incident is still under investigation and confidentiality prevents the release of more information.

Update on the update (3/15/05): It gets curiouser and curiouser.

The blog Canadian Cynic reports that new information on this case has come out and that Horowitz is now backtracking on almost all of the key charges that were originally made. Canadian Cynic highlights Horowitz’s statements now that “Some Of Our Facts Were Wrong; Our Point Was Right” and “I consider this an important matter and will get to the bottom of it even if it should mean withdrawing the claim.”

See the article on the website Inside Higher Education. It seems to be the most authoritative source of information on this case.

What do ID advocates want?

In an earlier posting, I spoke about how those who view Darwin’s ideas as evil see them as the source of the alleged decline in morality. But on the surface, so-called “intelligent design” (or ID) seems to accept much of evolutionary theory, reserving the actions of a “designer” for just a very few (five, actually) instances of alleged “irreducible complexity” that occur at the microbiological level.

This hardly seems like a major attack on Darwin since, on the surface, it seems to leave unchallenged almost all of the major ideas of the Darwinian structure such as the non-constancy of species (the basic theory of evolution), the descent of all organisms from common ancestors (branching evolution), the gradualness of evolution (no discontinuities), the multiplication of species, and natural selection.
[Read more…]

The questions not asked II – UN resolutions

It’s time to play another game of The questions not asked. This is where we examine the reporting of some news event and try to identify the obvious questions that should have been posed by the media, or the context that should have been provided to better understand the event, but wasn’t.

Today’s example is taken from a speech given by George W. Bush on March 8, 2005 and reported in the Houston Chronicle.

“The time has come for Syria to fully implement Security Council Resolution 1559,” Bush told a largely military audience at the National Defense University. “All Syrian military forces and intelligence personnel must withdraw before the Lebanese elections for those elections to be free and fair.”

Bush, in a speech touting progress toward democracy in the broader Middle East, did not say what might follow failure to comply.

At the White House, spokesman Scott McClellan also left the question open. “If they don’t follow through on their international obligations, then, obviously, you have to look at what the next steps are,” McClellan said.

So what questions were not posed? What context was not provided?

One immediate answer is to compare the situations in Lebanon and Iraq. How can Bush say that the Lebanese elections cannot be free and fair because of the presence of 14,000 Syrian troops there, when ten times that many US troops were present in Iraq during the elections there in January, yet those elections were praised?

But that question was not asked, the context not provided.

But there is another obvious angle to this particular case that was also overlooked, and that is the way in which UN resolutions are used selectively to justify US policy decisions.

UN resolutions routinely call, among other things, for the withdrawal of foreign troops from other countries. And given that the UN is, for want of anything better, the closest thing we have to providing a global consensus, such resolutions should be taken seriously.

But this is not the first time that UN resolutions calling for the withdrawal of occupying troops have been defied. For example, Stephen Zunes, professor of politics and chair of the Peace & Justice Studies Program at the University of San Francisco, writes in his article US Double Standards, in the October 22, 2002 issue of The Nation magazine, that more than ninety UN resolutions are currently being violated, with the vast majority of the violations committed by countries closely allied with the US. He says:

For example, in 1975, after Morocco’s invasion of Western Sahara and Indonesia’s invasion of East Timor, the Security Council passed a series of resolutions demanding immediate withdrawal. However, then-US ambassador to the UN Daniel Patrick Moynihan bragged that “the Department of State desired that the United Nations prove utterly ineffective in whatever measures it undertook. The task was given to me, and I carried it forward with no inconsiderable success.” East Timor finally won its freedom in 1999. Moroccan forces still occupy Western Sahara. Meanwhile, Turkey remains in violation of Security Council Resolution 353 and more than a score of resolutions calling for its withdrawal from northern Cyprus, which Turkey, a NATO ally, invaded in 1974.

The most extensive violator of Security Council resolutions is Israel. Israel’s refusal to respond positively to the formal acceptance this past March by the Arab League of the land-for-peace formula put forward in Security Council Resolutions 242 and 338 arguably puts Israel in violation of these resolutions, long seen as the basis for Middle East peace. More clearly, Israel has defied Resolutions 267, 271 and 298, which demand that it rescind its annexation of greater East Jerusalem, as well as dozens of other resolutions insisting that Israel cease its violations of the Fourth Geneva Convention, such as deportations, demolition of homes, collective punishment and seizure of private property. Unlike some of the hypocritical and meanspirited resolutions passed by the UN General Assembly, like the now-rescinded 1975 resolution equating Zionism with racism, these Security Council resolutions are well grounded in international law and were passed with US support or abstention. Security Council Resolutions 446, 452 and 465 require that Israel evacuate all its illegal settlements on occupied Arab lands.

All the UN resolutions pointed to by Zunes are very serious and are much older than Resolution 1559, the one being used against Syria, so these violations are long-standing. All this information is in the public record. Any reasonably competent journalist should know it and, when the administration (and this is done by both Republican and Democratic administrations) cynically invokes UN resolutions selectively to achieve narrow political ends, should be able to pose the relevant question of why only some UN resolutions have to be followed while others are ignored.

But the mainstream journalists don’t do this. One question is why. But the more important question is, since they don’t do their job, what can we do to make up for it?

Evolutionary theory and falsificationism

In response to a previous posting, commenter Sarah Taylor made several important points. She clearly articulated the view that evolutionary theory is a complex edifice that is built on many observations that fit into a general pattern that is largely chronologically consistent.

She also notes that one distinguishing feature of science is that there are no questions that it shirks from, that there are no beliefs that it is not willing to put to the test. She says that “What makes scientific theories different from other human proposals about the nature of the universe are their courage. They proclaim their vulnerabilities as their strengths, inviting attack.”

I would mostly agree with this. Science does not shy away from probing its weaknesses, although I would not go so far as to claim that the vulnerabilities are seen as strengths. What is true is that the ‘weaknesses’ of theories are not ignored or covered up but are seen as opportunities for further research. Since there is no such thing in science as infallible knowledge, there is no inherent desire to preserve any theory at all costs, and the history of science is full of once dominant theories that are no longer considered credible.

But having said all that, it is not necessarily true that finding just one contradiction with a theory is sufficient to overthrow the theory. In the context of the challenge to Darwinian theory by intelligent design (ID) advocates, Sarah’s statement that “All that any ID devotee has to do is to show ONE fossil ‘out of place’, to prove the theory doesn’t work. Just one horse shoulder blade in a Cambrian deposit somewhere in the world, and we can say goodbye to Darwin” is a little too strong.

Sarah’s view seems to be derived from the model of falsificationism developed by the philosopher of science Karl Popper (see his book Conjectures and Refutations: The Growth of Scientific Knowledge, 1963), who was trying to explain how science progresses. After showing that trying to prove theories to be true was not possible, Popper argued that what scientists should instead do is try to prove theories false by finding a single counter-instance to the theory’s predictions. If that happens, the theory is falsified and has to be rejected and replaced by a better one. Hence the only status of a scientific theory is either ‘false’ or ‘not yet shown to be false.’

But historians of science have shown that this model, although appealing to our sense of scientific bravado, does not describe how science actually works. Scientists are loath to throw away perfectly productive theories on the basis of a few anomalies. If they did so, then no non-trivial theory would survive. For example, the motion of the perigee of the moon’s orbit disagreed with Newton’s theory for nearly sixty years. Similarly the stability of the planetary orbits was an unsolved problem for nearly 200 years.

Good theories are hard to come by and we cannot afford to throw them away at the first signs of a problem. This is why scientists are quite agreeable to treating such seeming counter-instances as research problems to be worked on, rather than as falsifying events. As Barry Barnes says in his T.S. Kuhn and Social Science (1982): “In agreeing upon a paradigm scientists do not accept a finished product: rather they agree to accept a basis for future work, and to treat as illusory or eliminable all its apparent inadequacies and defects.”

Dethroning a useful theory requires an accumulation of evidence and problems, and the simultaneous existence of a viable alternative. It is like a box spring mattress. One broken spring is not sufficient to make the mattress useless, since the other springs can make up for it and retain the mattress’s functionality. It takes several broken springs to make the mattress a candidate for replacement. And you only throw out the old mattress if you have a better one to replace it with, because having no mattress at all is even worse. The more powerful and venerable the theory, the more breakdowns that must occur to make scientists skeptical of its value and open to having another theory replace it.

After a theory is dethroned due to a confluence of many events, later historians might point to a single event as starting the decline or providing the tipping point that convinced scientists to abandon the theory. But this is something that happens long after the fact, and is largely a rewriting of history.

So I do not think that finding one fossil out of place will dethrone Darwin. And ID does not meet the necessary criteria for being a viable alternative anyway, since it appeals to an unavoidable inscrutability as a factor in its explanatory structure, and that is an immediate disqualification for any scientific theory.

The purpose of college

Why go to college?

For some, college is just a stage in the educational ladder after high school and before entering the working world or going to graduate school. In this view, college is primarily the place where you obtain an important credential that is the pre-requisite for securing well-paying jobs. This is not an insignificant consideration.

Others might see college as the place where you both broaden and deepen your knowledge in a range of subjects and develop higher-order skills such as critical thinking and writing and researching skills.

All these things are undoubtedly valuable and worth pursuing. But for me, I think the primary purpose of college is that it is the place where you start to lay the foundations for a personal philosophy of life.

What I mean by this is that at least in college we need to start asking ourselves the question: “Why do I get up in the morning?” For some, the answer might be “Why not? What other option is there?” For others it might just be a habit that is unquestioned. For yet others, it might be that they have particular ambitions in life that they want to achieve. For yet others, it might be because other people depend on them to do various things.

But while all these considerations undoubtedly play a part for all of us, the question that I am addressing goes somewhat beyond that and asks what we think of as our role in the universe. What is it that gives our lives meaning? What should be the basis of our relationships with our family and friends and society? What is our obligation to all those to whom we are bound by a common humanity? What should be our relationship with nature and the environment?

All of us think about these things from time to time. But I suspect that these various areas of our lives remain somewhat separate. By ‘developing a personal philosophy of life’, I mean the attempt to pull together all these threads and weave a coherent tapestry where each part supports and strengthens the others.

I think that the university is a wonderful place to start doing this because it has a unique combination of circumstances that can, at least in principle, enable this difficult task to be pursued. It has libraries, it has scholars, it has courses of study that enable one to explore areas of knowledge in depth. It provides easy access to the wisdom of the past and to adventures towards the future. But most importantly, it has people (students and staff and faculty) of diverse backgrounds, ages, ethnicities, nationalities, genders, etc.

But I wonder if we fully take advantage of this opportunity or whether the day-to-day concerns of courses, homework, research, teaching, studying prevent us from periodically stepping back and trying to see the big picture. In fact, it looks like the search for broader goals for college education is declining alarmingly. In 1969, 71% of students said they felt it essential that college help them in “formulating the values and goals of my life.” 76% also said that “learning to get along with people” was an essential goal of their college experience.

But by 1993, those percentages had dropped to 50% and 47% respectively, falling from among the top-ranked items to the bottom, displaced by an emphasis on training and skills and knowledge in specialized fields. (Source: When Hope and Fear Collide by Arthur S. Levine and Jeannette S. Cureton, 1998, table 6.1, page 117.)

In my mind, this is an alarming trend and needs to be reversed.

One thing that events like the tsunami do, even for those not directly affected by them, is bring us up short and make us realize the fragility of life and the importance of making the most out of our time here. They remind us that there are big questions that we need to ask and try to answer, and we cannot keep avoiding them.

This kind of thoughtful introspection mostly occurs outside formal classes, in the private discussions that we have in informal settings, in dorms, lounges, parks, offices, and coffee shops. But how often does it happen? And how can we create a university atmosphere that is conducive to making people realize the importance of having such discussions?

The meaning that we attach to life will depend on a host of individualized factors, such as our personal histories, what we value most, and what we are willing to give up. And we may never actually create a fully formed personal philosophy of life. The philosophy we do develop will most likely keep changing with time as our life experiences change us.

But the attempt to find out what our inner core is so that we act in life in ways that are consistent with it is something that I think college is perfectly suited for. I only hope that most people take advantage of it.

A Theory of Justice

I have to confess that this blog has been guilty of false advertising. Of all the items listed on the masthead, the one thing I have not talked about is books, and it is time to make amends.

But first some background. Last week, I spent a thoroughly enjoyable evening having an informal conversation with about 20 students in the lobby of Alumni Hall (many thanks to Carolyn, Resident Assistant of Howe for organizing it). The conversation ranged over many topics and inevitably came around to politics. I had expressed my opposition to the attack on Iraq, and Laura (one of my former students) raised the perfectly legitimate question about what we should do about national leaders like Saddam Hussein. Should we just let them be? My response was to say that people and countries need to have some principles on which to act and apply them uniformly so that everyone (without exception) would be governed by the same principles. The justifications given by the Bush administration for the attack on Iraq did not meet those conditions.

But my response did not have a solid theoretical foundation and I am glad to report that a book that I have started reading seems to provide just that.

The book is A Theory of Justice by John Rawls, in which the author tries to outline what it would take to create a system that would meet the criteria of justice as fairness. The book was first published in 1971 but I was not aware until very recently of its existence. I now find that it is known by practically everyone and is considered a classic, but as I said elsewhere earlier, my own education was extraordinarily narrow, so it is not surprising that I was unaware of it until now.

Rawls says that everyone has an intuitive sense of justice and fairness and that the problem lies in how to translate that desire into a practical reality. Rawls’ book gets off to a great start in laying out the basis for how to create a just society.

“Men are to decide in advance how they are to regulate their claims against one another and what is to be the foundation charter of their society…Among the essential features of this situation is that no one knows his place in society, his class position or social status, nor does anyone know his fortune in the distribution of natural assets and abilities, his intelligence, strength, and the like…The principles of justice are chosen behind a veil of ignorance.” (my emphasis)

In other words, we have to decide what is fair before we know where we will fit into society. We have to create rules bearing in mind that we might be born to any of the possible situations that the ensuing structure might create. Right now what we have is ‘victor’s justice’, where the people who have all the power and privilege get to decide how society should be run, and their own role in it, and it should not surprise us that they see a just society as one that gives them a disproportionate share of the benefits.

Rawls argues that if people were to decide how to structure society based on this ‘veil of ignorance’ premise, they would choose two principles around which to organize things. “[T]he first requires equality in the assignment of basic rights and duties, while the second holds that social and economic inequalities, for example, inequalities of wealth and authority, are just only if they result in compensating benefits for everyone, and in particular for the least advantaged members of society. These principles rule out justifying institutions on the grounds that the hardships of some are offset by a greater good in the aggregate.”

Rawls’ argument has features similar to the method young children use when sharing something, say a pizza or a cookie. The problem is that the person who gets to choose first has an unfair advantage. This problem is overcome by deciding in advance that one person divides the object into two portions while the other person gets first pick, thus ensuring that both feel that the ensuing distribution is fair.

(Here is an interesting problem: How can you divide a pizza among three people so that everyone has the sense that the distribution was fair? Remember, this should be done without precision measurements. The point is to demonstrate the need to set up structures so that people will feel a sense of fairness, irrespective of their position in the selection order.)
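For readers curious how the two-person ‘divide and choose’ idea generalizes, fair-division theory offers one standard answer known as the ‘last diminisher’ procedure: one person cuts off a slice they consider worth exactly a third; anyone who thinks it is worth more may trim it; whoever cut last keeps it, and the rest repeat with the remainder. The sketch below simulates this for a pizza modeled as the interval [0, 1]; the uniform valuation functions are illustrative assumptions of mine, not something from the post.

```python
# A sketch of the "last diminisher" procedure for dividing a continuously
# divisible good (the pizza, modeled as the interval [0, 1]) among n people
# so that each believes their own share is worth at least 1/n.
# The valuation functions used below are hypothetical, for illustration only.

def find_cut(v, left, target, tol=1e-9):
    """Bisect for the point c such that v(left, c) == target."""
    lo, hi = left, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if v(left, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def last_diminisher(valuations):
    """valuations: list of functions v(a, b) giving each player's value of
    the slice [a, b], normalized so v(0, 1) == 1.
    Returns a list of (start, end) slices, one per player."""
    n = len(valuations)
    players = list(range(n))
    shares = [None] * n
    left = 0.0
    target = 1.0 / n
    while len(players) > 1:
        # The first remaining player proposes the smallest slice [left, cut]
        # worth exactly 1/n to them; each later player trims it down if they
        # believe it is worth more than 1/n. The last trimmer keeps it.
        holder = players[0]
        cut = find_cut(valuations[holder], left, target)
        for p in players[1:]:
            if valuations[p](left, cut) > target:
                cut = find_cut(valuations[p], left, target)
                holder = p
        shares[holder] = (left, cut)
        players.remove(holder)
        left = cut
    # The last player takes the remainder, which they value at >= 1/n,
    # since every removed slice was worth <= 1/n to them (or they
    # would have trimmed it).
    shares[players[0]] = (left, 1.0)
    return shares

# Three players who all happen to value the pizza uniformly (value = length):
uniform = lambda a, b: b - a
for i, (a, b) in enumerate(last_diminisher([uniform, uniform, uniform])):
    print(f"player {i}: slice [{a:.3f}, {b:.3f}]")
```

The key structural point matches Rawls’ intuition: no one knows in advance whether they will be the cutter, the trimmer, or the last chooser, so the rules themselves guarantee that each participant, judging by their own valuation, receives a share they consider fair.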

All this great stuff is just in the first chapter. Rawls will presumably flesh out the ideas in the subsequent chapters and I cannot wait to see how it comes out.

I will comment about the ideas in this book in later postings as I read more, because I think the ‘veil of ignorance’ gives a good framework for understanding how to make judgments about public policy.