Dover’s dominos-1: Why Intelligent Design Creationism will lose

The Scottish poet Robert Burns in his poem To a Mouse cautioned those who place too much faith in detailed plans for the future. He said:

The best laid schemes o’ Mice an’ Men, 

Gang aft agley.

When historians of the future write about the demise of Intelligent Design Creationism (IDC), they will likely point to the Dover, PA court decision as the moment when the carefully thought-out plans and strategy of the IDC movement ganged agley in a big way.

If you recall, US District Judge John E. Jones III ruled on December 20, 2005 (Kitzmiller v. Dover) that the then Dover school board had acted unconstitutionally in its attempts to undermine the credibility of evolutionary theory in its biology class and to promote IDC as a viable alternative. (See here for a previous posting giving the background to this topic.)

That case raised many fascinating issues, and the final ruling clarified and put in perspective many of the questions clouding the role of intelligent design, science, religion, schools, and the US constitution. This series of posts, which begins today, will analyze that decision and the ripples it has caused throughout the country. I had been meaning to analyze the decision and its broader implications in depth for some time but kept getting sidetracked by other issues.
[Read more…]

On writing-5: The three stages of writing

(See part 1, part 2, part 3, and part 4 in the series.)

I believe that part of the reason students end up plagiarizing, either inadvertently or otherwise, is that they underestimate the time it takes to write. This is because they think that writing only occurs when they are actually putting words on paper or typing on a keyboard.

But writing really involves three stages: pre-writing, writing, and post-writing.

Pre-writing probably takes the most time and often does not involve the physical act of writing at all. This is the time when the author is mulling things over in his mind, sorting ideas out, trying to find the main point he is trying to make, asking what kinds of evidence are necessary and what documents should be read for background, and seeking out those sources of information. It also involves (for some) sketching out an outline and making rough notes. It is during this period of slow digestion that you begin the important process of synthesizing the ideas you have gathered from many sources and making something of your own.
[Read more…]

Squeezing workers to the limit

So there you are in a fast food drive-through, waiting for the people in the car ahead of you to place their order. They do so and move on, and you slowly move up to the speaker. It takes about 10 seconds for this shifting of cars to take place. Haven’t you wondered what the person at the other end of the speaker is doing with that 10 seconds of downtime? Me, neither.

But the good folks at fast food corporate headquarters care. They worry that the employee may be goofing off, perhaps drinking some water, thinking about their children or friends, what to make for dinner later, perhaps even thinking about how they can climb out of this kind of dead-end job. Committed as the corporate suits are to maximizing employee productivity, they feel that those 10 seconds between cars could be put to better use than to allow idle thoughts. But how?

Enter the internet. What if you outsourced the order taking to someone at a central location, who then enters the order into a computer and sends it back via the internet to the store location where you are? The beauty of such a situation is that the person at the central location could be taking an order from another store somewhere else in the country in the 10 second interval that was previously wasted. Genius, no?

Sound bizarre? This is exactly what McDonald’s is experimenting with in California. The New York Times on April 11, 2006 reports on how the process works, and on one such call center worker, 17-year-old Julissa Vargas:

Ms. Vargas works not in a restaurant but in a busy call center in this town [Santa Maria], 150 miles from Los Angeles. She and as many as 35 others take orders remotely from 40 McDonald’s outlets around the country. The orders are then sent back to the restaurants by Internet, to be filled a few yards from where they were placed.

The people behind this setup expect it to save just a few seconds on each order. But that can add up to extra sales over the course of a busy day at the drive-through.

What is interesting about the way this story was reported is that it focused almost entirely on the technology that made such a thing possible, the possible benefits to customers (saving a few seconds on an order), and the extra profits to be made by the company. “Saving seconds to make millions,” as one call center executive put it.

There was no discussion of the possible long-term effects on the workers, or the fact that the seconds are taken from the workers’ lives while the millions are made by the corporation and its top executives and shareholders. This is typical of the way the media tend to underreport the perspective of the workers, especially low-paid ones.

Look at the working conditions under which the call center people work, all of which are reported as if they are nifty innovations in the business world, with no hint that there was anything negative about these practices:

Software tracks [Ms. Vargas’] productivity and speed, and every so often a red box pops up on her screen to test whether she is paying attention. She is expected to click on it within 1.75 seconds. In the break room, a computer screen lets employees know just how many minutes have elapsed since they left their workstations
. . .
The call-center system allows employees to be monitored and tracked much more closely than would be possible if they were in restaurants. Mr. King’s [the chief executive of the call center operation] computer screen gives him constant updates as to which workers are not meeting standards. “You’ve got to measure everything,” he said. “When fractions of seconds count, the environment needs to be controlled.”

This is the brave new world of worker exploitation. But in many ways it is not new. It is merely an updated version of what Charlie Chaplin satirized in his 1936 film Modern Times, where workers are given highly repetitious tasks and closely monitored so that they can be made to work faster and faster.

The call center workers are paid barely above minimum wage ($6.75 an hour) and do not get health benefits. But not to worry, there are perks! They do not have to wear uniforms, and “Ms. Vargas, who recently finished high school, wore jeans and a baggy white sweatshirt as she took orders last week.” And another plus, she says, is that after work “I don’t smell like hamburgers.”

Nowhere in the article was there any sense of whether it is a good thing to push workers to the limit like this, to squeeze every second out of their lives to increase corporate profit. Nowhere was there any sign that the journalist asked anyone whether it is ethical or even healthy for employees to be under such tight scrutiny, where literally every second of their work life is monitored. It is an example of how the media have internalized the notion that what is good for corporate interests must be good for everyone. Just because you work for a company, does that mean it owns every moment of your workday? Clearly, what these call centers want are people who are facsimiles of machines. They are not treating workers as human beings who have needs other than to earn money.

In many ways, all of us are complicit in the creation of this kind of awful working situation, by demanding low prices for goods and unreasonably quick service and not looking closely at how those prices are driven down and that speed achieved. How far are we willing to go in squeezing every bit of productivity from workers at the low end of the employment scale just so that the rest of us can save a few cents and a few seconds on a hamburger and also help push up corporate profits? As Voltaire said many years ago, “The comfort of the rich depends upon the abundance of the poor.”

The upbeat article did not totally ignore what the workers thought about all this, but even here things were just peachy. “Ms. Vargas seems unfazed by her job, even though it involves being subjected to constant electronic scrutiny.” Yes, a 17-year-old woman straight out of high school may not be worn out by this routine yet. In fact the novelty of the job may even be appealing. Working with computers may seem a step up from flipping hamburgers at the store. But I would like to hear what she says after a year of this kind of work.

This kind of story, with its cheery focus on the benefits accruing to everyone except the worker, and its callous disregard for what the long-term effects on the workers might be, infuriates me.

I have been fortunate to have always worked in jobs where I had a great deal of autonomy and where the luxury of just thinking, and even day-dreaming, is an important part of the work, because that is how ideas get generated, plans are formulated, and programs are envisaged. But even if people’s jobs do not require much creativity, that is not a reason to deny them their moments of free thought.

On writing-4: The role of originality

(See part 1, part 2, and part 3 in the series.)

So why do people sometimes end up plagiarizing? There are many reasons. Apart from the few who deliberately set out to do it because they are too lazy to do any actual writing of their own and lack any compunction about plagiarizing, I believe most end up doing it out of fear, because they think they are expected to say something that is interesting, original, and well written, usually (in the case of classroom assignments) about topics that they have little or no interest in.

This is a highly inflated and unrealistic expectation. I doubt that more than a few college or high school teachers really expect a high level of originality in response to classroom assignments, though that does not mean one should not try to achieve it.

A misplaced emphasis on originality creates unrealistic expectations that can cause insecure writers to plagiarize. I think that students who end up plagiarizing make the mistake of thinking that they must start by coming up with an original idea. Few people (let alone students who usually have very little writing experience) can reach such a high standard of originality. This is why they immediately hit a wall, lose a lot of time trying to get an idea, and in desperation end up plagiarizing by finding others who have said something interesting or relevant and “borrowing” their work. But since they want the reader to think that they have done the writing, they sometimes hide the borrowing by means of the ‘pointless paraphrase’ I wrote about previously.

Originality in ideas is often something that emerges from the writing rather than something that precedes it. A blindingly original idea may sometimes strike you, but this will be rare even for the most gifted and original writers. Instead, what you will usually find is a kind of incremental originality that emerges naturally out of the act of writing, where you are seemingly doing the mundane task of putting together a clear piece of writing using other people’s (cited) ideas. If you are writing about things that interest you, then you will be surprised to find that the very act of writing brings about something original, where you discover new relationships between old ideas.

As an instructor, what I am really looking for in student writing is something that just meets the single criterion of being well written. As for being interesting, all I want is to see that at least the writer is interested in the topic, and the evidence for that takes the form of the writer making the effort to try and convince the reader of the writer’s point of view. This seems like a modest goal but if followed can lead to pretty good writing.

In my experience, the most important thing is for writers to be interested enough in the topic that they want to say something about it, so the first condition for good writing is that the writer must care about the topic. The second condition is that the writer cares enough about it to want to make the reader care too. Once these two factors are in place, originality (to a greater or lesser degree) follows almost automatically from them.

It took me a long time to understand this. I had never written much in the earlier stages of my career (apart from scientific papers) because I was waiting for great new ideas to strike me, ideas that never came. But there came a time when I felt that a topic I cared a lot about (the nature of science) was one in which the point of view I held was not being articulated clearly enough by others. I began writing about it, not because I had an original idea, but because I felt a need to synthesize the ideas of many others into a simpler, more clearly articulated, position that I felt was missing from the discussion. In the process of creating that synthesis, some papers and my first book Quest for Truth: Scientific Progress and Religious Beliefs emerged. What turned out to be original (at least slightly) in them was the application of the ideas of certain classical philosophers and historians of science to the contemporary science-religion debate, something that I had not had in mind when I started writing. That feature emerged from the writing.

My second book The Achievement Gap in US Education: Canaries in the Mine followed that same pattern. I was very concerned about what I felt were great misunderstandings about the causes of the achievement gap between black and white students in the US and how to deal with it. I felt that my experience and interests in science and education and politics and learning theory put me in a good position to bring ideas from these areas together. I did not have anything really original in mind when I started writing but whatever is original in the book emerged from the act of writing, the attempt to create a synthesis.

The same applies to these blog entries. I write about the things I care about, trying to make my point clear, without seeking to be original. After all, who can come up with original ideas five times per week? But very often I find that I have written things that I had not thought about prior to the writing.

To be continued. . .

POST SCRIPT: Is there no end to the deception?

One of the amazing things about the current administration is how brazen it is about misleading the public. The latest is that President Bush rushed to declare that “We have found [Iraq’s] weapons of mass destruction” in the form of mobile biological weapons laboratories, even while some intelligence investigators were finding that there was nothing to that charge.

The defense being offered by the administration’s spokespersons that these negative findings had not reached the president makes no sense. Before making a serious charge, it is the President and his staff’s responsibility to check what information is being gathered and processed. To shoot off his mouth when there was no urgency to do so is to be irresponsible at best and deceitful at worst.

Kevin Drum of Washington Monthly is maintaining a list of the more egregious examples of things the administration knew were not true or for which there were serious doubts, but went ahead and declared them as ‘facts’ anyway, to justify decisions that they had already made about attacking Iraq.

He is up to #8 and there is no reason to think that the list will not keep growing.

Atheism and Agnosticism

In an interview, Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy, who called himself a “radical atheist,” explains why he uses that term (thanks to onegoodmove):

I think I use the term radical rather loosely, just for emphasis. If you describe yourself as “Atheist,” some people will say, “Don’t you mean ‘Agnostic’?” I have to reply that I really do mean Atheist. I really do not believe that there is a god – in fact I am convinced that there is not a god (a subtle difference). I see not a shred of evidence to suggest that there is one. It’s easier to say that I am a radical Atheist, just to signal that I really mean it, have thought about it a great deal, and that it’s an opinion I hold seriously…

People will then often say “But surely it’s better to remain an Agnostic just in case?” This, to me, suggests such a level of silliness and muddle that I usually edge out of the conversation rather than get sucked into it. (If it turns out that I’ve been wrong all along, and there is in fact a god, and if it further turned out that this kind of legalistic, cross-your-fingers-behind-your-back, Clintonian hair-splitting impressed him, then I think I would choose not to worship him anyway.) . . .

And making the move from Agnosticism to Atheism takes, I think, much more commitment to intellectual effort than most people are ready to put in. (italics in original)

I think Adams is exactly right. When I tell people that I am an atheist, they also tend to suggest that surely I must really mean that I am an agnostic. (See here for an earlier discussion of the distinction between the two terms.) After all, how can I be sure that there is no god? In that purely logical sense they are right, of course. You cannot prove a negative, so there is always the chance not only that a god exists but that, if you take radical clerics Pat Robertson and Jerry Falwell seriously, he has a petty, spiteful, vengeful, and cruel personality.

When I say that I am an atheist, I am not making that assertion based on logical or evidentiary proofs of non-existence. It is that I have been convinced that the case for no god is far stronger than the case for god. It is the same reasoning that makes me convinced that quantum mechanics is the theory to use for understanding sub-atomic phenomena, or that natural selection is the theory to be preferred for understanding the diversity of life. There is always the possibility that these theories are ‘wrong’ in some sense and will be superseded by other theories, but those theories will have to have convincing evidence in their favor.

If, on the other hand, I ask myself what evidence there is for the existence of a god, I come up empty. All I have are the assurances of clergy and assertions in certain books. I have no personal experience of it and there is no scientific evidence for it.

Of course, as longtime readers of this blog are aware, I was quite religious for most of my life, even an ordained lay preacher of the Methodist Church. How could I have switched? It turns out that my experience is remarkably similar to that of Adams, who describes why he switched from Christianity to atheism.

As a teenager I was a committed Christian. It was in my background. I used to work for the school chapel in fact. Then one day when I was about eighteen I was walking down the street when I heard a street evangelist and, dutifully, stopped to listen. As I listened it began to be borne in on me that he was talking complete nonsense, and that I had better have a bit of a think about it.

I’ve put that a bit glibly. When I say I realized he was talking nonsense, what I mean is this. In the years I’d spent learning History, Physics, Latin, Math, I’d learnt (the hard way) something about standards of argument, standards of proof, standards of logic, etc. In fact we had just been learning how to spot the different types of logical fallacy, and it suddenly became apparent to me that these standards simply didn’t seem to apply in religious matters. In religious education we were asked to listen respectfully to arguments which, if they had been put forward in support of a view of, say, why the Corn Laws came to be abolished when they were, would have been laughed at as silly and childish and – in terms of logic and proof – just plain wrong. Why was this?
. . .
I was already familiar with and (I’m afraid) accepting of, the view that you couldn’t apply the logic of physics to religion, that they were dealing with different types of ‘truth’. (I now think this is baloney, but to continue…) What astonished me, however, was the realization that the arguments in favor of religious ideas were so feeble and silly next to the robust arguments of something as interpretative and opinionated as history. In fact they were embarrassingly childish. They were never subject to the kind of outright challenge which was the normal stock in trade of any other area of intellectual endeavor whatsoever. Why not? Because they wouldn’t stand up to it.
. . .
Sometime around my early thirties I stumbled upon evolutionary biology, particularly in the form of Richard Dawkins’s books The Selfish Gene and then The Blind Watchmaker and suddenly (on, I think the second reading of The Selfish Gene) it all fell into place. It was a concept of such stunning simplicity, but it gave rise, naturally, to all of the infinite and baffling complexity of life. The awe it inspired in me made the awe that people talk about in respect of religious experience seem, frankly, silly beside it. I’d take the awe of understanding over the awe of ignorance any day.

What Adams is describing is the conversion experience I described earlier, when suddenly switching your perspective seems to make everything fall into place and make sense.

Like Adams, I realized that I was applying completely different standards to religious beliefs than I was to every other aspect of my life. And I could not explain why I should do so. Once I jettisoned the need for that kind of distinction, atheism just naturally emerged as the preferred explanation. Belief in a god required much more explaining away of inconvenient facts than not believing in a god.

POST SCRIPT: The Gospel According to Judas

There was a time in my life when I would have been all a-twitter over the discovery of a new manuscript that sheds a dramatically different light on the standard Gospel story of Jesus and Judas. I would have wondered how it affected my view of Jesus and god and my faith.

Now this kind of news strikes me as an interesting curiosity, but one that does not affect my life or thinking at all. Strange.

On writing-3: Why do people plagiarize?

(See part 1 and part 2 in the series.)

Just last week, it was reported that twenty-one Ohio University engineering graduates had plagiarized their master’s theses. Why would they do that?

I think it is rare that people deliberately set out to use other people’s words and ideas while hiding the source. Timothy Noah, in his Chatterbox column in Slate, has a good article where he points to Harvard’s guidelines for students, which state that unintentional plagiarism is a frequent culprit:

Most often . . . the plagiarist has started out with good intentions but hasn’t left enough time to do the reading and thinking that the assignment requires, has become desperate, and just wants the whole thing done with. At this point, in one common scenario, the student gets careless while taking notes on a source or incorporating notes into a draft, so the source’s words and ideas blur into those of the student.

But lack of intent is not a valid defense against the charge of plagiarism. That has not prevented even eminent scholars like Doris Kearns Goodwin from trying to invoke it. But as Noah writes, the American Historical Association’s (AHA) and the Organization of American Historians’ (OAH) statement on plagiarism is quite clear on this point:
[Read more…]

The politics of V for Vendetta (no spoilers)

I believe that V for Vendetta will go down in film history as a classic of political cinema. Just as Dr. Strangelove captured the zeitgeist of the cold war, this film does it for the perpetual war on terrorism.

The claim that this film is so significant may sound a little strange, considering that the film’s premise is based on a comic book series written two decades ago and set in a futuristic Britain. Let me explain why I think that this is something well worth seeing.
[Read more…]

Don’t miss V for Vendetta!

I don’t normally post on the weekends but last night I saw the film V for Vendetta and it blew me away. It is a brilliant political thriller with disturbing parallels to what is currently going on in the US. It kept me completely absorbed.

I’ll write more about it next week but this is just to urge people to see it before it ends its run.

Tiktaalik bridges the gap

As you can imagine, the world of science has been abuzz with yesterday’s news of the release of a paper in the prestigious science journal Nature heralding the discovery of a 375-million-year-old transitional fossil between fish and land animals, which has been named Tiktaalik. The fossil seems to provide evidence for the key evolutionary idea that land animals evolved from fish.
[Read more…]

Precision in language

Some time ago, a commenter to this blog sent me a private email expressing this view:

Have you ever noticed people say “Do you believe in evolution?” just as you would ask “Do you believe in God?” as if both schools of thought have equal footing? I respect others’ religious beliefs as I realize I cannot disprove God just as anyone cannot prove His existence, but given the amount of evidence for evolution, shouldn’t we insist on asking “Do you accept evolution?”

It may just be semantics, but I feel that the latter wording carries an implied affirmation just as “Do you accept that 2+2=4?” carries a different meaning than “Do you believe 2+2=4?”

I guess the point I’m trying to make is that by stating something as a belief, it opens the debate to the possibility that something is untrue. While this may be fine for discussions of religion, shouldn’t the scientific community be more insistent that a theory well supported by physical evidence, such as evolution, is not up for debate?

It’s a good point. To be fair, scientists themselves are partly responsible for this confusion because we also say that we “believe” in this or that scientific theory, and one cannot blame the general public for picking up on that terminology. What is important to realize, though, is that the word ‘believe’ is being used by scientists in a different sense from the way it is used in religion.

The late and deeply lamented Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy, who called himself a “radical atheist,” puts it nicely (thanks to onegoodmove):

First of all I do not believe-that-there-is-not-a-god. I don’t see what belief has got to do with it. I believe or don’t believe my four-year old daughter when she tells me that she didn’t make that mess on the floor. I believe in justice and fair play (though I don’t know exactly how we achieve them, other than by continually trying against all possible odds of success). I also believe that England should enter the European Monetary Union. I am not remotely enough of an economist to argue the issue vigorously with someone who is, but what little I do know, reinforced with a hefty dollop of gut feeling, strongly suggests to me that it’s the right course. I could very easily turn out to be wrong, and I know that. These seem to me to be legitimate uses for the word believe. As a carapace for the protection of irrational notions from legitimate questions, however, I think that the word has a lot of mischief to answer for. So, I do not believe-that-there-is-no-god. I am, however, convinced that there is no god, which is a totally different stance. . .

There is such a thing as the burden of proof, and in the case of god, as in the case of the composition of the moon, this has shifted radically. God used to be the best explanation we’d got, and we’ve now got vastly better ones. God is no longer an explanation of anything, but has instead become something that would itself need an insurmountable amount of explaining…

Well, in history, even though the understanding of events, of cause and effect, is a matter of interpretation, and even though interpretation is in many ways a matter of opinion, nevertheless those opinions and interpretations are honed to within an inch of their lives in the withering crossfire of argument and counterargument, and those that are still standing are then subjected to a whole new round of challenges of fact and logic from the next generation of historians – and so on. All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others.

When someone says that they believe in god, they mean that they believe something in the absence of, or even counter to, the evidence, and even to reason and logic. When scientists say they believe a particular theory, they mean that they believe that theory because of the evidence and reason and logic, and the more evidence there is, and the better the reasoning behind it, the more strongly they believe it. Scientists use the word ‘belief’ the way Adams says, as a kind of synonym for ‘convinced,’ because we know that no scientific theory can be proven with 100% certainty and so we have to accept things even in the face of this remaining doubt. But the word ‘believe’ definitely does not carry the same meaning in the two contexts.

This can lead to the kind of confusion the commenter warns about, but what can we do about it? One option, as was suggested, is to use different words, with scientists avoiding use of the word ‘believe.’ I would have agreed with this some years ago but I am becoming increasingly doubtful that we can control the way that words are used.

For example, there was a time when I used to be on a crusade against the erroneous use of the word ‘unique’. The Oxford English Dictionary is pretty clear about what this word means:

  • Of which there is only one; one and no other; single, sole, solitary.
  • That is or forms the only one of its kind; having no like or equal; standing alone in comparison with others, freq. by reason of superior excellence; unequalled, unparalleled, unrivalled.
  • Formed or consisting of one or a single thing
  • A thing of which there is only one example, copy, or specimen; esp., in early use, a coin or medal of this class.
  • A thing, fact, or circumstance which by reason of exceptional or special qualities stands alone and is without equal or parallel in its kind.

It means, in short, one of a kind, so something is either unique or it is not. There are no in-betweens. And yet, you often find people saying things like “quite unique” or “very unique” or “almost unique.” I used to try and correct this but have given up. Clearly, people in general think that unique means something like “rare” and I don’t know that we can ever change this even if we all become annoying pedants, correcting people all the time, avoided at parties because of our pursuit of linguistic purity.

Some battles, such as the one over the word unique, are, I believe, lost for good, and I expect the OED to add the new meaning of ‘rare’ some time in the near future. It is a pity because we would then be left with no word with the unique meaning of ‘unique’, but there we are. We would have to say something like ‘absolutely unique’ to convey the meaning once reserved for just ‘unique.’

In science too we often use words with precise operational meanings while the same words are used in everyday language with much looser meanings. For example, in physics the word ‘velocity’ is defined operationally by the situation when you have an object moving along a ruler and, at two points along its motion, you take ruler readings and clock readings, where the clocks are located at the points where the ruler readings are taken, and have been previously synchronized. Then the velocity of the moving object is the number you get when you take the difference between the two ruler readings and divide by the difference between the two clock readings.
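To put that operational definition into symbols (using x₁ and x₂ for the two ruler readings and t₁ and t₂ for the corresponding synchronized clock readings – labels I am introducing purely for illustration), the velocity is simply

v = (x₂ − x₁) / (t₂ − t₁)

So, for example, an object that passes the 20-meter mark when the clock there reads 3 seconds and the 50-meter mark when the clock there reads 5 seconds has a velocity of (50 − 20)/(5 − 3) = 15 meters per second along the ruler.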

Most people (especially sports commentators) have no idea of this precise meaning when they use the word velocity in everyday language, and often use the word synonymously with speed or, even worse, acceleration, although those concepts have different operational meanings. Even students who have taken physics courses find it hard to use the word in its strict operational sense.

Take, for another example, the word ‘theory’. By now, as a result of the intelligent design creationism (IDC) controversy, everyone should be aware that the way this word is used by scientists is quite different from its everyday use. In science, a theory is a powerful explanatory construct. Science depends crucially on its theories because they are the things that give it its predictive power. “There is nothing so practical as a good theory,” as Kurt Lewin famously said. But in everyday language, the word theory is used to mean ‘not factual,’ something that can be false or ignored.

I don’t think that we can solve this problem by putting constraints on how words can be used. English is a wonderful language precisely because it grows and evolves and trying to fix the meanings of words too rigidly would perhaps be stultifying. I now think that we need to change our tactics.

I think that once words enter mainstream consciousness we will not be successful in trying to restrict their meanings beyond their generally accepted usage. What we can do is to make people aware that all words have varying meanings depending on the context, and that scientific and other academic contexts tend to require very precise meanings in order to minimize ambiguity.

Heidi Cool has a nice entry where she talks about the importance of being aware of when you are using specialized vocabulary, and the need to know your audience when speaking or writing, so that some of the pitfalls arising from the imprecise use of words can be avoided.

We have to realize though that despite our best efforts, we can never be sure that the meaning that we intend to convey by our words is the same as the meaning constructed in the minds of the reader or listener. Words always contain an inherent ambiguity that allows the ideas expressed by them to be interpreted differently.

I used to be surprised when people read the stuff I wrote and got a different meaning than I had intended. No longer. I now realize that there is always some residual ambiguity in words that cannot be overcome. While we can and should strive for maximum precision, we can never be totally unambiguous.

I agree with philosopher Karl Popper when he said, “It is impossible to speak in such a way that you cannot be misunderstood.” The best we can hope for is to have some sort of negotiated consensus on the meanings of ideas.