What makes us good at learning some things and not others?

(I will be traveling for a few weeks and rather than put this blog on hiatus, thought that I would continue with my weekday posting schedule by reposting some of the very early items, for those who might have missed them the first time around.)

One of the questions that students ask me is why it is that they find some subjects easy and others hard to learn. Students often tell me that they “are good” at one subject (say writing) and “are not good” at another (say physics), with the clear implication that they feel that there is something intrinsic and immutable about them that determines what they are good at. It is as if they see their learning abilities as being mapped onto a multi-dimensional grid in which each axis represents a subject, with their own abilities lying along a continuous scale ranging from ‘awful’ at one extreme to ‘excellent’ at the other. Is this how it is?

This is a really tough question and I don’t think there is a definitive answer at this time. Those interested in this topic should register for the free public lecture by Steven Pinker on March 14.

Why are some people drawn to some areas of study and not to others? Why do they find some things difficult and others easy? Is it due to the kind of teaching that one receives or parental influence or some innate quality like genes?

The easiest answer is to blame it on genes or at least on the hard-wiring of the brain. In other words, we are born the way we are, with gifts in some areas and deficiencies in others. It seems almost impossible to open the newspapers these days without reading that scientists have found the genes that 'cause' this or that human characteristic, so it is excusable to jump to genes as the cause of most inexplicable things.

But that is too simple. After all, although the brain comes at birth with some hard-wired structures, it is also quite plastic and the direction in which it grows is also strongly influenced by the experiences it encounters. But it seems that most of the rapid growth and development occurs fairly early in life and so early childhood and adolescent experiences are important in determining future directions.

But what kinds of experiences are the crucial ones for determining future academic success? Now things get more murky and it is hard to say which ones are dominant. We cannot even say that the same factors play the same role for everyone. So for one person, a single teacher’s influence could be pivotal. For another, it could be the parent’s influence. The influences could also be positive or negative.

So there is no simple answer. But I think that although this is an interesting question, the answer has little practical significance for a particular individual at this stage of their lives in college. You are now what you are. The best strategy is to not dwell on why you are not something else, but to identify your strengths and use them to your advantage.

It is only when you get really deep into a subject (any subject) and start to explore its foundations and learn about its underlying knowledge structure that you start to develop higher-level cognitive skills that will last you all your life. But this only happens if you like the subject, because only then will you willingly expend the intellectual effort to study it in depth. With things that we do not care much about, we tend to skim along the surface, doing just the bare minimum to get by. This is why it is important to identify what you really like to do and go for it.

You should also identify your weaknesses and dislikes and contain them. By "contain" I mean that there is really no reason why, at this stage, you should force yourself to try to like (say) mathematics or physics or Latin or Shakespeare or whatever and try to excel in them, if you do not absolutely need to. What's the point? What are you trying to prove, and to whom? If there were a really good reason that you needed to know something about those areas, now or later in life, the higher-level learning skills you develop by charging ahead in the things you like now could be used to learn it later.

I don’t think that people have an innate “limit”, in the sense that there is some insurmountable barrier that prevents them from achieving more in any area. I am perfectly confident that some day if you needed or wanted to know something in those areas, you would be able to learn it. The plateau or barrier that students think they have reached is largely determined by their inner sense of “what’s the point?”

I think that by the time they reach college, most students have reached the “need to know” stage in life, where they need a good reason to learn something. In earlier K-12 grades, they were in the “just in case” stage where they did not know where they would be going and needed to prepare themselves for any eventuality.

This has important implications for teaching practice. As teachers, we should make it our goal to teach in such a way that students see the deep beauty that lies in our discipline, so that they will like it for its own sake and thus be willing to make the effort. It is not enough to tell them that it is “useful” or “good for them.”

In my own life, I now happily learn about things that I would never have conceived that I would be interested in when I was younger. The time and circumstances have to be right for learning to have its fullest effect. As Edgar says in King Lear: “Ripeness is all.”

(The quote from Shakespeare is a good example of what I mean. If you had told me when I was an undergraduate that I would some day be familiar enough with Shakespeare to quote him comfortably, I would have said you were crazy because I hated his plays at that time. But much later in life, I discovered the pleasures of reading his works.)

So to combine the words from the song by Bobby McFerrin, and the prison camp commander in the film The Bridge on the River Kwai, my own advice is “Don’t worry. Be happy in your work.”

Sources:

John D. Bransford, Ann L. Brown, and Rodney R. Cocking, eds., How People Learn, National Academy Press, Washington, D.C., 1999.

James E. Zull, The Art of Changing the Brain, Stylus Publishing, Sterling, VA, 2002.

Why is evolutionary theory so upsetting to some?

(I will be traveling for a few weeks and rather than put this blog on hiatus, thought that I would continue with my weekday posting schedule by reposting some of the very early items, for those who might have missed them the first time around.)

One of the questions that sometimes occur to observers of the intelligent design (ID) controversy is why there is such hostility to evolutionary theory in particular. After all, if you are a Biblical literalist, you are pretty much guaranteed to find that the theories of any scientific discipline (physics, chemistry, geology, astronomy, in addition to biology) contradict many of the things taught in the Bible.

So what is it about evolution in particular that gets some people’s goat?

Can we ever be certain about scientific theories?

(I will be traveling for a few weeks and rather than put this blog on hiatus, thought that I would continue with my weekday posting schedule by reposting some of the very early items, for those who might have missed them the first time around.)

A commenter on a previous posting offered an interesting perspective that calls for a fresh posting, because it reflects a commonly held view about how the validity of scientific theories gets established.

The commenter says:

A scientist cannot be certain about a theory until that theory has truly been tested, and thus far, I am unaware of our having observed the evolution of one species from another species. Perhaps, in time, we will observe this, at which point the theory will have been verified. But until then, Evolution is merely a theory and a model.

While we may have the opportunity to test Evolution as time passes, it is very highly doubtful that we will ever be able to test any of the various theories for the origins of the Universe.

I would like to address just two points: What does it mean to “test” a theory? And can scientists ever “verify” a theory and “be certain” about it?

Verificationism as a way of validating scientific theories has been tried and found wanting. The problem is that any non-trivial theory generates an infinite number of predictions, and they cannot all be exhaustively verified. Only a sample of the possible predictions can be tested, and there is no universal yardstick that can be used to decide when a theory has been verified. It is a matter of consensus judgment on the part of scientists as to when a theory becomes an accepted one, and this is done on a case-by-case basis by the practitioners in that field or sub-field.

This means, however, that people who are opposed to a theory can always point to at least one particular result that has not been directly observed and claim that the theory has not been ‘verified’ or ‘proven.’ This is the strategy adopted by ID supporters to attack evolutionary theory. But using this kind of reasoning will result in every single theory in science being denied scientific status.

Theories do get tested. Testing a theory has been a cornerstone of science practice ever since Galileo, but it means different things depending on whether you are talking about an experimental science like chemistry or condensed matter physics, or a historical science like cosmology, evolution, geology, or astronomy.

Any scientific theory is always more than an explanation of prior events. It also must necessarily predict new observations and it is these predictions that are used to test theories. In the case of experimental sciences, laboratory experiments can be performed under controlled conditions in order to generate new data that can be compared with predictions or used to infer new theories.

In the case of historical sciences, however, observations are used to unearth data that are pre-existing but as yet unknown. Hence the ‘predictions’ may be more appropriately called ‘retrodictions’, in that they predict that you will find things that already exist. For example, in cosmology the retrodictions were the existence of a cosmic microwave background radiation of a certain temperature, the relative abundances of light nuclei, and so forth. The discovery of the planet Neptune was considered a successful ‘prediction’ of Newtonian theory, although Neptune had presumably always been there.

The testing of a historical science is analogous to the investigation of a crime, where the detective says things like "If the criminal went through the woods, then we should be able to see footprints." This kind of evidence is also historical but is just as powerful as predictions of future events, so historical sciences are not necessarily at a lower level of credibility than experimental sciences.

Theories in cosmology, astronomy, geology, and evolution are all tested in this way. As Ernst Mayr (who died a few days ago at the age of 100) said in What Evolution Is (2001): “Evolution as a whole, and the explanation of particular evolutionary events, must be inferred from observations. Such inferences must be tested again and again against new observations, and the original inference is either falsified or considerably strengthened when confirmed by all of these tests. However, most inferences made by evolutionists have by now been tested successfully so often that they are accepted as certainties.” (emphasis added).

In saying that most inferences are ‘accepted as certainties’, Mayr is exaggerating a little. Ever since the turn of the 20th century, it has been accepted that scientific knowledge is fallible and that absolute certainty cannot be achieved. But scientists do achieve a remarkable consensus on deciding at any given time what theoretical frameworks they have confidence in and will be used to guide future research. Such frameworks have been given the name ‘paradigms’ by Thomas Kuhn in The Structure of Scientific Revolutions (1970).

When scientists say they ‘believe’ in evolution (or the Big Bang), the word is being used in quite a different way from that used in religion. It is used as shorthand to say that they have confidence that the underlying mechanism of the theory has been well tested by seeing where its predictions lead. It is definitely not “merely a theory and a model” if by the word ‘merely’ the commenter implies a theory that is unsupported or untested.

So yes, evolution, like all the other major scientific paradigms, both historical and experimental, has been well tested.

Wanted: ‘Godwin’s Law’-type rule for science

(I will be traveling for the next few weeks and rather than put this blog on hiatus, thought that I would continue with my weekday posting schedule by reposting some of the very early items, for those who might have missed them the first time around.)

Mike Godwin coined a law (now known as Godwin’s Law) that states: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.”

This makes sense. As the discussion drags on, people start running out of fresh or relevant arguments, begin repeating themselves, lose their tempers, reach for something new to say, and Hitler/Nazi comparisons inevitably follow.
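The statistical content of the law is easy to make precise with a toy model. Suppose, purely for illustration, that each new comment independently carries some small fixed probability p of containing a Hitler/Nazi comparison. Then the probability that at least one such comparison has appeared after n comments is 1 - (1 - p)^n, which creeps toward one as the discussion grows, no matter how small p is. Here is a minimal sketch (the function name and the value of p are my own illustrative choices, not anything Godwin specified):

```python
# Toy model of Godwin's Law (all numbers purely illustrative).
# Assume each comment independently has probability p of containing
# a Hitler/Nazi comparison.

def prob_comparison_by(n_comments: int, p: float = 0.01) -> float:
    """Probability that at least one comparison appears within n comments."""
    return 1 - (1 - p) ** n_comments

for n in (10, 100, 500, 1000):
    print(f"after {n:4d} comments: {prob_comparison_by(n):.3f}")
# after   10 comments: 0.096
# after  100 comments: 0.634
# after  500 comments: 0.993
# after 1000 comments: 1.000 (to three decimal places)
```

Of course, real discussions are not independent coin flips; if, as suggested above, the per-comment probability rises as tempers fray, the approach to certainty is only faster.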

But Godwin’s rule has been extended beyond its original intent and is now used as a decision rule to indicate that a discussion has ceased to be meaningful and should be terminated. In other words, as soon as the Hitler/Nazi comparison is brought into any discussion where it is not relevant, Godwin’s rule can be invoked to say that the discussion is over and the person who introduced the Hitler/Nazi motif has lost the argument.

Evolution III: Scientific knowledge is an interconnected web

(I will be traveling for the next few weeks and rather than put this blog on hiatus, thought that I would continue with my weekday posting schedule by reposting some of the very early items, for those who might have missed them the first time around.)

In an <a href="http://blog.case.edu/mxs24/2005/02/09/evolution_ii_science_is_not_a_smorgasbord">earlier posting</a>, the question was posed as to whether it was intellectually consistent to reject the findings of an entire modern scientific discipline (like biology) or of a major theoretical structure (like the theory of evolution) while accepting all the other theories of science.

The short answer is no. Why this is so can be seen by examining closely the most minimal of creationist theories, the one that goes under the label of ‘intelligent design’ or ID.

ID supporters take great pains to claim that theirs is a scientific theory that has nothing to do with religion or God, and hence belongs in the school science curriculum. (The question of whether ID can be considered a part of science or of religion will be revisited in a later posting. This is becoming a longer series than I anticipated…)

ID advocates say that there are five specific biochemical systems and processes (bacterial flagella and cilia, blood clotting, protein transport within a cell, the immune system, and metabolic pathways) whose existence and/or workings cannot be explained by evolutionary theory, and hence that one has to postulate that such phenomena are evidence of design and of the existence of a designer.

The substance of their arguments is: “You can claim all the other results for evolutionary theory. What would be the harm in allowing these five small systems to have an alternative explanation?”

Leaving aside the many other arguments that can be raised against this position (including those from biologists that these five systems are hardly intractable problems for evolutionary theory), I want to focus on just one feature of the argument. Is it possible to accept that just these five processes were created by a ‘designer,’ while retaining a belief in all the other theories of science?

No, you cannot. If some undetectable agent had intervened to create the cilia (say), then in that single act at a microscopic level, fundamental laws of physics would have been violated, such as the law of conservation of energy, the law of conservation of momentum, and (possibly) the law of conservation of angular momentum. These laws are the bedrock of science, and to abandon them is to abandon some of the most fundamental elements of modern science.

So rejecting a seemingly small element of evolutionary theory triggers a catastrophe in a seemingly far-removed area of science, a kind of chaotic ‘butterfly effect’ for scientific theories.

Scientific theories are so interconnected that some philosophers of science have taken this to the extreme (as philosophers are wont to do) and argued that one can only properly speak of a single big scientific theory that encompasses everything. It is this entire system (and not any single part of it) that should be compared with nature.

Pierre Duhem in his The Aim and Structure of Physical Theory (1906) articulated this position when he declared that: “The only experimental check on a physical theory which is not illogical consists in comparing the entire system of the physical theory with the whole group of experimental laws, and in judging whether the latter is represented by the former in a satisfactory manner.” (emphasis in original)

Of course, in practical terms, we don’t do that. Each scientific subfield proceeds along its own path. And we know that there have been revolutions in one area of science that have left other areas seemingly undisturbed. But this interconnectedness is a reality and explains why scientific theories are so resistant to change. Scientists realize that changing one portion requires, at the very least, making some accommodations in theories that are connected to it, and it is this process of adjustments that takes time and effort and prevents trivial events from triggering changes.

This is why it usually requires a major crisis in an existing theory for scientists to even consider replacing it with a new one. The five cases raised by ID advocates do not come close to creating that kind of crisis. They are like flies in the path of a lumbering evolutionary theory elephant, minor irritants that can be ignored or swatted away easily.

Iran’s president poses some tough questions for Bush

During the run-up to the invasions of Afghanistan and Iraq, the leaders of those countries tried to open a dialogue with the Bush administration but were summarily rebuffed, since Bush and his neoconservative clique were determined to go to war from the get-go, and all their posturing about preferring diplomacy has been revealed to be just that – posturing. The media was complicit in this dismissal of possibilities for peaceful resolution, hardly ever reporting the full extent of the overtures that those governments made to the US.

Burden of proof-3: The role of negative evidence

In my previous post, I suggested that in science the burden of proof lies with the proponent of the existence of something. The default assumption is non-existence. So if you propose the existence of something like electromagnetic radiation or neutrinos or N-rays, then you have to provide some positive evidence for its existence, of a kind that others can try to replicate.

But not all assertions, even in science, need meet that positive evidence standard. Sometimes negative evidence, what you don’t see, is important too. Negative evidence is best illustrated by the famous Sherlock Holmes story Silver Blaze, in which the following encounter occurs:

Gregory [Scotland Yard detective]: “Is there any other point to which you would wish to draw my attention?”
Holmes: “To the curious incident of the dog in the night-time.”
Gregory: “The dog did nothing in the night-time.”
Holmes: “That was the curious incident.”

There are times when the absence of evidence can be suggestive. This is true of universal laws. The substance of such laws (for example, that total energy is conserved) is that they hold in every single instance. But we cannot possibly examine every possibility. The reason we believe these types of laws hold is negative evidence, what we do not see. If someone postulates a universal law, the absence of evidence that contradicts it is taken as evidence in support of the law. There is a rule of thumb that scientists use: if something can happen, it will happen. So if we do not see something happening, that suggests that there is a law that prevents it. This is how laws such as baryon and lepton number conservation originated.

Making inferences from absence is different from proving a negative about the existence of something, be it N-rays or god. You can never prove that an entity doesn’t exist. So at least at the beginning, it is incumbent on the person who argues for the existence of something to provide at least some evidence in support of it. The case for the existence of entities (like neutrinos or X-rays or god) requires positive evidence. Once that has been done beyond some standard of reasonable doubt, then the burden can shift to those who argue for non-existence, to show why this evidence is not credible.

This rule about evidence was not followed in the run-up to the attack on Iraq. The Bush administration simply asserted that Iraq had weapons of mass destruction without providing credible evidence. They then (aided by a compliant media) managed to frame the debate so that the burden of proof shifted to those who did not believe the weapons existed. Even after the invasion, when the weapons did not turn up, Donald Rumsfeld famously said "There's another way to phrase that and that is that the absence of evidence is not the evidence of absence. It is basically saying the same thing in a different way. Simply because you do not have evidence that something does exist does not mean that you have evidence that it doesn't exist." But he was wrong. When you are asserting the existence of an entity and have not provided any evidence that it exists, the absence of evidence is evidence of absence.
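The point can be made quantitative with Bayes' theorem. If a hypothesis H makes some piece of evidence E likely, then failing to find E must lower the probability of H; how far it drops depends on how thoroughly one has looked. A minimal numerical sketch (the probabilities are purely illustrative, not estimates of the actual Iraq case):

```python
# Bayes' theorem: when evidence E is more likely under hypothesis H than
# under not-H, the *absence* of E lowers the probability of H.
# All numbers are purely illustrative.
prior_H = 0.5         # initial probability that H is true
p_E_given_H = 0.9     # if the weapons existed, a search should find traces
p_E_given_notH = 0.2  # some "traces" might turn up even if H is false

p_noE_given_H = 1 - p_E_given_H        # 0.1
p_noE_given_notH = 1 - p_E_given_notH  # 0.8

# Posterior probability of H after the evidence fails to turn up
posterior_H = (p_noE_given_H * prior_H) / (
    p_noE_given_H * prior_H + p_noE_given_notH * (1 - prior_H)
)
print(f"P(H) falls from {prior_H} to {posterior_H:.2f}")  # 0.5 -> 0.11
```

Absence of evidence is weak evidence of absence when the search has barely begun, and strong evidence once the search has been exhaustive; but it is never zero evidence, which is where Rumsfeld's aphorism goes wrong.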

It is analogous to criminal trials. People are presumed innocent until proven guilty, and the onus is on the prosecution to first provide some positive evidence. Once that is done, the accused usually has to counter it in some way to avoid the risk that the jury will find the evidence sufficiently plausible to find the accused guilty.

So the question boils down to whether believers in a god have provided prima facie evidence in support of their thesis, sufficient to shift the burden to those who do not believe in god to show why this evidence is not convincing. Personal testimony by itself is usually not sufficient in courts, unless it is corroborated by physical evidence or direct personal observation by other credible sources who have observed the same phenomenon.

One common form of evidence that is suggested is that since many, many people believe in the existence of god, that in itself should count as evidence. My feeling is that it is not sufficient. After all, there have been universal beliefs that have subsequently been shown to be wrong, such as the belief that the Earth was located at the center of the universe.

Has the evidence for god met the standard that we would accept in science or in a court of law? I personally just don’t see that it has but that is a judgment that each person must make. Of course, people can choose to not require that the evidence for god meet the same standard as for science or law, and if that is the case, then that pretty much ends the discussion. But at least we can all agree as to why we disagree.

Burden of proof-2: What constitutes evidence for god?

If a religious person is asked for evidence of god's existence, the evidence presented usually consists of religious texts, events that are inexplicable according to scientific laws (i.e., miracles), or personal testimonies of direct experience of god. Actually, this can be reduced to just two categories (miracles and personal testimonies), since religious texts can be considered either as miraculously created (in the case of the Koran, or for those who believe in Biblical inerrancy) or as the testimonies of the writers of the texts, who in turn recorded their own experiences or the testimonies of other people, or reported on miraculous events. If one wants to be a thoroughgoing reductionist, one might even reduce it to one category by arguing that reports of miracles are also essentially testimonies.

Just being a testimony does not mean that the evidence is invalid. ‘Anecdotal evidence’ often takes the form of testimony and can be the precursor to investigations that produce other kinds of evidence. Even in the hard sciences, personal testimony does play a role. After all, when a scientist discovers something and publishes a paper, that is kind of like a personal testimony since the very definition of a research publication is that it incorporates results nobody else has yet published. But in science those ‘testimonies’ are just the starting point for further investigation by others who try to recreate the conditions and see if the results are replicated. In some cases (neutrinos), they are and in others (N-rays) they are not. So in science, testimonies cease to be considered as such once independent researchers start reproducing results under fairly well controlled conditions.

But with religious testimonies, there is no such promise of replicability. I recently had a discussion with a woman who described to me her experiences of god, including something she experienced while on a hilltop in California. I have no reason to doubt her story, but even she would have thought I was strange if I had asked her exactly where the hilltop was and what she did there, so that I could try to replicate her experience. Religious testimonies are believed to be intensely personal, unique, and idiosyncratic, while in science, personal testimony is the precursor to shared, similar, consistently reproducible experiences, under similar conditions, by an ever-increasing number of people.

The other kind of experience (miracles) again typically consists of unique events that cannot be recreated at will. All attempts at finding either a consistent pattern of god’s intervention in the world (such as the recent prayer study) or unambiguous violations of natural laws have singularly failed. All we really have are the stories in religious texts purporting to report on miraculous events long ago or the personal testimonies of people asserting a miraculous event in their lives.

How one defines a miracle is also difficult. It has to be more than just a highly improbable event. Suppose someone is seriously ill with cancer and the physicians have given up hope. Suppose that person’s family and friends pray to god and the patient suffers a remarkable remission in the disease. Is that a miracle? Believers would say yes, but unbelievers would say not necessarily, asserting that the body has all kinds of mechanisms for fighting disease that we do not know of. So what would constitute an event that everyone would consider a miracle?

Again, it seems to me that it would have to have the quality of replicability to satisfy everyone. If, for a certain kind of terminal disease, a certain kind of prayer done under certain conditions invariably produced a cure where medicine could not, then that would constitute a good case for a miracle, because that would be hard to debunk, at least initially. As the philosopher David Hume said: "No testimony is sufficient to establish a miracle unless the testimony be of such a kind that its falsehood would be more miraculous than the fact which it endeavors to establish…" (Of Miracles)

But even this is problematical, especially for believers who usually do not believe in a god who acts so mechanically and can be summoned at will. Such predictable behavior is more symptomatic of the workings of as-yet-unknown natural laws than of god. The whole allure of belief in god is that god can act in unpredictable ways, to cause the dead to come back to life and the Earth to stop spinning.

So both kinds of evidence (miracles and testimonies) used to support belief in a god are inadequate for what science requires as evidentiary support.

The divide between atheists and religious believers ultimately comes down to whether an individual feels that all beliefs should meet the same standards that we accept for good science or whether we have one set of standards for science or law, and another for religious beliefs. There is nothing that compels anyone to choose either way.

I personally could not justify to myself why I should use different standards. Doing so seemed to me to indicate that I was deciding to believe in god first and then deciding on how to rationalize my belief later. Once I decided to use the yardstick of science uniformly across all areas of knowledge and see where that leads, I found myself agreeing with Laplace that I do not need the god hypothesis.

In a future posting, I will look at the situation where we can infer something from negative evidence, i.e., when something does not happen.

POST SCRIPT: Faith healing

The TV show House had an interesting episode that deals with some of the issues this blog has discussed recently, like faith healing (part 1 and part 2) and what to make of people who say god talks to them.

Here is an extended clip from that episode that pretty much gives away the entire plot, so don’t watch it if you are planning to see it in reruns. But it gets to grips with many of the issues that are discussed in this blog.

House is not very sympathetic to the claims of the 15-year-old faith healer that god talks to him. When his medical colleagues argue with House, saying that the boy is merely religious and does not have a psychosis, House replies "You talk to god, you're religious. God talks to you, you're psychotic."

Burden of proof

If a religious person asks me to prove that god does not exist, I freely concede that I cannot do so. The best that I can do is to invoke the Laplacian principle that I have no need of hypothesizing god's existence to explain things. But clearly most people feel that they do need to invoke god in order to understand their lives and experience. So how can we resolve this disagreement and make a judgment about the validity of the god hypothesis?

Following a recent posting on atheism and agnosticism, I had an interesting exchange with commenter Mike that made me think more about this issue. Mike (who believes in god) said that in his discussions with atheists, they often were unable to explain why they dismissed god’s existence. He says: “I find that when asked why the ‘god hypothesis’ as Laplace called it doesn’t work for them, they often don’t know how to respond.”

Conversely, Mike was perfectly able to explain why he (and other believers) believed in god’s existence:

The reason is that we have the positive proof we need, in the way we feel, the way we think, the way we act, things that can’t easily be presented as ‘proof’. In other words, the proof comes in a different form. It’s not in a model or an equation or a theory, yet we experience it every day.

So yes, we can ask that a religious belief provide some proof, but we must be open to the possibility that that proof is of a form we don’t expect. I wonder how often we overlook a ‘proof’ – of god, of love or a new particle – simply because it was not in a form we were looking for – or were willing to accept.

Mike makes the point (with which I agree) that it is possible that we do not have the means as yet to detect the existence of god. His argument can be supported by analogies from science. We believe we were all bathed in electromagnetic radiation from the beginning of the universe but we did not realize it until Maxwell’s theory of electromagnetism gave us a framework for understanding its existence and enabled us to design detectors to detect it.

The same thing happened with neutrinos. Vast numbers of them have been passing through us and the Earth, but we did not know about their existence until the middle of the 20th century, when a theory postulated their existence and detectors were designed that were sensitive enough to observe them.

So electromagnetic radiation and neutrinos existed all around us even during the long period of time when no one had any idea that they were there. Why cannot the same argument be applied to god? It can, actually. But does that mean that god exists? I think we would all agree that it does not, any more than my inability to prove that unicorns do not exist implies that they do. All that this argument does is leave open the possibility of a hitherto undetected existence.

But the point of departure between science and religion is that in the case of electromagnetic radiation and neutrinos, their existence was postulated simultaneously along with suggestions of how and where anyone could look for them. If, after strenuous efforts, they could still not be detected, then scientists would cease to believe in their existence. But eventually, evidence for their existence was forthcoming from many different sources in a reproducible manner.

What if no such evidence was forthcoming? This has happened in the past with other phenomena, such as in 1903 with something called N-rays, which were postulated and seemed to have some evidentiary support initially, but on closer examination were found to be spurious. This does not prevent people from still believing in the phenomenon, but the scientific community would proceed on the assumption that it does not exist.

In the world of science, the burden of proof is always on the person arguing for the existence of whatever is being proposed. If that evidence is not forthcoming, then people proceed on the assumption that the thing in question does not exist (the Laplacian principle). This parallels the legal situation. We know that in the legal context in America, the presumption is of innocence until proven guilty. This results in a much different kind of investigation and legal proceeding than if the presumption were guilt until proven innocent.

So on the question of god’s existence, it seems to me that it all comes down to the question of who has the burden of proof in such situations. Is the onus on the believer, to prove that god exists? Or on the atheist to argue that the evidence provided for god’s existence is not compelling? In other words, do we draw a parallel with the legal situation of ‘presumed innocent until proven guilty beyond a reasonable doubt’ and postulate a principle ‘non-existence until existence is proven beyond a reasonable doubt’? The latter would be consistent with scientific practice.

As long as we disagree on this fundamental question, there is little hope for resolution. But even if we agree that the burden of proof is the same for religion as for science, and that the person postulating the existence of god has to advance at least some proof in support, that still does not end the debate. The question then shifts to what kind of evidence we would consider valid and what constitutes 'reasonable doubt.'

In the next few postings, we will look at the kinds of evidence that might be provided and how we might evaluate them.

Driving etiquette

Now that the summer driving season is upon us, and I am going to be on the highway today, here are some musings on driving.

Driving means never being able to say you’re sorry

We need a non-verbal sign for drivers to say "I'm sorry." There have been times when I have inadvertently done something stupid or discourteous while driving, such as changing lanes without leaving enough room and thus cutting someone off, accidentally blowing the horn, or not stopping early enough at a stop sign or light and thus creating some doubt in the minds of other drivers as to whether I intended to stop. At such times, I have wanted to tell the other driver that I was sorry for unsettling them, but there is no universally recognized gesture for doing so.