The logic of science-2: Determining what is true

(For other posts in this series, see here.)

An important task in any area of knowledge is being able to identify what is true and what is false. The search for what is true, and the ability to know when we have discovered it, is, after all, the Holy Grail of epistemology, because we believe that things that are true are of lasting value, while false statements are ephemeral, usually a waste of time and at worst harmful and dangerous.

Aristotle tried to make a clear distinction between those things that we feel we know for certain and are thus unchanging, and those things that are subject to change. The two categories were variously distinguished as knowledge versus opinion, reality versus appearance, or truth versus error. Aristotle made the crucial identification of true knowledge with scientific knowledge, and this close association of scientific knowledge with truth has persisted through the ages. It also turned the ability to distinguish scientific knowledge from other forms of knowledge, now known as the demarcation problem, into an important question, since this presumably also demarcates truth from error. (This brief summary of the history is taken from the essay The Demise of the Demarcation Problem by Larry Laudan, which should be consulted for a fuller treatment.)

The logic of science-1: The basic ideas

(For other posts in this series, see here.)

In the course of writing these blog posts, especially those dealing with religion, atheism, science, and philosophy, I have often appealed to the way that principles of logic are used in science in making my points. But these are scattered over many posts and I thought that I should collect and archive the ideas into one set of posts (despite the risk of some repetition) for easy reference and clarity. Besides, I haven’t had a multi-part series of posts in a long time, so I am due.

Learning about the principles of logic in science is important because you need a common framework in order to adjudicate disagreements. A big step toward resolving arguments can be taken by either agreeing on a common framework or deciding that one cannot agree and that further discussion is pointless. Either outcome is more desirable than going around in circles endlessly, not realizing what the ultimate source of the disagreement is.

Early eyes

A new article published today in Nature finds fossil evidence that fairly sophisticated eyes had evolved as early as 515 million years ago, around the time known as the Cambrian explosion.

There were no fossil bodies found attached to the eyes, but the eyes probably belonged to a shrimp-like creature.

Myths about the Golden Ratio

Take a straight line. How should one divide it into two parts such that the ratio of the length of the whole line to the longer segment is equal to the ratio of the longer segment to the shorter one? A little algebra gives you the result that the longer segment should be 0.618 times the length of the whole line, and thus the ratio of the full line to the longer segment is 1.618 (=1/0.618).
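For completeness, here is that bit of algebra written out (in my own notation: take the whole line to have length 1 and call the longer segment x):

\[
\frac{1}{x} = \frac{x}{1-x}
\quad\Longrightarrow\quad
x^2 + x - 1 = 0
\quad\Longrightarrow\quad
x = \frac{\sqrt{5}-1}{2} \approx 0.618,
\qquad
\frac{1}{x} = \frac{1+\sqrt{5}}{2} \approx 1.618.
\]

The positive root of the quadratic gives the 0.618 figure, and its reciprocal is the Golden Ratio 1.618.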

The number 1.618 is known as the ‘Golden Ratio’ and folklore ascribes deep significance to it and claims a ubiquity for it that far exceeds the reality.

Mathematician Keith Devlin tries to set the record straight.

Who am I?

In yesterday’s post, I wrote about the fact that different parts of our bodies keep regenerating themselves periodically. This fact alone should make nonsense of the belief of some religious people that our bodies become physically reconstituted after death in the afterlife, because if so, the resurrected body of a person who died at the age of 70 would be unrecognizably grotesque, consisting of around 70 livers and 7 full skeletons, all surrounded by hundreds, maybe thousands, of pounds of skin.

But leaving that aside, this constant regeneration of the body raises an interesting question: how do we retain a sense of having a single identity over our full life spans even as individual parts of us get replaced periodically? The average age of the cells in my body is around 7 to 10 years, and yet I have the strong sense of continuity, that I am in some fundamental sense the same person that I was as a child, even though almost none of those cells have stayed with me over that time. How is it that we retain a strong sense of permanence in our identity while being so transient in our bodies?

The answer may lie in the fact that our brain seems to be the most permanent of our organs, undergoing little or no regeneration. In the same article in the New York Times that I referred to yesterday, Nicholas Wade says:

Dr. Frisen, a stem cell biologist at the Karolinska Institute in Stockholm, has also discovered a fact that explains why people behave their birth age, not the physical age of their cells: a few of the body’s cell types endure from birth to death without renewal, and this special minority includes some or all of the cells of the cerebral cortex.

The cerebral cortex is the thin sheet that forms the outer layer of the brain and is divided into several zones that have different functional roles. If the cortex were removed and smoothed out to eliminate all the creases and folds, it would look like a dinner napkin. It is gray in color, which is the origin of its popular nickname of ‘gray matter’. The network of nerve cells in the brain (called neurons) determines how the brain functions.


While the brain seems to be the most enduring part of the body, even here there is variation. The cerebellum seems to contain non-neuronal cells that are close to the birth age (within three years or so), while the cerebral cortex (which is responsible for our cognitive capabilities and is thus most closely identified with our sense of self) has a slightly greater turnover of non-neuronal cells. But the researchers found no evidence of neuron generation after birth, at least in the region known as the occipital cortex.

It was long believed that the number of neuronal connections in the brain grew rapidly during the first year or two of life and then got pruned, and that this was how our lives shaped our brains without new neurons being created. In 1999, research found that new neurons were being created in the cerebral cortex of adult monkeys, suggesting that it could happen in adult humans too. This would complicate the question of how we retain a permanent sense of self, but it would also provide hope that brains could regenerate. But this summary of later research (much of it by the same Karolinska group that I referred to yesterday) that appeared in the Proceedings of the National Academy of Sciences says that this does not happen with the neurons in the human cerebral cortex. (The neocortex referred to in the paper is the most recently evolved part of the cortex, associated with the ‘higher’ functions, whose neurons are “arranged in six layers, within which different regions permit vision, hearing, touch, the sense of balance, movement, emotional responses and every other feat of cognition.”)

The results show that the average age of the neurons (with respect to the age of the individual) is age 0.0 ± 0.4 years, i.e., the same as the age of the individual. In contrast, the nonneuronal cells have an average birth date of 4.9 ± 1.1 years after the birth of the individual.

Both of the experiments of Bhardwaj et al. indicate that there are no new neurons, either long-lived or transient, produced in the adult human for the neocortex. Importantly, these experiments are quantitative and indicate a theoretical maximum limit of 1% on the proportion of new neurons made over a 50-year period.

Bhardwaj et al. settle a hotly contested issue, unequivocally. The two-pronged experimental approach clearly establishes (i) that there is little or no continuous production of new neurons for long-term addition to the human neocortex and (ii) that there are few if any new neurons produced and existing transiently in the adult human neocortex. Importantly, the results are quantitatively presented, and a maximum limit to the amount of production of the new neurons can be established from the data presented. The data show that virtually all neurons (i.e., >99%) of the adult human neocortex are generated before the time of birth of the individual, exactly as suggested by Rakic, and the inescapable conclusion is that our neocortical neurons, the cell type that mediates much of our cognition, are produced prenatally and retained for our entire lifespan. [My italics]

So basically, even though every other part of us gets sloughed off and replaced at different points in time, for good or bad we are pretty much stuck with the brains that we have at birth. This may be crucial to our ability to retain a sense of a permanent identity that lasts all through our lives, although this is not yet established. Even if new research were to show that new neurons can be generated over time to replace older ones, the replacement might turn out to happen seamlessly enough to preserve cognitive continuity, just the way our other organs give us the illusion of being permanent even though they are not.

It seems like our brains are our essential selves with the rest of our bodies just superstructure. Rene Descartes famously said “I think, therefore I am.” We could also say, “My brain is who I am.”

How old are you?

In an article in the New York Times, Nicholas Wade points out that our bodies are younger than we think, because there is a discrepancy between our birth age and the age of the cells that make up our bodies:

Whatever your age, your body is many years younger. In fact, even if you’re middle aged, most of you may be just 10 years old or less.

This heartening truth, which arises from the fact that most of the body’s tissues are under constant renewal, has been underlined by a novel method of estimating the age of human cells. Its inventor, Jonas Frisen, believes the average age of all the cells in an adult’s body may turn out to be as young as 7 to 10 years.

He quotes the work of Spalding, Bhardwaj, Buchholz, Druid, and Frisén of the Karolinska Institute, who used the radioactive isotope carbon-14 to determine the age of the cells in our bodies. Their paper appeared in the July 15, 2005 issue of Cell. The carbon that forms organic matter is largely obtained from the atmosphere. Plants, for example, take in carbon dioxide from the air and release oxygen as part of the process of photosynthesis. Hence the proportion of carbon-14 found in living organic matter is the same as that in the ambient atmosphere at the time it was absorbed. The level of the radioactive isotope carbon-14 in the atmosphere is fairly constant because its rate of production is balanced by its rate of decay. Once a plant dies, it does not take in any new carbon, so the carbon-14 it contained at the moment of death steadily decays into an ever smaller proportion, and this deficit can be used to measure how long it has been dead. The half-life of carbon-14 is 5,730 years, so this method can be used to determine the age of dead organic matter up to about 50,000 years, a convenient range for archeological dating.
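The dating step itself is just exponential decay (a standard textbook formula, not anything specific to this paper): if N₀ is the amount of carbon-14 a sample had at death and N is the amount measured now, then

\[
N = N_0 \left(\tfrac{1}{2}\right)^{t/5730}
\quad\Longrightarrow\quad
t = 5730 \,\log_2\!\left(\frac{N_0}{N}\right) \ \text{years},
\]

so a sample retaining half of its original carbon-14 died about 5,730 years ago, and one retaining a quarter died about 11,460 years ago.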

The way that Frisén and his co-workers used this knowledge to measure the age of cells in humans is quite clever. Carbon-14 is produced by cosmic rays, and the level of carbon-14 in the atmosphere is normally roughly constant. This is why we can tell how long something has been dead but not when it was ‘born’, i.e., when the organic matter was created. But in the 1950s and 1960s, there was a sharp spike in carbon-14 levels because of the atmospheric testing of nuclear weapons. Once atmospheric test ban treaties came into being, the surge of carbon-14 that had been produced gradually became diffused in the atmosphere as it spread over the globe, and so there has been a steady decline in average carbon-14 levels over time. It is this decline that enables us to know when the carbon-14 was absorbed to create organic matter.
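To make that last step concrete, here is a minimal sketch, in Python, of how a measured carbon-14 level can be matched against the declining atmospheric record to estimate when a cell’s DNA was made. The numbers in the table are made-up placeholders for illustration only, not the real bomb-curve data, and the function is my own, not something taken from the paper.

# Bomb-pulse dating, schematically: after the test ban treaties the
# atmospheric carbon-14 level declines steadily, so a measured level in
# genomic DNA picks out (roughly) the year the DNA was synthesized.
# The values below are illustrative placeholders, NOT real measurements.
ATMOSPHERIC_C14 = {  # year -> relative carbon-14 level (illustrative)
    1965: 1.70, 1970: 1.55, 1975: 1.35, 1980: 1.25,
    1985: 1.18, 1990: 1.13, 1995: 1.10, 2000: 1.07,
}

def estimate_birth_year(measured_level):
    """Return the tabulated year whose level is closest to the measurement."""
    return min(ATMOSPHERIC_C14,
               key=lambda year: abs(ATMOSPHERIC_C14[year] - measured_level))

print(estimate_birth_year(1.20))  # -> 1985 with these placeholder numbers

The real analysis uses a continuous, measured atmospheric record rather than a coarse table, but the matching idea is the same.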


The amount of carbon-14 in genomic DNA can thus be used to measure when the DNA in a cell was created. The technique was checked against trees, whose rings record the amount of carbon-14 absorbed during photosynthesis in each year of growth. Their results, and those of others, show that different parts of the body get replaced after different durations, whose approximate values are given below. (I have included results from both the Wade newspaper article and the Frisen paper.)

Stomach lining: five days
Surface layer of skin: two weeks
Red blood cells: three months
Liver: one year
Skeleton: 10 years
Intestine: 11 years
Rib muscles: 15 years

This explains why our bodies seem so durable and able to withstand considerable abuse. [UPDATE: Later studies find that about 50% of our heart muscles are replaced over our lifetime, but the brain cells seem to be largely unchanged.]

So why do we die if parts of us keep getting regenerated? It seems as if the ability of stem cells to keep reproducing declines with age. In other words, there seems to be a limit to the number of times that cells can reproduce, and once we reach that limit, the ability of the body to regenerate itself ceases. What causes this limit is still an open question. As Wade writes:

Some experts believe the root cause is that the DNA accumulates mutations and its information is gradually degraded. Others blame the DNA of the mitochondria, which lack the repair mechanisms available for the chromosomes. A third theory is that the stem cells that are the source of new cells in each tissue eventually grow feeble with age.

Frisen thinks his research might be able to shed some light on this question, especially the third option, saying “The notion that stem cells themselves age and become less capable of generating progeny is gaining increasing support.”

Who should own the rights to one’s tissues?

People generally do not think about what happens to the blood and tissue samples they give as part of medical tests, assuming that they are eventually discarded in some way. Many are not aware that their samples may be retained for research or even commercial purposes. Once you give a sample away, you lose all rights to what is subsequently done with it, even if your body parts have some unique property that can be used to make drugs and other products that can be marketed commercially.

The most famous case of this is Henrietta Lacks, a poor black woman in Baltimore who died from cervical cancer in 1951. A researcher who, like others, had been trying unsuccessfully to get human cells to reproduce in the test tube received a sample of her cells too. It turned out that her cancer cells, unlike other cells, could reproduce endlessly in test tubes, providing a rich and inexhaustible source of cells for research and treatment. Her cells, called HeLa, have taken on a life of their own and have travelled the world long after she died. Her story is recounted in the book The Immortal Life of Henrietta Lacks by Rebecca Skloot.

The issue of whether one’s cells should be used without one’s permission and whether one should be able to retain the rights to one’s tissues is a tricky one for law and ethics.

“Science is not the highest value in society,” [Lori Andrews, director of the Institute for Science, Law, and Technology at the Illinois Institute of Technology] says, pointing instead to things like autonomy and personal freedom. “Think about it,” she says. “I decide who gets my money after I die. It wouldn’t harm me if I died and you gave all my money to someone else. But there is something psychologically beneficial to me as a living person to know I can give my money to whoever I want. No one can say, ‘She shouldn’t be allowed to do that with her money because that might not be most beneficial to society.’ But replace the word money in that sentence with tissue, and you’ve got precisely the logic many people use against giving donors control over their tissues.” (Skloot, p. 321)

It does seem wrong somehow for private companies to profit hugely from the lives and bodies of others without owing them anything. In the case of Henrietta Lacks, her family remained very poor, lacking health insurance and proper medical care, even as her cells became famous, and they bitterly resented this. They did not even learn about the widespread use of her cells until two decades later.

On the other hand, it would put a real crimp on research if scientists had to keep track of whose tissues they were working on. Since we all benefit (or should benefit) from the results of scientific research, one can make the case that the tissues we give up are like the trash we throw away, things for which we have voluntarily given away our rights. If the tissues are used for medical research done by public institutions like the NIH or universities, and the results are used not for profit but to benefit the general public, this would, I believe, remove many of the objections to the uncredited use of tissues.

You can see why scientists would prefer to have the free use of tissues, but what I don't understand is why some scientists go overboard in making special exceptions for religion.

David Korn, vice president for research at Harvard University, says: "I think people are morally obligated to allow their bits and pieces to be used to advance knowledge to help others. Since everybody benefits, everybody can accept the small risks of having their tissue scraps used in research." The only exception he would make is for people whose religious beliefs prohibit tissue donation. "If somebody says being buried without all their pieces will condemn them to wandering forever because they can’t get salvation, that’s legitimate, and people should respect it," Korn says. (Skloot, p. 321)

This is another case where religions try to claim special privileges denied to everyone else. Why is that particular claim legitimate? Why should religious superstitions get priority over other irrational beliefs? Our bodies are in a constant state of flux. They shed cells all the time in the normal course of our daily lives, which is why DNA testing has become such a valuable forensic tool for solving crimes. Since we are losing old cells and gaining new ones all the time, it is a safe bet that hardly any of the cells that were part of me as a child are still in my body. So the whole idea that the afterlife consists of ‘all of me’ is absurd, since that would require bringing together all the cells that I have shed during my life, resulting in me having multiple organs and limbs, like some horror-fiction monster.

Rather than pandering to this fantasy, we should educate people that our bodies are in a constant state of flux, that our seemingly permanent bodies are actually transient entities.

Atheism is a byproduct of science

Science is an atheistic enterprise. As the eminent population geneticist J. B. S. Haldane said:

My practice as a scientist is atheistic. That is to say, when I set up an experiment I assume that no god, angel or devil is going to interfere with its course; and this assumption has been justified by such success as I have achieved in my professional career. I should therefore be intellectually dishonest if I were not also atheistic in the affairs of the world.

While not every scientist would apply the highly successful atheistic methodology to every aspect of their lives as Haldane did, the fact that intellectual consistency requires it, coupled with the success of science, has persuaded most scientists that leaving god out of things is a good way to proceed. Hence it should not be surprising that increasing awareness of science correlates with increased levels of atheism.

But it would be wrong to conclude that scientists have atheism as a driving concern in their work or that they actively seek out theories that deny the existence of god. God is simply irrelevant to their work. The negative implications of scientific theories for god are a byproduct of scientific research rather than its principal aim. Non-scientists may be surprised that discussions about god are almost nonexistent at scientific meetings and even in ordinary interactions among scientists. We simply take it for granted that god plays no role whatsoever.

For example, the idea of the multiverse has torpedoed the arguments of religious people that the universe must have had a beginning or that its parameters seem to be fine-tuned for human life, which they claim are evidence for god. They suspect that the multiverse idea was created simply to eliminate god from two of the last three refuges in which he could be hiding. (The third refuge is the origin of a self-replicating molecule that was the precursor of life.) In his article titled Does the Universe Need God?, cosmologist Sean Carroll dismisses that idea.

The multiverse is not a theory; it is a prediction of a theory, namely the combination of inflationary cosmology and a landscape of vacuum states. Both of these ideas came about for other reasons, having nothing to do with the multiverse. If they are right, they predict the existence of a multiverse in a wide variety of circumstances. It’s our job to take the predictions of our theories seriously, not to discount them because we end up with an uncomfortably large number of universes.

Carroll ends with a nice summary of what science is about and why god really has no reason to be postulated into existence. This is similar to the points I made in my series on why atheism is winning.

Over the past five hundred years, the progress of science has worked to strip away God’s roles in the world. He isn’t needed to keep things moving, or to develop the complexity of living creatures, or to account for the existence of the universe. Perhaps the greatest triumph of the scientific revolution has been in the realm of methodology. Control groups, double-blind experiments, an insistence on precise and testable predictions – a suite of techniques constructed to guard against the very human tendency to see things that aren’t there. There is no control group for the universe, but in our attempts to explain it we should aim for a similar level of rigor. If and when cosmologists develop a successful scientific understanding of the origin of the universe, we will be left with a picture in which there is no place for God to act – if he does (e.g., through subtle influences on quantum-mechanical transitions or the progress of evolution), it is only in ways that are unnecessary and imperceptible. We can’t be sure that a fully naturalist understanding of cosmology is forthcoming, but at the same time there is no reason to doubt it. Two thousand years ago, it was perfectly reasonable to invoke God as an explanation for natural phenomena; now, we can do much better.

None of this amounts to a “proof” that God doesn’t exist, of course. Such a proof is not forthcoming; science isn’t in the business of proving things. Rather, science judges the merits of competing models in terms of their simplicity, clarity, comprehensiveness, and fit to the data. Unsuccessful theories are never disproven, as we can always concoct elaborate schemes to save the phenomena; they just fade away as better theories gain acceptance. Attempting to explain the natural world by appealing to God is, by scientific standards, not a very successful theory. The fact that we humans have been able to understand so much about how the natural world works, in our incredibly limited region of space over a remarkably short period of time, is a triumph of the human spirit, one in which we can all be justifiably proud.

Religious believers misuse this fundamental feature of scientific inquiry, namely that all conclusions are tentative and that what we take to be true is a collective judgment reached by comparing theories and determining which is best supported by the evidence. They use it to make the misleading case that unless we have proved one single theory to be true, other theories (especially the god theory) should merit serious consideration. This is wrong. While we may not be able to prove which theories are right and which are wrong, we do know how to judge which ones are good and which ones are bad.

God is a terrible theory. It fails utterly to deliver the goods, and so should be abandoned like all the other failed theories of the past. In the film Love and Death, Woody Allen’s character says, “If it turns out that there is a god, I don’t think that he’s evil. I think that the worst you can say about him is that basically he’s an underachiever.” He is right.

God is not the ‘simplest’ explanation for the universe

Believers in god (especially of the intelligent design variety) like to argue that a god is a ‘simpler’ explanation than any of the alternatives for many natural phenomena. But they seem to equate simple with naïve, in the sense that something counts as simple only if it can be understood by a child. For example, if a child asks you why the sun rises and sets every day, an explanation in terms of the laws of gravity, Newton’s laws of motion, and the Earth’s rotation about its own axis is not ‘simple’. A child would more likely understand an explanation in which there is a man whose job is to push the sun around in its daily orbit. This is ‘simpler’ because the concepts of ‘man’ and ‘push’ are familiar to a child, requiring no further explication. But this apparent simplicity is an illusion because it ignores enormously complicating factors such as how the man got up there, how strong he must be, why we don’t see him, and so on. It is because such issues are swept under the rug that this explanation appears to be simple.

In his article titled Does the Universe Need God?, cosmologist Sean Carroll points out that introducing a new ad hoc element like god into a theory actually makes things enormously complicated. The erroneous idea that simplicity is linked to the number of entities involved is based on a misconception of science.

All else being equal, a simpler scientific theory is preferred over a more complicated one. But how do we judge simplicity? It certainly doesn’t mean “the sets involved in the mathematical description of the theory contain the smallest possible number of elements.” In the Newtonian clockwork universe, every cubic centimeter contains an infinite number of points, and space contains an infinite number of cubic centimeters, all of which persist for an infinite number of separate moments each second, over an infinite number of seconds. Nobody ever claimed that all these infinities were a strike against the theory.

The simplicity of a theory is a statement about how compactly we can describe the formal structure (the Kolmogorov complexity), not how many elements it contains. The set of real numbers consisting of “eleven, and thirteen times the square root of two, and pi to the twenty-eighth power, and all prime numbers between 4,982 and 34,950” is a more complicated set than “the integers,” even though the latter set contains an infinitely larger number of elements. The physics of a universe containing 10^88 particles that all belong to just a handful of types, each particle behaving precisely according to the characteristics of its type, is much simpler than that of a universe containing only a thousand particles, each behaving completely differently.

At first glance, the God hypothesis seems simple and precise – an omnipotent, omniscient, and omnibenevolent being. (There are other definitions, but they are usually comparably terse.) The apparent simplicity is somewhat misleading, however. In comparison to a purely naturalistic model, we’re not simply adding a new element to an existing ontology (like a new field or particle), or even replacing one ontology with a more effective one at a similar level of complexity (like general relativity replacing Newtonian spacetime, or quantum mechanics replacing classical mechanics). We’re adding an entirely new metaphysical category, whose relation to the observable world is unclear. This doesn’t automatically disqualify God from consideration as a scientific theory, but it implies that, all else being equal, a purely naturalistic model will be preferred on the grounds of simplicity.

Religious people think that god is a ‘simpler’ theory because they give themselves the license to assign their god any property they wish in order to ‘solve’ any problem they encounter, without making the answer given in one area consistent with an answer given elsewhere. But the very fact that the god model is so malleable is what makes it so useless. For example, religious people will argue (as they must) that the way that the world currently exists, despite the suffering, disasters, and catastrophes that seem to afflict everyone indiscriminately, is evidence for a loving god. A colleague of mine who is a very thoughtful and sophisticated person told me recently that when he looks at the world, he sees one that is consistent with the existence of god.

This raises two questions. The first is whether the world that he sees is also consistent with the non-existence of god. If yes, how does he decide which option to believe? If no, what exactly is the source of the inconsistency?

The second question is what the world would need to look like for him to conclude that there is no god. Carroll gives a thought experiment that illustrates the shallowness of the argument that the evils and misfortunes and calamities that beset this world are actually evidence for god.

In numerous ways, the world around us is more like what we would expect from a dysteleological set of uncaring laws of nature than from a higher power with an interest in our welfare. As another thought experiment, imagine a hypothetical world in which there was no evil, people were invariably kind, fewer natural disasters occurred, and virtue was always rewarded. Would inhabitants of that world consider these features to be evidence against the existence of God? If not, why don’t we consider the contrary conditions to be such evidence?

It is not hard to understand why the concept of god could only have arisen in primitive, or at least pre-modern, times.

Consider a hypothetical world in which science had developed to something like its current state of progress, but nobody had yet thought of God. It seems unlikely that an imaginative thinker in this world, upon proposing God as a solution to various cosmological puzzles, would be met with enthusiasm. All else being equal, science prefers its theories to be precise, predictive, and minimal – requiring the smallest possible amount of theoretical overhead. The God hypothesis is none of these. Indeed, in our actual world, God is essentially never invoked in scientific discussions. You can scour the tables of contents in major physics journals, or titles of seminars and colloquia in physics departments and conferences, looking in vain for any mention of possible supernatural intervention into the workings of the world.

The concept of god is a relic of our ancient history, like the vestigial elements of animal physiology such as the leg bones of some snakes, the small wings of flightless birds like the kiwi, the eyes of the blind mole rat, and the tailbone, ear muscles, and appendix of humans. It will, like them, eventually disappear for the same reason: it has ceased to be of use.