Proof from Morality (1)

A hot topic in psychology is the study of infants.

We had great difficulty finding a way to probe their minds. How do you communicate with someone who cannot talk, after all? In the 1980s, scientists found a simple solution: a puzzled infant will stare at something longer than an infant who understands it.

For instance, put two dolls behind a curtain, then pull away the curtain. Five-month-old infants will pay little attention if there are two dolls behind the drapes, but will take notice if one or three dolls are sitting there. This behaviour only makes sense if they have some basic counting skills built-in at birth.

Or, stage a puppet show instead. Have the puppets nicely pass a ball to each other for a while, only to have one steal the ball and run off-stage. Now set both puppets in front of the infant, each with a bowl of food placed in front, and encourage the child to take a treat. The vast majority of the time, infants will take from the bowl in front of the greedy puppet. Clearly, they wanted to punish this amoral muppet.[104]

But how can this be? Children that young don’t understand language, so they can’t have learned this from an adult. This sense of morality must be built-in. And who better to do the building-in than a god?

The earliest religions we know of seem to agree. Ma’at, the ancient Egyptian code of ethics, is a prime example.

Maat is right order in nature and society, as established by the act of creation, and hence means, according to the context, what is right, what is correct, law, order, justice and truth. This state of righteousness needs to be preserved or established, in great matters as in small. Maat is therefore not only right order but also the object of human activity. Maat is both the task which man sets himself and also, as righteousness, the promise and reward which await him in fulfilling it.

(Siegfried Morenz, “Egyptian Religion.” Cornell University Press, 1973. pg. 113)

The earliest written records of Ma’at date back to 2600 BCE. The Sumerians had a similar system involving “underworld judges” since at least 2900 BCE. This is as far back as we can reliably look, as these two civilizations were kind enough to write down their morals for us.

Moral Quandaries

Wait wait wait, we’re missing something here. Before we can properly discuss morality, we need to nail down what a moral is.

That seems simple enough, as we’re handed morals all the time via stories. In Shakespeare’s play “Macbeth,” Lady Macbeth is so devoted to her husband that she practically murders the king for him, thinking that because it was prophesied to happen, everything would turn out fine. It doesn’t; she kills herself out of guilt over the murder, and her husband’s head winds up on a pike. The moral of the story is: don’t kill people, even if it’ll get you a head.[105]

In part, then, a moral is a description of how to behave (or how not to behave) in a given situation. To be moral, or to be “good,” is to follow that description. But there’s another aspect to it as well; the night of the murder, Macbeth sees a ghostly dagger, and others note strange behaviour from the animals and weather. Back in the 1600s, that could only mean one thing: something bad or immoral was about to happen to the king, and the gods didn’t approve of it. Compare this with Macduff’s beheading of Macbeth near the end of the play; another king is killed, and yet this time nature doesn’t kick up a fuss, indirectly hinting that it approves. So morals aren’t simply rules you follow, but rules you should follow.

Should? According to whom, or what?

And in that word, we see exactly why the religious love bringing up morality. A moral must have some justification, presumably from something or someone greater than the entities that must live by it. I could declare that everyone should give me a portion of their earnings, because I’m just that awesome, but the justification for that moral dies along with me. There’s also a chance that I’m mistaken about my awesomeness, or inventing it in order to line my pockets. I could even change my mind! We need something more stable than an individual to anchor our morals to, and yet even human culture and society can change dramatically over time. An external, unchanging entity would be an ideal anchor, such as a god.

There’s still one more axis to consider, though. During a royal banquet, the newly-crowned Macbeth has his seat taken by the ghost of his former rival, Banquo. Never mind the action on stage; take a step back and ask yourself a more basic question: is this banquet moral?

That question seems terribly strange. It’s expected of kings to hold banquets from time to time, and pretty much required of them to host one after their coronation. If someone has no choice in an action, how can morality enter into it?

Aha! We’ve finally clinched our definition: a moral is a description of how something should (or shouldn’t) behave in a certain situation, given multiple choices. Eating does not involve morality, since you have no choice in the matter. Your choice of what to eat is quite different, provided you have more than one option. Even then, if none of those options has been approved or disapproved in some fashion, then we’re not making a choice based on morality after all.

This definition opens up new possibilities, too. Suppose we were to walk into a deserted village, poking our heads through the empty doorways. We’re surprised to find the interiors were kept remarkably clean while occupied, quite unlike every other village we’ve found from that era. We’ve got two-thirds of morality in place already: the consistency of this pattern makes us confident there was a description of how to behave, and this village’s neighbours show its occupants had other options. The only thing missing is a “should,” but again, the consistency of the behaviour suggests the original occupants had one.

If you agree with this, we can push back the first evidence of morality to the Çatalhöyük settlement in Turkey, which was occupied between 7500 and 5700 BCE.[106]

Can we go farther? Many archaeologists, including Klaus Schmidt, claim that a site called Göbekli Tepe may actually be the first religious temple.

In the pits, standing stones, or pillars, are arranged in circles. Beyond, on the hillside, are four other rings of partially excavated pillars. Each ring has a roughly similar layout: in the center are two large stone T-shaped pillars encircled by slightly smaller stones facing inward. The tallest pillars tower 16 feet and, Schmidt says, weigh between seven and ten tons. As we walk among them, I see that some are blank, while others are elaborately carved: foxes, lions, scorpions and vultures abound, twisting and crawling on the pillars’ broad sides. […]
And partly because Schmidt has found no evidence that people permanently resided on the summit of Gobekli Tepe itself, he believes this was a place of worship on an unprecedented scale—humanity’s first “cathedral on a hill.”

(Andrew Curry, “Gobekli Tepe: The World’s First Temple?” Smithsonian magazine, November 2008.)

Again, we find hints of morality; almost all deities encourage their worship, implicitly approving of it, and yet we have the choice of not worshipping. If that place truly was a temple, then our evidence of morality begins at roughly 9500 BCE. To put that date in context, it’s only a thousand years after the last major ice age ended, right about when we learned how to farm, a thousand years before we invented numbers, six thousand before we discovered copper and writing, and eight thousand before Moses was given the 613 mitzvot[107] by YHWH, according to orthodox Judaism.


[104]  “The Moral Life of Babies,” the New York Times Magazine, May 5th, 2010.

[105] Sorry. Oh, and while I have your attention: I’m going to be discussing key plot points for the next few paragraphs. Spoiler alert!

[106]  http://www.catalhoyuk.com/library/goddess.html . Search for “clean.”

[107] Commandments of behaviour given to you by YHWH. Oddly enough, despite sharing the same holy text, despite Jesus claiming all old Jewish laws apply to Christians (Matthew 5:17-20), despite Jesus only naming six (Matthew 19:16-19) or two (Matthew 22:37-40) commandments explicitly, in violation of Jewish (Simon Glustrom, The Myth and Reality of Judaism, pp 113–114) and Christian (James 2:9-12) tradition that every commandment is important, Christians only recognize ten commandments as absolute divine law. And even then, they ignore the one about sacrificing your first-born (Exodus 34:19-20). Go figure.

76 days

I was wondering how long it would take for Trump to start a war to prop up his approval ratings, and I may have just gotten my answer.

The operation, which the Trump administration authorized in retaliation for a chemical attack killing scores of civilians this week, dramatically expands U.S. military involvement in Syria and exposes the United States to heightened risk of direct confrontation with Russia and Iran, both backing Assad in his attempt to crush his opposition.

President Trump said the strike was in the “vital national security interest” of the United States and called on “all civilized nations to join us in seeking to end the slaughter and bloodshed in Syria. And also to end terrorism of all kinds and all types.”

“We ask for God’s wisdom as we face the challenge of our very troubled world,” he continued. “We pray for the lives of the wounded and for the souls of those who have passed and we hope that as long as America stands for justice then peace and harmony will in the end prevail.”

On the surface, that looks like it could trigger a war with Russia. But:

The Pentagon has confirmed it used a hotline for minimising the risk of aerial combat between US and Russian jets in eastern Syria to alert Russia of the strike against its Syrian client. The Russians are sure to have routed that warning to Assad, raising immediate questions about what the strike will have accomplished, and also signalling that the US does not seek escalation.

The sun is coming up on Russia, so we’ll quickly learn how accurate that is. Still, there’s good reason to think this won’t snowball into war, as well as good reason to think this is Trump shouting “WOLF!!” and pointing in the other direction.


Initial reports from Russia aren’t looking good.

Proof from Logic and Dualism (4)

You’ve Got to Have Soul!

This dismissal of a bridged consciousness and dualism strikes at the heart of another key part of religion.

I’d argue that souls, rather than gods, are the common thread between religions. Every religion I’ve encountered has them, no matter how many gods it worships:

And Jehovah spake unto Moses, saying,

When thou takest the sum of the children of Israel, according to those that are numbered of them, then shall they give every man a ransom for his soul unto Jehovah, when thou numberest them; that there be no plague among them, when thou numberest them.

(Exodus 30:11-12, Old Testament, American Standard Translation)

I bow down to those who have reached omniscience in the flesh and teach the road to everlasting life in the liberated state.
I bow down to those who have attained perfect knowledge and liberated their souls of all karma.
I bow down to those who have experienced self-realization of their souls through self-control and self-sacrifice.
I bow down to those who understand the true nature of soul and teach the importance of the spiritual over the material.
I bow down to those who strictly follow the five great vows of conduct and inspire us to live a virtuous life.
To these five types of great souls I offer my praise.

(The first six lines of the Namokar Maha Mantra, the “universal” prayer of Jainism)

Never is he (Soul) born, nor does he die at any time, he has never been brought into being, nor shall come hereafter; unborn, eternal, permanent and ancient (primeval). When the body is slain, he is not slain.
O’ Arjunaa, know this soul to be eternal, undecaying, birthless and indestructible. A person who knows him to be so — whom can he slay or cause another to slay.
As a man casts off worn-out garments and puts on new ones, so the embodied soul casts off the worn-out body and enters other new ones.

(Chapter 2, verses 20-22, from Srimad Bhagavat Gita, a Hindu holy text)

The concept of the soul requires two things: that consciousness be detachable from the brain, and that this consciousness have some non-material place to inhabit. With both in place, death becomes an annoyance instead of a finality, thus evaporating the biggest fear of any conscious organism. As an added bonus, we can bolt on a system of punishment, to take care of anyone who got away with murder while alive.[102]

In some religions this non-material realm sounds suspiciously like the material one:

Lo! those who kept their duty dwell in gardens and delight,
Happy because of what their Lord hath given them, and (because) their Lord hath warded off from them the torment of hell-fire.
(And it is said unto them): Eat and drink in health (as a reward) for what ye used to do,
Reclining on ranged couches. And we wed them unto fair ones with wide, lovely eyes.
And they who believe and whose seed follow them in faith, We cause their seed to join them (there), and We deprive them of nought of their (life’s) work. Every man is a pledge for that which he hath earned.
And We provide them with fruit and meat such as they desire.
There they pass from hand to hand a cup wherein is neither vanity nor cause of sin.
And there go round, waiting on them menservants of their own, as they were hidden pearls.

(Sura 52:17-24, The Qur’an, translated by M.M. Pickthall)

After these things I saw, and behold, a door opened in heaven, and the first voice that I heard, [a voice] as of a trumpet speaking with me, one saying, Come up hither, and I will show thee the things which must come to pass hereafter.
Straightway I was in the Spirit: and behold, there was a throne set in heaven, and one sitting upon the throne;
and he that sat [was] to look upon like a jasper stone and a sardius: and [there was] a rainbow round about the throne, like an emerald to look upon.
And round about the throne [were] four and twenty thrones: and upon the thrones [I saw] four and twenty elders sitting, arrayed in white garments; and on their heads crowns of gold.

(Revelation 4:1-4, New Testament, American Standard Translation) [103]

In others, this non-material realm is never described in detail, or is described inconsistently:

In coming and going, birth and death an apostate loses honour.
By serving the True Guru man attains to eternal peace and his light merges with the Supreme Light.
The service of the Sat [True] Guru is extremely ease-bestowing and by it one obtains the boon that he desire.
By placing Lord God in the mind continence, truthfulness and penance are obtained and the body becomes pure.
Such person ever remain happy day and night and procures peace by meeting the Beloved.
I am devoted unto those, who have sought the protection of the True Guru.
In the True Court they obtains true honour and are easily absorbed in the True Lord.
Nanak, through associating with the society of the Exalted Guru one meets the Lord by His grace.

(from pg. 31 of the Sri Guru Granth Sahib, primary holy text of the Sikh, as translated by Bhai Manmohan Singh)

The exceptions, most notably deism and to some degree pantheism, lean on dualism instead. They can’t rely on the supernatural, but still yearn for a connection to something greater than themselves. Invoking a world of ideas allows them that comfort, without having to return to the messy contradictions of a traditional faith.

If consciousness is anchored to the brain, and a perfect world of the intellect is impossible, then dualism is as likely as an afterlife with a couch.


[102] I’ll develop this more in the Popularity proof, when I propose an explanation for religion’s origins.

[103] To add to the confusion, both Islam and Christianity have a different version of “heaven” in some sects. Instead of an eternal paradise after death, you decompose and lose consciousness until the end times, at which point all the good little believers will be given new bodies and brought back to life. This version doesn’t require souls at all.

You!

Who would you like to signal boost?

Today is Trans* Day of Visibility, and I have a gaggle of people and resources to share. Two web comics I follow are Rooster Tales and Trans Girl Next Door, which provide a great mix of silly and serious. On the educational side, TransAdvocate has been an excellent read, a mix of rigorous scholarship and activism. Zinnia Jones is in the same vein, and the Gender Analysis series she helped start is a must-see. And of course, Shiv’s blogging is worth highlighting.

Proof from Logic and Dualism (3)

Dualism

These effects aren’t limited to brain damage. Just over four percent of all humans are synaesthetic, which means that two separate regions of their brain link up more strongly than usual. The combinations seem limitless: some will see shapes when they taste anything, or hear sounds when they see something move, or their numbers will appear coloured. We know they aren’t faking it, because we’ve handed them a test like this:

Can you spot all the twos mixed in with the fives?

Most people[B] take about ten seconds to find all the 2’s mixed in with the 5’s. Most grapheme-colour synesthetes will glance at it and say “the 2’s form a square.” These people experience the world quite differently,[96] which has been confirmed by many tests like the above. We’ve also used brain scanning techniques to peer inside the skulls of synesthetes, and we find they’re wired differently in exactly the way our maps predict.
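(If you’d like to try the hunt yourself, here’s a rough, text-only approximation in Python. It’s a sketch only; the real stimuli use carefully controlled layouts and timed trials.)

```python
# A crude version of the "2s among 5s" test: 5s scattered at random,
# with the 2s planted along the outline of a square. Most of us hunt
# digit by digit; a grapheme-colour synesthete reports the square
# popping out as a block of colour.

import random

random.seed(4)
SIZE = 12
grid = [["5" if random.random() < 0.6 else " " for _ in range(SIZE)]
        for _ in range(SIZE)]

top, left, side = 3, 4, 5          # where the square of 2s goes
for i in range(side):
    grid[top][left + i] = grid[top + side - 1][left + i] = "2"
    grid[top + i][left] = grid[top + i][left + side - 1] = "2"

for row in grid:
    print(" ".join(row))
```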

Einstein was famous for his thought experiments, and when researchers cracked open his skull after his death, they spotted an enlarged inferior parietal lobe,[97] which is linked to spatial reasoning and visualization. Christian Gaser and several other researchers have found that professional musicians have differently structured brains than the rest of us. The areas responsible for hearing, as well as motor and spatial control, are physically larger.

Admittedly, none of the above is a slam-dunk debunk of a bridged consciousness. Nothing ever could be, so long as we have no concise definition of “consciousness.”[98] This fuzziness is exploited by those who are unsettled by the strong links between consciousness and the brain. “All that may be true, but at what point do I ‘see’? At what point does the objective sensory input become a subjective colour?”

The first question’s answer is “wherever you want.” Tell me: when does bread stop being dough, and start being bread? Surely not when the ingredients are mixed together, nor when it’s placed in the oven. It can’t be when it’s removed from the oven, since there’s no difference between the instant before it was removed and the instant after, and besides, the inside is still being baked by the warmer outside. It can’t be when it has cooled to room temperature, because it was edible before then. It can’t be when the lump was first edible, because that’s a subjective measure that varies by person.

Face it, bread is much too complex to be understood by science. It must be a divine product!

What’s really going on here is that “bread” and “dough” are only probabilistic definitions. They only work in certain situations, but those situations pop up often enough to justify the definitions. Push either too far, and they’re guaranteed to fail. “Sensory input” and “seeing” are no different.

The second question is a little harder to answer. Christof C. Koch at Caltech found a “Halle Berry” neuron in an epilepsy patient. This little thing got excited whenever its owner was presented with a photo of Halle Berry, or a drawing of her, or even just her name. This neuron isn’t in everyone, of course, and almost certainly isn’t in the same spot in another person who recognizes that actress. And don’t let my poor phrasing fool you, the only difference between it and the neuron two spots over is which connections it has. The neurons on the other ends of those links have already marked the input as “person,” “female,” “known entity,” and so on. The entire length of this patient’s nervous system, from the ganglion in the back of the eye all the way down to this little neuron, has been gradually abstracting that image/picture/name of Halle Berry.

The only question left is how abstract we have to get to satisfy your definition of “subjective.” Once that’s done, we can zero in on one or more brain structures.

The Limits of Logic

Even if our consciousness doesn’t come from this second world, we can at least take some comfort in knowing it exists as the source of perfection and order.

Or can we? This half of the argument suffered two major blows in the past century, thanks to Kurt Gödel and Alan Turing.

Before those two were born, logicians had unearthed a crisis. They were probing the foundations of mathematics, and found it wasn’t as solid as they wanted. How were numbers constructed? Why did the basic math operations work? Could more complex operations be counted on? These are not trivial questions, since science heavily depends on math to measure and predict the world. Any weakness in one could topple the other. That anxiety triggered decades of searching for the absolute fundamentals of math, a search that reached its pinnacle when Bertrand Russell and Alfred North Whitehead took 362 pages to prove

1 + 1 = 2
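(For contrast: in a modern proof assistant, that same fact is nearly free. A one-line Lean 4 sketch, where the kernel simply computes both sides down to the same numeral; Russell and Whitehead had no such machinery, and were building the numerals themselves out of raw logic.)

```lean
-- Both sides of `1 + 1 = 2` compute to the same natural number,
-- so reflexivity (`rfl`) closes the proof in one step.
example : 1 + 1 = 2 := rfl
```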

Not only did the uncertainty refuse to leave, but to their horror it crept into logic as well. Epimenides of Crete was one of the first to discover the basic problem, albeit inadvertently:[99]

They fashioned a tomb for thee, O holy and high one
The Cretans, always liars, evil beasts, idle bellies!
But thou art not dead: thou livest and abidest forever,
For in thee we live and move and have our being.

(Epimenides, Cretica, circa 600 BCE)

Or, without the poetry:

I, as a Cretan, know that all Cretans are liars!

If Epimenides is lying, then Cretans tell the truth. But this is impossible, since he is Cretan. He must be telling the truth, then… but that would mean Cretans are liars, including Epimenides! This paradox has an easy solution (Cretans could be a mix of liars and truth-tellers), but it doesn’t take much thought to come up with a stronger version:

This statement is false.

Variations of this paradox were found in the rules of logic, and every attempt to remove them just created more. It was quickly becoming an embarrassment. Many mathematicians and logicians were drawn to the problem, hoping for a solution that put math on solid ground.

Instead, Kurt Gödel proved the ground would always be unstable. In his two Incompleteness Theorems, he noted that you could translate any mathematical statement, and any proof, into a number. That gives a mathematical system the power to talk about itself, and Gödel exploited it to build a statement which, in effect, says “this statement cannot be proven.” Feed that statement’s own number back into the system, and BANG, a contradiction pops out: if the system proves it, the system has proven a falsehood, and if it can’t, there is a truth forever beyond its reach.
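(The encoding half of that trick is easy to demonstrate. Here’s a toy Gödel numbering in Python; the base-257 scheme is my own arbitrary choice, Gödel himself used products of prime powers, but any reversible statement-to-number mapping makes the same point.)

```python
# A toy Goedel numbering: every statement (here, a string of symbols)
# becomes a unique natural number, and the number decodes back into the
# statement. This is only the easy half of Goedel's construction; the
# hard half builds a statement that talks about its own number.

def godel_number(statement: str) -> int:
    """Pack the statement's bytes into one integer, base 257
    (each digit offset by 1 so no information is lost)."""
    n = 0
    for byte in statement.encode("utf-8"):
        n = n * 257 + byte + 1
    return n

def decode(n: int) -> str:
    digits = []
    while n > 0:
        n, d = divmod(n, 257)
        digits.append(d - 1)
    return bytes(reversed(digits)).decode("utf-8")

statement = "1 + 1 = 2"
number = godel_number(statement)
print(number)                       # one very large integer
print(decode(number) == statement)  # True: nothing was lost
```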

Worse, the details didn’t matter. No clever transformation could save the day, since every mathematical system rich enough to handle arithmetic falls into the same trap. There were only two choices: accept that some true statements can never be proven within your system, or accept that your math rules contain contradictions. It was the logical equivalent of a rock and a hard place.[100]

As mathematicians were freaking out over this, Alan Turing made things worse. Gödel’s Theorems focused on what could be proven; they said nothing about whether there was a mechanical procedure for settling any given question.

To study this tougher problem, Turing invented a simple “machine” which would later be named in his honour. These “Turing Machines” were basically an ideal computer,[101] no more than an infinite storage space for symbols paired with a set of instructions for modifying those symbols. Once you set a machine in motion, there were two outcomes: it would eventually stop running, or it would carry on forever. Turing now pondered whether there was a way to examine a machine’s instructions to determine which way it would go.
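(It takes remarkably little code to build one. A minimal simulator sketch in Python, where the unbounded tape is a dictionary and the sample program, a bit-flipper, is my own toy choice.)

```python
# A bare-bones Turing machine: an unbounded tape of symbols, a head,
# and a table mapping (state, symbol) -> (new state, new symbol, move).

from collections import defaultdict

def run(program, tape, state="start", max_steps=10_000):
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is a blank cell
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells)).strip("_")
        state, cells[head], move = program[(state, cells[head])]
        head += 1 if move == "R" else -1
    raise RuntimeError("no answer yet -- and no general method can "
                       "promise one is coming")

# A toy program: flip every bit, halt at the first blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run(flipper, "10110"))  # prints 01001
```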

The answer, surprisingly, was no! By using a route similar to Gödel’s, he showed that no matter what sort of method you used, there were always some machines that stopped but couldn’t be proven to do so.
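(Written out in code, the argument looks almost like a prank. A Python sketch, assuming for the sake of argument that someone handed us a working halting-detector:)

```python
def halts(f) -> bool:
    """A hypothetical oracle: True if calling f() eventually stops,
    False if it runs forever. Turing's point is precisely that this
    function cannot actually be written."""
    raise NotImplementedError("no such oracle exists")

def trouble():
    if halts(trouble):   # "Am I going to halt?"
        while True:      # ...then spite the oracle by looping forever.
            pass
                         # ...otherwise, halt immediately.

# If halts(trouble) returns True, trouble() loops forever: wrong answer.
# If it returns False, trouble() stops at once: wrong again. So every
# candidate halting-detector fails on at least one machine.
```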

As if that wasn’t bad enough, he also reinforced Gödel’s findings. Let’s define two more conditions: if a Turing machine halts with no symbols left in storage, we say it “accepted” the input. If the machine halts with any other configuration, we say it “rejected” the input. Suppose you handed me a Turing machine’s instructions along with its starting storage; could I tell you whether the machine would accept or reject that input? This scenario is Gödel’s Theorem in another form, and unsurprisingly Turing came to the same conclusion as Gödel.

Gödel and Turing shattered the idea of mathematical and logical perfection. Dualism’s proposed universe of ideas is a self-contradicting mess, at best. If we instead view this “universe” as something that emerges out of the material world, those contradictions make more sense. The lumps of matter from before are only additive down to a certain level, at which point reality gets very ugly and our abstractions break down. In fact, we should have expected the ugliness others have found in logic, since our abstractions deliberately over-simplify real life and thus carry no guarantee of consistency!


[96]  Having said that, all humans are partial synesthetes. Present us with two squiggles in a “foreign language,” ask us to guess which is “titi” and which is “bouba,” and the vast majority will assign the pointy shape the harsh-sounding name “titi.” The big difference between the typical human and a synesthete is the latter has a stronger, conscious connection.

[97]  Move your hand up your head a hand width, so one edge of it runs along the very top. That’s the parietal lobe, and the lower half of your palm is covering the inferior parietal.

[98]  I encounter the same problem with the Intelligence proof, so in the interest of not boring this book out of your hands, I won’t tear “consciousness” apart in the same way.

[99]  I don’t think he did it on purpose. He believed, contrary to most Cretans, that the god Zeus was immortal. His poem was likely a rant against the foolish beliefs of his countrymen. Yes, I’m snickering, why do you ask?

[100]  You might be tempted to claim this impossibility for god. Unfortunately, any god that could defeat Gödel must be partially irrational, yet the laws of nature seem to be consistent and rational. That’s tough to reconcile.

[101]  Actually, Turing outright invented the modern computer. Before him, the non-human computer was dedicated to a single task, like adding numbers or calculating artillery shell trajectories. His work, along with Von Neumann’s, showed that you could make computers capable of any math task, no matter how complex. This was so important, I think it overshadowed his other big accomplishment: winning World War 2 for the Allies!

[B] Past-me had written “ordinary people” here. Tsk, tsk.

Proof from Logic and Dualism (2)

Elegant, According to Whom?

I suppose you’re curious about what these mythical four equations look like. Fortunately, they’re quite short:

Maxwell’s Equations, #1: $\nabla \cdot \mathbf{D} = \rho_f$

(Electric fields point away from positive electric charges, and towards negative ones.)

Maxwell’s Equations, #2: $\nabla \cdot \mathbf{B} = 0$

(There is no such thing as a magnetic charge.)

Maxwell’s Equations, #3: $\nabla \times \mathbf{E} = -\dfrac{\partial \mathbf{B}}{\partial t}$

(An electric field can be created by a changing magnetic field.)

Maxwell’s Equations, #4: $\nabla \times \mathbf{H} = \mathbf{J}_f + \dfrac{\partial \mathbf{D}}{\partial t}$

(A magnetic field can be created by a changing electric field or current.)

To most of you, none of that math made sense. There’s no shame in that; Maxwell used vector calculus to create those equations, which is rarely taught outside of a university. With the proper training, anyone could grasp them at a glance.

Wait. The elegance of those equations partially depends on your existing knowledge. While there aren’t a lot of symbols in those equations, each of them is rich in meaning. If you don’t know how to properly interpret them, Maxwell’s work is elegant in the way Kanji[89] is to someone who doesn’t understand the language but likes its look.

On top of that, there are multiple ways to write those equations. The version I’ve used above is the free charge variant, in differential form. I chose it because it has fewer symbols than the other versions, and thus “looks” more elegant. You could unravel the shortcuts provided by vector calculus to make the underlying meaning more obvious, but that would result in an explosion of symbols. The written summaries I’ve cobbled together seem to accomplish elegance while providing meaning, but that’s only because I’ve massively simplified what each equation actually says!

If you want symbolic elegance, you must pay the cost of hidden meaning. There is no way to avoid it. On top of that, it’s easy to forget the cost once you’ve paid it. Thus the intellectual harmony that seems to pervade the universe is partially an illusion.

Dualism

Perhaps we’ve taken the wrong approach, however.

Maxwell accomplished his feat by taking observations about the universe and condensing them into a pithy intellectual description. We may have better luck in finding the underlying harmony if we do the reverse; instead of moving from the material to the intellectual, we should start with intellect and reason our way to the material universe.

Others have tried this. You’ve no doubt heard of the Pythagorean theorem, which states that in a right triangle, the length of the longest side multiplied by itself equals the sum of the squares of the two remaining sides.[90] You probably know nothing about the Pythagoreans, however. That’s by design: they were an ancient Greek cult that kept quiet about most of their discoveries. From what little we can piece together, we know they worshipped numbers and believed that by contemplating them you could free yourself from continual reincarnation.

René Descartes took their ideas to the next level. This brilliant mathematician from the 17th century was also a deep philosopher, as mentioned in my section on the Ontological proof. It’s fitting that he coined the phrase “I think, therefore I am,” since he nearly lived by it. According to him, our senses frequently lie to us while thought is rarely wrong. The intellectual realm must be separate from the rest of reality, though it does have an influence through consciousness and physical laws.

This “dualism” rests on the assumptions that the universe of logic and math is more orderly than the material world, and that these two worlds are linked via consciousness. If consciousness really is a bridge to another world, one wonders where it is situated, fully in one world or somewhat split between the two. If it were mostly in the abstract, we’d expect it to be always available; this second universe is eternal and perfect, after all. And yet all humans shut off their consciousness for 6-10 hours every day via sleep. You could argue that dreams are a sort of continued consciousness, but that does little to save the argument. Dreams rarely last long, are less rich and notable than reality, freely defy logic, and only show up intermittently.

We also lose consciousness via anaesthetic just before surgery. While humans are somewhat aware of their surroundings while asleep, you can rip us to pieces if we’ve been chemically knocked out. The odds of having a dream are much lower, too. Consciousness is more tied to the physical world than the intellectual one.

Dualism also struggles with brain injury. If consciousness were somewhat separate from the physical world, you’d predict that physical trauma would have little effect on it, or at most a uniform one. For example, if a blood vessel were to swell or burst in the brain, you’d expect the symptoms to be easily noticed and much the same no matter where the damage occurred.

Instead, strokes are difficult to diagnose. The most common symptoms are a weakness of the face or an arm, or difficulty speaking, but it’s also possible to have feelings of numbness, nausea or confusion; a loss of consciousness, vision, memory, or balance; a change in breathing or heart rate; or any of nearly a half-dozen more symptoms. Interestingly, the symptoms of a stroke are strongly linked to where the vessel burst.

More dramatic effects happen when a giant chunk of brain is removed entirely. Phineas Gage was compacting blasting powder and a fuse with a metal tamping rod, as part of the construction of a railway, when the powder accidentally exploded. The rod shot cleanly through his head, landing 25 metres behind him. Gage remained surprisingly alert, despite the new opening in his skull:

I first noticed the wound upon the head before I alighted from my carriage, the pulsations of the brain being very distinct. Mr. Gage, during the time I was examining this wound, was relating the manner in which he was injured to the bystanders. I did not believe Mr. Gage’s statement at that time, but thought he was deceived. Mr. Gage persisted in saying that the bar went through his head….Mr. G. got up and vomited; the effort of vomiting pressed out about half a teacupful of the brain, which fell upon the floor.

(Dr. Edward H. Williams, The American Journal of the Medical Sciences, July 1850)

Gage lived another twelve years, which was unheard of for a head injury that horrible. He was fit enough to work on a farm, and his only long-term problems were a partially paralysed face and no vision in his left eye.

Well, he did suffer from one more problem:[91]

The equilibrium or balance, so to speak, between his intellectual faculties and animal propensities, seems to have been destroyed. He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operations, which are no sooner arranged than they are abandoned in turn for others appearing more feasible. A child in his intellectual capacity and manifestations, he has the animal passions of a strong man. Previous to his injury, although untrained in the schools, he possessed a well-balanced mind, and was looked upon by those who knew him as a shrewd, smart businessman, very energetic and persistent in executing all his plans of operation. In this regard his mind was radically changed, so decidedly that his friends and acquaintances said he was “no longer Gage.”

(Dr. John Martyn Harlow, Bulletin of the Massachusetts Medical Society, 1868)

This was a sensation among contemporary psychologists. They had been debating whether or not changes to the brain could affect personality, and Gage’s case was the first chunk of evidence they couldn’t dismiss. Psychologists began collecting extensive case files on patients with brain trauma. These files did far more than link behaviour and personality to the physical brain; they allowed scientists to start mapping the brain, by matching a loss of functionality to a specific location.

Take HM.[92] He had suffered seizures since childhood, and by the time of his 16th birthday they would drop him to the floor with massive convulsions. The seizures came so often that he had to be institutionalized. He was eventually placed in the care of Dr. William Scoville, who tracked the source of the seizures to HM’s medial temporal lobes.[93] He proposed a radical therapy: the removal of both lobes.

The operation was a success, reducing the seizures from a crippling handicap into an occasional nuisance, but the loss of brain matter had an unexpected side effect: HM stopped forming new memories. Dr. Scoville brought in a psychologist to help, Dr. Brenda Milner. HM had to be reintroduced to her each time she walked into the room, even if she’d just left to grab a drink, and even a decade after she began working with him.

Remarkably, nothing else seemed wrong with HM. He did better on intelligence tests after the surgery, since he was on less medication. HM remembered his parents, childhood, and even some of his adult life normally, up to the two years before his surgery. He could easily carry on a conversation. He fell in love with crossword puzzles. Even his short term memory was fine, so he could remember a phone number or name for a bit. Other than some problems with grammar when he was reading, HM could pass as normal if you weren’t paying much attention.

HM had more surprises in store, however. Dr. Milner asked him to do a complicated spatial task, which involved drawing stars through a mirror. He did poorly at first, but surprised her by getting better with practice! HM was no less shocked that he could ace a task he’d never seen before. He also wowed her by sketching the interior of his home from memory, even though he had moved into it after the operation. Science had just discovered there were multiple kinds of long-term memory, each situated in different parts of the brain.

If this loss of memory was tied to a specific area, we’d expect similar problems in other patients with similar damage, and different problems in patients with different damage. That’s exactly what we find; Clive Wearing had the same area of his brain tampered with, this time thanks to a virus, and also can’t form new long-term memories. He’s famous for greeting his wife as if she’s been gone for years when she’d last visited him ten minutes before, yet he can still conduct a choir. Gage had a different area ripped from his head, and so his long-term memory was intact.

In 1985, Anthony Barker developed a way to “fake” these injuries in otherwise healthy people. Transcranial Magnetic Stimulation sends a pulse of extremely powerful electric current through a coil of wire placed against the scalp. By carefully controlling this pulse, researchers can induce a smaller current in one part of the brain and disable it for a short while, all without cracking open the skull.

This technique can be used for all sorts of fun, from making someone’s arm jump to changing their morality. In the latter case, Rebecca Saxe and others at MIT gave their subjects a story like this:

Alice asks Bob to get her a coffee. As Bob fills the cup, he sees a container labelled “rat poison” and adds a spoonful to Alice’s drink. Fortunately it was just a mislabelled tin of sugar, so Alice was fine.

Half of them then had a region of the brain called the right temporo-parietal junction suppressed,[94] while the other half were zapped elsewhere. Both groups were asked immediately afterwards to rate how moral Bob’s actions were. Those without a functional TPJ were more likely to say Bob acted morally than those with another area of their brain disabled.

That was as expected: functional Magnetic Resonance Imaging had already suggested what that area of the brain did. The TPJ is where we ponder what someone else is thinking, what’s known as our “theory of mind.” Assessing the morality of this situation depends on being able to read Bob’s intention; if you think he intended to kill Alice, then his actions were immoral.[95]


[89] The ornate Chinese characters that make up part of the Japanese writing system, e.g. 漢字

[90]  If you don’t think math is creative, search out proofs of the Pythagorean Theorem. There are hundreds to choose from, using every technique from high-level math to slicing up squares!

[91]  Gage’s psychological changes are somewhat controversial, since the evidence is thin and Harlow would sometimes exaggerate the change in personality. Recent evidence suggests Gage recovered most of his self-control before he died, though he was still a different person. Still, no-one denies he changed after the accident, they merely haggle over the degree of change.

[92]  To protect a patient’s identity, researchers refer to them only by their initials. HM has since passed away, so his real name could finally be made public: Henry Gustav Molaison.

[93]  Incidentally, brain regions are named for where they are, not what they do. Take one of your hands and place the palm of it over one ear, thumb down, with your fingers wrapping around the back of your head, just above the spot where your neck attaches. You’re covering one of your temporal lobes; do the same with your other hand to cover the other. The medial bits are buried deep inside, right next to each other as well as the spot where your spine plugs in.

[94]  Remember the palm trick? First, find the bony bit of your right palm that’s just above your wrist. Now put your right palm to your right ear as before; that bit is roughly where the right TPJ is, right around where the top of your earlobe attaches to your head.

[95] Note that this also suggests a biological basis for Morality…

Everything Is Significant!

Back in 1938, Joseph Berkson made a bold statement.

I believe that an observant statistician who has had any considerable experience with applying the chi-square test repeatedly will agree with my statement that, as a matter of observation, when the numbers in the data are quite large, the P’s tend to come out small. Having observed this, and on reflection, I make the following dogmatic statement, referring for illustration to the normal curve: “If the normal curve is fitted to a body of data representing any real observations whatever of quantities in the physical world, then if the number of observations is extremely large—for instance, on an order of 200,000—the chi-square P will be small beyond any usual limit of significance.”

This dogmatic statement is made on the basis of an extrapolation of the observation referred to and can also be defended as a prediction from a priori considerations. For we may assume that it is practically certain that any series of real observations does not actually follow a normal curve with absolute exactitude in all respects, and no matter how small the discrepancy between the normal curve and the true curve of observations, the chi-square P will be small if the sample has a sufficiently large number of observations in it.

(Berkson, Joseph. “Some Difficulties of Interpretation Encountered in the Application of the Chi-Square Test.” Journal of the American Statistical Association 33, no. 203 (1938): 526–536.)

His prediction would be vindicated two decades later.
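(You can watch Berkson’s prediction come true on a laptop. A simulation sketch, assuming numpy and scipy are installed: draw from a distribution that is almost, but not exactly, normal (Student’s t with 10 degrees of freedom), fit a normal curve to it, and run the chi-square test at two sample sizes.)

```python
# Berkson's point in miniature: a fixed, tiny deviation from normality
# is invisible to the chi-square test at n = 2,000 but "significant"
# at n = 200,000, because the test statistic grows with sample size.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def normality_p(n, bins=20):
    x = rng.standard_t(df=10, size=n)        # nearly, not exactly, normal
    mu, sigma = x.mean(), x.std(ddof=1)      # the fitted normal curve
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf    # catch the extreme tails
    observed, _ = np.histogram(x, bins=edges)
    expected = n * np.diff(stats.norm.cdf(edges, mu, sigma))
    chi2 = ((observed - expected) ** 2 / expected).sum()
    dof = bins - 1 - 2                       # minus the 2 fitted parameters
    return stats.chi2.sf(chi2, dof)

print(normality_p(2_000))     # typically well above 0.05: "looks normal"
print(normality_p(200_000))   # typically far below 0.05: "non-normal!"
```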


Proof from Logic and Dualism (1)

The most influential discovery in science didn’t happen in a lab.

James Clerk Maxwell was intrigued by other people’s work on magnetism and electricity, and decided to put his skill at calculus to work by summarizing what they had found.

The result was four equations that captured the close ties between the two forces. They made clear that electricity and magnetism were opposite sides of the same coin, better thought of as a single force that could be expressed in two ways. This opened up the idea of a “theory of everything” that could describe the laws of the entire universe in a few lines of math. For this alone, Maxwell is noteworthy.

As he looked over his work, however, he spotted something. The equations predicted that a changing magnetic field would create a changing electric field, which would create a changing magnetic field, and so on. The result was a blip of energy that expanded outward.

In other words, Maxwell discovered radio waves, using little more than a chalkboard.

That changed civilization. Before, when we wanted to communicate electrically, we had to string up thin, delicate wires. After, all you needed was two antennas and an agreement on how to use them. As a result Rob Hall, alone and dying in a storm on Mount Everest, could have one last conversation with his wife back at home in New Zealand.[86] This wireless bridge can span anywhere from two metres, via a cheap pair of wireless headphones, to 16,957,965,862,947 metres, the distance between the Voyager 1 space probe and Earth.[87]

Science has exploited this to the fullest. If we send a probe into space, we don’t care if we get it back; it will tell us what it’s seeing right until it smacks into a planet. In some cases we don’t even need to send probes; the rocky surface of Venus was mapped by bouncing radio waves off it from our home planet, tens of millions of kilometres distant. The Sun, Jupiter, and lightning all spray out radio waves that tell us something about their underlying physics.

And yet those pale in comparison to Maxwell’s final discovery. He noticed that his equations set a limit on how fast these waves could travel. By plunking in a few constants and doing some simple math, he was able to calculate this speed.

It matched the speed of light.
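(Here’s that “simple math,” as a sketch with modern measured values; $\mu_0$ and $\varepsilon_0$ are the magnetic and electric constants that appear in the equations above.)

```python
# Maxwell's wave speed: his equations predict electromagnetic waves
# travelling at 1 / sqrt(mu0 * eps0).

import math

mu0  = 1.25663706e-6     # magnetic constant, in henries per metre
eps0 = 8.85418781e-12    # electric constant, in farads per metre

print(1 / math.sqrt(mu0 * eps0))   # ~2.998e8 m/s: the speed of light
```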

The full impact of that match has been lost over the years. Maxwell’s discovery came in 1865, however; back then, most scientists thought electricity came in particles, light was a wave that rippled through some sort of aether, and magnetism was a field like gravity. No one thought light and electricity were linked, and yet a math geek armed with a blackboard had shown they must be. You know you’ve done good when Albert Einstein is moved to say:

The precise formulation of the time-space laws was the work of Maxwell. Imagine his feelings when the differential equations he had formulated proved to him that electromagnetic fields spread in the form of polarised waves, and at the speed of light! To few men in the world has such an experience been vouchsafed… it took physicists some decades to grasp the full significance of Maxwell’s discovery, so bold was the leap that his genius forced upon the conceptions of his fellow-workers.

(Science, May 24, 1940)

Maxwell’s simple bit of math spawned a multitude of experiments that confirmed its hunch, which in turn led to the most successful scientific theories we’ve found yet: Quantum Mechanics and General Relativity.

It’s a staggering legacy for four equations. And it’s not an isolated incident, either. Riemann manifolds were regarded as a weird, useless oddity of math when they were invented; 70 years later, General Relativity relied on them to describe how the universe was shaped. Complex numbers were treated with disdain, until they became essential for electromagnetism and Quantum Mechanics.

But why should these abstract bits of logic and math do such a good job of describing the universe? Doesn’t this point to an underlying order to the universe, a harmony that exists separately from the material world, which could only be provided by God?

The Connection to Reality

One problem with this proof is that it puts the cart before the horse.

I’m assuming you, the reader, are of the species Homo sapiens sapiens. Even if I’m wrong, it’s quite likely you take up a finite amount of space and time, and we share the same laws of the universe. You are only a small part of the greater whole, and don’t have complete knowledge of the remainder.

As a result, you interact with things outside of your immediate understanding on a regular basis. You cope with this through abstraction. The collection of wood, metal, and petrochemicals that I’m currently sitting on, for instance, is known as a “chair.”

This “chair” is a structure built to relieve the strain my lower half puts up with as it tries to keep my upper half from hitting the pavement. This abstraction is a big help; without it, every time I wanted to relieve said strain I would have to examine the surrounding area for a flat spot next to a vertical panel at a convenient height, and test its structural integrity and comfort level. With it, I scan for an object that looks like a “chair,” then sit on it. The time and energy savings are enormous!

Abstraction works because the laws of the universe allow it to. If physics somehow forced my lower half to forever carry all the weight of my upper half, no matter what position I put myself in, I’d never create the concept of “chair.”

So it is with numbers. Octopuses, whales, dolphins, parrots, elephants, dogs, and apes like myself can all do basic math. Why? Since all of these are social animals, it seems likely that numbers are handy in social situations. Perhaps we used them to keep track of food or gifts, so we can ensure our generosity is returned or that no-one is being a pig.[88] Whatever the reason, the concept of counting is based on the physical reality that matter is a limited resource, and tends to stay in one place. If food was constantly available to all, or three apples turned into 20 apples before dissolving into mush, it’s doubtful any species would develop the abstraction called “numbers.”

A few things result from this abstraction. If numbers are distinct and unchanging, we can imagine ways to combine them, for instance “adding” and “multiplying.” Likewise, if matter tends to lump together and remain relatively constant, the extensions we develop for math will also work in the universe. Calculus is an extension that deals with the way numbers can change, based on a few assumptions about those numbers. If those assumptions are similar to the laws of the universe, then the discoveries and predictions of calculus will match reality very closely.
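(Here’s that claim in miniature, a sketch: “reality” below is a falling object tracked through a hundred thousand tiny, additive steps, while calculus jumps straight to the closed formula $s = \tfrac{1}{2}gt^2$. They agree because the simulated world is additive in exactly the way calculus assumes.)

```python
# Discrete, step-by-step "reality" versus the calculus abstraction,
# for an object falling from rest for two seconds.

g, t, steps = 9.81, 2.0, 100_000
dt = t / steps

v = s = 0.0
for _ in range(steps):    # reality: accumulate many tiny changes
    v += g * dt
    s += v * dt

print(s)                  # 19.6202... by brute-force bookkeeping
print(g * t ** 2 / 2)     # 19.62, straight from the calculus shortcut
```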

Confirming The Obvious

So there’s a good reason math seems to have an uncanny knack for describing our universe. It was built on the basic laws of said universe! Maxwell’s four equations were an excellent abstraction of the laws of electricity and magnetism, so good that they revealed some surprising connections no one had noticed before.

Sometimes, however, we make assumptions that don’t match the underlying laws. Ole Rømer spent years watching Jupiter’s moon Io, and in 1676 announced that the predicted timings of its eclipses wobbled from what he was seeing. Years of painstaking study showed that the moon seemed to slow down on one side of its orbit and speed up on the other, and yet it appeared to move at the same speed when it passed in front of Jupiter as when it moved behind. After a lot of head-scratching, he found an explanation. One of the assumptions behind the math was that light moved from one place to another instantly. This was reasonable, since it matched what scientists observed and what the math permitted.

If he instead assumed that light travelled at a fixed rate, the timing problems disappeared. He could estimate this speed from his numbers and the equations, and fortunately it was very, very, very fast. If it were not, that would conflict with our previous guess that it was infinitely fast, and a lot more assumptions would have to be tossed out.
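(The arithmetic behind the estimate fits in a few lines. A sketch with modern figures; Rømer’s own delay estimate was closer to 22 minutes, so his speed came out somewhat lower.)

```python
# Roemer's logic: Io's eclipses run roughly 16.7 minutes late when the
# Earth is on the far side of its orbit from Jupiter, because the light
# must cross the extra distance -- the diameter of Earth's orbit.

AU = 1.496e11             # metres in one astronomical unit
delay = 16.7 * 60         # the extra delay, in seconds

print(2 * AU / delay)     # ~3.0e8 m/s: very, very, very fast
```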

This brings up another good point. We’re small beings in a big universe, so when we find out we’ve misunderstood some part of the greater whole, we’ve got no reason to be surprised. On the contrary, when we really nail down a part of it, we break out the champagne because we realize the odds of getting it right are small. Maxwell’s accomplishment was noteworthy because it goes against our expectations, while Rømer’s observation was just more proof that we don’t understand the universe, so we don’t celebrate it in the same way. A flip through any book on cosmology will show that while we’ve learned a staggering amount in the 330 years since Rømer, there’s still a lot more to know.


[86] His simple last words still move me: “I love you. Sleep well, my sweetheart. Please don’t worry too much.”

[87] As of May 2nd, 2010. Since Voyager is zipping away from us at 17 km/s, it’s even further than that by now.

[88] While pigs are considered more intelligent than dogs, I can’t find any evidence that they understand numbers.

Stop Assessing Science

I completely agree with PZ, in part because I’ve heard the same tune before.

The results indicate that the investigators contributing to Volume 61 of the Journal of Abnormal and Social Psychology had, on the average, a relatively (or even absolutely) poor chance of rejecting their major null hypotheses, unless the effect they sought was large. This surprising (and discouraging) finding needs some further consideration to be seen in full perspective.

First, it may be noted that with few exceptions, the 70 studies did have significant results. This may then suggest that perhaps the definitions of size of effect were too severe, or perhaps, accepting the definitions, one might seek to conclude that the investigators were operating under circumstances wherein the effects were actually large, hence their success. Perhaps, then, research in the abnormal-social area is not as “weak” as the above results suggest. But this argument rests on the implicit assumption that the research which is published is representative of the research undertaken in this area. It seems obvious that investigators are less likely to submit for publication unsuccessful than successful research, to say nothing of a similar editorial bias in accepting research for publication.

Statistical power is the probability of correctly rejecting a false null hypothesis. The larger the study size, the greater the statistical power. Thus if your study has a poor chance of answering the question it is tasked with, it is too small.

Suppose we hold fixed the theoretically calculable incidence of Type I errors. … Holding this 5% significance level fixed (which, as a form of scientific strategy, means leaning over backward not to conclude that a relationship exists when there isn’t one, or when there is a relationship in the wrong direction), we can decrease the probability of Type II errors by improving our experiment in certain respects. There are three general ways in which the frequency of Type II errors can be decreased (for fixed Type I error-rate), namely, (a) by improving the logical structure of the experiment, (b) by improving experimental techniques such as the control of extraneous variables which contribute to intragroup variation (and hence appear in the denominator of the significance test), and (c) by increasing the size of the sample. … We select a logical design and choose a sample size such that it can be said in advance that if one is interested in a true difference provided it is at least of a specified magnitude (i.e., if it is smaller than this we are content to miss the opportunity of finding it), the probability is high (say, 80%) that we will successfully refute the null hypothesis.

If low statistical power were just due to a few bad apples, it would be rare. Instead, as the first quote implies, it’s quite common. That study found that for studies with small effect sizes, where Cohen’s d was roughly 0.25, the average statistical power was an abysmal 18%. For medium effect sizes, where d is roughly 0.5, that number is still less than half. Since those two ranges cover the majority of social science effect sizes, the typical study has very low power and thus a small sample size. The problem of low power must be systemic to how science is carried out.
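(To see where numbers like that 18% come from, here’s a minimal power calculation, using the normal approximation to a two-sample, two-sided test; exact t-test power differs slightly, and the sample sizes are illustrative.)

```python
# Statistical power under the normal approximation: the chance that a
# test at alpha = 0.05 rejects the null when the true effect size is
# Cohen's d and each of the two groups has n subjects.

from math import sqrt
from statistics import NormalDist

Z = NormalDist()

def power(d, n, alpha=0.05):
    crit = Z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    shift = d * sqrt(n / 2)          # how far the effect shifts the statistic
    return 1 - Z.cdf(crit - shift)   # (the other tail is negligible)

print(power(0.25, 40))    # ~0.20: a small effect, typical-size study
print(power(0.50, 40))    # ~0.61: a medium effect, still a coin flip
print(power(0.25, 253))   # ~0.80: what a small effect actually demands
```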

In this fashion a zealous and clever investigator can slowly wend his way through a tenuous nomological network, performing a long series of related experiments which appear to the uncritical reader as a fine example of “an integrated research program,” without ever once refuting or corroborating so much as a single strand of the network. Some of the more horrible examples of this process would require the combined analytic and reconstructive efforts of Carnap, Hempel, and Popper to unscramble the logical relationships of theories and hypotheses to evidence. Meanwhile our eager-beaver researcher, undismayed by logic-of-science considerations and relying blissfully on the “exactitude” of modern statistical hypothesis-testing, has produced a long publication list and been promoted to a full professorship. In terms of his contribution to the enduring body of psychological knowledge, he has done hardly anything. His true position is that of a potent-but-sterile intellectual rake, who leaves in his merry path a long train of ravished maidens but no viable scientific offspring.

I know, it’s a bit confusing that I haven’t clarified who I’m quoting. That first quote comes from this study:

Cohen, Jacob. “The Statistical Power of Abnormal-Social Psychological Research: A Review.” The Journal of Abnormal and Social Psychology 65, no. 3 (1962): 145.

While the second and third are from this:

Meehl, Paul E. “Theory-Testing in Psychology and Physics: A Methodological Paradox.” Philosophy of Science 34, no. 2 (1967): 103–115.

That’s right, scientists have been complaining about small sample sizes for over 50 years. Fanelli et al. [2017] might provide greater detail and evidence than previous authors did, but the basic conclusion has remained the same. Nor are these two studies lone wolves in the darkness; I wrote about a meta-analysis of 16 different power-level surveys published between Cohen’s and now, all of which agree with his findings.

If your assessments have been consistently telling you the same thing for decades, maybe it’s time to stop assessing. Maybe it’s time to start acting on those assessments, instead. PZ is already doing that, thankfully…

More data! This is also helpful information for my undergraduate labs, since I’m currently in the process of cracking the whip over my genetics students and telling them to count more flies. Only a thousand? Count more. MORE!

… but this is a chronic, systemic issue within science. We need more.

Proof from Intelligence (7)

Farming

Ah, but what about less co-operative behaviours? Humans raise cattle for meat, for instance, and have bred plants to suit our needs instead of theirs. With altruism, you could argue that both parties come out ahead; with farming, one side gets far more out of the bargain. Are we the only ones clever enough to bend evolution in our favour?

Certainly not! The leaf-cutter ant farms fungus. Worker ants venture out to collect leaves, return to the nest with their haul, chew the leaves into small pieces, and feed them to the fungus carefully cultivated within.[70] They manually weed the crop of parasites, and make use of anti-mold bacteria specifically targeted against the greatest threat to their harvest, the Escovopsis mold. Surprisingly, the ants have prevented this pest from evolving resistance to the bacteria’s poison, a trick that we big-brained humans have yet to figure out. Remove the ants from the equation, and the fungus is completely overrun by Escovopsis within days.[71]

Ants also farm aphids. The aphids are carried out of the nest to a leaf, protected from predators while they feed, then carried back in when the ants retreat for the day. When stroked by the ants, they release a sweet liquid called honeydew, which the ants drink. The queens of the yellow meadow ant will even take an aphid egg with them as they jet off to start a new colony. [insert references here]

What pushes this from mutual co-operation to true farming, however, is the harm the ants inflict on the aphids. The ants secrete a chemical from their feet that impairs the aphids’ ability to walk. Their glands produce another chemical that stops aphids from growing wings, preventing their “cows” from flying away. If that fails, they simply rip the wings off.

Beavers go one step further and outright kill their helpers: they chew down trees to build dams and lodges, creating deep ponds that are more beaver-friendly.

You might argue these two examples are cheating on my part. Our instances of farming are not instinctual, but carefully planned; the animals’ versions are hard-wired. If you lock a beaver in a bare room, for instance, it’ll start building a phantom dam out of invisible wood. That’s a fair point, but it also implies that farming doesn’t take any brains to pull off, which ruins its use for the Intelligence proof.

Lying

The used car salesman is an American cliché that grew from a grain of truth. Few people know how to fix cars, let alone have the time, tools, and training to properly inspect one. Buyers are forced to trust the salesman, which gives the latter a big advantage. It is all too easy to repair a car just enough to get it running, and turn a blind eye to the expensive but hidden problems that won’t blow up immediately. By the time trouble hits, the salesman may have skipped town, or may swear the buyer mistreated the car and wants to blame someone else for their own mistakes.

Lying relies on a lot of high-level skills. The liar has to act according to a reality that doesn’t exist; not only does that require a mental model of how the universe works, but that model has to be sophisticated enough to model other minds. With so many layers of misdirection, surely it must be a human-only thing.

Except I’ve already mentioned Santino the chimpanzee, who is infamous for pelting unwary visitors with rocks. Back then, I conveniently failed to mention why he’s managed to keep surprising his keepers.

On the day after he first played it cool, Santino twice repeated his usual pattern: freak out to show dominance when he saw a tour group approach, only to fizzle out in frustration as the group stayed out of throwing range and failed to submit. The third group found Santino calmly resting on a bed of hay, near the edge of his pen, with no rocks in sight. Once again, the tour guides declared the coast clear and brought the group in for a close look. When they were within throwing range, Santino reached into his bedding, pulled out a few rocks he’d hidden there, and began pelting the hapless group. He kept this up throughout the year, sometimes hiding the rocks behind a log instead.

Santino is a master liar. He was able to suppress his desire to display dominance, and hide the tools he used for enforcement, long enough to trick another species famed for its lying.

[insert section on antelopes faking predator calls for sex]

[PRESENT-DAY HJH:

The male antelopes, observed in southwest Kenya, send a false signal that a predator is nearby only when females in heat are in their territories. When the females react to the signal, they remain in the territory long enough for some males to fit in a quick mating opportunity.

The signal in this case, an alarm snort, is not a warning to other antelopes to beware, but instead tells a predator that it has been seen and lost its element of surprise, the researchers found.

So when the scientists observed the animals misusing the snort in the presence of sexually receptive females, they knew they were witnessing the practice of intentional deception – a trait typically attributed only to humans and a select few other animal species.

https://researchnews.osu.edu/archive/topimate.htm ]

So What’s Left?

I’ve racked my brains, and so far I can’t think of anything within them that isn’t partially present in some other species. It’s entirely plausible for our intelligence to be a product of evolution, and so I can invoke Ockham’s Razor.

There’s still one nagging problem. No other animal exploits its intelligence to the degree we do. While I’ve had little difficulty finding bits and pieces of intellect scattered around the place, no-one seems to have mastered them like we have,[72] let alone collected all of them under one brain. Even if the pieces of intelligence existed before we did, isn’t our combination and amplification of them into a cohesive whole a sign of divine nudging?

There are two big flaws in this argument. First, it assumes our species jumped to prominence from humble beginnings alone. In fact, we were competing against at least three other brainiacs: Homo Erectus,[73] Homo Neanderthalensis, and Homo Floresiensis.[74] Secondly, it views evolution as a sort of “ladder of life,” where species grow increasingly complex in a linear fashion, conveniently ending with us.

As I point out in the chapter on the Design proof, evolution is nowhere near that tidy. Rewind the clock 40,000 years, and all three of our Homo cousins were competing with us. While all four shared a common ancestor two million years prior, there’s no evidence that they could interbreed at that time.[75] Even though the four of us looked very similar, we were distant cousins, much as chimpanzees and bonobos are today. All four of us used tools better than any species that came before. At least two of us could sail the seas, Floresiensis and Sapiens Sapiens, and there are hints that Erectus might have beaten both to the shipbuilding business. Erectus also earns a medal for being the first to create fire,[76] and was the first of our line to build houses.[77] For a long time, Sapiens Sapiens and Neanderthalensis swapped tools and goods. Neanderthalensis in particular is famed for building decorated houses and burying their dead, perhaps even creating their own animal traps, jewellery, and body paint. Yet the most successful of us all, Erectus,[78] had the intellect of Alex the parrot. And the species with the biggest brain was not Sapiens Sapiens but Neanderthalensis: 1.8 litres’ worth, for the record, to our 1.4.

We shouldn’t be asking why one species alone has been granted superior intellect; we should be pondering why the most successful species wasn’t the smartest, and why the smartest one didn’t win!

One objection is that we’re a young species, and haven’t been given the same chance Erectus had to prove our longevity. I’m a little dubious about this, given the number of nuclear missiles we keep on a hair trigger and our lousy attempts at managing the climate change we’ve created, but overall I think the point has merit.

The Neanderthalensis skull is a tougher nut to crack. One argument is that they weren’t as smart as their brain size would indicate, since they had difficulty speaking. Robert McCarthy from Florida Atlantic University found some evidence that they couldn’t pronounce the vowel “E,” which serves as an “anchor” in all of our languages; since language was so important to our success, that inability could be counted as a handicap.

However, that assumes there’s only one way to craft a language; Steven Mithen, for instance, proposes they instead mixed together singing and speech. I’ll also note that whales and prairie dogs have no difficulty communicating through languages completely unlike our own.

Proving that a long-extinct species was able to talk is clearly quite difficult, and can only be approached by piling up heaps of circumstantial evidence. The Neanderthalensis hyoid bone is nearly identical to ours, and this bone is essential to form the wide range of sounds that our verbal languages crave. The nerve that shuttles signals between brain and tongue is also a close match in both species. Their genome may also have contained a human-like FOXP2 gene, an essential part of our language skills.

Recently, David Frayer and his colleagues at the University of Kansas[79] discovered an interesting pattern. Imagine you want to scrape an animal hide clean using simple stone tools. In order to do this properly the hide has to be stretched tight, but suppose there are no other human beings around to help you pull, and no giant stones or frames are around to give you a hand. The easiest solution is to grip one end of the hide in your teeth, pull it tight with one hand, and scrape away with the other. If you have a dominant hand, you’ll likely use that hand for scraping and the other for pulling; otherwise, you’d just pick any old combination. Since accidents happen, you’ll occasionally smack your teeth with the stone tool, creating permanent little nicks in your front teeth. The direction of these scratches will depend on the hand you’re holding the tool in. These marks are small, but still large enough for anthropologists to spot.

I think you can see where I’m going with this. About 93% of all Neanderthalensis individuals had distinctly more down-and-to-the-right nicks on their front teeth, which suggests they were right-handed. What might not be obvious is why I’m headed that way.

Many of you may know that about 90% of all Sapiens Sapiens individuals are right-handed. Most of you have also heard that our brains are lopsided; language processing tends to sit on the left side of the brain, which controls the right side of the body. The leading theory of handedness claims that having two mirrored areas for fine motor control, one per hemisphere, is less efficient than cramming it all into one side: neural signals have farther to travel, and the two sides can issue conflicting orders. Since both hand manipulation and speech demand fine motor control, they get shoved onto the same side. Thanks to the body’s cross-wiring, this gives an advantage to the opposite hand, and most of the time genetics gives the nod to the right.[80] Fewer of you will know that many other animals also tend to favour one hand, paw, or flipper. About 60% of chimpanzees, for instance, favour their right hand.[81]

Interestingly, scientists have observed a link between higher brain function and handedness, and have also noted that no other species exhibits the same degree of bias we show. In other words, no other species has 90% of its members favouring a single side.

Well, up until David Frayer did some digging. And since handedness is linked to higher brain function and complex tasks, this strongly suggests Neanderthalensis was our intellectual peer, and weakly suggests they were equally adept at language.
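In case you’re wondering how a fossil sample could pin down that 93% figure, here’s a back-of-the-envelope sketch. The counts below are hypothetical placeholders, not Frayer’s actual tallies; the test merely asks whether a chimp-like 60% bias could plausibly produce so lopsided a sample.

```python
from scipy.stats import binomtest

# Hypothetical counts, for illustration only (not Frayer's data):
n_individuals, n_right = 27, 25    # 25 of 27 right-handed, roughly 93%

# Could a chimp-like 60% right-hand bias produce this by chance?
print(binomtest(n_right, n_individuals, p=0.60).pvalue)  # tiny: rejected

# A human-like 90% bias, by contrast, fits the sample comfortably.
print(binomtest(n_right, n_individuals, p=0.90).pvalue)  # large: consistent
```

Even a couple dozen scratched teeth, then, are enough to rule out the ape-typical ratio, which is why such a small pile of fossils can carry such a strong conclusion.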

So why did they, or for that matter Floresiensis or Erectus, go the way of Raphus Cucullatus?[82]

I suspect the real reason was luck. We traded tools with Neanderthalensis, which gave both of our species a crucial leg up on the rest of the family. Both of us also lived in a more bountiful biome, which gave us ample spare time to refine our tools and practice co-ordinating with one another. This weeded out all but our Neanderthalensis buddies,[83] until climate change rolled in. Their larger bodies required more calories to sustain than ours,[84] and around the time of their extinction an ice age sent the climate wildly swinging. That would have been devastating to a species that lived in woodlands and hunted by ambushing prey, but not so bad for one that preferred grasslands and ran down its food. Neanderthalensis was starved out of existence, leaving us all alone.

While this line of thought seems plausible, it still has gaping holes. Why didn’t Neanderthalensis simply move to the more fertile plains and shove the weaker Homos out? They survived multiple ice ages, so why was the last one so fatal for our bigger-brained cousins? There’s also some evidence they ate cooked plants,[85] contradicting earlier claims that Neanderthalensis lived solely on meat and suggesting they were more adaptable to dietary changes than we thought.

Science, alas, has not provided us with an answer yet. But it knows enough to suggest our intelligence is not so much a god’s touch as a lucky break.


[70] http://www.news.wisc.edu/18956

[71] http://www.nytimes.com/1999/08/03/science/for-leaf-cutter-ants-farm-life-isn-t-so-simple.html

[72]  I can think of one exception: plunk a human being down in front of a television. Flash them the numbers one through nine, scattered about randomly on the screen, for one second, then replace them with white squares. Ask us to select those white squares in ascending order of the numbers behind them. Almost all of us will fail before we get to our second number, even with training; Tetsuro Matsuzawa handed the same test to chimps, and after training them to settle down in front of the telly, they could repeatedly nail every number.

[73] There’s some controversy over how to classify Erectus, with a few palaeontologists wanting to break them up into an Asian-only group with H. Ergaster taking over the African/European half. Recent human ancestors are incredibly difficult to classify, since their bones are nearly identical to ours yet too old for genetic tests.

[74] As usual, there’s controversy over this species too. Some palaeontologists think they were diseased Sapiens Sapiens, though this seems to be a minority view. The bones we’ve found are uniquely fresh and well-preserved, compared to the remains of our other cousins, so genetic testing may solve this dispute.

[75]  Recent genetic tests suggest we may have had a little cross-species action roughly 65,000 years ago, but nothing since. Some archaeologists, however, point to much later skeletons which apparently show a mix of Neanderthalensis and Sapiens Sapiens traits. Both of them could be right; there may have been a hybrid population that went extinct, leaving us relative purebloods to be the last species standing in the Homo line. More recent research has cast doubt on those findings, though, suggesting instead that those shared genes really came from our common ancestor. Separating fact from speculation will take a few decades, unfortunately, and only if the geologic record permits.

[76] http://www.huji.ac.il/cgi-bin/dovrut/dovrut_search_eng.pl?mesge122510374832688760 [better citation needed: more recent research pins it at 1mya]

[77] http://news.bbc.co.uk/2/hi/science/nature/662794.stm

[78] Erectus had survived nearly two million years by then, and spread over much of Africa, Europe, and Asia. In contrast, genetic testing has shown Sapiens Sapiens nearly went extinct within the last 100,000 years; there were roughly 10,000 individuals alive at that point, making our entire species somewhat inbred.

[79] Frayer, David W., et al. “Right Handed Neandertals: Vindija and Beyond.” Journal of Anthropological Sciences 88 (2010): 113–127.

[80] There are some big problems with this theory; a minority of left-handed people process language equally on both sides of the brain, for instance. Still, the basic pattern holds true for 95% of all right-handers, so this explanation is likely half-true.

[81] Hopkins, William D., et al. “Chimpanzees (Pan troglodytes) Are Predominantly Right-Handed: Replication in Three Populations of Apes.” Behavioral Neuroscience (June 2004).

[82] The Dodo was a very trusting bird that we “Wise Men” decided to club into extinction on a lark.

[83] On the mainland, anyway. Floresiensis managed to outlive Neanderthalensis by hanging out on tropical islands, which insulated them from climate shifts but limited their food choices. Only they know the true reason for their extinction, unfortunately.

[84] Froehle, Andrew W., and Steven E. Churchill. “Energetic Competition Between Neandertals and Anatomically Modern Humans.” PaleoAnthropology 2009: 96–116.

[85] Henry, Amanda G., Alison S. Brooks, and Dolores R. Piperno. “Microfossils in Calculus Demonstrate Consumption of Plants and Cooked Foods in Neanderthal Diets.” PNAS 108, no. 2 (2011): 486–491.