Proof from Logic and Dualism (3)


These effects aren’t limited to brain damage. Just over four percent of all humans are synesthetic, meaning that two normally separate regions of their brain link up more strongly than usual. The combinations seem limitless: some see shapes when they taste anything, others hear sounds when they see something move, and for some every numeral has its own colour. We know they aren’t faking it, because we’ve handed them a test like this:

Can you spot all the twos mixed in with the fives?

Most people[B] take about ten seconds to find all the 2’s mixed in with the 5’s. Most grapheme-colour synesthetes will glance at it and say “the 2’s form a square.” These people experience the world quite differently,[96] which has been confirmed by many tests like the above. We’ve also used brain scanning techniques to peer inside the skulls of synesthetes, and we find they’re wired differently in exactly the way our maps predict.
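The figure that belongs here is a classic “pop-out” test: a field of 5’s with the 2’s tracing a hidden shape. If you’d like to try it on yourself, a toy version can be generated in a few lines of Python. The grid size and shape below are my own choices, not the original stimulus:

```python
def popout_grid(size=10, corner=(2, 3), side=4):
    """A field of 5s with 2s tracing the outline of a square."""
    r0, c0 = corner
    grid = [["5"] * size for _ in range(size)]
    for i in range(side):
        grid[r0][c0 + i] = "2"             # top edge
        grid[r0 + side - 1][c0 + i] = "2"  # bottom edge
        grid[r0 + i][c0] = "2"             # left edge
        grid[r0 + i][c0 + side - 1] = "2"  # right edge
    return "\n".join(" ".join(row) for row in grid)

print(popout_grid())
```

To a non-synesthete the 2’s only emerge after a digit-by-digit search; a grapheme-colour synesthete reportedly sees them jump out at once as a differently coloured square.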

Einstein was famous for his thought experiments, and when researchers examined his brain after his death they spotted an enlarged inferior parietal lobe,[97] a region linked to spatial reasoning and visualization. Christian Gaser and several other researchers have found that professional musicians have differently structured brains than the rest of us: the areas responsible for hearing, as well as for motor and spatial control, are physically larger.

Admittedly, none of the above is a slam-dunk debunk of a bridged consciousness. Nothing ever could be, so long as we have no concise definition of “consciousness.”[98] This fuzziness is exploited by those who are unsettled by the strong links between consciousness and the brain. “All that may be true, but at what point do I ‘see’? At what point does the objective sensory input become a subjective colour?”

The first question’s answer is “wherever you want.” Tell me: when does bread stop being dough, and start being bread? Surely not when the ingredients are mixed together, nor when it’s placed in the oven. It can’t be when it’s removed from the oven, since there’s no difference between the instant before it was removed and the instant after, and besides, the inside is still being baked by the warmer outside. It can’t be when it has cooled to room temperature, because it was edible before then. It can’t be when the lump was first edible, because that’s a subjective measure that varies from person to person.

Face it, bread is much too complex to be understood by science. It must be a divine product!

What’s really going on here is that “bread” and “dough” are only probabilistic definitions. They only work in certain situations, but those situations pop up often enough to justify the definitions. Push either too far, and they’re guaranteed to fail. “Sensory input” and “seeing” are no different.

The second question is a little harder to answer. Christof Koch and his colleagues at Caltech found a “Halle Berry” neuron in an epilepsy patient. This little thing got excited whenever its owner was presented with a photo of Halle Berry, or a drawing of her, or even just her name. This neuron isn’t in everyone, of course, and almost certainly isn’t in the same spot in another person who recognizes that actress. And don’t let my poor phrasing fool you: the only difference between it and the neuron two spots over is which connections it has. The neurons on the other ends of those links have already marked the input as “person,” “female,” “known entity,” and so on. The entire length of this patient’s nervous system, from the ganglion cells in the back of the eye all the way down to this little neuron, has been gradually abstracting that photo, drawing, or name of Halle Berry.

The only question left is how abstract we have to get to satisfy your definition of “subjective.” Once that’s done, we can zero in on one or more brain structures.

The Limits of Logic

Even if our consciousness doesn’t come from this second world, we can at least take some comfort in knowing it exists as the source of perfection and order.

Or can we? This half of the argument suffered two major blows in the past century, thanks to Kurt Gödel and Alan Turing.

Before those two were born, logicians had unearthed a crisis. They were probing the foundations of mathematics, and found it wasn’t as solid as they wanted. How were numbers constructed? Why did the basic math operations work? Could more complex operations be counted on? These are not trivial questions, since science heavily depends on math to measure and predict the world. Any weakness in one could topple the other. That anxiety triggered a decades-long search for the absolute fundamentals of math, which reached its pinnacle when Bertrand Russell and Alfred North Whitehead took 362 pages to prove

1 + 1 = 2
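For contrast, a modern proof assistant can check the same fact in one line, because the scaffolding Russell and Whitehead built by hand is now baked into the tool. In Lean, for instance (a sketch that leans on Lean’s built-in arithmetic, not a reconstruction of their derivation):

```lean
-- Both sides compute to the same numeral, so reflexivity closes the proof.
example : 1 + 1 = 2 := rfl
```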

Not only did the uncertainty refuse to leave, but to their horror it crept into logic as well. Epimenides of Crete was one of the first to discover the basic problem, albeit inadvertently:[99]

They fashioned a tomb for thee, O holy and high one
The Cretans, always liars, evil beasts, idle bellies!
But thou art not dead: thou livest and abidest forever,
For in thee we live and move and have our being.

(Epimenides, Cretica, circa 600 BCE)

Or, without the poetry:

I, as a Cretan, know that all Cretans are liars!

If Epimenides is lying, then Cretans tell the truth. But this is impossible, since he is Cretan. He must be telling the truth, then… but that would mean Cretans are liars, including Epimenides! This paradox has an easy solution (Cretans could be a mix of liars and truth-tellers), but it doesn’t take much thought to come up with a stronger version:

This statement is false.
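In modern terms, this strengthened liar demands a truth value equal to its own negation, and a brute-force check shows no such value exists in classical logic. A toy illustration, nothing more:

```python
# The liar sentence asks for a truth value x satisfying x == (not x).
# Exhaustively checking both classical truth values turns up nothing:
solutions = [x for x in (True, False) if x == (not x)]
print(solutions)  # -> []
```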

Variations of this paradox were found in the rules of logic, and every attempt to remove them just created more. It was quickly becoming an embarrassment. Many mathematicians and logicians were drawn to the problem, hoping for a solution that put math on solid ground.

Instead, Kurt Gödel proved the ground would always be unstable. In his two Incompleteness Theorems, he noted that you could translate any statement of mathematics, or any system of them, into a unique number. Now suppose you had a method that could take in such a number and tell you whether the original statement was true or false. Since this method is itself a mathematical procedure, it too is a statement with a number. Feed the method’s own number to it, and BANG, a contradiction pops out: this method cannot determine its own truthfulness.
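The numbering trick itself is simple enough to sketch. The scheme below is illustrative rather than Gödel’s exact encoding: give each symbol a code, then pack a whole formula into one integer using prime exponents, which can always be unpacked again by factoring:

```python
SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
NAMES = {v: k for k, v in SYMBOLS.items()}

def primes(n):
    """First n primes, by trial division (fine at this scale)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a formula as a product of prime powers: the i-th
    symbol becomes the i-th prime raised to that symbol's code."""
    result = 1
    for p, ch in zip(primes(len(formula)), formula):
        result *= p ** SYMBOLS[ch]
    return result

def decode(n):
    """Recover the formula by factoring out each prime in turn."""
    formula = []
    for p in primes(64):
        if n == 1:
            break
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        formula.append(NAMES[exponent])
    return "".join(formula)

n = godel_number("S0+S0=SS0")  # "1 + 1 = 2" in successor notation
print(n, decode(n))
```

Because the encoding is reversible, any claim about formulas becomes a claim about numbers, and that is the door Gödel walked through.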

Worse, the details didn’t matter. No clever transform could save the day, because every mathematical statement can be so transformed. There were only two choices: accept that some true statements can never be proven within the mathematical system you’re using, or accept that your math rules will contain contradictions. It was the logical equivalent of a rock and a hard place.[100]

As mathematicians were freaking out over this, Alan Turing made things worse. Gödel’s Theorem focused on proving things true or false; it said nothing about whether there was a mechanical procedure that could settle every statement, period.

To study this tougher problem, Turing invented a simple “machine” which would later be named in his honour. These “Turing Machines” were basically an idealized computer,[101] no more than an infinite storage space for symbols paired with a set of instructions for modifying those symbols. Once you set a machine in motion, there were two possible outcomes: it would eventually stop running, or it would carry on forever. Turing then pondered whether there was a way to examine a machine’s instructions and determine which way it would go.
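Such a machine is simple enough to simulate in a few lines. This sketch uses my own toy encoding (a rules table, with “_” for a blank cell) and runs a machine that flips every bit of its input and then halts:

```python
def run_turing_machine(rules, tape, state="start", max_steps=10_000):
    """rules maps (state, symbol) -> (symbol_to_write, move, next_state).
    Returns the final tape contents, or None if the machine is still
    running when the step budget runs out."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return None

# A tiny machine: flip every bit, then halt at the first blank cell.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flipper, "1011"))  # -> 0100_
```

Note the `max_steps` cap: a real Turing machine has no such budget, which is precisely why “will it ever stop?” is a hard question rather than a matter of waiting.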

The answer, surprisingly, was no! By using a route similar to Gödel’s, he showed that no matter what sort of method you used, there were always some machines that ran forever but could never be proven to do so.
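The core of Turing’s argument is self-reference, just like Gödel’s. Here is the diagonal trick sketched in Python; the names are hypothetical, since the whole point is that no real `halts` function can exist:

```python
def make_troublemaker(halts):
    """Given any claimed halting-decider, build a program it misjudges."""
    def troublemaker():
        if halts(troublemaker):
            while True:       # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return troublemaker

# A decider that always answers "loops forever" is refuted by simply
# running its troublemaker, which returns at once:
always_says_loops = lambda program: False
make_troublemaker(always_says_loops)()  # returns immediately

# A decider that answers "halts" is refuted by inspection: its
# troublemaker enters the infinite loop, contradicting the prediction.
```

Whatever `halts` predicts about `troublemaker`, the program does the opposite, so no decider can be right about every machine.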

As if that wasn’t bad enough, he also reinforced Gödel’s findings. Let’s define two more conditions: if a Turing machine halts with no symbols left in storage, we say it “accepted” the input. If the machine halts with any other configuration, we say it “rejected” the input. Suppose you handed me a storage space and a set of instructions for a Turing machine; could I tell you whether the machine would accept or reject that input? This scenario is Gödel’s Theorem in another form, and unsurprisingly Turing came to the same conclusion as Gödel.

Gödel and Turing shattered the idea of mathematical and logical perfection. Dualism’s proposed universe of ideas is a self-contradicting mess, at best. If we instead view this “universe” as something that emerges out of the material world, those contradictions make more sense. The lumps of matter from before are only additive down to a certain level, at which point reality gets very ugly and our abstractions break down. We should have expected the ugliness others have found in logic, in fact, since our abstractions deliberately over-simplify real life and thus are not guaranteed to be consistent!

[96]  Having said that, all humans are partial synesthetes. Present us with two squiggles in a “foreign language,” ask us to guess which is “kiki” and which is “bouba,” and the vast majority will assign the pointy shape the harsh-sounding name “kiki.” The big difference between the typical human and a synesthete is that the latter has a stronger, conscious connection.

[97]  Move your hand up your head a hand width, so one edge of it runs along the very top. That’s the parietal lobe, and the lower half of your palm is covering the inferior parietal.

[98]  I encounter the same problem with the intelligence proof, so in the interest of not boring this book out of your hands, I won’t tear “consciousness” apart in the same way.

[99]  I don’t think he did it on purpose. He believed, contrary to most Cretans, that the god Zeus was immortal. His poem was likely a rant against the foolish beliefs of his countrymen. Yes, I’m snickering, why do you ask?

[100]  You might be tempted to claim this impossibility for god. Unfortunately, any god that could defeat Gödel must be partially irrational, yet the laws of nature seem to be consistent and rational. That’s tough to reconcile.

[101]  Actually, Turing outright invented the modern computer. Before him, a non-human computer was dedicated to a single task, like adding numbers or calculating artillery-shell trajectories. His work, along with Von Neumann’s, showed that you could make a machine capable of any computational task, no matter how complex. This was so important, I think it overshadowed his other big accomplishment: winning World War 2 for the Allies!

[B] Past-me had written “ordinary people” here. Tsk, tsk.