Moose in Mineral Creek, near Silverton, Colorado. We saw a bunch of people stopped along the highway, and we stopped, too. Glad we did!
Creationist neurosurgeon Michael Egnor doubles down in his latest at the misnamed evolutionnews.org site. He’s still claiming that animals don’t have language.
He really has nothing new to say. He provides no evidence for his claims, just a series of assertions:
“Language in animals has never been demonstrated”: actually, it has. I gave several citations, but Egnor didn’t address any of them. There is a whole subfield of ethology that deals with this. Egnor didn’t even seem to know the name of the field.
“because animals are incapable of language.”: pure assertion.
“Claims of animal language have been made by some ethologists, but those claims are mistaken”: Egnor suddenly starts using the word “ethologist”, which he didn’t before. I am glad to have informed him of the name for the practitioners of the field he is criticizing. Again, pure assertion. He doesn’t actually address any of the cited studies.
“We should begin with an examination of what we mean by language.”: Egnor is not a linguist, either.
“The confusion between signals and designators is at the root of ethologists’ misunderstanding about animal ‘language’.”: Yeah, all those ethologists who actually study animal language are wrong, but Egnor (who didn’t seem to even know the word “ethologist” until two days ago) is right, despite not working in the field. Remember the word “egnorance” and why it was coined?
“Natural animal signals have no grammar”: Probably not true. For example, see here and here and here. Now one could certainly take issue with any or all of these, but the point is that there is a large literature that needs to be assessed carefully, and which cannot be addressed by categorical denials of the kind Egnor makes. Egnor does not do this. He just knows he is correct, because … Aristotle.
“Animals do not signal abstract concepts”: pure assertion.
(Quoting de Waal): “We honestly have no evidence for symbolic communication, equally rich and multifunctional as ours, outside our species”: A red herring. Nobody said animal language was as “rich and multifunctional” as human language, just that it exists, contrary to Egnor’s claims. This is the traditional creationist technique known as “moving the goalposts”.
Finally, Egnor insinuates that I haven’t read de Waal’s books, when I was the one who introduced them to him. He urges me to read de Waal’s books. I have. My records show that I read Good Natured in 1996, as well as Chimpanzee Politics and Peacemaking Among Primates.
We haven’t heard from creationist neurosurgeon Michael Egnor lately. (If I had to guess, I’d wager he’s writing a book, in order to cash in on the unlimited religionist thirst to have someone with credentials confirm their world view.) That’s too bad, because Egnor was a neverending source of amusement. He is, after all, the man for whom the word “egnorance” was coined: “the egotistical combination of ignorance and arrogance”.
That’s why it’s such a delight to see Egnor make a fool of himself yet again, with this Discovery Institute column about animal intelligence and language.
Egnor claims that “cats can’t do logic, mathematical or otherwise, and they never will”. Here is one of his arguments in support of this claim: “they don’t do logic. Because they’re cats.” Well, that was certainly convincing.
Showing that he knows even less about logic than he does about evolution, Egnor goes on to claim that “A logical statement is true inherently, independently of the particulars that occupy the place-holders”. Really? This will certainly be news to actual logicians, who labor under the delusion that a statement like “for all x, there exists a y such that x = 2*y” is a false statement in the logical theory known as “Presburger arithmetic”.
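The counterexample is easy to check concretely. Here is a minimal sketch (the function name and the search bound are mine, purely for illustration): the sentence quantifies over the natural numbers, and it fails already at x = 1, since 2*y is always even.

```python
# "For all x, there exists y such that x = 2*y" is a sentence of
# Presburger arithmetic, interpreted over the natural numbers.
# A brute-force search for a witness shows it is false: no natural
# number y doubles to 1.
def has_half(x, bound=1000):
    """Return True if some y in [0, bound] satisfies x == 2*y."""
    return any(x == 2 * y for y in range(bound + 1))

assert has_half(4)       # 4 = 2*2, so a witness exists
assert not has_half(1)   # 1 is odd: the universal claim fails at x = 1
```

So the truth of a logical statement very much depends on the particulars substituted for its place-holders, contrary to Egnor's claim.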
Like most religionists, Egnor seems to have a real need to believe that people are somehow fundamentally different from the rest of the animal world. He claims that “What distinguishes men from animals is this: men, but not animals, can contemplate universals, independently of particulars. Animals cannot contemplate universals. Animal thought is always tied to particular things.” He goes on to claim, “Animal thought lacks abstraction” and “In fact, an animal cannot think about universals, for the simple reason that animals have no language.”
How does Egnor know these things? He offers no empirical evidence in support of his claims. Empirical evidence is absolutely necessary, since there is nothing logically impossible about animals thinking abstractly. After all, Egnor’s own holy book, the bible, depicts talking snakes and talking donkeys. While I am amused to see Egnor undermine the claims of his own religion, animal language and thought are questions that have to be resolved scientifically.
And there is an area of science that is actively interested in testing these kinds of claims, although you’d never know it from reading Egnor. It is a branch of ethology, which is the science of animal behavior. (I am not an ethologist by any means, but I can recommend the eye-opening books of primatologist Frans de Waal.) Contrary to Egnor’s claims, the evidence for animal language is quite strong, although of course there are doubters. Animal language exists in many different animals, including bees, elephants, dolphins, baboons, and whales.
So how does Egnor back up his claims? By citing Aristotle. That’s it. He writes, “This rudimentary fact about animal and human minds was noted by Aristotle, and was common knowledge for a couple thousand years. Moderns have forgotten it, and it has led to a morass of confusion about animal minds and the differences between human and animal thought.”
I suppose if one’s worldview depends on a 2000-year-old book written by people lacking scientific knowledge of the universe, then it’s not a stretch to get your understanding of animal language and thought from a philosopher who lived 2300 years ago, and who simply asserted his claims without doing any experiments at all.
There is also evidence for abstract thought in animals other than people. Evidence exists for dogs, baboons, and crows, to name just three examples. Of course, all these examples are debatable (although I find these and others pretty convincing), and will likely continue to be debated until we know more about how abstract concepts are represented and processed in brains. Nevertheless it is pretty obvious that this is a question that, at least in principle, is capable of being resolved empirically.
I’ll conclude with the words of David Hume: “no truth appears to me more evident than that beasts are endow’d with thought and reason as well as man. The arguments are in this case so obvious, that they never escape the most stupid and ignorant.” Or maybe that should be “egnorant”.
I’ve been watching a bit of the British TV show QI lately, and they mentioned the fact that the word “typewriter” can be written using the top row of keys on a QWERTY keyboard.
This got me wondering about what commonly-used words are the longest for each row. In addition to “typewriter”, other 10-letter words you can type exclusively on the top row include “perpetuity” and “repertoire”. Claims for “teeter-totter” seem to be cheating, as it is almost always written with a hyphen. The OED lists a few more such words, but none that are common (“pepperwort”?).
For the middle row, the longest seems to be “alfalfa”.
The poor sad bottom row seems to have no examples at all, unless you include “zzz”, which is sometimes used to indicate the sound of sleep.
On a French AZERTY keyboard, one can type the English words “appropriate”, “perpetrator”, “preparatory”, “proprietary”, as well as the winner, “reappropriate”. The longest French words on their national keyboard seem to be “approprieriez” and “pirouetterait”.
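Hunts like this are easy to automate. Here is a minimal sketch (the row sets describe a standard QWERTY layout; a real search would filter a dictionary file, which I omit):

```python
# Letters on each row of a standard QWERTY keyboard.
QWERTY_ROWS = {
    "top": set("qwertyuiop"),
    "middle": set("asdfghjkl"),
    "bottom": set("zxcvbnm"),
}

def row_of(word):
    """Return the row name if every letter of `word` lies on a
    single QWERTY row, else None."""
    letters = set(word.lower())
    for name, keys in QWERTY_ROWS.items():
        if letters and letters <= keys:
            return name
    return None

assert row_of("typewriter") == "top"
assert row_of("perpetuity") == "top"
assert row_of("alfalfa") == "middle"
assert row_of("banana") is None   # spans two rows
```

Running `row_of` over a word list and sorting the hits by length reproduces the winners above; swapping in the AZERTY row sets handles the French case the same way.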
Somehow I missed this: May 16 was the 65th birthday of singer Jonathan Richman.
Jonathan Richman is really hard to describe, but he’s sort of a weird blend of children’s singer Raffi and the Ramones. When I lived on Ellsworth Street in Berkeley from 1979 to 1983, he lived quite nearby, and we often saw him performing in Sproul Plaza on the Berkeley campus (where the photo above was taken, in 1981), or the club Berkeley Square on University Avenue. Once I saw him walking down the street, ran up to my apartment, and got one of his albums out for him to autograph, which he gladly did. I even have a picture of Jonathan and me together, but I only show it to close friends.
Here are a few of my favorite Jonathan Richman songs:
Happy 65th birthday, Jonathan!
Lately I’ve been listening to right-wing talk radio, to try to understand its attractions. In particular, I’ve been listening to Michael Savage and Mark Levin. These men are both conservative radio hosts with millions of weekly listeners. I have to admit, after more than a month of listening, I find it really hard to understand their appeal.
In some ways, Savage and Levin are very similar. They both use extensive call screening, so that practically no dissenting voices are ever allowed on the air. During the past month, I think I haven’t heard a single liberal caller on either program. If they do manage to get on somehow, they typically get shouted down and cut off.
They both shill for their own books, with Levin pushing Plunder and Deceit and Savage pushing Government Zero. They both advertise their books about dogs, with Savage pushing Teddy and Me, and Levin pushing a book written by his father, My Dog Spot. They both shill for companies that sell precious metals as investments, with Levin pushing Goldline and Savage pushing Swiss America. Levin also shills for AMAC (which bills itself as the conservative alternative to AARP) and Dollar Shave Club.
For radio professionals, they both seem to have trouble pronouncing certain words. Levin once referred to Mallorca as “Mall-er-ka”, and Savage pronounced “fiefdom” as “fife-dum”.
They both always refer to the “Democrat party”, a typical epithet of the far right.
They both love to name-call. Levin constantly uses terms like “puke”, “hack”, “jerk”, and “punk” to describe anybody he disagrees with. Sometimes he calls people “subhuman”. If there exists a single person in the world who is both personally honorable and disagrees with Levin on some substantive issue, you would not know about it by listening to him. For example, he called Elizabeth Warren “one of the biggest idiots”, “a complete freak” and a “dimwitted buffoon”. (He has a particular dislike for university professors.) Levin routinely refers to the New York Times as the “New York Slimes”, the Washington Post as the “Washington Compost”, MSNBC as “MSLSD”, the Associated Press as “Associated Depressed”, and Hillary Clinton as “Hillary Rotten Clinton”. I guess he thinks he’s being clever. Savage, on the other hand, routinely refers to people he disagrees with as “garbage” or “vermin”. He particularly dislikes Muslims, whom he enjoys calling “Moose-lims”. He calls Rachel Maddow “Rachel Madcow”.
Both Savage and Levin like to portray themselves as brave, honest commentators who say what others dare not. When Levin says, “There! I said it!” you know for sure that something particularly ignorant has just preceded it.
Probably the most important commonality between Levin and Savage is they both lie. Unrelentingly. Repeatedly. In listening for a month or so, there were so many lies that I often had trouble recording them all. They’re not lying about things whose truth is hard to determine, either. Here are just a few:
Many more examples can be found on my twitter feed. Despite these lies, in my listening for more than a month I never heard either host issue a correction or retraction about anything. (In contrast, Rachel Maddow issues corrections all the time.)
Both hosts have their obsessions. Levin is completely obsessed with Barack Obama; nearly every show is on the same theme, about how Obama is destroying America. Obama, Levin claims, is “sick” and “hates America”. Similarly, Savage is obsessed with Obama, calling him a “psychopath”, but his obsessions also include George Soros, Google, Hollywood, and Facebook, frequently insulting Mark Zuckerberg (often with an exaggerated Jewish accent) and Jeff Bezos. Indeed, although Savage is Jewish (his real name is Michael Weiner), many of his comments seem either overtly or covertly anti-Semitic.
Both hosts have extremely high opinions of themselves. Savage has a doctorate from Berkeley in ethnomedicine, which he frequently likes to mention (callers often call him “Dr. Savage”), and likes to boast for minutes at a time about how smart he is compared to everyone else. He says, “I’m far more creative, inventive, entertaining, informative, educated than everyone else in the history of radio.” However, he’s not as smart as he thinks: for example, Savage frequently uses the term “coelenterate” and says it means the same as “worm”. (Coelenterates are not worms or even closely related to them. They are creatures like jellyfish and sea anemones.) Here Savage quotes Hillel’s famous questions, but attributes them wrongly to Maimonides. On the other hand, Levin’s website describes him as “The Great One” or “Denali”, terms which Levin embraces with enthusiasm. He frequently turns testy, telling callers that he is going to “educate” them.
Despite their great similarities, both hosts apparently dislike the other one. Indeed, it seems that both are quite reluctant to mention the other by name. Levin has called Savage “a real cancer” and a “phony, fake conservative”.
Nevertheless, there are some differences between them. Savage, by far, has the stranger life story, whereas Levin had a more conventional career at the fringes of the American right. Savage supports Donald Trump and Levin was a strong supporter of Ted Cruz. (Whether Levin will eventually back Trump is hard to tell, although I suspect he will eventually cave.) Savage seems to have no coherent political philosophy at all, other than his dislike of various minorities. For example, he seems to hate gay people, once telling a caller that he “should get AIDS and die … eat a sausage and choke on it”. Like his hero Trump, Savage seems to be a fascist in training; he admires Vladimir Putin and thinks bringing back the House Un-American Activities Committee would be a good idea. Levin is somewhat more consistent philosophically, claiming to be a “constitutional conservative”. However, his idea of the constitution is extremely narrow; it never seems to occur to him that there might be two or more different ways of interpreting constitutional provisions. Levin used to work under Ed Meese, whom he calls a “great man”. But remember that Meese did not believe in the principle of “innocent until proven guilty”; he once said, “If a person is innocent of a crime, then he is not a suspect.” Levin also buys into the typical craziness of the right, denying man-made global warming and claiming that environmentalists are responsible for the deaths of millions of people from malaria.
Savage seems genuinely unbalanced to me. For example, he thinks seltzer water is dangerous and claims that seltzer water has damaged Bernie Sanders’ sanity. He says things like, “I am a prophet. I have been a prophet. I was appointed to be a prophet since birth.” Levin is better, but his sanity is also not so clear to me. He once claimed that violating transgender guidelines will get you put in “Leavenworth Prison” and once agreed with a caller that if Obama had been president during the US Civil War “he would have continued slavery”. But perhaps these are just wild hyperbole as opposed to being actually crazy.
After a month of listening, I still don’t quite understand their appeal. Savage is an ignorant narcissist who is filled with hate. Levin is a boring partisan and ideologue with a single theme that he repeats with hardly any variation. Neither host is much concerned with the truth. Both like to hear themselves rant, and, despite praising their audience, rarely genuinely engage with any caller.
If these are the minds that the American right listens to on a daily basis, it’s no wonder that the right is so badly misinformed.
Stephen Talbott, one of the dreariest writers on subjects that should be interesting, manages once again to flail around a topic without saying much at all. He babbles meaningless garbage like “As we have seen, the life of the organism is itself the designing power. Its agency is immanent in its own being, and is somehow expressed at the very roots of material causation.” And when he does manage to say something factual, he is, not surprisingly, wrong.
In his latest piece, Can Darwinian Evolutionary Theory Be Taken Seriously?, Talbott (who apparently has no advanced training in evolutionary biology) once again takes on the theory of evolution, without exhibiting much understanding at all.
Rather than write a complete critique, I’ll just excerpt some of the stupider parts of his screed, with comments.
I would like to suggest that if half of all American citizens have become (as certain arch-defenders of biological orthodoxy like to put it) “science deniers”, then something important is afoot, and it does not look good for science. At the very least — if we assume the denial to be as unreservedly stupid as it is said to be — it would mean that science has massively and catastrophically failed our educational system.
As is usually the case with those who want to cast doubt on evolution, the fact that Americans have trouble accepting it is trotted out as something significant about the theory. Talbott makes no effort at all to look at acceptance in other countries because (I suspect) it would completely undermine what follows in his piece. After all, if you have to admit that the majority accepts evolution in Iceland, Denmark, Sweden, France, Japan, UK, Norway, Belgium, Spain, Germany, Italy, Netherlands, Hungary, Luxembourg, Ireland, Slovenia, Finland, Czechia, Estonia, Portugal, Malta, Switzerland, and so forth, then maybe ridiculously overblown claims like “science has massively and catastrophically failed our educational system” would be seen for what they are.
Now any fair-minded person knows very well what separates the US from the countries in the list above: it is that many Americans are under the grip of the appalling and anti-intellectual influence of fundamentalist Christianity. The evidence that religion is responsible is easily available and hard to contest. But the words “religion” and “Christianity” appear nowhere in Talbott’s piece.
Organisms are not machines.
Of course they are. Anybody who says otherwise is simply being ridiculous. They obey the laws of physics like other machines. The only citation Talbott gives for this claim is his own work.
No one has ever pointed to a computer-like program in DNA, or in a cell, or in any larger structure. Nor has anyone shown us any physical machinery for executing such program instructions.
Of course they have! I wonder what Talbott thinks ribosomes do?
how can it be that, 150 years after Darwin, we still have no widely accepted theory about how all the different body plans arose?
Let’s see… could it be, perhaps, because those events occurred hundreds of millions of years ago and didn’t leave behind much trace for us to find now? After all, my grandparents arrived here from Russia in 1912-1913, but there is no widely accepted theory about how they got from their home in Vitebsk to Hamburg. Did they walk, or take a train, or use some other method? We don’t have a “widely accepted theory” because the evidence is gone now.
If a beautiful, crystal-clear vision of “how evolution works” doesn’t give us answers to key questions about how evolution has in fact worked, perhaps we should begin to ask questions of the vision.
We know many different mechanisms of evolution. (Talbott seems not to know this.) If Talbott thinks there is another mechanism, why doesn’t he propose one?
This enables us to greet with a certain recognition the nagging question that has bothered a number of the past century’s most prominent biologists: “What does natural selection select — where do selectable variations come from — and why should we think that the mere selection of already existing variants, rather than the creative production of novel variants in the first place, directs evolution along the trajectories we observe?”
Umm, we know where these variations come from. One place they come from is recombination in sexual organisms. Another source is mutation, often induced by cosmic rays. This is taught in every introductory course on evolutionary biology. So why doesn’t Talbott know this?
What is life? How can we understand the striving of organisms — a striving that seems altogether hidden to conventional modes of understanding? What makes for the integral unity of every living creature, and how can this unity be understood if we’re thinking in purely material and machine-like terms? Does it make sense to dismiss as illusory the compelling appearance of intelligent and intentional agency in organisms? No one can deny that our answers to these questions could be critically important even for the most basic understanding of evolution. But we have no answers.
We have no answers to “What is life?”? Say what? Talbott doesn’t seem to know that there are books devoted to this question, one of the most famous being by Schrödinger, and another one, more recently, by Addy Pross. The problem is not that we don’t have answers — many answers have been proposed. The problem is, like every complicated concept (even the philosopher’s famous example of “chair” suffices) no single brief definition can capture all the nuances of the concept.
As for the other questions, I absolutely do deny that vague babble like “integral unity” has anything useful or helpful to say in trying to understand biology. And there hasn’t been a single advance in biology that comes from thinking in other than “purely material” terms. If there had been, you know Talbott would have shouted it to the rooftops.
Talbott does no experiments in evolution. He publishes no papers in evolutionary biology journals. As far as I can see, he has no expertise in evolution at all. He publishes his stuff in obscure venues like New Atlantis. Why would anybody take this vapid stuff seriously? Answer: you take it seriously if you’re a creationist. No one else should.
Here is yet more evidence that psychologist Robert Epstein is all wet when he claims that computation-based metaphors for understanding the brain are factually wrong and hindering research.
Actual research neuroscientists, summarizing what we know about memory, cheerfully use phrases like “storage of information”, “stored memory information”, “information retrieval”, “information storage”, “the systematic process of collecting and cataloging data”, “retriev[ing]” of data, and so forth. Epstein claims the brain does not form “representations of visual events”, but these researchers say “Memory involves the complex interplay between forming representations of novel objects or events…”. The main theme of the essays seems to be that spines and synapses are the fundamental basis for memory storage.
So who do you think is likely to know more about what’s going on in the brain? Actual neuroscientists who do research on the brain and summarize the state of the art about what is known in a peer-reviewed journal? Or a psychologist who publishes books like The Big Book of Stress-Relief Games?
Hat tip: John Wilkins.
P. S. Yes, I saw the following: “Further, LTP and LTD can cooperate to redistribute synaptic weight. This notion differs from the traditional analogy between synapses and digital information storage devices, in which bits are stored and retrieved independently. On the other hand, coordination amongst multiple synapses, made by different inputs, provides benefits with regard to issues of normalization and signal-to-noise.” Again, nobody thinks that the brain is structured exactly like a modern digital computer. Mechanisms of storage and retrieval are likely to be quite different. But the modern theory of computation makes no assumptions that data and programs are stored in any particular fashion; it works just as well if data is stored on paper, disk, flash drive, or in brains.
I hate to pick on poor confused Robert Epstein again, but after thinking about it some more, I’d like to explain why an example in his foolish article doesn’t justify his claims.
Here I quote his example without the accompanying illustrations:
In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.
Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):
And here is the drawing she subsequently made with a dollar bill present:
Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.
What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?
Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
Now let me explain why Epstein’s example doesn’t even come close to proving what he thinks it does.
First, the average person is not very good at drawing. I am probably much, much worse than the average person in this respect. When I play “Pictionary”, for example, people always laugh at my stick figures. Yet, given something to look at and copy, I can do a reasonable job of copying what I see. I, like many people, have trouble converting what I see “in my mind’s eye” to a piece of paper. So it is not at all surprising to me that the students Epstein asks to draw a dollar bill produce the results he displays. His silly experiment says nothing about the brain and what it “stores” at all!
Second, Epstein claims that the brain stores no representation of a dollar bill whatsoever. He is pretty unequivocal about this. So let me suggest another experiment that decisively refutes Epstein’s claim: instead of asking students to draw a dollar bill (an exercise which evidently is mostly about the artistic ability of students), instead give them five different “dollar bills”, four of which have been altered in some fairly obvious respect. For example, one might have a portrait of Jefferson instead of Washington, another might have the “1” in only two corners instead of all four corners, another might have the treasury seal in red instead of the typical green for a federal reserve note, etc. And one of the five is an ordinary bill. Now ask them to pick out which bills are real and which are not. To make it really precise, each student should get just one bill and not be able to see the bills of others.
Here’s what I will bet: students will, with very high probability, be able to distinguish the real dollar bill from the altered ones. I know with certainty that I can do this.
Now, how could one possibly distinguish the real dollar bills from the fake ones if one has no representation of the real one stored in the brain?
And this is not pure speculation: thousands of cashiers every day are tasked with distinguishing real bills from fake ones. Somehow, even though they have no representation of the dollar bill stored in their brain, they manage to do this. Why, it’s magic!
– Did you hear the news, Victoria? Over in the States those clever Yanks have invented a flying machine!
– A flying machine! Good heavens! What kind of feathers does it have?
– Feathers? It has no feathers.
– Well, then, it cannot fly. Everyone knows that things that fly have feathers. It is preposterous to claim that something can fly without them.
OK, I admit it, I made that dialogue up. But that’s what springs to mind when I read yet another claim that the brain is not a computer, nor like a computer, and even that the language of computation is inappropriate when talking about the brain.
The most recent foolishness along these lines was penned by psychologist Robert Epstein. Knowing virtually nothing about Epstein, I am willing to wager that (a) Epstein has never taken a course in the theory of computation (b) could not pass the simplest undergraduate exam in that subject (c) does not know what the Church-Turing thesis is and (d) could not explain why the thesis is relevant to the question of whether the brain is a computer or not.
Here are just a few of the silly claims by Epstein, with my commentary:
“But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently.”
— Well, Epstein is wrong. We, like all living things, are certainly born with “information”. To name just one obvious example, there is an awful lot of DNA in our cells. Not only is this coded information, it is even coded in base 4, whereas modern digital computers use base 2 — the analogy is clear. We are certainly born with “rules” and “algorithms” and “programs”, as Francis Crick explains in detail about the human visual system in The Astonishing Hypothesis.
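The base-4 point can be made concrete with a toy sketch. The particular two-bit assignment below is arbitrary (any fixed pairing of the four nucleotides with the four two-bit patterns works); the point is only that each nucleotide carries exactly two bits:

```python
# An arbitrary (illustrative) pairing of the four DNA bases with
# the four two-bit patterns. Each base carries log2(4) = 2 bits.
BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def dna_to_bits(seq):
    """Re-encode a DNA string in base 2 under the mapping above."""
    return "".join(BITS[base] for base in seq)

assert dna_to_bits("GATTACA") == "10001111000100"
```

Changing the alphabet from {0, 1} to {A, C, G, T} changes nothing essential about the information being stored, which is exactly why the analogy holds.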
“We don’t store words or the rules that tell us how to manipulate them.”
— We certainly do store words in some form. When we are born, we are unable to pronounce or remember the word “Epstein”, but eventually, after being exposed to enough of his silly essays, suddenly we gain that capability. From where did this ability come? Something must have changed in the structure of the brain (not the arm or the foot or the stomach) that allows us to retrieve “Epstein” and pronounce it whenever something sufficiently stupid is experienced. The thing that is changed can reasonably be said to “store” the word.
As for rules, without some sort of encoding of rules somewhere, how can we produce so many syntactically correct sentences with such regularity and consistency? How can we produce sentences we’ve never produced before, and have them be grammatically correct?
“We don’t create representations of visual stimuli”
— We certainly do. Read Crick.
“Computers do all of these things, but organisms do not.”
— No, organisms certainly do. They just don’t do it in exactly the same way that modern digital computers do. I think this is the root of Epstein’s confusion.
Anyone who understands the work of Turing realizes that computation is not the province of silicon alone. Any system that can do basic operations like storage and rewriting can do computation, whether it is a sandpile, or a membrane, or a Turing machine, or a person. Today we know (but Epstein apparently doesn’t) that every such system has essentially the same computing power (in the sense of what can be ultimately computed, with no bounds on space and time).
“The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.”
— This is just utter nonsense. Nobody says “all computers are capable of behaving intelligently”. Take a very simple model of a computer, such as a finite automaton with two states computing the Thue-Morse sequence. I believe intelligence is a continuum, and I think we can ascribe intelligence to even simple computational models, but even I would say that this little computer doesn’t exhibit much intelligence at all. Furthermore, there are good theoretical reasons why finite automata don’t have enough power to “behave intelligently”; we need a more powerful model, such as the Turing machine.
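For the curious, that little machine is easy to exhibit. The n-th Thue-Morse bit is the parity of the number of 1’s in the binary expansion of n, so a two-state automaton (state = parity so far) computes it; here is a quick Python rendering:

```python
def thue_morse(n):
    """Compute the n-th Thue-Morse bit with a two-state automaton:
    the state is the parity of 1-bits read so far in n's binary expansion."""
    state = 0                      # start state
    for bit in bin(n)[2:]:         # feed in the binary digits of n
        if bit == "1":
            state ^= 1             # flip state on reading a 1
    return state

prefix = [thue_morse(n) for n in range(8)]
print(prefix)                      # [0, 1, 1, 0, 1, 0, 0, 1]
```

A perfectly respectable computer, but nobody would call it intelligent.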
The real syllogism goes something like this: humans can process information (we know this because humans can do basic tasks like addition and multiplication of integers). Humans can store information (we know this because I can remember my social security number and my birthdate). Things that both store information and process it are called (wait for it) computers.
“a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.”
— Of course, this is utter nonsense. If there were no representation of any kind of a dollar bill in a brain, how could one produce a drawing of it, even imperfectly? I have never seen (just to pick one thing at random) a crystal of the mineral Fletcherite, nor even a picture of it. Ask me to draw it and I will be completely unable to do so because I have no representation of it stored in my brain. But ask me to draw a US dollar bill (in Canada we no longer have them!) and I can do a reasonable, though not exact, job. How could I possibly do this if I have no information about a dollar bill stored in my memory anywhere? And how is it that I fail for Fletcherite?
“The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous”
— Well, it may be preposterous to Epstein, but there is evidence for it, at least in some cases.
“A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks.”
— So what? What does this have to do with anything? There is no requirement, in saying that the brain is a computer, that memories and facts and beliefs be stored in individual neurons. Storage that is partitioned across various locations, “smeared” across the brain, is perfectly compatible with computation. It’s as if Epstein has never heard of digital neural networks, where one can similarly say that a face is not stored in any particular location in memory, but rather distributed across many of them. These networks even exhibit some characteristics of brains, in that damaging parts of them doesn’t entirely destroy the stored data.
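Here is a minimal sketch of that robustness, using a toy Hopfield-style network (one stored pattern, pure Python; the pattern itself is arbitrary). The memory lives in all the weights at once, and zeroing out a third of the connections still leaves it recoverable:

```python
# Hebbian storage "smears" one pattern across every weight in the network;
# no single weight holds the memory, and heavy damage leaves it recoverable.
pattern = [1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1]
n = len(pattern)

# Hebbian weights: W[i][j] = p_i * p_j (no self-connections).
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

# "Damage": delete roughly a third of the connections.
for i in range(n):
    for j in range(n):
        if (i + j) % 3 == 0:
            W[i][j] = 0

def recall(probe):
    """One synchronous update: each unit takes the sign of its weighted input."""
    return [1 if sum(W[i][j] * probe[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

print(recall(pattern) == pattern)   # True: the damaged network still recalls
```

Nothing about computation requires one-memory-one-location; distributed storage is computation too.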
“My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
“That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.”
— This is perhaps the single stupidest passage in Epstein’s article. He doesn’t seem to know that “keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery” is an algorithm. Tell that description to any computer scientist, and they’ll say, “What an elegant algorithm!” In exactly the same way, raster graphics machines draw a circle with a clever technique called “Bresenham’s algorithm”. It succeeds in drawing a circle using linear operations only, despite not having the quadratic equation of a circle (x−a)² + (y−b)² = r² explicitly encoded in it.
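Here is the midpoint variant of that circle algorithm, sketched in Python: integer additions and comparisons only, with the circle’s quadratic equation nowhere in sight.

```python
def circle_points(r):
    """Midpoint (Bresenham-style) circle: plot a circle of radius r about
    (0, 0) using only integer additions and comparisons; the quadratic
    x^2 + y^2 = r^2 never appears explicitly."""
    points = set()
    x, y = 0, r
    d = 1 - r                      # decision variable, updated incrementally
    while x <= y:
        # mirror the computed octant into all eight octants
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((px, py))
        if d < 0:
            d += 2 * x + 3         # midpoint inside the circle
        else:
            d += 2 * (x - y) + 5   # midpoint outside: step y down
            y -= 1
        x += 1
    return points

print(sorted(circle_points(3)))
```

The absence of the defining equation from the code does not mean the code fails to compute the circle, any more than the fielder’s heuristic fails to be an algorithm.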
But more importantly, it shows Epstein hasn’t thought seriously at all about what it means to catch a fly ball. It is a very complicated affair, involving coordination of muscles and eyes. When you summarize it as “the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery”, you hide the enormous amount of computation and algorithmic work going on behind the scenes to coordinate movement, keep the player from falling over, and so forth. I’d like to see Epstein design a walking robot, let alone a running robot, without any algorithms at all.
“there is no reason to believe that any two of us are changed the same way by the same experience.”
— Perhaps not. But there is reason to believe that many of us are changed in approximately the same way. For example, all of us learn our natural language from parents and friends, and we somehow learn approximately the same language.
“We are organisms, not computers. Get over it.”
— No, we are both organisms and computers. Get over it!
“The IP metaphor has had a half-century run, producing few, if any, insights along the way.”
— Say what? The computational model of the brain has had enormous success. Read Crick, for example, to see how the computational model has illuminated the human visual system. Here’s an example from that book I give in my algorithms course at Waterloo: why is it that humans can find a single red R in a field of green R’s almost instantly whether there are 10 or 1000 letters, or a single red R in a field of red L’s almost as quickly, but have trouble finding the unique green R in a large sea of green L’s and red R’s and red L’s? If you understand algorithms and the distinction between parallel and sequential algorithms, you can explain this. If you’re Robert Epstein, I imagine you just sit there dumbfounded.
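One can even caricature the two cases in a few lines of Python. This is a toy model of the algorithmic distinction, of course, not of the underlying neuroscience: a single-feature target “pops out” of a dedicated feature map in constant time, while a conjunction target forces a serial scan.

```python
import random

def feature_search(items, color):
    """Pop-out: one parallel sweep of a dedicated color map finds the
    odd-colored item; we charge a single step regardless of display size."""
    for i, (c, _letter) in enumerate(items):
        if c == color:
            return i, 1            # constant "time" in this model

def conjunction_search(items, target):
    """No single feature distinguishes the target, so attention visits
    items one at a time; expected steps grow linearly with display size."""
    order = list(range(len(items)))
    random.shuffle(order)
    for steps, i in enumerate(order, start=1):
        if items[i] == target:
            return i, steps

# Display 1: one red R among green R's -- pop-out.
display1 = [("green", "R")] * 99 + [("red", "R")]
# Display 2: one green R among green L's and red R's -- conjunction.
display2 = [("green", "L")] * 50 + [("red", "R")] * 49 + [("green", "R")]

print(feature_search(display1, "red"))              # found in 1 step
print(conjunction_search(display2, ("green", "R"))) # found in up to 100 steps
```

The constant-versus-linear behavior of human subjects is exactly what a parallel-feature-map, serial-conjunction algorithm predicts.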
Other examples of successes include artificial neural nets, which have huge applications in things like handwriting recognition, face recognition, classification, robotics, and many other areas. They draw their inspiration from the structure of the brain, and somehow manage to function enormously well; they are used in industry all the time. If that is not great validation of the model, I don’t know what is.
I don’t know why people like Epstein feel the need to deny things for which the evidence is so overwhelming. He behaves like a creationist in denying evolution. And like creationists, he apparently has no training in a very relevant field (here, computer science) but still wants to pontificate on it. When intelligent people behave so stupidly, it makes me sad.
P. S. I forgot to include one of the best pieces of evidence that the brain, as a computer, is doing things roughly analogous to digital computers, and certainly no more powerful than our ordinary RAM model or multitape Turing machine. Here it is: mental calculators who can do large arithmetic calculations are known, and their feats have been catalogued: they can do things like multiply large numbers or extract square roots in their heads without pencil and paper. But in every example known, their extraordinary computational feats are restricted to things for which we know there exist polynomial-time algorithms. None of these computational savants have ever, in the histories I’ve read, been able to factor arbitrary large numbers in their heads (say numbers of 100 digits that are the product of two primes). They can multiply 50-digit numbers in their heads, but they can’t factor. And, not surprisingly, no polynomial-time algorithm for factoring is currently known, and perhaps there isn’t one.
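A toy demonstration of the asymmetry, with primes far smaller than the 50-digit cases mentioned above:

```python
# Multiplication is polynomial-time in the number of digits; naive factoring
# by trial division takes about sqrt(n) steps, which is exponential in the
# number of digits. (Small primes here for the demo; real instances would
# use, say, 50-digit primes, where the gap becomes astronomical.)
p, q = 1000003, 1000033            # two primes just above a million
n = p * q                          # the easy direction: instant

def trial_factor(n):
    """Naive factoring by trial division: try divisors up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1                    # n is prime

print(trial_factor(n))             # recovers (1000003, 1000033) the hard way
```

Even at this tiny scale, the multiplication is one machine operation while the factoring takes about a million trial divisions; the mental calculators, tellingly, respect exactly the same boundary.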