Doug Hofstadter, Flight, and AI

Douglas Hofstadter, author of the fascinating book Gödel, Escher, Bach, is someone I’ve admired for a long time, both as an expositor and as an original thinker.

But he goes badly wrong in a few places in this essay in the Atlantic Monthly. Actually, he’s said very similar things about AI in the past, so I am not really that surprised by his views here.

Hofstadter’s topic is the shallowness of Google Translate. Much of his criticism is on the mark: although Google Translate is extremely useful (and I use it all the time), it is true that it does not usually match the skills of the best human translators, or even good human translators. And he makes a strong case that translation is a difficult skill because it is not just about language, but about many facets of human experience.

(Let me add two personal anecdotes. I once saw the French version of Woody Allen’s movie Annie Hall. In the original scene, Alvy Singer (Woody Allen) complains that a man was being anti-Semitic because he said “Did you eat?”, which Alvy mishears as “Jew eat?”. This was translated as “Tu viens pour le rabe?”, which Alvy conflates with “rabbin”, the French word for “rabbi”. The translator had to work at that one! And then there are the French versions of the Harry Potter books, where the “Sorting Hat” became the “Choixpeau”, a truly brilliant invention on the part of the translator.)

But other things Hofstadter says are just … wrong. Or wrong-headed. For example, he says, “The bilingual engine isn’t reading anything–not in the normal human sense of the verb ‘to read.’ It’s processing text.” This is exactly the kind of complaint people made about the idea of flying machines: “A flying machine isn’t flapping its wings, so it cannot be said to fly in the normal human understanding of how birds fly.” [not an actual quote] Of course a computer doesn’t read the way a human does. It doesn’t have an iris or a cornea, it doesn’t use its finger to turn the page or make the analogous motion on a screen, and it doesn’t move its lips or write “How true!” in the margins. But what does that matter? No matter what, computer translation is going to be done differently from the exact way humans do it. The telling question is, Is the translation any good? Not, Did it translate using exactly the same methods and knowledge a human would? To be fair, that’s most of his discussion.

As for “It’s processing text”, I hardly see how that is a criticism. When people read and write and speak, they are also “processing text”. True, they process text in different ways than computers do. People do so, in part, by taking advantage of their particular knowledge base. But so does a computer! The real complaint seems to be that Google Translate doesn’t currently have access to, or use extensively, the vast and rich vault of common-sense and experiential knowledge that human translators do.

Hofstadter says, “Whenever I translate, I first read the original text carefully and internalize the ideas as clearly as I can, letting them slosh back and forth in my mind. It’s not that the words of the original are sloshing back and forth; it’s the ideas that are triggering all sorts of related ideas, creating a rich halo of related scenarios in my mind. Needless to say, most of this halo is unconscious. Only when the halo has been evoked sufficiently in my mind do I start to try to express it–to ‘press it out’–in the second language. I try to say in Language B what strikes me as a natural B-ish way to talk about the kinds of situations that constitute the halo of meaning in question.

“I am not, in short, moving straight from words and phrases in Language A to words and phrases in Language B. Instead, I am unconsciously conjuring up images, scenes, and ideas, dredging up experiences I myself have had (or have read about, or seen in movies, or heard from friends), and only when this nonverbal, imagistic, experiential, mental ‘halo’ has been realized—only when the elusive bubble of meaning is floating in my brain–do I start the process of formulating words and phrases in the target language, and then revising, revising, and revising.”

That’s a nice description — albeit maddeningly vague — of how Hofstadter thinks he does it. But where’s the proof that this is the only way to do wonderful translations? It’s a little like the world’s best Go player talking about the specific kinds of mental work he uses to prepare before a match and during it … shortly before he gets whipped by AlphaGo, an AI technology that uses completely different methods from those of the human player.

Hofstadter goes on to say, “the technology I’ve been discussing makes no attempt to reproduce human intelligence. Quite the contrary: It attempts to make an end run around human intelligence, and the output passages exhibited above clearly reveal its giant lacunas.” I strongly disagree with the “end run” implication. Again, it’s like viewing flying as something that can only be achieved by flapping wings, and propellers and jet engines are just “end runs” around the true goal. This is a conceptual error. When Hofstadter says “There’s no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are”, that is just an assertion. I can translate passages about war even though I’ve never been in a war. I can translate a novel written by a woman even though I’m not a woman. So I don’t need to have experienced everything I translate. If mediocre translations can be done now without the requirements Hofstadter imposes, there is just no good reason to expect that excellent translations can’t eventually be achieved without them, at least not to the degree that Hofstadter claims they are required.

I can’t resist mentioning this truly delightful argument against powered mechanical flight, as published in the New York Times:

The best part of this “analysis” is the date when it was published: October 9, 1903, exactly 69 days before the first successful powered flight of the Wright Brothers.

Hofstadter writes, “From my point of view, there is no fundamental reason that machines could not, in principle, someday think, be creative, funny, nostalgic, excited, frightened, ecstatic, resigned, hopeful…”.

But they already do think, in any reasonable sense of the word. They are already creative in a similar sense. As for words like “frightened, ecstatic, resigned, hopeful”, the main problem is that we cannot currently articulate with suitable precision what exactly we mean by them. We do not yet understand our own biology well enough to explain these concepts in the more fundamental terms of physics, chemistry, and neuroanatomy. When we do, we might be able to mimic them … if we find it useful to do so.

Addendum: The single most clueless comment on Hofstadter’s piece is this, from “Steve”: “Simple common sense shows that [a computer] can have zero ‘real understanding’ in principle. Computers are in the same ontological category as harmonicas. They are *things*. As in, not alive. Not conscious.

Furthermore the whole “brain is a machine” thing is a *belief* based on pure faith. Nobody on earth has the slightest idea how consciousness actually arises in a pile of meat. Reductive materialism is fashionable today, but it is no less faith-based than Mormonism.”

Yet More Incoherent Thinking about AI

I’ve written before about how sloppy and incoherent a lot of popular writing about artificial intelligence is, for example here and here — even by people who should know better.

Here’s yet another example, a letter to the editor published in CACM (Communications of the ACM).

The author, a certain Arthur Gardner, claims “my iPhone seemed to understand what I was saying, but it was illusory”. But nowhere does Mr. Gardner explain why it was “illusory”, nor how he came to believe Siri did not really “understand”, nor even what his criteria for “understanding” are.

He goes on to claim that “The code is clever, that is, cleverly designed, but just code.” I am not really sure how a computer program can be something other than what it is, namely “code” (jargon for “a program”), or even why Mr. Gardner thinks this is a criticism of something.

Mr. Gardner states “Neither the chess program nor Siri has awareness or understanding”. But, lacking rigorous definitions of “awareness” or “understanding”, how can Mr. Gardner (or anyone else) make such claims with authority? I would say, for example, that Siri does exhibit rudimentary “awareness” because it responds to its environment. When I call its name, it responds. As for “understanding”, again I say that Siri exhibits rudimentary “understanding” because it responds appropriately to many of my utterances. If I say, “Siri, set alarm for 12:30” it understands me and does what I ask. What other meanings of “awareness” and “understanding” does Mr. Gardner appeal to?
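Even these weak senses of the words are easy to instantiate in code. Here is a minimal, purely illustrative sketch — in no way a claim about how Siri is actually implemented — of a program that responds to its name and responds appropriately to an alarm-setting utterance:

```python
import re

def respond(utterance: str) -> str:
    """Return an appropriate response to a few simple utterances.

    A toy illustration only; real assistants use far richer pipelines.
    """
    # Rudimentary "understanding": act appropriately on a recognized command.
    m = re.match(r"[Ss]et alarm for (\d{1,2}:\d{2})", utterance)
    if m:
        return f"Alarm set for {m.group(1)}."
    # Rudimentary "awareness": respond when called by name.
    if utterance.strip().rstrip("!,.").lower() == "siri":
        return "Yes?"
    return "Sorry, I didn't catch that."

print(respond("Set alarm for 12:30"))  # Alarm set for 12:30.
```

The point is not that this toy is aware or understands in any deep sense, but that “responds to its environment” and “responds appropriately to utterances” are criteria a machine can plainly meet, so the skeptic owes us sharper ones.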

Mr. Gardner claims “what we are doing — reading these words, asking maybe, “Hmmm, what is intelligence?” is something no machine can do.” But why? It’s easy to write a program that will do exactly that: read words and type out “Hmmm, what is intelligence?” So what, specifically, is the distinction Mr. Gardner is appealing to?
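Taken literally, the task Mr. Gardner names is a few lines of code. A deliberately literal-minded sketch (the function name is my own invention):

```python
import sys

def ponder(text: str) -> str:
    # "Read these words": process the input, in the minimal sense.
    _words = text.split()
    # Then ask the very question Mr. Gardner says no machine can ask.
    return "Hmmm, what is intelligence?"

if __name__ == "__main__":
    print(ponder(sys.stdin.read()))
```

Of course this program is not intelligent; that is exactly why a claim of the form “no machine can do X” needs X specified far more carefully than Mr. Gardner does.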

He then says, “That which actually knows, cares, and chooses is the spirit, something every human being has. It is what distinguishes us from animals and from computers.” First, there’s the usual “actually” dodge. It never matters to the AI skeptic how smart a computer is, it is still never “actually” thinking. Of course, what “actual” thinking is, no one can ever tell me. Then there’s the appeal to the “spirit”, a nebulous, incoherent thingy that no one has ever shown to exist. And finally, there’s the absurd claim that whatever a “spirit” is, it’s lacking in animals. How does Mr. Gardner know that for certain? Has he ever observed any primates other than humans? They exhibit, as we can read in books like Chimpanzee Politics, many of the same kinds of “aware” and “intelligent” behaviors that humans indulge in.

This is just more completely incoherent drivel about artificial intelligence, no doubt driven by religion and the need to feel special. Why anyone thought this was worth publishing is beyond me.

Last Moose Story of the Year: The Limping Moose that Halted an Election

Over in Calgary, a limping moose delayed elections back in October.

The article contains helpful advice, such as “if you see a moose, you are always encouraged to back away slowly and to make your way into a building”. When hiking in the wilderness, I always keep this in mind.

Finally, remember these immortal words of Tom Shirlaw: “I spent years in the military and even overseas, you could cast your ballot… never stopped by a moose.”

How to Be a Demagogue

If you’re hoping to become a demagogue — that is, one who “seeks support by appealing to popular desires and prejudices rather than by using rational argument” — this article should be essential reading.

Here we have Allen C. Guelzo, a famous historian who really should know better, claiming that “it is not clear what daring thing the owners, coaches, and players of the National Football League thought they were doing Sunday when they collectively took a knee or raised clenched fists while ‘The Star-Spangled Banner’ was played.”

Let’s ignore for the moment the implied sneer in his choice of “daring”; we’ll come back to it later.

“Not clear” what they were doing? Only if you haven’t been paying any attention at all.

The recent origin of these protests is, as everybody knows, Colin Kaepernick’s 2016 refusal to stand for the national anthem. His motivations are clear, because he has discussed them on several occasions: he was protesting wrongdoing, namely the very real police misconduct against blacks and other minorities.

How does Prof. Guelzo not know this?

Other players have since joined in the protests. Kaepernick “took a knee” in a game in September 2016, and was joined in his protest by teammate Eric Reid. Kaepernick was quoted as saying, “I can’t see another Sandra Bland, Tamir Rice, Walter Scott, Eric Garner. At what point do we take a stand and, as a people, say this isn’t right? You [the police] have a badge and you’re supposed to be protecting us, not murdering us.”

These were completely peaceful protests. Yet the reaction from the far Right has been insane. For example, Pastor Allen Joyner apparently advocated murder of the protesters: “If you don’t want to stand for the national anthem you can line up over there by the fence and let our military personnel take a few shots at you.”

In September of this year, a full year after Kaepernick began his protests, President Trump decided to weigh in, advocating that those who protested should be fired. After Trump’s remarks, many more players joined the protests. Their reasons have been discussed at length: many players felt that they had to stand up for their First Amendment rights in the face of a government official — Trump — trying to prevent them from exercising those rights. In doing so, Trump was possibly in violation of 18 U.S. Code § 227(a).

For example, Baltimore Ravens player Benjamin Watson was quoted as saying, “A lot of guys were upset about the things President Trump said, were upset that he would imply that we can’t exercise our First Amendment rights as players. We were upset that he would imply that we should be fired for exercising those rights. It was very emotional for all of us. We all had decisions to make.”

How does Prof. Guelzo not know this? It’s been discussed in dozens of articles and interviews.

In my opinion, Prof. Guelzo, who is no fool, probably knows quite well why the players are protesting. But to explore these reasons seriously would detract from his demagogic goal.

You might think a US history professor would applaud players who engage in peaceful protests. You might think a man posting on a site about “American Greatness” could use this to tell us about our First Amendment rights and why they are vital to American democracy. You might think a professor who wrote a biography of Lincoln would express some concern about a President who uses his bully pulpit to attack protesters and try to get them fired from their jobs.

You would, sadly, be wrong.

Instead, Prof. Guelzo attacks the protesters. He claims they “generat[ed] the comprehensive fury of the American public”. But my examination of coverage of the protests shows that the “fury” was far from “comprehensive”; it was decidedly mixed. This is backed up by examining polls that came out last year, after Kaepernick started his protests. For example, in one poll, “70 percent of whites disagreed with Kaepernick’s stance, while only 40 percent of racial minorities disagreed with the 49ers quarterback.” This is hardly “comprehensive”.

Back to Prof. Guelzo’s sneer about “daring”. Yes, it was daring. It was daring because some players stand to lose their very profitable jobs, especially if owners take President Trump’s advice. Colin Kaepernick himself still does not have a position, despite his evident talent. It was daring because far-right demagogues are whipping up anger against the protesters, and who knows where that could lead? We know what happened when they similarly whipped up anger during Pizzagate.

The players certainly are risking a lot more than Prof. Guelzo did by writing his column.

Prof. Guelzo claims that the protesters don’t have the moral high ground because so many NFL players are criminals. This is the ad hominem fallacy. I could reply, I suppose, by citing Wyndham Lathem and Amy Bishop as evidence that we shouldn’t give much moral high ground to professors, either, but wouldn’t that be just adopting Prof. Guelzo’s slimy tactics? Hey, I’m not going to defend the misbehavior of professional football players, but does it really have much of a bearing on the protests and their motivations? Whether you’re an upstanding citizen or a criminal, you can recognize racism and injustice. Whether you’re an upstanding citizen or a criminal, you can support the Constitution. Edward Lawson was a black man who was repeatedly and unfairly treated by the police, even convicted once for nothing. He went to the Supreme Court to argue his case. He won.

You’d think Prof. Guelzo would recognize this.

Prof. Guelzo sneers at the “millionaires of the NFL” who “think they’re better or wiser” than a Civil War hero. He offers no evidence at all that these “millionaires” have ever said any such thing, nor that their protests imply such a thing. He does not mention that Kaepernick has a charitable foundation that has given hundreds of thousands of dollars to groups that help the poor and downtrodden.

You’d think this would rate a mention in Prof. Guelzo’s screed.

You’d be wrong. Because that’s not the way a demagogue works.

Cross Orbweaver

I don’t claim to be an arachnologist, but I am pretty sure this is Araneus diadematus, the Cross Orbweaver. It’s not native to Canada, but it is common in Ontario gardens.

Are Religious Beliefs Really Off Limits for Voters?

Consider this letter from the President of Princeton University, Christopher Eisgruber.

I am not persuaded at all by these arguments. I wrote the following response.

Dear President Eisgruber:

I could not disagree more strongly with the sentiments expressed in your letter to the Judiciary Committee.

1. Religious beliefs are not, as you claim, “irrelevant” to the qualifications for a Federal judgeship. Would you, for example, be willing to confirm an otherwise-qualified judge who subscribed to the tenets of Christian Identity? (If you are not familiar with this religion, please consult .)

2. Beliefs do not magically become off limits to questioning, probing, or otherwise investigating simply because one labels them “religious”. As you know, there is significant debate about whether some belief systems, such as Scientology, are indeed religions. There is no bright line separating religion from other kinds of opinions one may hold.

3. It is absurd to claim that Professor Barrett’s religious beliefs are not part of her judicial philosophy, when you yourself cite an article of hers that addresses precisely this issue.

4. You also should know that the “religious test” clause in the Constitution was in response to legislation, such as the Test Acts, that made it impossible for members of certain religions to hold public office. This Constitutional clause says nothing at all about whether voters may make up their minds based on a candidate’s religious beliefs. Nor does it say that Judiciary Committee members may not evaluate the suitability of a candidate based on what he or she believes.

This kind of posturing is unworthy of you and unworthy of Princeton.

Jeffrey Shallit ’79

In addition, I note that polls show that a large percentage of the American public would not vote for an atheist candidate. Why is it that this never merits letters of concern from people like President Eisgruber?

Many religious people want a double standard: the freedom to hold beliefs, no matter how pernicious or unsupported, and the right to never be questioned on those beliefs.