Discovery Institute Branches Out Into Comedy

That wretched hive of scum and villainy, the Discovery Institute, has announced that its nefarious tentacles have snagged a new venture: a situation comedy called the “Walter Bradley Center for Natural and Artificial Intelligence”.

Walter Bradley, as you may recall, is the engineering professor and creationist who, despite having no advanced training in biology, wrote a laughably bad book on abiogenesis. Naming the “center” after him is very appropriate, as he’s never worked in artificial intelligence and, according to DBLP, has no scientific publications on the topic.

And who was at the kick-off for the “center”? Why, the illustrious Robert J. Marks II (who, after nearly four years, still cannot answer a question about information theory), William Dembski (who once published a calculation error that resulted in a mistake of 65 orders of magnitude), George Montañez, and (wait for it) … Michael Egnor.

Needless to say, none of these people have any really serious connection to the mainstream of artificial intelligence. Egnor has published exactly 0 papers on the topic (or on any computer science topic), according to DBLP. Dembski has a total of six entries in DBLP, some of which have a vague, tangential relationship to AI, but none has been cited by other published papers more than a handful of times (other than self-citations and citations from creationists). Marks has some serious academic credentials, but in a different area. In the past, he published mostly on topics like signal processing, amplifiers, antennas, information theory, and networks; lately, however, he’s branched out into publishing embarrassingly naive papers on evolution. As far as I can tell, he’s published only a small handful of papers that could, generously speaking, be considered mainstream artificial intelligence, none of which seems to have had much impact. Montañez is perhaps the exception: he’s a young Ph.D. who works in machine learning, among other things. He has one laughably bad AI paper, about the Turing test, in an AI conference, and another in AAAI 2015, plus a handful in somewhat-related areas.

In contrast, take a look at the DBLP record for my colleague Peter van Beek, who is recognized as a serious AI researcher. See the difference?

Starting a center on artificial intelligence with nobody on board who would be recognized as a serious, established researcher in artificial intelligence? That’s comedy gold. Congrats, Discovery Institute!

Yet Another Baseless Claim about Consciousness

If I live long enough, I’m planning to write a book entitled “The 100 Stupidest Things Anyone Ever Said About Minds, Brains, Consciousness, and Computers”. Indeed, I’ve been collecting items for this book for some time. Here’s my latest addition: Michael S. Gazzaniga, a famous cognitive neuroscientist who should know better, writes:

Perhaps the most surprising discovery for me is that I now think we humans will never build a machine that mimics our personal consciousness. Inanimate silicon-based machines work one way, and living carbon-based systems work another. One works with a deterministic set of instructions, and the other through symbols that inherently carry some degree of uncertainty.

If you accept that the brain functions computationally (and I think the evidence for it is very strong), then this is, of course, utter nonsense. It was the great insight of Alan Turing that computing does not depend in any significant way on the underlying substrate where the computing is being done. Whether the computer is silicon-based or carbon-based is totally irrelevant. This is the kind of thing that is taught in any third-year university course on the theory of computation.

The claim is wrong in other ways. It is not the case that “silicon-based machines” must work with a “deterministic set of instructions”. Some computers today have access to (at least in our current physical understanding) a source of truly random numbers, in the form of radioactive decay. Furthermore, even the most well-engineered computing machines sometimes make mistakes. Soft errors can be caused, for example, by cosmic rays or radioactive decay.

Furthermore, Dr. Gazzaniga doesn’t seem to recognize that if “some degree of uncertainty” is useful, this is something we can simulate with a program!
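
Just to make the point concrete: a few lines of Python (my own toy example, of course, not anything from Gazzaniga) suffice to inject “some degree of uncertainty” into an otherwise deterministic computation. The noise level and error probability below are arbitrary illustrative values; Python’s random module is only pseudorandom, but random.SystemRandom (which draws on operating-system entropy) could be substituted if genuinely physical randomness were wanted.

```python
# Illustrative sketch only: injecting "some degree of uncertainty"
# into an otherwise deterministic computation.  The noise level and
# error probability below are arbitrary choices for demonstration.
import random

def noisy_signal(true_value, noise=0.05):
    """Return true_value perturbed by noise drawn uniformly from [-noise, +noise]."""
    return true_value + random.uniform(-noise, noise)

def unreliable_decision(p_correct=0.99):
    """Return a yes/no answer that is wrong a small fraction of the time,
    mimicking a component that does not behave perfectly deterministically."""
    return random.random() < p_correct

if __name__ == "__main__":
    print([round(noisy_signal(1.0), 3) for _ in range(5)])
    print([unreliable_decision() for _ in range(5)])
```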

The Sortition Solution: Representation by Randomly-Chosen Representatives

The US political system is clearly broken. To name just a few problems:

  • the legislative agenda is largely driven, not by citizen need, but by lobbyists and special interests that can afford large political contributions;
  • corruption is rampant;
  • the budget never gets balanced because existing funded items have strong special interest support;
  • new budget items get added (but rarely removed) by special interests;
  • special interests consistently block action where there is widespread public support (e.g., gun control);
  • political parties induce a tribalist “us vs. them” mentality that leads to gridlock and an inability to deal with corruption within a party;
  • minority political viewpoints (Greens, for example) rarely get elected because they cannot win a plurality in any single district;
  • representatives are typically chosen from a small number of professions (e.g., law), while other sorts of expertise (e.g., science) are not adequately represented;
  • almost all representatives are Christians; atheists and other minority religious viewpoints are wildly under-represented;
  • incumbents have a huge advantage over challengers, even when they are clearly unfit;
  • women and minorities are wildly under-represented;
  • rural voters and interests are over-represented;
  • instead of being seen as employees doing the work of citizens, representatives become media celebrities in their own right;
  • legislators are extremely reluctant to address controversial issues, for fear of being voted out in the next election;
  • first-past-the-post voting means that candidates that most voters dislike are often elected.

Proportional representation is often proposed as a solution to some of these problems. In the most typical version of proportional representation — party-list — you vote for a party, not a candidate, and representatives are then chosen from a list the party provides. But this doesn’t resolve the corruption and tribalism problems embodied in the first few items on my list.

My solution is exotic but simple: sortition, or random representation. Of course, it’s not original with me: we use sortition today to form juries. But I would like to extend it to all legislative bodies.

Support for sortition comes from all parts of the political spectrum; William F. Buckley, Jr., for example, once said, “I am obliged to confess that I should sooner live in a society governed by the first two thousand names in the Boston telephone directory than in a society governed by the two thousand faculty members of Harvard University.”

Here is a brief outline of how it would work. Legislators would be chosen uniformly and randomly from a universal, publicly-available list; perhaps a list of all registered voters.

In each election period (say 2-5 years), a random fraction of all representatives would be completely replaced, perhaps 25-50%. This would allow some institutional memory and expertise to be retained, while ensuring that incumbents do not have enough time to build up fiefdoms that lead to corruption.
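
To make the mechanics concrete, here is a minimal Python sketch (my own illustration, with an arbitrary chamber size of 200 and a 25% replacement rate) of the two steps just described: an initial uniform random draw from a public list, followed by partial replacement each election period.

```python
# Illustrative sketch only: uniform random selection from a public list,
# with partial replacement of the chamber each election period.
# Chamber size and replacement fraction are arbitrary example values.
import random

def initial_chamber(voter_list, size=200):
    """Draw the first chamber uniformly at random, without replacement."""
    return random.sample(voter_list, size)

def refresh_chamber(chamber, voter_list, fraction=0.25):
    """Replace a random fraction of sitting members with fresh random draws,
    retaining the rest so some institutional memory is preserved."""
    n_out = int(len(chamber) * fraction)
    leaving = set(random.sample(chamber, n_out))
    staying = [m for m in chamber if m not in leaving]
    current = set(chamber)
    eligible = [v for v in voter_list if v not in current]
    incoming = random.sample(eligible, n_out)
    return staying + incoming

if __name__ == "__main__":
    voters = [f"voter-{i:07d}" for i in range(1_000_000)]
    chamber = initial_chamber(voters)
    chamber = refresh_chamber(chamber, voters)   # one election period later
    print(len(chamber))                          # still 200 members
```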

Sortition could be phased in gradually. For the first 10 years, sortition could be combined with a traditional electoral system, in some proportion that starts small and eventually completely replaces the traditional electoral system. This would increase public confidence in the change, as well as avoiding the problem of a “freshman class” that would be completely without experience.

I suggest that we start with the legislature of a small state, such as New Hampshire, as an experiment. Once the experiment is validated (and I think it would be), sortition could be extended to replace the federal system.

Advantages

Most of the problems I mentioned above would be resolved, or greatly reduced in scope.

The new legislative body would be truly representative of the US population: For example, about 50% of legislators would be women. About 13% would be black, 17% Hispanic or Latino, and 5% Asian. About 15% would be atheists, agnostics, humanists, or otherwise religiously unaffiliated.

Issues would be decoupled from parties: Right now, if you vote for the Republicans, you get lower taxes and restrictions on abortion. What if you support one but not the other? There is no way to express that preference.

Difficult legislative choices will become easier: Experiments have shown over and over that balancing the federal budget — traditionally one of the most difficult tasks in the existing system — turns out to be a brief and relatively trivial exercise for non-partisan citizen groups. (Here’s just one such example.) Sortition would resolve this thorny problem.

One significant motivation for corruption — getting donations for re-election — would essentially disappear. Of course, there would be other opportunities for corruption (there always are), but at least one would be gone.

A diverse elected body would be able to consider issues from a wide variety of different perspectives. Effective action could be taken where there is widespread public support (e.g., gun control).

Objections answered

People will not want to serve: We would pay them very well — for example, $250,000 per year. We would enact a law requiring employers to release representatives from their employment with a guarantee of re-employment after their term is over. If someone refuses to serve, we’d just move to the next person on the random list.

Sortition will produce stupid, incompetent, and dishonest representatives: Very true. Some will be stupid, some will be incompetent, and some will be dishonest. But this is also true for the existing system. (Have you ever seen Louie Gohmert being interviewed?) In my view, those with genuine expertise and leadership ability will naturally be seen as leaders by others and acquire some influence within the chamber. Stupid and incompetent people will quickly be recognized for what they are and will not have as much influence on the legislative agenda.

The public will not have trust in the selection process: Trust is a genuine issue; people will naturally distrust a new system. That’s one reason to phase it in gradually. Mathematicians and theoretical computer scientists know a lot about how to sample randomly; whatever specific method is chosen would be open-source and subject to scrutiny. To make a truly random choice even more convincing, a combination of different methods could be used. For example, we could use algorithmic methods to choose a sample of (say) a thousand names. Then we could use physical means (for example, the ping-pong balls used for lotteries) to choose the final 200 legislators from this group.
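
Here is a small Python sketch of what the open-source algorithmic stage might look like. Deriving the seed from a pre-announced public string is just one possibility, and everything in the example (the names, the shortlist size of 1,000, the seed string) is my own illustrative choice rather than a worked-out protocol.

```python
# Illustrative sketch of an auditable algorithmic draw: the random seed is
# derived from a pre-announced public string, so anyone can re-run the code
# on the published voter list and verify the shortlist.  The final 200
# legislators would then be drawn from this shortlist by physical means
# (e.g., lottery ping-pong balls), as described above.
import hashlib
import random

def auditable_shortlist(names, public_seed_string, shortlist_size=1000):
    """Deterministically sample shortlist_size names, using a seed that
    anyone can recompute from the published seed string."""
    digest = hashlib.sha256(public_seed_string.encode("utf-8")).digest()
    rng = random.Random(int.from_bytes(digest, "big"))
    return rng.sample(sorted(names), shortlist_size)  # sort for a canonical order

if __name__ == "__main__":
    voters = [f"voter-{i:07d}" for i in range(1_000_000)]
    shortlist = auditable_shortlist(voters, "draw #1, seed string announced in advance")
    print(shortlist[:3])
```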

The legislative agenda will not be clear: Political parties offer a legislative agenda with priorities, but where will the agenda come from under sortition? My answer is that the major issues of the day will generally be clear. For example, today’s issues include anthropogenic global warming, terrorism, immigration, wage stagnation, and health care, to name just five. These are clear issues of concern that can be seen without the need for a political party’s ideology. The existing federal and state bureaucracies — civil servants — will still be there to offer expertise.

People will feel like they have no voice: Without elections, how do people feel their voice is heard? Another legitimate objection. This suggests considering some sort of mixed system, say, with 50% of representatives chosen by sortition and 50% chosen by election. Or perhaps two different legislative bodies, one based on sortition and one based on election. We have to be willing to experiment and innovate.

Sortition should be seriously considered.

Doug Hofstadter, Flight, and AI

Douglas Hofstadter, author of the fascinating book, Gödel, Escher, Bach, is someone I’ve admired for a long time, both as an expositor and an original thinker.

But he goes badly wrong in a few places in this essay in the Atlantic Monthly. Actually, he’s said very similar things about AI in the past, so I am not really that surprised by his views here.

Hofstadter’s topic is the shallowness of Google Translate. Much of his criticism is on the mark: although Google Translate is extremely useful (and I use it all the time), it is true that it does not usually match the skills of the best human translators, or even good human translators. And he makes a strong case that translation is a difficult skill because it is not just about language, but about many facets of human experience.

(Let me add two personal anecdotes. I once saw the French version of Woody Allen’s movie Annie Hall. In the original scene, Alvy Singer (Woody Allen) is complaining that a man was being anti-semitic because he said “Did you eat?”, which Alvy mishears as “Jew eat?”. This was translated as “Tu viens pour le rabe?”, which Woody Allen conflates with “rabbin”, the French word for “rabbi”. The translator had to work at that one! And then there are the French versions of the Harry Potter books, where the “Sorting Hat” became the “Choixpeau”, a truly brilliant invention on the part of the translator.)

But other things Hofstadter says are just … wrong. Or wrong-headed. For example, he says, “The bailingual engine isn’t reading anything–not in the normal human sense of the verb ‘to read.’ It’s processing text.” This is exactly the kind of complaint people made about the idea of flying machines: “A flying machine isn’t flapping its wings, so it cannot be said to fly in the normal human understanding of how birds fly.” [not an actual quote] Of course a computer doesn’t read the way a human does. It doesn’t have an iris or a cornea, it doesn’t use its finger to turn the page or make the analogous motion on a screen, and it doesn’t move its lips or write “How true!” in the margins. But what does that matter? No matter what, computer translation is going to be done differently from the exact way humans do it. The telling question is, Is the translation any good? Not, Did it translate using exactly the same methods and knowledge a human would?  To be fair, that’s most of his discussion.

As for “It’s processing text”, I hardly see how that is a criticism. When people read and write and speak, they are also “processing text”. True, they process text in different ways than computers do. People do so, in part, by taking advantage of their particular knowledge base. But so does a computer! The real complaint seems to be that Google Translate doesn’t currently have access to, or use extensively, the vast and rich vault of common-sense and experiential knowledge that human translators do.

Hofstadter says, “Whenever I translate, I first read the original text carefully and internalize the ideas as clearly as I can, letting them slosh back and forth in my mind. It’s not that the words of the original are sloshing back and forth; it’s the ideas that are triggering all sorts of related ideas, creating a rich halo of related scenarios in my mind. Needless to say, most of this halo is unconscious. Only when the halo has been evoked sufficiently in my mind do I start to try to express it–to ‘press it out’–in the second language. I try to say in Language B what strikes me as a natural B-ish way to talk about the kinds of situations that constitute the halo of meaning in question.

“I am not, in short, moving straight from words and phrases in Language A to words and phrases in Language B. Instead, I am unconsciously conjuring up images, scenes, and ideas, dredging up experiences I myself have had (or have read about, or seen in movies, or heard from friends), and only when this nonverbal, imagistic, experiential, mental ‘halo’ has been realized—only when the elusive bubble of meaning is floating in my brain–do I start the process of formulating words and phrases in the target language, and then revising, revising, and revising.”

That’s a nice description — albeit maddeningly vague — of how Hofstadter thinks he does it. But where’s the proof that this is the only way to do wonderful translations? It’s a little like the world’s best Go player talking about the specific kinds of mental work he uses to prepare before a match and during it … shortly before he gets whipped by AlphaGo, an AI technology that uses completely different methods than the human.

Hofstadter goes on to say, “the technology I’ve been discussing makes no attempt to reproduce human intelligence. Quite the contrary: It attempts to make an end run around human intelligence, and the output passages exhibited above clearly reveal its giant lacunas.” I strongly disagree with the “end run” implication. Again, it’s like viewing flying as something that can only be achieved by flapping wings, and propellers and jet engines are just “end runs” around the true goal. This is a conceptual error. When Hofstadter says “There’s no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are”, that is just an assertion. I can translate passages about war even though I’ve never been in a war. I can translate a novel written by a woman even though I’m not a woman. So I don’t need to have experienced everything I translate. If mediocre translations can be done now without the requirements Hofstadter imposes, there is just no good reason to expect that excellent translations can’t eventually be achieved without them, at least to the degree that Hofstadter claims.

I can’t resist mentioning this truly delightful argument against powered mechanical flight, as published in the New York Times:

The best part of this “analysis” is the date when it was published: October 9, 1903, exactly 69 days before the first successful powered flight of the Wright Brothers.

Hofstadter writes, “From my point of view, there is no fundamental reason that machines could not, in principle, someday think, be creative, funny, nostalgic, excited, frightened, ecstatic, resigned, hopeful…”.

But they already do think, in any reasonable sense of the word. They are already creative in a similar sense. As for words like “frightened, ecstatic, resigned, hopeful”, the main problem is that we cannot currently articulate, in a suitably precise way, exactly what we mean by them. We do not yet understand our own biology well enough to explain these concepts in the more fundamental terms of physics, chemistry, and neuroanatomy. When we do, we might be able to mimic them … if we find it useful to do so.

Addendum: The single most clueless comment to Hofstadter’s piece is this, from “Steve”: “Simple common sense shows that [a computer] can have zero “real understanding” in principle. Computers are in the same ontological category as harmonicas. They are *things*. As in, not alive. Not conscious.

“Furthermore the whole “brain is a machine” thing is a *belief* based on pure faith. Nobody on earth has the slightest idea how consciousness actually arises in a pile of meat. Reductive materialism is fashionable today, but it is no less faith-based than Mormonism.”

Yet More Incoherent Thinking about AI

I’ve written before about how sloppy and incoherent a lot of popular writing about artificial intelligence is, for example here and here — even by people who should know better.

Here’s yet another example, a letter to the editor published in CACM (Communications of the ACM).

The author, a certain Arthur Gardner, claims “my iPhone seemed to understand what I was saying, but it was illusory”. But nowhere does Mr. Gardner explain why it was “illusory”, nor how he came to believe Siri did not really “understand”, nor even what his criteria for “understanding” are.

He goes on to claim that “The code is clever, that is, cleverly designed, but just code.” I am not really sure how a computer program can be something other than what it is, namely “code” (jargon for “a program”), or even why Mr. Gardner thinks this is a criticism of something.

Mr. Gardner states “Neither the chess program nor Siri has awareness or understanding”. But, lacking rigorous definitions of “awareness” or “understanding”, how can Mr. Gardner (or anyone else) make such claims with authority? I would say, for example, that Siri does exhibit rudimentary “awareness” because it responds to its environment. When I call its name, it responds. As for “understanding”, again I say that Siri exhibits rudimentary “understanding” because it responds appropriately to many of my utterances. If I say, “Siri, set alarm for 12:30”, it understands me and does what I ask. What other meanings of “awareness” and “understanding” does Mr. Gardner appeal to?
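
To be clear about how minimal this behavioral sense of “understanding” is, here is a toy Python fragment of my own devising (obviously nothing like Siri’s actual implementation) that “responds appropriately” to exactly one kind of utterance:

```python
# Toy illustration (not how Siri works): "understanding" in the minimal,
# behavioral sense of responding appropriately to an utterance.
import re

def handle_utterance(text):
    match = re.search(r"set (?:an )?alarm for (\d{1,2}:\d{2})", text.lower())
    if match:
        return f"OK, alarm set for {match.group(1)}."
    return "Sorry, I didn't understand that."

print(handle_utterance("Siri, set alarm for 12:30"))  # OK, alarm set for 12:30.
print(handle_utterance("What's the weather like?"))   # Sorry, I didn't understand that.
```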

Mr. Gardner claims “what we are doing — reading these words, asking maybe, “Hmmm, what is intelligence?” is something no machine can do.” But why? It’s easy to write a program that will do exactly that: read words and type out “Hmmm, what is intelligence?” So what, specifically, is the distinction Mr. Gardner is appealing to?
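
In case that sounds glib, here is a literal, few-line version of such a program (mine, purely for illustration):

```python
# A program that reads some words and then asks the question, exactly as described.
words = input("Enter some words: ")
print("Hmmm, what is intelligence?")
```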

He then says, “That which actually knows, cares, and chooses is the spirit, something every human being has. It is what distinguishes us from animals and from computers.” First, there’s the usual “actually” dodge. It never matters to the AI skeptic how smart a computer is, it is still never “actually” thinking. Of course, what “actual” thinking is, no one can ever tell me. Then there’s the appeal to the “spirit”, a nebulous, incoherent thingy that no one has ever shown to exist. And finally, there’s the absurd claim that whatever a “spirit” is, it’s lacking in animals. How does Mr. Gardner know that for certain? Has he ever observed any primates other than humans? They exhibit, as we can read in books like Chimpanzee Politics, many of the same kinds of “aware” and “intelligent” behaviors that humans indulge in.

This is just more completely incoherent drivel about artificial intelligence, no doubt driven by religion and the need to feel special. Why anyone thought this was worth publishing is beyond me.

Last Moose Story of the Year: The Limping Moose that Halted an Election

Over in Calgary, a limping moose delayed elections back in October.

The article contains helpful advice, such as “if you see a moose, you are always encouraged to back away slowly and to make your way into a building”. When hiking in the wilderness, I always keep this in mind.

Finally, remember these immortal words of Tom Shirlaw: “I spent years in the military and even overseas, you could cast your ballot… never stopped by a moose.”