Yet More Incoherent Thinking about AI


I’ve written before about how sloppy and incoherent a lot of popular writing about artificial intelligence is, for example here and here — even by people who should know better.

Here’s yet another example, a letter to the editor published in CACM (Communications of the ACM).

The author, a certain Arthur Gardner, claims “my iPhone seemed to understand what I was saying, but it was illusory”. But nowhere does Mr. Gardner explain why it was “illusory”, nor how he came to believe Siri did not really “understand”, nor even what his criteria for “understanding” are.

He goes on to claim that “The code is clever, that is, cleverly designed, but just code.” I am not really sure how a computer program can be something other than what it is, namely “code” (jargon for “a program”), or even why Mr. Gardner thinks this is a criticism of something.

Mr. Gardner states “Neither the chess program nor Siri has awareness or understanding”. But, lacking rigorous definitions of “awareness” or “understanding”, how can Mr. Gardner (or anyone else) make such claims with authority? I would say, for example, that Siri does exhibit rudimentary “awareness” because it responds to its environment. When I call its name, it responds. As for “understanding”, again I say that Siri exhibits rudimentary “understanding” because it responds appropriately to many of my utterances. If I say, “Siri, set alarm for 12:30”, it understands me and does what I ask. What other meanings of “awareness” and “understanding” does Mr. Gardner appeal to?

Mr. Gardner claims “what we are doing — reading these words, asking maybe, ‘Hmmm, what is intelligence?’ — is something no machine can do.” But why? It’s easy to write a program that will do exactly that: read words and type out “Hmmm, what is intelligence?” So what, specifically, is the distinction Mr. Gardner is appealing to?
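
To make the point concrete, here is a minimal sketch (plain Python, assuming nothing beyond the standard library) that does, in the most literal sense, exactly what the sentence describes:

    # A trivially literal rendition of "reading these words" and asking the question.
    # This only illustrates how weak the claim is as literally stated; nobody suggests
    # that printing a string amounts to wondering about anything.
    text = input("Enter some words: ")        # "reading these words"
    words = text.split()                      # the program has, literally, read them
    print(f"I read {len(words)} words.")
    print("Hmmm, what is intelligence?")      # "asking maybe, 'Hmmm, what is intelligence?'"

Of course nobody thinks this little script is intelligent; the point is only that the claim, as stated, picks out nothing that machines cannot do.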

He then says, “That which actually knows, cares, and chooses is the spirit, something every human being has. It is what distinguishes us from animals and from computers.” First, there’s the usual “actually” dodge. It never matters to the AI skeptic how smart a computer is; it is still never “actually” thinking. Of course, what “actual” thinking is, no one can ever tell me. Then there’s the appeal to the “spirit”, a nebulous, incoherent thingy that no one has ever shown to exist. And finally, there’s the absurd claim that whatever a “spirit” is, it’s lacking in animals. How does Mr. Gardner know that for certain? Has he ever observed any primates other than humans? They exhibit, as we can read in books like Chimpanzee Politics, many of the same kinds of “aware” and “intelligent” behaviors that humans indulge in.

This is just more completely incoherent drivel about artificial intelligence, no doubt driven by religion and the need to feel special. Why anyone thought this was worth publishing is beyond me.

Comments

  1. CJO says

    I would say, for example, that Siri does exhibit rudimentary “awareness” because it responds to its environment.

    I have a TV set that responds to its environment, too. Specifically to a few specified strings of infrared pulses originating in its environment. “It understands me and does what I ask” when I push a certain button on the remote control. A voice-activated device like one equipped with Siri, similarly, only “responds to its environment” if we restrict “environment” to a set of specified strings in a specified natural language originating in its environment.

    I think it’s the fact that the strings that voice-activated devices respond to are natural language that’s leading you to think there’s anything closer to “understanding” in Siri’s code than in the code that allows my TV to be activated by a remote (but maybe you don’t; maybe you think my TV or my thermostat are also aware).

    Suppose your device had a “remote” with thousands of buttons, each one labeled with a natural-language utterance and the device performed whatever action was specified when you pushed the button so labeled. Would you still characterize the device as understanding anything? If not, what is the difference?
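
    To make the thought experiment concrete: here, roughly, is what such a device amounts to (a hypothetical Python sketch; the button labels and actions are invented, not anyone’s actual product):

        # A "remote" whose buttons are labeled with natural-language utterances,
        # each hard-wired to a fixed action chosen in advance by the designer.
        def set_alarm(time):
            print(f"Alarm set for {time}")

        def report_weather():
            print("It is 20 degrees and sunny")   # a canned response

        BUTTONS = {
            "Siri, set alarm for 12:30": lambda: set_alarm("12:30"),
            "What's the weather like?": report_weather,
            # ... thousands more labeled buttons, each mapped to one fixed action
        }

        def press(button_label):
            action = BUTTONS.get(button_label)
            if action:
                action()                          # the device "does what I ask"
            else:
                print("I didn't get that")        # unlabeled inputs do nothing useful

        press("Siri, set alarm for 12:30")        # Alarm set for 12:30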

    Please don’t think I am endorsing everything in the article, certainly not any of the “spirit” business. But I do think there’s a very clear sense in which you and I understand and are aware that no hard-coded device of the sophistication of current voice-activated personal assistant devices does.

    • shallit says

      “maybe you think my TV or my thermostat are also aware”

      I do, because my definition of “aware” is “has a sensor that can detect changes to its environment and respond accordingly”. This is different from “self-aware”, which to me means “has a model of its environment sophisticated enough to include itself”.

      What are your definitions for these terms?

      There is a famous quip of John McCarthy about thermostats, along the lines of a thermostat having three “beliefs”: it’s too cold in here, it’s too hot in here, it’s just right in here. I think this is a correct and, indeed, profound way to think about it.
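
      For concreteness, here is a toy rendition of the quip (my own sketch; McCarthy, of course, wrote no such code). The thermostat’s entire “belief state” reduces to exactly one of three conditions:

          # A thermostat whose "beliefs" are the three conditions in McCarthy's quip.
          def thermostat(temperature, setpoint=20.0, tolerance=0.5):
              if temperature < setpoint - tolerance:
                  return "it's too cold in here", "heat on"
              elif temperature > setpoint + tolerance:
                  return "it's too hot in here", "heat off"
              else:
                  return "it's just right in here", "no change"

          belief, action = thermostat(17.0)
          print(belief, "->", action)    # it's too cold in here -> heat on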

    • shallit says

      “But I do think there’s a very clear sense in which you and I understand and are aware that no hard-coded device of the sophistication of current voice-activated personal assistant devices does”

      Sure, the sense is that we understand better. In other words, it is a distinction of degree, and not much more. There is a scale of understanding. My iPhone alarm understands only a very tiny aspect of my life; my dog (if I had one) understands a lot more; my students more (but a different subset than my hypothetical dog); my wife even more.

  2. CJO says

    If you focus entirely on outputs, though, you can cling to your absurdly broad definition of “aware” and not get anywhere near even the reasons why consciousness, awareness, sentience, what have you, is an open scientific problem and a philosophical question of some subtlety, much less an answer. “Awareness” denotes an interiority, an openness to phenomena that is not (narrowly) specified. The point you’re missing about a TV being “aware” of certain strings of infrared pulses is that valid inputs are specified by a mechanism and the relation of output to input is arbitrarily set by factors external to the device, that is, decisions made by agents in the design and manufacture of the mechanism. All this is true of a lookup table, however exhaustive the valid inputs may be.

    And that brings me to why it is not merely a difference of degree. Understanding and awareness are the means by which animals “respond accordingly” to detected environmental conditions. The means by which a voice-activated digital assistant device does so is not internal: the decisions about what outputs correspond to which inputs were made by engineers and coders and UI designers over millions of person-hours. We know exactly how the device generates appropriate outputs, and all the understanding involved is external to it.

    Regarding thermostats and their beliefs, I presume you’re familiar with some of Dennett’s work; what I am thinking of in particular is his system of stances. Beliefs belong to the intentional stance. If a system can be adequately described by reference to the physical or design stances, then attributing beliefs to it is nothing more than a category error, at best an overextended metaphor. So I know what you mean, but it’s not a proposition I need to take seriously. A mechanical device with a very simple arrangement of sensor and switch in a negative feedback relation does not admit of analysis from the intentional stance; our investigation is complete at the level of the physical. We don’t need to appeal to beliefs to explain its behavior, so why invent them? I anticipate that the rejoinder is something along the lines of: if neuroscience were complete, brains could be analyzed from the physical stance and we’d have all the answers we need with no need to appeal to the intentional stance. And my answer to that is simply that minds are not brains; animals with brains have minds. That is, leaving aside the daunting empirical task of a complete neuroscience, such a technical advance will not tell us all we want to know about how animals achieve awareness and understanding and come to hold beliefs.

    I expect this in turn to be met with accusations of mysterianism or even harboring dualist sympathies. For what it’s worth, I am a committed materialist. It is not that there’s “something else” in us that makes us conscious; it’s that the modern cognitive sciences have narrowed in on the workings of the brain to such a degree that they leave out most of what a brain does most of the time, which is constantly engage with a world outside. It is my view that an explanation of awareness needs to account for this: that consciousness (mind) is a function of the interface between the brain, with its perceptual and inferential capabilities, and the world of phenomena.

    • shallit says

      I’ll resist the overwhelming temptation to laugh because you used the term “category error”.

      “Awareness” denotes an interiority, an openness to phenomena that is not (narrowly) specified.

      It’s not a black-and-white, 1 or 0 thing. You and I, for example, are not aware of the vast majority of radiation outside the visible spectrum that rains down on us constantly. Does that mean we are not aware, because we only see visible light (and some of us not at all)? There are degrees of awareness. A remote-control TV is only very slightly aware. A cockroach and Google Home are more aware. You and I are still more aware. And awareness lies on multiple dimensions; a dog is less aware of English conversation and more aware of smell.

      The point you’re missing about a TV being “aware” of certain strings of infrared pulses is that valid inputs are specified by a mechanism and the relation of output to input is arbitrarily set by factors external to the device, that is, decisions made by agents in the design and manufacture of the mechanism.

      So what? Why should that be a determinative aspect of awareness? And besides, we, as people, are largely governed by decisions made inside our body, shaped by evolution, over which we have little conscious control ourselves.

      We know exactly how the device generates appropriate outputs

      This is, in fact, simply not true for any system that uses a source of truly random numbers (generated, for example, through radioactive decay), or any sufficiently complicated system that is generated through an evolutionary algorithm. Sometimes we simply cannot say, in any simple-to-explain way, why a particular system makes the decision it makes. This has become a serious legal issue in autonomous vehicles, for example.
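
      As a hedged illustration (everything here is invented: the data, the fitness function, the parameters), consider a decision rule produced by a toy evolutionary search. The resulting numbers do whatever they do, but no engineer ever decided which inputs should map to which outputs:

          # A toy evolutionary search whose final decision rule is whatever the random
          # search stumbled onto, not anything a designer chose. Data and fitness are
          # made up purely for illustration.
          import random

          def decide(weights, inputs):
              # the evolved "decision rule": fire if the weighted sum crosses a threshold
              return sum(w * x for w, x in zip(weights, inputs)) > 1.0

          # made-up cases: five measurements paired with an arbitrary yes/no label
          cases = [([random.random() for _ in range(5)], random.random() > 0.5)
                   for _ in range(50)]

          def fitness(weights):
              return sum(decide(weights, x) == label for x, label in cases)

          population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(30)]
          for _ in range(200):                        # mutate-and-select loop
              population.sort(key=fitness, reverse=True)
              survivors = population[:10]
              population = survivors + [[w + random.gauss(0, 0.1) for w in parent]
                                        for parent in survivors for _ in range(2)]

          best = population[0]
          print(best)    # five numbers that "work"; why they decide as they do is opaque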

      I expect this in turn to be met with accusations of mysterianism or even harboring dualist sympathies.

      Nope. I just don’t think your rejoinder is cogent.

      it’s that the modern cognitive sciences have narrowed in on the workings of the brain to such a degree that they leave out most of what a brain does most of the time, which is constantly engage with a world outside.

      Simply not true at all. A major theme of today’s cognitive neuroscience is exactly what you say it’s not doing: from the famous work of Hubel and Wiesel in the 1960s, to the book The Astonishing Hypothesis, to fMRI studies, to the work of Rizzolatti et al. on mirror neurons and social cognition.

  3. CJO says

    I don’t get the joke. A thermostat belongs to a category of objects that do not need to be explained by imputing beliefs to them.

    • shallit says

      The joke is on people who think that spouting “category error” constitutes a genuine form of argumentation. It’s just a substitute for an actual argument.

      The reason why it’s useful to think of a thermostat as having beliefs is that it demystifies the concept, and makes it more conceivable to ascribe beliefs to things other than humans.

      An analogy: we don’t have to think of membranes as computers, but sometimes it is useful for us to do so (viz. natural computing).

  4. CJO says

    I didn’t “spout” anything; I clearly made an argument, and named its conclusion. I even allowed that “overextended metaphor” might be another interpretation, which indeed seems to be more apt.

    Another phrase that often functions as a substitute for an actual argument is “it’s a matter of degree”. Because here’s the thing about attributing beliefs to objects or entities that we don’t have to think of as having them: it’s unclear to me how we get from one to the other, given that you’ve admitted that the metaphor only does rhetorical work (it serves to make a more general statement “more conceivable”) and has no explanatory force. Call the beliefs of thermostats “small-b” beliefs, and call beliefs that we do have to treat as real to have a complete explanation of the behavior of an agent from the intentional stance “big-B” Beliefs. It seems to me that what you’re aiming at, as it’s “just a matter of degree”, is that, if we pile up and cobble together enough beliefs, presto! we have an agent with Beliefs. But I don’t see that as self-evident or entailed by any argument you’ve made.

    In any case, I think you and I have different interests and are to some extent talking past each other, so I’m not sure we have anything to gain by continuing to go back and forth. Thanks for the discussion; I appreciate the opportunity to work through my (still developing) thoughts on a fascinating topic.

    • shallit says

      I dispute the claim that it has no explanatory force. Indeed, it allows one to determine the relative complexity of systems. A thermostat, having only 3 beliefs, is clearly less complex than a person, who could hold a much larger number of distinct beliefs. “When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.” (Lord Kelvin)

      And no, my point was not “if we pile up and cobble together enough beliefs, presto! we have an agent with Beliefs”. It is, rather, that there is no fundamental distinction between “beliefs” and “Beliefs”. I realize that you want there to be a distinction, but so far you haven’t presented any good argument for that.
