Yes, Your Brain Certainly Is a Computer


Did you hear the news, Victoria? Over in the States those clever Yanks have invented a flying machine!

– A flying machine! Good heavens! What kind of feathers does it have?

– Feathers? It has no feathers.

– Well, then, it cannot fly. Everyone knows that things that fly have feathers. It is preposterous to claim that something can fly without them.

OK, I admit it, I made that dialogue up. But that’s what springs to mind when I read yet another claim that the brain is not a computer, nor like a computer, and even that the language of computation is inappropriate when talking about the brain.

The most recent foolishness along these lines was penned by psychologist Robert Epstein. Knowing virtually nothing about Epstein, I am willing to wager that (a) Epstein has never taken a course in the theory of computation, (b) could not pass the simplest undergraduate exam in that subject, (c) does not know what the Church-Turing thesis is, and (d) could not explain why the thesis is relevant to the question of whether the brain is a computer or not.

Here are just a few of the silly claims by Epstein, with my commentary:

“But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently.”

— Well, Epstein is wrong. We, like all living things, are certainly born with “information”. To name just one obvious example, there is an awful lot of DNA in our cells. Not only is this coded information, it is even coded in base 4, whereas modern digital computers use base 2 — the analogy is clear. We are certainly born with “rules” and “algorithms” and “programs”, as Francis Crick explains in detail for the human visual system in The Astonishing Hypothesis.
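
To make the base-4 point concrete, here is a minimal sketch (my own illustration, nothing from Crick or Epstein) showing that a DNA string re-encodes losslessly into bits; the particular mapping A to 00, C to 01, and so on is an arbitrary choice:

    # Minimal sketch: a DNA string is information in base 4; two bits per base
    # suffice to re-encode it in base 2.  The mapping A->00, C->01, G->10, T->11
    # is an illustrative choice, not a biological claim.
    BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

    def dna_to_bits(dna: str) -> str:
        """Re-encode a base-4 DNA string as a base-2 bit string."""
        return "".join(BASE_TO_BITS[b] for b in dna)

    print(dna_to_bits("GATTACA"))   # -> 10001111000100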

“We don’t store words or the rules that tell us how to manipulate them.”

— We certainly do store words in some form. When we are born, we are unable to pronounce or remember the word “Epstein”, but eventually, after being exposed to enough of his silly essays, suddenly we gain that capability. From where did this ability come? Something must have changed in the structure of the brain (not the arm or the foot or the stomach) that allows us to retrieve “Epstein” and pronounce it whenever something sufficiently stupid is experienced. The thing that is changed can reasonably be said to “store” the word.

As for rules, without some sort of encoding of rules somewhere, how can we produce so many syntactically correct sentences with such regularity and consistency? How can we produce sentences we’ve never produced before, and have them be grammatically correct?
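
As a toy illustration of how a small stock of stored rules can generate sentences never produced before, here is a throwaway sketch; the grammar below is my own invented example, not a model of English:

    import random

    # Toy context-free grammar: a handful of stored rules generate unboundedly
    # many sentences, including ones the "speaker" has never produced before.
    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["the", "N"], ["the", "ADJ", "N"]],
        "VP":  [["V", "NP"]],
        "N":   [["brain"], ["computer"], ["psychologist"]],
        "ADJ": [["silly"], ["astonishing"]],
        "V":   [["stores"], ["processes"]],
    }

    def generate(symbol="S"):
        if symbol not in GRAMMAR:                 # terminal word
            return [symbol]
        rule = random.choice(GRAMMAR[symbol])     # pick one stored rule
        return [word for part in rule for word in generate(part)]

    print(" ".join(generate()))   # e.g. "the silly psychologist processes the brain"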

“We don’t create representations of visual stimuli”

— We certainly do. Read Crick.

“Computers do all of these things, but organisms do not.”

— No, organisms certainly do. They just don’t do it in exactly the same way that modern digital computers do. I think this is the root of Epstein’s confusion.

Anyone who understands the work of Turing realizes that computation is not the province of silicon alone. Any system that can do basic operations like storage and rewriting can do computation, whether it is a sandpile, or a membrane, or a Turing machine, or a person. Today we know (but Epstein apparently doesn’t) that every such system has essentially the same computing power (in the sense of what can be ultimately computed, with no bounds on space and time).
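
For concreteness, here is a minimal sketch of that point: a table of rewriting rules acting on a tape of stored symbols is already a little computer. The rule table and tape conventions are my own illustrative choices; this toy machine just increments a binary number:

    # Storage plus rewriting is enough for computation: a table of rules acting
    # on a tape of symbols.  This toy machine increments a binary number.
    RULES = {                      # (state, symbol) -> (write, move, next state)
        ("carry", "1"): ("0", -1, "carry"),
        ("carry", "0"): ("1",  0, "done"),
        ("carry", "_"): ("1",  0, "done"),
    }

    def run(tape):
        tape = list(tape)
        head, state = len(tape) - 1, "carry"
        while state != "done":
            if head < 0:                          # extend the tape with a blank
                tape.insert(0, "_")
                head = 0
            write, move, state = RULES[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape).lstrip("_")

    print(run("1011"))   # -> 1100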

“The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.”

— This is just utter nonsense. Nobody says “all computers are capable of behaving intelligently”. Take a very simple model of a computer, such as a finite automaton with two states computing the Thue-Morse sequence. I believe intelligence is a continuum, and I think we can ascribe intelligence to even simple computational models, but even I would say that this little computer doesn’t exhibit much intelligence at all. Furthermore, there are good theoretical reasons why finite automata don’t have enough power to “behave intelligently”; we need a more powerful model, such as the Turing machine.
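
Here, for concreteness, is a sketch of the kind of two-state machine I mean: it reads the binary digits of n, flips its state on each 1, and ends in the state giving the nth Thue-Morse bit. Nobody would call this device intelligent:

    # The two-state machine: read the binary digits of n, flipping state on
    # each 1.  The final state is the nth bit of the Thue-Morse sequence.
    def thue_morse_bit(n: int) -> int:
        state = 0
        for digit in bin(n)[2:]:
            if digit == "1":
                state ^= 1          # the machine's only rule: flip on a 1
        return state

    print([thue_morse_bit(n) for n in range(16)])
    # -> [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]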

The real syllogism goes something like this: humans can process information (we know this because humans can do basic tasks like addition and multiplication of integers). Humans can store information (we know this because I can remember my social security number and my birthdate). Things that both store information and process it are called (wait for it) computers.

“a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.”

— Of course, this is utter nonsense. If there were no representation of any kind of a dollar bill in a brain, how could one produce a drawing of it, even imperfectly? I have never seen (just to pick one thing at random) a crystal of the mineral Fletcherite, nor even a picture of it. Ask me to draw it and I will be completely unable to do so because I have no representation of it stored in my brain. But ask me to draw a US dollar bill (in Canada we no longer have them!) and I can do a reasonable, though not exact, job. How could I possibly do this if I have no information about a dollar bill stored in my memory anywhere? And how is it that I fail for Fletcherite?

“The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous”

— Well, it may be preposterous to Epstein, but there is at least evidence for it, at least in some cases.

“A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks.”

— So what? What does this have to do with anything? There is no requirement, in saying that the brain is a computer, that memories and facts and beliefs be stored in individual neurons. Storage that is partitioned across various locations, “smeared” across the brain, is perfectly compatible with computation. It’s as if Epstein has never heard of artificial neural networks, where one can similarly say that a face is not stored in any particular location in memory, but rather distributed across many of them. These networks even exhibit some characteristics of brains, in that damaging parts of them doesn’t entirely destroy the stored data.
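
Here is a minimal numerical sketch of that kind of “smeared” storage (a toy Hopfield-style network, offered as an illustration rather than as a claim about real neurons): the stored pattern survives even after half the connections are destroyed:

    import numpy as np

    # Toy Hopfield-style sketch: a pattern is stored in the weight matrix,
    # spread across every connection.  Zeroing out half the weights degrades
    # but does not erase the memory.
    rng = np.random.default_rng(0)
    pattern = rng.choice([-1, 1], size=100)

    W = np.outer(pattern, pattern).astype(float)   # Hebbian storage
    np.fill_diagonal(W, 0)

    damage = rng.random(W.shape) < 0.5             # destroy half the connections
    W[damage] = 0

    noisy = pattern.copy()
    noisy[:20] *= -1                               # corrupt 20% of the cue
    for _ in range(5):                             # let the network settle
        noisy = np.sign(W @ noisy)
        noisy[noisy == 0] = 1

    print("bits recovered:", int(np.sum(noisy == pattern)), "/ 100")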

“My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

“That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.”

— This is perhaps the single stupidest passage in Epstein’s article. He doesn’t seem to know that “keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery” is an algorithm. Tell that description to any computer scientist, and they’ll say, “What an elegant algorithm!” In exactly the same way, raster graphics machines draw a circle using a clever technique called “Bresenham’s algorithm”. It succeeds in drawing a circle using linear operations only, despite not having the quadratic equation of a circle, (x-a)^2 + (y-b)^2 = r^2, explicitly encoded in it.
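
For concreteness, here is a sketch of one standard integer-only circle rasterizer (the midpoint variant commonly credited to Bresenham); it uses only additions, subtractions, and comparisons, with the quadratic equation nowhere in sight:

    # Midpoint ("Bresenham-style") circle sketch: only integer additions,
    # subtractions and comparisons; the circle equation never appears.
    def circle_points(a, b, r):
        points = set()
        x, y = 0, r
        d = 1 - r                           # decision variable, updated incrementally
        while x <= y:
            for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                           (x, -y), (y, -x), (-x, -y), (-y, -x)]:
                points.add((a + px, b + py))
            if d < 0:
                d += 2 * x + 3
            else:
                d += 2 * (x - y) + 5
                y -= 1
            x += 1
        return points

    print(sorted(circle_points(0, 0, 3)))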

But more importantly, it shows Epstein hasn’t thought seriously at all about what it means to catch a fly ball. It is a very complicated affair, involving coordination of muscles and eyes. When you summarize it as “the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery”, you hide the amazing amount of computation and the algorithms going on behind the scenes to coordinate movement, keep the player from falling over, and so forth. I’d like to see Epstein design a walking robot, let alone a running robot, without any algorithms at all.

“there is no reason to believe that any two of us are changed the same way by the same experience.”

— Perhaps not. But there is reason to believe that many of us are changed in approximately the same way. For example, all of us learn our natural language from parents and friends, and we somehow learn approximately the same language.

“We are organisms, not computers. Get over it.”

— No, we are both organisms and computers. Get over it!

“The IP metaphor has had a half-century run, producing few, if any, insights along the way.”

— Say what? The computational model of the brain has had enormous success. Read Crick, for example, to see how the computational model has illuminated the human visual system. Here’s an example from that book that I give in my algorithms course at Waterloo: why is it that humans can find a single red R in a field of green R’s almost instantly, whether there are 10 or 1000 letters, or a single red R in a field of red L’s almost as quickly, but have trouble finding the unique green R in a large sea of green L’s and red R’s and red L’s? If you understand algorithms and the distinction between parallel and sequential algorithms, you can explain this. If you’re Robert Epstein, I imagine you just sit there dumbfounded.
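
Here is a toy sketch of the algorithmic distinction (mine, not Crick’s): a pure feature search amounts to one predicate evaluated over the whole field at once, while the conjunction search forces an item-by-item scan:

    # Toy sketch of parallel vs. sequential visual search.  Each item is a
    # (colour, letter) pair.  A pure feature search ("is anything red?") is
    # conceptually one predicate applied to the whole field in parallel, so it
    # feels instant regardless of size; a conjunction search ("green AND R")
    # must, in the worst case, check items one at a time.  An illustration of
    # the algorithmic point, not a model of the visual cortex.
    def feature_search(field, colour):
        return any(c == colour for c, _ in field)          # one parallel predicate

    def conjunction_search(field, colour, letter):
        for i, (c, l) in enumerate(field):                 # serial scan
            if c == colour and l == letter:
                return i
        return -1

    field = [("green", "L")] * 500 + [("red", "R")] * 499 + [("green", "R")]
    print(feature_search(field, "red"))               # True
    print(conjunction_search(field, "green", "R"))    # 999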

Other examples of successes include artificial neural nets, which have huge applications in things like handwriting recognition, face recognition, classification, robotics, and many other areas. They draw their inspiration from the structure of the brain, and somehow manage to function enormously well; they are used in industry all the time. If that is not great validation of the model, I don’t know what is.

I don’t know why people like Epstein feel the need to deny things for which the evidence is so overwhelming. He behaves like a creationist in denying evolution. And like creationists, he apparently has no training in a very relevant field (here, computer science) but still wants to pontificate on it. When intelligent people behave so stupidly, it makes me sad.

P. S. I forgot to include one of the best pieces of evidence that the brain, as a computer, is doing things roughly analogous to digital computers, and certainly no more powerful than our ordinary RAM model or multitape Turing machine. Here it is: mental calculators who can do large arithmetic calculations are known, and their feats have been catalogued: they can do things like multiply large numbers or extract square roots in their heads without pencil and paper. But in every example known, their extraordinary computational feats are restricted to things for which we know there exist polynomial-time algorithms. None of these computational savants have ever, in the histories I’ve read, been able to factor arbitrary large numbers in their heads (say numbers of 100 digits that are the product of two primes). They can multiply 50-digit numbers in their heads, but they can’t factor. And, not surprisingly, no polynomial-time algorithm for factoring is currently known, and perhaps there isn’t one.
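
The asymmetry is easy to see in code. In this sketch the two 50-digit operands are arbitrary (digits of pi and e, not primes); the point is only that the multiplication is instantaneous, while brute-force factoring of the product would need on the order of 10^49 trial divisions:

    # Multiplying two 50-digit numbers is a polynomial-time operation and is
    # instantaneous; no known polynomial-time algorithm recovers the factors
    # of the ~100-digit product.  The operands are arbitrary digits of pi and
    # e, chosen only to get 50-digit inputs.
    a = 31415926535897932384626433832795028841971693993751
    b = 27182818284590452353602874713526624977572470936999

    product = a * b                       # instant
    print(product)

    # Brute-force factoring could need up to about sqrt(product) trial divisions:
    candidates = int(product ** 0.5)
    print("worst-case trial divisions: about 10**%d" % len(str(candidates)))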

Comments

  1. says

    Ya, I don’t get all the denial that happens over this subject, but I think you’re right that it’s often about people thinking something needs to be this or that to count as a “computer”. And then there are the people who believe in supernatural stuff and can’t stomach the mere possibility of a non-supernatural way to do it.

  2. Pierce R. Butler says

    We don’t store words or the rules that tell us how to manipulate them.

    Noam Chomsky made a very strong case concerning innate human “rules to manipulate them”, Dr. Epstein. Following his theories, much of linguistics qualifies as a subset of a field called “psychology”, Dr. Epstein.
    For that matter, we have a plethora of other theories involving human hardwiring by DNA, Dr. Epstein (okay, a lot of them require ignoring a lot of contrary data, but the “blank slate” concept has very few supporters any more).
    All of these developments date back longer than 50 years, Dr. Epstein.

    That said, an apparatus made of meat – nerve tissue powered by and data-intaking through a supercomplex fluid called “blood”, over 3.6 billion years in development – has more points of difference than of similarity with a box of electrical switches in its technical infancy as recently as, say, Chomsky’s first linguistic book.

    TL;DR: that short word “is” entails a long set of questionable assumptions, and asymptotically approaches uselessness when one can say “A brain is a computer” and “A brain is not a computer” with equal validity.

  3. Reginald Selkirk says

    I enjoyed the article, and agree. It would be easier to read if you would do more to set off Epstein’s text from your responses, such as blockquoting or indenting or italicizing his text.

  4. tcmc says

    Excellent article–very interesting. Few things brighten my day as much as when I open Recursivity and find a new post. I know you are busy, but you write well, and I wish you’d write more often.

  5. invivoMark says

    I will grant you that a brain is a type of computer (at least for certain reasonable definitions of “computer”), but beyond that, you have completely lost me.

    Maybe it’s just a bad coincidence that the very first thing you discuss is DNA as information. You could not have missed Epstein’s point harder. In fact, in comparing 4-base DNA to base-2 computer programming, you are committing the very fallacy Epstein is talking about. Worse, even, because anyone who knows anything about molecular biology will tell you that the metaphor of DNA as computer code is completely invalid. Actual molecular biologists do not talk about DNA sequences in terms of computer code. Doing so would cloud our thinking and make it difficult to talk about our subject with precision.

    And that’s the entire point of Epstein’s article: talking about the brain in terms of a metaphor is adverse to progress and understanding. I’m not convinced you understood that point at all, and you seem to have gotten caught up in nitpicking the details of arguments that Epstein never even made.

    You’re not even correct in some of your nitpicking, either. Memories are not thought to be stored in individual neurons, and the study mentioned in your link never claimed that they were. Memories are thought to exist as networks of connected neurons, and a memory of an object (say, a dollar bill) will not correspond to a similar-looking network in two different people.

  6. shallit says

    In fact, in comparing 4-base DNA to base-2 computer programming, you are committing the very fallacy Epstein is talking about.

    I’m committing no fallacy at all. DNA stores information in a base-4 code; nobody denies that. But I never said DNA was “computer code”; I said it was information. You seem rather confused.

    The brain is a computer. That’s not a “metaphor”; it’s a fact. I explained why.

    I completely reject your charge that I “nitpicked” the details of arguments “Epstein never even made”. You didn’t give a single example, and that’s because I didn’t do what you claim.

    I didn’t say all memories are stored in individual neurons. There is evidence, however, that in some cases individual neurons are directly linked to single memories. In addition to the article I linked, here’s another and another.

    In any case, whether they are or not is completely irrelevant to the argument I made; I was simply addressing Epstein’s “completely preposterous” claim. Apparently it’s not so preposterous as all that.

  7. shallit says

    Oh, I should add that both you and Epstein claim “talking about the brain in terms of a metaphor is adverse to progress and understanding”, but you don’t give a single example of how this has played out. I don’t buy it. It’s a cheap assertion with nothing backing it up at all.

  8. invivoMark says

    Perhaps I wasn’t clear enough. There are two separate claims here.

    1) The brain is a literal computer.

    2) The brain functions metaphorically like a modern 21st century personal desktop computer, and we can understand its function through this metaphor.

    Your entire case seems to hinge on Epstein making a case that (1) is false. However, it seems very clear to me that Epstein is only talking about (2), and takes no explicit stance on (1). (I will grant you that his language is ambiguous enough that he may have been talking about (1) for a portion of his essay. I would suggest that this is due to the fact that he is clearly writing for a layperson audience.) Moreover, I explicitly stated that I agree that (1) is true.

    You didn’t make a single argument in favor of (2), which Epstein refers to as the information processing (IP) metaphor. I didn’t provide any examples of why the IP metaphor is counterproductive, because Epstein provided some in his article, and I had assumed you would have noticed them. Maybe you didn’t read it very carefully. The IP metaphor has become so deeply ingrained that even top experts have difficulty describing the brain in terms other than the IP metaphor.

    One downside to relying on such a metaphor is that you’ve built your own barriers to explaining the science to the public. That’s why you get popular articles making silly claims like, say, specific memories are stored in single neurons. I had to track down the papers related to your links, since neither had a direct link or a citation. These are they:

    http://www.nature.com/neuro/journal/v12/n2/abs/nn.2245.html
    http://www.nature.com.ezproxy.library.wisc.edu/nature/journal/v435/n7045/full/nature03687.html

    The latter is behind a paywall, and while your Telegraph article didn’t mention a specific study, that is the most relevant one published by that author. As you should be able to see, the first article says nothing about specific memories being stored in individual neurons. The second doesn’t either. Although it might seem to suggest that individual neurons are responsible for recognizing specific people, the authors explicitly avoid this claim:

    We do not mean to imply the existence of single neurons coding uniquely for discrete percepts for several reasons: first, some of these units responded to pictures of more than one individual or object; second, given the limited duration of our recording sessions, we can only explore a tiny portion of stimulus space; and third, the fact that we can discover in this short time some images—such as photographs of Jennifer Aniston—that drive the cells suggests that each cell might represent more than one class of images.

    Not one of your links actually provides evidence that specific memories can be stored in single cells. Because that’s a preposterous idea. Maybe sometimes you shouldn’t dismiss out of hand the opinions of an expert.

  9. shallit says

    Let me guess, InvivoMark: you’re involved in the life sciences somehow, but have no advanced training in computer science. You’ve never taken a course in theory of computation. Am I right?

    The brain functions metaphorically like a modern 21st century personal desktop computer, and we can understand its function through this metaphor.

    I don’t know what it means to “function metaphorically”. If Epstein meant the brain is made of silicon, or has an actual disk drive, or an actual operating system like Linux, then of course he would be correct that none of these is true. But it didn’t occur to me that anybody could be so stupid as to believe that, so it didn’t occur to me that an entire article devoted to refuting it would be necessary.

    The brain *is* a computer. Whether it “functions metaphorically like a modern 21st century personal desktop computer” is, by the work of Turing, completely irrelevant to this question. When he claims that it isn’t, he is mistaken. Do you agree or disagree?

    As for the single-neuron idea, I repeat again that it is not even remotely necessary to my argument. I am not wedded to the idea, I just don’t find it completely preposterous as Epstein does. As for “dismiss[ing] out of hand the opinions of an expert”, I don’t know who you are referring to. Epstein? He is not a neuroscientist. As far as I can see, he has no advanced training in neuroscience.
    You? I don’t know who you are. You provide no real name or credentials.

    I asked you to provide specific examples of your claims: that I “nitpicked” details of arguments Epstein never made. You did not respond to this. I asked you to provide specific examples of how viewing the brain as a computer has impeded research. You did not respond to this either. I don’t think you’re arguing in good faith.

  10. shallit says

    Invivo, here is a helpful and brief list of the false claims in Epstein’s article that I took issue with. I claim all these, repeated word-for-word here, are false. To avoid distracting side issues, why not just tell us which of them you think are true?

    1. Your brain does not process information, retrieve knowledge or store memories.
    2. In short: your brain is not a computer.
    3. We are not born with: information
    4. We don’t store words or the rules that tell us how to manipulate them.
    5. We don’t create representations of visual stimuli
    6. a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
    7. [catching a ball] is actually incredibly simple, and completely free of computations, representations and algorithms.
    8. We are organisms, not computers.

  11. invivoMark says

    I honestly don’t know how to make myself any clearer. You obviously don’t understand the metaphor Epstein is discussing. Do you even know what a metaphor is? If you think it could imply that the brain is literally made of silicon, then I don’t think you do.

    I also don’t know how much more explicitly I can say that I agree that the brain is a computer. Fucking hell, I’ve said that three times now. Hopefully you get it this time. That has absolutely no bearing on Epstein’s article at all, and that’s why your post did not address Epstein’s points. I can’t bloody well give you examples where you didn’t address something, because you never addressed anything at all! You can’t address an argument against a metaphor unless you understand the metaphor in the first place!

    And it’s rich for you to suggest I’m arguing in bad faith. You attempted to provide evidence that specific memories can be stored in individual neurons, and I have shown you that you never provided evidence of any such thing. I even tracked down those papers and provided links so you could verify for yourself. Yet instead of admitting that you were mistaken, you’re just shrugging it off and saying that it’s irrelevant (and yet the idea is still somehow totally possible to you). That’s extremely dishonest of you, and if that’s how you’re going to act, there’s clearly no point in continuing this discussion.

    You’re blogging at a place that ostensibly encourages “free thought.” If you want to uphold that ideal, you need to learn how to admit that you’re wrong when the evidence is so clearly stacked against you.

  12. shallit says

    You obviously don’t understand the metaphor Epstein is discussing.

    You seem very confused. Epstein is not simply discussing a metaphor. He is also making positive assertions about what the brain is, what it stores, and so forth. I listed eight of them above, all of which I deny, and I asked you which you agreed with. You did not reply.

    Do you even know what a metaphor is?

    Yes, thanks for your condescension. I’ve even read Death is the Mother of Beauty by Mark Turner.

    I think it is impossible to claim that Epstein’s article was only about metaphor. Brain-as-computer is not simply a metaphor; it is (for computer scientists at least) a statement of what kinds of things brains can do. It is kind of like saying that it is a mistake to think that a car is a machine, and that it is time to discard the “metaphor” of machine for automobiles. If a car is a machine, then it is absurd to deny it, or discard it as “metaphor”.

    because you never addressed anything at all!

    That’s a lie. I gave specific reasons to disbelieve many of Epstein’s claims.

    Yet instead of admitting that you were mistaken, you’re just shrugging it off and saying that it’s irrelevant

    Whether memories could reside in single neurons is completely irrelevant to my points, yes. You could delete the phrase you take issue with and all my points would remain. My point in citing the articles I did was not to prove definitively that memories reside in single neurons — I don’t know enough neuroscience to make that claim — merely that there seemed to be evidence for it in at least some cases, and that it is not preposterous to believe that they could. And even that is largely irrelevant to my points. If I turn out to be wrong about it, I’m wrong. Nothing changes in my argument. Before I accept your word for it, however, I wanted to know what training in neuroscience you have that should lead me to do so. You didn’t reply.

    I provided references which you did not find convincing. So here is one more article to show that the idea is perhaps not so “preposterous”:

    Invariant visual representation by single neurons in the human brain

    If you think the idea is preposterous, then tell me something to read that gives evidence for that claim. I’m happy to read anything you suggest, but you don’t seem to be into providing much evidence.

    You’ve also evaded all the other questions I’ve asked:
    – what training do you have in computer science?
    – how precisely is the IP metaphor counterproductive?
    – which of the eight claims of Epstein do you agree with?

    Yes, I think that’s arguing in bad faith.

  13. invivoMark says

    Whoah, did you seriously just provide, as evidence for your claim, a link that I just posted two comments up?

    Yes. Yes you did.

    And here I think we have discovered the problem. You’re simply not reading what I’m writing. And you’re apparently also not reading your own links, because the paper you just linked to makes the same claim that is discussed in one of your links up above.

    Maybe if you’re willing to start thinking and writing more carefully, I’ll engage with you again on this. But if you won’t put in the effort, then I don’t see any reason to either.

  14. shallit says

    Still no answers to any of my questions. I asked for references for your claim about “preposterous”, and you didn’t provide any. Let me know when you’re willing to answer.

    As for the link, I guess it didn’t occur to you that one of your links is broken. When I clicked on it, all I got was a request to log in at the University of Wisconsin. But I guess it’s easier to use it as an excuse to bail than to answer the questions.

  15. tcmc says

    Epstein:

    “That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.”

    Now that is just funny. Just what the hell does he think an algorithm is? I would like to see his definition for an algorithm under which the described method for catching a ball is disqualified.

  16. invivoMark says

    WTF? Why should it have “occurred” to me that one of my links was “broken?” It worked fine for me, and you never mentioned that it didn’t for you. I guess you don’t care about references?

    Why didn’t it occur to you that when YOU provided a link to the paper, you were linking to the same claim you linked to in the Telegraph? And why hasn’t it occurred to you all this time that not a single one of your links actually supports the claim you’re trying to make?

    You have serious reading comprehension issues. I am not “bailing” because you can’t access a link I posted. I’m simply not going to argue with someone for whom evidence is so clearly not important in an argument. You can go stew in your own ignorance.

    • shallit says

      You are bailing. For someone who says they’re “simply not going to argue”, you’ve spent a huge amount of time doing exactly that about a completely inessential point, but evading the questions I posed. There’s a word for a person like that.

  17. kaleberg says

    I’m reading Hasselmo’s ‘How We Remember’, so it is hard to think of the brain as anything but a computer. Of course, it stores information a fair bit differently from the way digital computers do, but that’s a big point of this book. I’m still in the early chapters, but Hasselmo’s discussion of the different types of memory – working, episodic, semantic and procedural – and the methods used to study them has been fascinating.

    I read Epstein’s piece a few days ago and couldn’t help refuting the article mentally, paragraph by paragraph. His opening paragraph alone has a whopper. He argued that we will “never find a copy of Beethoven’s 5th Symphony in the brain”, but in light of what I have been reading in Hasselmo’s book that is almost certainly wrong. I don’t do music, but I’ve known people who could pick a measure in Beethoven’s 5th and work their way forwards or backwards through the piece. This clearly involves episodic memory. Episodic memory, Hasselmo argues, is based on spatio-temporal trajectories represented by the firing patterns of specific types of neurons.

    Apparently there are representations of real world and synthetic places in our memories based on what are called location and grid neurons. (Talking about losing one’s place in a piece of music is suggestive.) These neurons are stimulated as we explore or recall the world. Episodic memory links locations into trajectories ordered in time. The location neurons link to other information associated with that place like the point of view or semantic memory which lets us represent things we remember being present. Kant was right about the limits of pure reason. Evolution has given us a flexible navigation system that reflects the way we experience our world and our reason is built on it.

    I really don’t see how Epstein has an argument in light of what we have been learning about how the brain works. For example, there was recently an article about imaging a neuron developing new pathways for distinguishing two similar sensory inputs. Is this how we understand classes and instances? The mechanism described could be used for such, and it would explain how we inductively build our complex network of semantic relationships.

    Thanks for your spirited refutation. I’m glad to see others were as appalled as I was.

  18. mk says

    “But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently.”

    Computers aren’t born with any of these things. I’ve been programming for over 5 decades and have had to invent many of these and insert them into computers. Epstein’s article is a mess of Dunning-Krugeresque arrogance, stupidity, ignorance, and above all intellectual dishonesty.

  19. mk says

    “Furthermore, there are good theoretical reasons why finite automata don’t have enough power to “behave intelligently”; we need a more powerful model, such as the Turing machine.”

    Um, computers and brains are finite entities and thus are modelable by FSA. Your statement is almost as ignorant as Epstein’s piece. The computational power of TMs is only needed for certain unbounded problems, such as recognizing any (the important word) sentence of a context-sensitive grammar.

    • shallit says

      Of course a brain is modelable by a finite state machine. The question is, is it a useful model of a brain? I don’t know anyone who thinks so.

      The computational power of TMs is only needed for certain unbounded problems, such as recognizing any (the important word) sentence of a context-sensitive grammar.

      On the contrary, nearly every single algorithm that one studies in a basic course on algorithm design and analysis is analyzed as if it runs on a RAM, which is computationally equivalent to a Turing machine.
