What is a “computer”? What is “information processing”?


Just before I left the States, I read this, shall we say, interesting article about how your brain is not a computer. The subhead, which does more or less summarize the content, is:

Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

Curiously, in order to comprehend the article, I had to retrieve knowledge and stored memories about neuroscience (I have a degree in that) and computers (I worked in the field for several years), and I had to process the information in the article and in my background, and I found that article confusing. It did not compute.

Jeffrey Shallit, who knows much more about the information processing side of the story, also found it somewhat enraging.

The foundation of the author’s argument is that the brain does not store information in the naive way that he expects. And that’s the heart of the problem: he seems to have a crude knowledge of how modern computers implement information processing, and has decided that because our brains don’t have registers or 8-bit data storage or shuffle around photographic images of the world around us internally, the brain must not compute things or remember things. It’s a bizarre exercise. Computer scientists don’t restrict their appreciation of computation to what they learned while programming a 6502 chip, either.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
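
For what it's worth, the encoding he describes is easy to see directly. Here's a minimal Python sketch (assuming the usual ASCII/UTF-8 text encoding) that prints the byte values and bit patterns behind the word "dog":

# Show the byte values and bit patterns behind the word "dog",
# assuming the standard ASCII/UTF-8 text encoding.
word = "dog"
for byte in word.encode("ascii"):
    print(f"{chr(byte)!r} -> byte {byte:3d} = bits {byte:08b}")
# 'd' -> byte 100 = bits 01100100
# 'o' -> byte 111 = bits 01101111
# 'g' -> byte 103 = bits 01100111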

But the core of his rejection of the information processing power of the human brain rests entirely on the fact that we don’t encode information in the way he demands we must. He uses this example:

In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.

The student does a lousy job of drawing a dollar bill, as you might expect. When they’re allowed to copy directly from the bill to the blackboard, they do a much better job of getting the details right. Therefore, the brain does not work like a computer…or more precisely, the brain does not contain a detailed, easily accessible, high resolution photograph of a dollar bill embedded in a ‘memory buffer’ somewhere that can be pulled up and suspended in the mind’s eye (a whole ‘nother complex issue; I wonder if Epstein is a dualist?). And that’s true. It’s also irrelevant unless you’re arguing that the brain is a PowerBook Pro, which, we’d all agree, it isn’t.

What is the internal representation of a dollar bill in your brain? It isn’t a photograph. It’s a more complex hierarchy of associations.

For example, when I arrived in Seoul, I took my familiar American dollars and handed them over to a man in a currency exchange booth. He handed back a stack of Korean won, and what I saw I instantly recognized as paper money. It was rectangular pieces of paper with a soft foldable texture, digits printed on them, and the kind of fine-grained printed imagery that is difficult to forge, with a picture of a person on it. Money, my brain said. It meets a series of criteria I associate with currency.

But not all. It was a different color — most American money is greenish, while these were different muted shades, like blue or yellow. The denominations were very different. I didn’t recognize this guy at all.

[Image: South Korean 1,000 won note]

But it’s still obviously money. My brain seems to have a more sophisticated representation of what money looks like than correspondence to a specific nationality and denomination of bill, and this more complex encoding is regarded by Epstein as an indicator that the whole metaphor of information processing is wrong. I’d really like to see how he defines IP.

Traveling also makes it even more obvious that we encode information. Imagine that you’re in your home country, and you buy something at the store: “$3.40”, the clerk says. You open your wallet, you’ve got a disordered assortment of ones, fives, and tens in there, but you instantly figure out which ones to pull out. You reach into your pocket for some change; you don’t even see it, but you can tell right away that you’ve got some quarters and dimes and miscellaneous coins, by touch. It’s fast, almost thoughtless. The facility with which you can recognize quantities and multiple units ought to tell you that there is some kind of internal representation of money in your head that you use.

We take it for granted until we’re traveling abroad. Now the clerk says “3400”, and you open your wallet, and the cues are all awry. It’s money, all right, but hey, what color are the thousand won notes again? Instead of instantaneous pattern matching, one flavor of information processing, I have to scrutinize each bill and stare at the digits and count zeroes (boy, there are a lot of zeroes on Korean cash), and do a different kind of tedious information processing to figure out how much to hand over. Forget about the coins; it’s a jumble of metal disks with no associations in my head at all, and I’d have to peer carefully at each one to figure out how much it’s worth.

This is true in any foreign country you visit. Canada isn’t so bad for us Americans — the numbers on the bills are just like the ones on ours, and the values are roughly similar, even if the colors are weird. Australia has even funkier colors, and the texture is off — so plasticky. Europe, forget about it — paying your bill is an exercise in calculation. More than once I’ve pulled out a wad of bills and had the clerk pick out the correct amount for me.

Isn’t it obvious that we have very specific mental representations of these things in our heads? It’s just not the stupid mental photograph Epstein demands that we have in order to fit his poor mental image of what information processing is.

Comments

  1. says

    No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli.

    never
    How does he know that? If my decades in computer programming have taught me anything, it is that “never” does not seem to exist.

  2. Athywren - not the moon you're looking for says

    A computer is a Victorian era woman who does mathematical calculations for a living. My brain is most definitely not that.

  3. fakeusername says

    Your information on Canadian currency may be out of date – it’s made out of plastic now, too.

  4. taraskan says

    It seems like the point of that section of the article was just to set up how childish singularitists and immortalists are when they talk about things like uploading your brain to a hard drive.

    But as a linguist that line about not storing words and the means to manipulate them infuriates me, too. Yes, we definitely store the means to recall words. No, they aren’t stored as words, but they are stored as representations of sounds (or signs if you are deaf), which link to a ton of different other areas of the brain to assign meaning, statistical well-formedness, memory, and oral musculature. We’ve only been exploring this exclusively in my field for 50 years now.

    This was clearly a throw-away line that sounded too good to go and mess up by googling whether it’s transparently full-bore wrong.

  5. says

    This NYRB review of Kurzweil’s How to Create a Mind makes the same point a little less contentiously, mostly because the author avoids using the word “information” in an overloaded way.

    So like, all a computer does is take input symbols, change them according to a set of rules, and spit out the result. It has no sense of what those symbols “mean” in any sense. Or rather, the hypothesis that a computer understands the meaning of what it’s doing isn’t falsifiable.

    (Frankly, the hypothesis that PZ or I understand the meaning of what we’re reading isn’t falsifiable, either. We take these things as a given.)

  6. applehead says

    Computers are machines, artifices designed by human intelligence for a specific purpose. The brain is an organ and the product of millions of years of blind, goal-less evolution.

    Computers have a strict and well-defined division between hardware and software. In the human brain, hardware is software. The way your neurons are aligned in your connectome dictates how you think.

    Computers are serial machines. Brains are massively parallel objects.

    Brains are NOT computers.

    (I’m kinda insistent here because “brain’s a computer” is one of the main arguments of the transhumanuts. “The brain is not magic, so this means we can recreate it completely ’cause computers. And ’cause Moore’s Law(TM) this means immortal Matrix upload robot gods.”)

  7. shallit says

    Computers are machines, artifices designed by human intelligence for a specific purpose. The brain is an organ and the product of millions of years of blind, goal-less evolution.

    Planes are machines, artifices designed by humans for a purpose. Birds are organisms and the product of millions of years of blind, goal-less evolution. Therefore planes do not fly. Oops! Something wrong with your argument there.

    Computers have a strict and well-defined division between hardware and software.

    Early digital computers, such as ENIAC, didn’t even have software the way we think of it now. Just like the brain, the hardware was the software, too.

    Computers are serial machines. Brains are massively parallel objects.

    Pay no attention to all those massively parallel computers! They mean bubkis.

    Brains are NOT computers.

    Brains certainly are computers. Anyone saying otherwise hasn’t thought seriously about the issues and can be safely disregarded.

  8. Athywren - not the moon you're looking for says

    @applehead

    Brains are NOT computers.
    (I’m kinda insistent here because “brain’s a computer” is one of the main arguments of the transhumanuts. “The brain is not magic, so this means we can recreate it completely ’cause computers. And ’cause Moore’s Law(TM) this means immortal Matrix upload robot gods.”)

    Brains are not computers in the IBM sense, but then, neither are Victorian era women who make their livings from performing mathematical operations. The brain still takes data and mashes it together. It still computes. The brain isn’t a PC (nor is it a mac) but it is still an analytical engine. Ok, analytical organ… not engine, but I wanted to make the reference, and analytical organ doesn’t quite work.

  9. says

    I would just add that I have zero doubt that within the next 30 years, we’ll have computers that can totally stand in for a human intelligence. We may even have some way of transferring the informational patterns in a brain into a computer and “uploading” them.

    But we’ll have absolutely no way of knowing whether or not the AIs we create actually are thinking and know what we’re talking about, or if they’re merely fooling us into thinking that’s what’s happening. And when people upload their brains, we might have computers that have their memories, talk like them, and interact in exactly the way we expected their meatspace counterparts to interact, but we’ll have no way of knowing if they’re conscious, or if they’re just “passing” for conscious.

  10. says

    The brain still takes data and mashes it together. It still computes.

    Can a computer, of any configuration or design, hate?

  11. Athywren - not the moon you're looking for says

    Can a computer, of any configuration or design, hate?

    Well, Victori- ok, you know where I’m going with that, forget that.
    Did you hear the one about the bot that twitter taught to be a nazi? Granted, that’s more hate emulation than actual hate, but it’s getting hard to tell one from the other in humans, so I’m not sure we can hold that against computers.

  12. says

    The question is: What is a computer?

    If it’s something equivalent to a Turing Machine in a strict way, then nothing is a computer.

    If it’s an electronic machine, then our brains are obviously not computers.

    If it’s something equivalent to a Turing machine loosely enough to include your desktop computer, then the definition becomes so vastly inclusive that I’d be amazed if our brains didn’t qualify. Unless we’re able to do things Turing machines can’t, which would be really surprising.

  13. says

    Unless we’re able to do things Turing machines can’t, which would be really surprising.

    A human brain is called a “Turing Oracle”, it can resolve questions that are not formally “decidable”. For one example, a human being can look at an arbitrary computer program and its input, and given enough time, decide whether or not it will halt.

  14. Menyambal says

    The brain doesn’t need to store a detailed image of a dollar bill, it just needs to be able to recognize one. As PZ says, it’s a piece of paper of a greenish color, with a face and a number. Being able to reproduce the Treasury Seal from memory would be completely pointless.

    Yeah, a computer could store a detailed image, but it can also just transmit a $ and a 1. The guy is making a pointless point.

    I was just today recalling seeing a woman who had the same distinctive hair as my sister. The hair was so much the identifier that I was willing to assume she’d borrowed some clothes and a bicycle, and was miles from where she was supposed to be.

  15. slithey tove (twas brillig (stevem)) says

    Therefore planes do not fly. Oops! Something wrong with [my] argument there.

    umm, what you mean is both birds and planes fly, but a plane is NOT a bird (I guess). As in, the way a plane flies is quite different from how a bird flies. A plane separates the thrust from the lift, while in a bird a single structure does both simultaneously. [see ornithopters, e.g.]
    This is similar to the OP, that Computers are not Brains, nor vice versa. Like the wing analogy: computers have broken functions down into discrete units of operation while the brain seems to do all these functions simultaneously in a single “soup”.

  16. slithey tove (twas brillig (stevem)) says

    The brain doesn’t need to store a detailed image of a dollar bill, it just needs to be able to recognize one.
    which is why counterfeiting is so easy. (to pass to a human, while machines can be a little more thorough)

  17. shallit says

    A human brain is called a “Turing Oracle”, it can resolve questions that are not formally “decidable”.

    Really? How fascinating. Where can we find proof of this amazing assertion?

    For one example, a human being can look at an arbitrary computer program and its input, and given enough time, decide whether or not it will halt.

    OK, then, here is an inputless program written in idealized Pascal, where integers can be arbitrarily large. Does it halt or not? Feel free to take all the time you want. Get back to me on that, OK?

    program opn;

    var
      n, d, s: integer;

    begin
      n := 1;
      repeat
        n := n + 2;
        s := 0;
        for d := 1 to n - 1 do
          if n mod d = 0 then
            s := s + d
      until s = n
    end.

  18. shallit says

    umm, what you mean, is both birds and planes fly, but a plane is NOT a bird (I guess).

    Nope. Maybe it would have been clearer if I had said, “Therefore birds are not flying machines.”

    Don’t you see how silly the argument is?

  19. says

    @shallit – The issue is that humans can make axiomatic statements but computers can’t. I can say that there are an uncountably large number of real numbers between two integers, but a computer can never actually prove such a thing, because such statements aren’t provable.

    You have an example program and you’re making the prediction that it runs forever — that’s a reasonable assumption, and, critically, you were able to make that statement without running the program yourself or formally proving it.

  20. says

    No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain

    It didn’t come out of Beethoven’s arse, so I am guessing it came out of his brain.

  21. Ed Seedhouse says

    I think that this is just an argument about what words mean. If we define “computer” one way then the brain is a computer and if we define it another way then the brain is not a computer. The question is then what is the most useful and general definition of the word.

    The brain certainly computes, or I couldn’t do simple arithmetic. It can do other things that computers are not yet good at, but they probably will eventually be. I can remember and sing at a moment’s notice many of the songs of Gordon Lightfoot (world’s greatest living composer – don’t argue!), but I don’t store this information in little capacitors like a computer does. But there has to be some kind of record in there or I couldn’t sing them from memory.

    Assuming for the sake of argument that the brain can do things that no computer will ever be able to do, then that doesn’t mean the brain is not a computer, it only means that the brain is not *only* a computer. Or that it contains a computer, but the brain’s computer may use different methods for computing than digital computers use. But surely it’s a bit of a big step to say that all computers must be digital and that even though something obviously computes it isn’t a computer because it isn’t digital.

  22. says

    Does it halt or not? Feel free to take all the time you want.

    Is it running on a Windows box? What version? ;)
    Yeah, it halts when the autopatch update makes the system reboot.

    Here’s one:
    A mathematician is asked how many sides there are in a cube: “6! That’s easy!”
    Then an architect is asked how many sides there are in a cube: “12! There’s the outer walls and roof and the inner walls and ceilings and stuff!”
    The computer programmer says: “2! The inside and the outside.”

  23. numerobis says

    I can say that there are an uncountably large number of real numbers between two integers, but a computer can never actually prove such a thing, because such statements aren’t provable.

    Funny how I struggled with really understanding the proof of that in undergrad. Now you tell me it’s unprovable! https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument

  24. numerobis says

    @Marcus Ranum: the video game 3d artist says there’s 24 if you want two-sided faces, because only triangles exist.

  25. Owlmirror says

    Every program will halt, because entropy tends to a maximum. All computers will run out of energy to compute with, and halt.

    All computers will eventually wind up inside of black holes, where time stops, and they will perforce halt (assuming they avoid being shredded by the tidal forces, of course).

    Every program must halt because otherwise they would be perpetual motion machines, which are impossible.

    /Things physicists might say to troll computer scientists

  26. WhiteHatLurker says

    Wow. PZ, I take back a bunch of the nasty things I said on your blog simply because you’ve now mentioned the 6502 processor.

  27. says

    I think it may also be worth mentioning that “computers” and “calculators” were originally humans, who performed steps of a “program” (or algorithm) to do a mass calculation. Richard Feynman, in his “Los Alamos From Below” talk, gives a really interesting description of how human “programs” were used in parallel to calculate bomb behaviors using massed Marchant calculators — which were eventually replaced by a machine from International Business Machines called a “computer”.

    So the question is whether human brains can do what machines have been programmed to do in order to emulate human brains. I think so.

  28. shallit says

    I can say that there are an uncountably large number of real numbers between two integers, but a computer can never actually prove such a thing, because such statements aren’t provable.

    You are so confused that I am pessimistic I can possibly relieve you of your confusion. Not only is the statement you cite provable, it can be proved by a computer.

    I can only suggest taking a course in computability theory, and one in logic. We teach both at my university, and I teach the former.

  29. says

    Addendum: in his talk “Computers from the inside out” Feynman does a pretty fun breakdown of computers as being highly efficient card catalogs operated by little gnomes inside the box. Given his initial exposure to computation, Feynman didn’t see digital computers as much more than faster calculators. This was back in the days when you were expected to be able to calculate, yourself, if you were a physicist.

    Feynman makes some bloopers and is clearly not very experienced with microcomputers as they were at the time when the lecture was recorded. It’s on youtube and it’s not a great lecture but it’s fun. I see that the “Los Alamos From Below” talk is also on youtube. The parts on computation are kinda neat – basically they used a human render farm doing parallel computing.

    We’re meat! Meat that computes!

  30. Menyambal says

    I know that I store a lot of information as the rules to reconstruct it, not as the information itself. For instance, I don’t know a formula for converting Fahrenheit to Centigrade, but I know the ratio is 9:5, and I can figure out which is the 9 and which is the 5, and I also know they meet at 40 below and what the boiling points are.
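
    A minimal Python sketch of that kind of rule-based reconstruction (the only “stored” facts here are the 9:5 ratio and the -40 meeting point; everything else is worked out on the fly):

    # Rebuild the Fahrenheit/Celsius conversion from two remembered facts:
    # the scales change in a 9:5 ratio, and they coincide at -40 degrees.
    RATIO_F_PER_C = 9 / 5
    MEETING_POINT = -40

    def c_to_f(celsius):
        # shift to the common point, scale, shift back
        return (celsius - MEETING_POINT) * RATIO_F_PER_C + MEETING_POINT

    def f_to_c(fahrenheit):
        return (fahrenheit - MEETING_POINT) / RATIO_F_PER_C + MEETING_POINT

    print(c_to_f(100))   # 212.0 -- the boiling point checks out
    print(f_to_c(32))    # 0.0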

    I think the problem we are having here is that there is no single term that includes both brains and computers. That gives the idea that they are two different things. I propose the term “information benders”.

  31. shallit says

    You have an example program and you’re making the prediction that it runs forever

    I made no such prediction. You made the claim that you could determine whether any program halts or not. I gave you one. OK, now determine it for me. Take as much time as you like.

  32. moarscienceplz says

    No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain

    Ummm, I can hum a fair amount of the first and second movements for you on demand. If you have a cassette tape of it, there are ways to make the magnetic pattern visible, but it looks nothing like Beethoven’s score. Is the pattern on the tape a “copy” of Beethoven’s 5th Symphony? If yes, then how is the “pattern” stored in my neurons that allows me to hum it not pretty much the same kind of thing?
    I think the patterns stored in Robert Epstein’s brain need much improvement.

  33. komarov says

    Strange. Why would you ever need a complete and faithful picture of a dollar bill in your head? Even computers dealing with the same problem use an incomplete set of reference points to identify bank notes. Just enough to get a reliable match that’s good enough for the task at hand.
    A simple image processor might, for example, try to pick out the digits and the portrait or some other distinguishing feature for comparison with a set of training images. Then, based on the degree of similarity*, it tries to decide what the bill in question is most likely to be.
    Yes, a computer could take a full-sized, high resolution picture of the bill presented and compare it to a database of pictures. But there’d be little or no gain, the process would probably take longer and might even become less reliable than before (both depending on the methods used).
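
    To make the comparison step concrete, here’s a toy Python sketch of that kind of matching; the feature values (dominant hue, count of printed zeroes, portrait detected) are purely hypothetical stand-ins for whatever a real system would extract from the image:

    # Toy nearest-match classifier over hand-picked banknote features.
    # Each reference note gets (dominant hue 0-360, zeroes in the printed
    # denomination, portrait detected 0/1) -- all values are made up.
    REFERENCE_NOTES = {
        "USD 1":    (120, 0, 1),
        "KRW 1000": (220, 3, 1),
        "KRW 5000": (30, 3, 1),
    }

    def distance(a, b):
        # plain squared Euclidean distance between feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def identify(observed):
        # pick the reference note whose features are most similar
        return min(REFERENCE_NOTES, key=lambda name: distance(observed, REFERENCE_NOTES[name]))

    print(identify((210, 3, 1)))   # -> KRW 1000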

    P.S.: For what it’s worth, after years of practice I still struggle with Euros. The bills are ok – they’re colour-coded. But the coins are practically identical in terms of size and shape and are all either gold or copper in appearance. Is this 10c or 20c, 1c or 2c? I have to check every bloody time. The British coins are very nice by comparison. Different sizes and shapes let you pick out the right coins immediately. They are very distinct to the human brain and, presumably, computers as well.

    *Another topic of some debate

    [The preview isn’t working for me anymore, so I’ll apologise in advance for broken code and shoddy(ier) proof-reading. Oh, and I get timeout messages trying to post.]

  34. Pierce R. Butler says

    Athywren @ # 2: A computer is a Victorian era woman who does mathematical calculations …

    The job title and the function (and its restriction to women) persisted through World War II (maybe a little later). At least in the US Army & Navy, every artillery officer used a handbook of formulae worked out (with triple redundancy to catch errors) by women solving ballistic equations for every plausible value of each relevant variable.

  35. EnlightenmentLiberal says

    Computers are serial machines.

    Yeah, no. You almost certainly typed that on a parallel processing computer, i.e. a computer with multiple independent processing units on the central processing unit, e.g. a multi-core CPU.

    Maybe you meant that the brain is a massively distributed processing network, whereas most artificial computers today are not? I might be able to go with that. But then, what do you say to a grid of thousands of computers that are doing a physics simulation with massive parallel computing? What do you say about a GPU, which is a massively parallel computing unit?

    /important nitpick

    But we’ll have absolutely no way of knowing whether or not the AIs we create actually are thinking and know what we’re talking about, or if they’re merely fooling us into thinking that’s what’s happening.

    Is there a difference?

    IMAO: There is no difference. That’s a silly question. Questions about philosophical zombies are silly questions. On this, I’m generally with Dan Dennett all the way.

    A human brain is called a “Turing Oracle”, it can resolve questions that are not formally “decidable”. For one example, a human being can look at an arbitrary computer program and its input, and given enough time, decide whether or not it will halt.

    Lolno. It seems that you think that the human brain can violate physics, and boy do I have news for you.

  36. turnips53 says

    When evolution came out as a theory, the evolutionary tree took off in a big way – other fields, including linguistics, applied it to their own work and this gave them new perspectives. So the tree that biologists drew to show common ancestors between two (or more) species could be used to show how languages had changed over time, with Latin as the common ancestor of French, Spanish, etc. While this tree model that was cribbed from evolution was helpful and did in fact demonstrate something new that was true about languages (they can evolve from each other), it didn’t mean the same thing as it does in biology (languages don’t have genes, and while they have mechanisms like random mutations, it’s not identical – and how these changes catch on is another story). The evolutionary tree model is a helpful metaphor in historical linguistics that explains a lot (if I remember correctly, this model was applied to other fields with less success).

    Similarly, the planetary model was applied to atoms to explain electrons orbiting around the nucleus, and I’m not a physicist, and it’s been ages since high school physics, but it’s my understanding that this model helped further the understanding of physics at the time (indeed, through my days in high school), even though now it is understood that this model is not how atoms actually work in reality.

    When I read this article about how the brain is not a computer, this is what I was thinking of – our understanding of the brain is based on a computational model in the way that our understanding of atoms was based on a planetary model. The brain is like a computer in many ways, and thinking of the brain as a computer helps us to explain and understand a lot about it, because we understand computers (and they actually do have a lot in common). But I think what he’s saying is that we don’t know very much about the brain, and that another model or metaphor will likely come along that will explain it better, and perhaps further metaphors and models will come, until we have an actual understanding of how brains work in reality (if we ever do understand that). His example that the models we use to discuss the brain have followed advances in technology sort of backs this up – each change in understanding of the brain happens with an advance in tech, and the brain is discussed as if it were (metaphorically, but also seriously) the most complicated technology available at the time. I think he’s saying the best we have right now is not the complete answer for explaining brains (but yeah, I don’t think anyone was arguing that it was).

    I think he’s hoping that if people stop thinking about the brain as a computer, they might be able to discover something new that they wouldn’t see if they were looking through the lens of the computer model. I suspect in the long run he could be right, but in the short term, it’s pretty hard to come up with a revolutionary idea that changes how we see everything (to put it mildly). And if someone does come up with such an idea, he wouldn’t deserve credit for it because of this essay.

    Damn! I thought I disagreed with you when I started this, but the writing process has ruined everything once again. Maybe you guys are computers, but I’m clearly buggy.

  37. Pierce R. Butler says

    turnips53 @ # 40: … the brain is discussed as if it were (metaphorically, but also seriously) the most complicated technology available at the time.

    Dig through some of Freud’s writing sometime, if you have the stomach for it, and see how frequently he discusses emotions in terms fully applicable to steam engines.

  38. Jake Harban says

    Brains are NOT computers.

    Except that all your arguments were that brains aren’t PCs, which no one claims they are. They’re still computers.

    (I’m kinda insistent here because “brain’s a computer” is one of the main arguments of the transhumanuts. “The brain is not magic, so this means we can recreate it completely ’cause computers. And ’cause Moore’s Law(TM) this means immortal Matrix upload robot gods.”)

    Brains are computers, and as such uploading a human mind into a machine will not violate any fundamental laws of physics.

    I’ll be uploading my own brain as soon as I’m finished installing a 100% open source OS on my iPhone. After all, writing the system (and its associated drivers) should be easy as long as I can figure out exactly how the hardware works and that shouldn’t be too difficult, right?

  39. John Morales says

    re “Brains are NOT computers” — I think it’s the wrong comparison.

    A better one is that both brains and computers are instantiated Turing machines with inputs and outputs.

    (Also, has everyone forgotten analog computers?!)

  40. Infophile says

    @7 applehead:

    (I’m kinda insistent here because “brain’s a computer” is one of the main arguments of the transhumanuts. “The brain is not magic, so this means we can recreate it completely ’cause computers. And ’cause Moore’s Law(TM) this means immortal Matrix upload robot gods.”)

    This sounds an awful lot like an appeal to consequences here. The brain might still be a computer even if that fact would help others make bad arguments based on it.

    @10 sigaba:

    But we’ll have absolutely no way of knowing whether or not the AIs we create actually are thinking and know what we’re talking about, or if they’re merely fooling us into thinking that’s what’s happening. And when people upload their brains, we might have computers that have their memories, talk like them, and interact in exactly the way we expected their meatspace counterparts to interact, but we’ll have no way of knowing if they’re conscious, or if they’re just “passing” for conscious.

    We also have no way of knowing whether other humans are actually thinking and know what they’re talking about, or if they’re just fooling us into thinking that’s what’s happening. We can each only say that about ourselves – there’s no way to ever know for sure that anyone else really does have a “ghost in the shell.” This is the basis of the whole philosophical idea of the p-zombie.

    But in practice, we all know that we each do have minds, understanding, etc., so we assume that others do as well, even if we can’t know it for sure. And in fact, we often end up wrong about how other people’s minds work. You might assume that someone else has a mind like yours, but in reality they’ve got aphantasia and don’t possess a mind’s eye, or they’re on the autistic spectrum and pass as non-autistic whenever you see them.

    And what about animals? Does an ant have a consciousness? What about a dog? A chimpanzee? I can safely say that my smartphone probably doesn’t have a consciousness, but with more and more advanced computers, it gets harder to tell. Maybe humanity has already created a computer with some form of consciousness without realizing it (and of course we wouldn’t believe it if it told us).

  41. says

    “the brain does not contain a detailed, easily accessible, high resolution photograph of a dollar bill embedded in a ‘memory buffer’ somewhere that can be pulled up and suspended in the mind’s eye.”

    Actually, some exceptional people do have this capability, or close to it. It’s called eidetic memory. Oliver Sacks wrote a whole long essay about an autistic British boy who could make highly detailed, very accurate architectural drawings from memory after a brief viewing. Just sayin’.

  42. Jeff W says

    turnips53 @ 40
    You are on the right track, I think.

    I think Epstein is arguing badly in a lot of ways but my guess is that he would address some of what was said in the post as follows (although I can’t say for sure):

    You can, obviously, discriminate between US bills and foreign bills, and various coins in your pocket—you’ve learned to do so because of your environmental history and your genetic makeup. But all we know is that some changed state has occurred which allows you do so and that changed state does not equal “information encoded.” The better analogy, behaviorists say, is a dry cell battery that is charged with electricity, changes state, and discharges electricity—in that changed state it does not contain electricity. Again, the changed state ≠ whatever was put in or whatever comes out (either in terms of electricity for the battery or in terms of behavior for the organism).

    You don’t have “specific mental representations” in your head and you certainly don’t “store” them in your head and then “retrieve” them. Instead, when you are visualizing, say, a dollar bill, you are seeing that dollar bill when it is not present—Epstein says “seeing something in its absence.” Your behavior is seeing—it’s not some “representation” of what you saw. That you can do so is, again, part of your state as an organism—if you’ve seen an actual dollar, you might be able to visualize it in its absence—but that behavior, that visualization, is no more “retrieved” from your brain than what you experience when you see the dollar bill in the first place—it is instantiated at that moment, and your ability to do so reflects the state of a person who has seen an actual dollar bill before. Epstein is saying that that changed state is nothing like what is stored on a computer—in effect, he might say that “we don’t encode information in the way we [the people making the argument] demand we must”—and, in fact, that we do nothing like “encode information.” He is saying that viewing that state (or changed state) of the organism as similar to the way information is encoded runs the risk of being as wrong as those who, in the past, thought human intelligence ran on some principle of hydraulics. (I don’t know if his history of science is correct but I’ll take him at his word.)

    When an organism discriminates with regard to a stimulus, we don’t say that it has a “specific mental representation” that it is comparing the stimulus against—it is simply responding, that is, behaving with regard to the stimulus. So, when we say, very generally, that mate selection occurs on the basis of quality (or attractiveness), we don’t (as far as I know) go further and say that the organism is using a “specific mental representation” as a basis of comparison in order to do so—it’s just responding to some features of the mate. Epstein, it seems, is saying something similar.

    Again, I don’t know what Epstein would actually argue here—all we have is his (badly) argued piece—but I think it would be something along those lines.

  43. Rich Woods says

    @Owlmirror #26:

    All computers will eventually wind up inside of black holes, where time stops

    Time stops — at the event horizon — only from the point of view of an external observer. Inside the black hole, time might well be running backwards: the computer falls through time towards the singularity, rather than through space. Though I’m not sure if ‘towards’ is the right word to use there. It may be that it’s falling to the point in space-time when the singularity first came into existence.

    OK, that’s enough of that. I need more beer.

  44. applehead says

    #8

    “Planes are machines, artifices designed by humans for a purpose. Birds are organisms and the product of millions of years of blind, goal-less evolution. Therefore planes do not fly. Oops! Something wrong with your argument there.”

    Even by misrepresenting my argument you fail to convince. As others have alluded, a better comparison would be “birds are not planes,” which any person of sound mind agrees is objectively true. And not only because you can’t board a bird for a quick trip to Frankfurt.

    Planes fly in a way fundamentally different from birds. Birds don’t use rotary elements, but flap their wings. (Duh, I know.) What’s more, though ornithopters and biomimetic robots exist, no human artifice has ever managed to replicate the performance of a real life bird. There’s no aircraft that matches a hummingbird’s or an eagle’s consumption of joules per mile.

    The same is true for computers. While we happened to recreate something humans can do – calculating numbers – computers do so in an unhuman fashion. You can talk to a person, ask them how they came to that conclusion. With computers you have to use a cumbersome command interface. And anyway, if juggling numbers is just mere computation, how come it takes humans to come up with and innovate the necessary math?

    Also, isn’t it a logical fallacy to assume that if we managed to – sorta, kinda, if you squint your eyes – replicate some human abilities with computers, they are somehow capable of all the myriad others?

    The fact you Sooper-Science Freethinkniks can’t tell that there’s a fundamental, indisputable and pivotal difference between an artifice crafted by an intelligent designer and an organ shaped by natural selection makes me wonder why you’re on an anti-creationist blog.

    Brains. Are. NOT. Computers.

  45. Zmidponk says

    In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.

    Then you get a student with an eidetic memory who can also draw well, and is therefore perfectly capable of drawing a completely accurate dollar bill purely from memory, and the point you’re trying to prove is utterly negated.

    Even leaving aside the fact there’s more than one kind of ‘computer’, it does seem that Epstein doesn’t see some pretty clear parallels between the brain and a PC. For example, there’s this:

    To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

    A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight.

    If you’re building a new machine, after you have successfully connected together all the various parts (which may or may not involve a certain degree of swearing and threatening with large hammers), the first thing you do is connect the power cable(s) and hit the power button. You’ve not put any data on there, not even an operating system, so how does the computer know this means ‘turn on’? Just like the instincts that babies have, every motherboard has a BIOS or, on newer ones, a UEFI, that has a basic set of programming on it that allows the motherboard to do basic things like switch the machine on when the power button is hit, find and initialise all the components connected to it, and start booting the operating system (which, on a freshly built PC, isn’t there, so it would halt with an error). With the baby, the ‘operating system’, so to speak, is installed very slowly by the baby learning about the world around it over the next several years. With a PC, a prepared one is installed relatively quickly from an external source.

    There’s also something that Epstein misses here:

    But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

    Where did any computer get these things? They are programmed with them – by humans. But if humans are not born with them and never develop them, how can a human have them to program them into a computer?

  46. jack lecou says

    But all we know is that some changed state has occurred which allows you do so and that changed state does not equal “information encoded.” The better analogy, behaviorists say, is a dry cell battery that is charged with electricity, changes state, and discharges electricity—in that changed state it does not contain electricity. Again, the changed state ≠ whatever was put in or whatever comes out (either in terms of electricity for the battery or in terms of behavior for the organism).

    I’m not sure who’s arguing what with whom, but I’d just note that if the argument is supposed to go something like “humans aren’t computers because computers work by recording a literal representation of their input and then reproducing a perfect copy later on output (and humans aren’t like that)”, that’s trivially silly.

    Digital computers and human brains are obviously very different things with different capabilities. But at a sufficient level of abstraction they both work kind of the same way: by taking note of the interesting characteristics of the inputs, transforming those notes into more useful — but potentially totally different and weird — internal representations, then processing and interacting with those internal representations to produce some kind of output.

    Even when a computer is literally performing the task of recording, say, audio, and reproducing it later, it’s a) not recording the entire signal (it’s sampling at a finite rate, and quantizing it with a finite digital resolution), b) it’s not literally storing “sound” in its memory banks (it’s storing, first, a buffer of quantized voltage samples from a transducer, then probably processing that still further: maybe digitally filtering out noise, discarding portions of the sampled signal which it deems unnoticeable by human ears, chopping the processed data into chunks or packets and adding timing information, and saving it somewhere, maybe with a different byte order, or encoded with additional compression or encryption schemes) and c) it’s not reproducing the sound perfectly later: if the encoding and output hardware and software are reasonable, and the frequency response of your ears is in the normal human range, then there should be no audible differences, but what comes out the speakers is still only an approximation of the original waveform.

    Kind of like how that dollar picture is an approximation based on the features your brain sampled and stored. If you ask a computer to reproduce a sound at 96kHz that it only sampled at 24kHz, it’s not going to be exactly right either.
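
    A quick Python sketch of the sampling-and-quantization point (the tone, rate, and bit depth here are arbitrary, just enough to make the loss visible):

    import math

    # Sample a 1 kHz tone at a finite rate, quantize each sample to 4 bits,
    # and measure how far the stored values drift from the original waveform.
    SAMPLE_RATE = 8000   # samples per second
    BITS = 4             # quantization depth
    LEVELS = 2 ** BITS

    def quantize(x):
        # map a value in [-1, 1] to the nearest of LEVELS discrete steps
        step = 2 / (LEVELS - 1)
        return round((x + 1) / step) * step - 1

    worst_error = 0.0
    for n in range(SAMPLE_RATE // 1000):   # one millisecond of signal
        t = n / SAMPLE_RATE
        original = math.sin(2 * math.pi * 1000 * t)
        stored = quantize(original)        # what actually gets "remembered"
        worst_error = max(worst_error, abs(original - stored))

    print(f"worst per-sample error at {BITS} bits: {worst_error:.3f}")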

  47. Rob Grigjanis says

    Rich Woods @48:

    Inside the black hole, time might well be running backwards: the computer falls through time towards the singularity, rather than through space.

    I think that notion comes from trying to interpret the weird behaviour of Schwarzschild coordinates around the event horizon as something physical. They have a singularity at the event horizon, but unlike r=0, it’s not a physical singularity. Inside the event horizon, the time coordinate becomes spacelike, and the radius coordinate becomes timelike. That doesn’t mean that time and space somehow change places; it just means the coordinate system isn’t very good at or around the event horizon. Kruskal–Szekeres coordinates are much better at visualizing what’s happening in this region.

  48. jack lecou says

    I’d also note that you could probably get a much better drawing from a human if you asked a forger, or a Treasury engraving artist.

    The point being that most of us are running “money recognition programs” if you will, not “money reproduction programs”, so of course we’re not going to be very good at high fidelity reproductions. Just like how if you ask computer voice command or image recognition software to backtrack and reproduce what it thought it heard or saw, you’ll get something much weirder than if you asked a voice recording program or a digital camera.

  49. says

    #49
    The question of whether brains are “Turing lite” is fundamental here. Because, contrary to birds and planes, all things that are equivalent to Turing machines can do all the same things as each other. So, in this case, a computer could do all the things a brain can do and a brain could do all the things a computer can do. If I say “brains are computers”, that’s what I mean.

    If you define computers as being electronic machines, though, then we can all agree that brains are not that.

  50. ck, the Irate Lump says

    Computers are serial machines. Brains are massively parallel objects.

    This seems like a category error to me. Software is mostly a serial affair. Execution queues inside a processor may appear to run serially, but instructions may actually execute out-of-order, concurrently, or the processor may even start executing instructions down a decision branch before even deciding which branch is true (speculative execution), and most modern processors are multi-core. The transistor network inside a modern computer is massively parallel (tens or hundreds of millions inside each modern CPU, plus millions or billions in all the other support ICs, all concurrently operating to make the computer work).

    How much of the parallelism of the brain vanishes when you start talking about the mind (i.e. what the brain does) rather than neurons (how the brain works)? I’m guessing a fair bit of it.

  51. says

    @49, applehead

    “Planes are machines, artifices designed by humans for a purpose. Birds are organisms and the product of millions of years of blind, goal-less evolution. Therefore planes do not fly. Oops! Something wrong with your argument there.”

    Even by misrepresenting my argument you fail to convince. As others have alluded, a better comparison would be “birds are not planes,”

    The key is the word “fly”. It is a verb. So is “compute”. And something that computes is a computer. That’s what it means for something to be a computer :P

    Here’s how the comparison goes:

    Do both birds and planes fly? Yes. So they are both flyers. Do both brains and technological computers compute? Yes. So they are both computers.

  52. Infophile says

    @49 applehead:

    I have to wonder if perhaps we’re working from different definitions of “computer” here. I’m guessing that your personal definition is akin to that from Wiktionary:

    3. A programmable electronic device that performs mathematical calculations and logical operations, especially one that can process, store and retrieve large amounts of data very quickly; now especially, a small one for personal or home use employed for manipulating text or graphics, accessing the Internet, or playing games or media.

    And yes, by this definition, the human brain doesn’t qualify (unless you want to play loose with what “electronic” means). But that’s not the only definition. For instance, the Wikipedia page for computer defines it in the first line as:

    A computer is a general purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically.

    By this definition, the human brain would indeed qualify – you program it by teaching it how to do these operations, and it can do so.

    For the purposes at hand, such as answering the question of if it’s even theoretically possible to create an artificial computer which functions like a human brain, I think it makes sense to use the definition which is closest to the fundamentals. For instance, all modern airplanes use fixed wings, but this doesn’t mean we can’t someday create an airplane which flies by flapping its wings like a bird. Similarly, typical modern artificial computers use digital data, but this doesn’t mean we can’t build one that uses analog data like the human brain does. And it doesn’t mean that if we do, it’s no longer a computer.

    You mentioned before that you’re passionate about this topic because of the fact that transhumanists use this argument to support their beliefs. So, rather than getting bogged down in definitions, perhaps a more useful question to discuss would be, “Is it theoretically possible to create an artificial device which has a ‘mind’ identical to that of a human brain?” This is different from the related questions of if this is realistically possible, realistically possible given current or near future technology and understanding of the human brain, and how we can demonstrate or prove that we’ve succeeded at this task. If we’re arguing about different questions here, we obviously aren’t going to make any headway.

  53. jamesbalter says

    “all things that are equivalent to Turing machines can do all the same things as each other”

    No, all things that are equivalent to *universal* Turing Machines (UTMs) can do all the same things as each other.

    There are numerous technical errors on all sides. I was quite dismayed to see Shallit make the absurd claim that only TMs, and not finite automata, can be intelligent. This statement is quite inept. First, intelligence is not a formal property and so computer science has nothing to say about what can or cannot display it. Second, *all finite entities can be modeled by finite state automata*. The entire universe, over any finite time span, can be modeled as an FSA. TMs have an infinite tape and are only required for modeling infinite processes (that cannot be reduced to finite ones). The main reason that theory deals mostly with TMs rather than FSAs is that it’s mathematically cleaner to avoid introducing arbitrary finite limits. This is the same reason that we do geometry with infinite lines even though no line we encounter is infinite, and we operate on real numbers even though nothing in real life has infinite precision or is unbounded. The math is much, much cleaner for the infinite abstractions.

  54. jamesbalter says

    “Computers are serial machines. Brains are massively parallel objects.
    Brains are NOT computers.”

    Petitio principii. You have selected a meaning for “computer” for which your statement is true, but the statement doesn’t hold for the meaning that everyone else intends, and that is relevant.

  55. jamesbalter says

    “No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli.”

    I dare Epstein to find it in his computer, either. He has no understanding of the concept of representation, or of anything else that he writes about.

  56. jamesbalter says

    “So like, all a computer does is take input symbols, change them according to a set of rules, and spit out the result. It has no sense of what those symbols “mean” in any sense.”

    Neither do neurons, or the molecules and atoms they are made of. But this is the bogus Chinese Room argument that has been completely debunked.

    “Or rather, the hypothesis that a computer understands the meaning of what it’s doing isn’t falsifiable.”

    It is if you forego the magical mystical sense of “understanding” — the sort that Searle uses in his bogus Chinese Room argument — and instead use the *actual* meaning of “understanding” that we all use in practice; that of *competency*. Whether someone understands something is falsifiable via things we call “tests”.

  57. jamesbalter says

    “A human brain is called a “Turing Oracle”, it can resolve questions that are not formally “decidable”.”

    Call it whatever you want, but this is false.

    “For one example, a human being can look at an arbitrary computer program and it’s input, and given enough time, decide wether or not it will halt.”

    This is ignorant nonsense, unless by “decide” you include the possibility reaching the *wrong* decision … just like any other formal system that *is not consistent*.

  58. colonelzen says

    my post on this article first made elsewhere:

    You know, I pretty much agree with *everything* Epstein says about how the brain is different from computers. Completely.

    And still he’s wrong.

    I keep trying to stress this and nobody seems to be able to get it. Our brains are not computers. They don’t work like computers. We built computers, from Turing’s seminal theories onward, to work the way (some specific aspects of) our brains do.

    Now as it turns out – by design actually – those aspects are those most significant in turning abstract ideas (like numbers) into concrete physically manipulable analogs. In doing so we’ve created deep formalisms for the manipulations of abstractions in aggregates.

    Do understand this: “Information Processing” … a.k.a. “Computer Science” … is NOT about computers at all. It’s about the rules and consequences, the hows, whats and whys of how abstractions can be instantiated and manipulated. There’s a “pure form” that is all mathematics and logic … and some academic CS professors work on theory that will never actually be instantiated on real hardware … it is, in fact, a form of mathematics. But some of their droppings sometimes find their way down as interesting and very useful algorithms and heuristics that can be implemented.

    There is no possible escape from the metaphor of “IP” for the plain and simple reason that what the brain does *is* information processing. It is what collects and saves the history and details of the host environment and converts that into physical action. It does not itself perceive … the nerves are outside the brain. It does not act … the muscles are outside the brain. The connection between them – what makes the difference between the body as, say, a rotting log of equal mass and a living human being – is the collection and abstraction of information from the senses, and the action of the body, differing from an inanimate physical mass of similar shape.

    Remember, in the end, it’s fat and gristle in a bone box. But it’s three pounds of fat and gristle that are the difference between us and a rotting log. What can it do to be that difference? In the dark, in a box, with nothing but signals (and a bit of chemistry) coming in and going out. What *can* it do except process information?

    If you got up on the right side of the bed, put your pen on the right side of your keyboard; left, on the left.

    You could in fact do that from memory of how your bedroom is arranged and your sleeping habits, and possibly actual memory of this morning. That is, *information* determines your actions.

    The memory of the brain works radically differently from that in computers. We know that. We do think it is stored somewhat in the way neural nets store information … not as individual storage of, say, a raster image, but as weights over varied pieces of information that can only be abstracted and regenerated collectively and conjointly. But even with our CS experience with NN theory, we’re a long, long way from being able to identify the direct correspondences.
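
    As a toy sketch of that kind of “weights, not slots” storage (my illustration, in Python with NumPy; the patterns are arbitrary), something loosely in the spirit of a Hopfield network: patterns live only in a single shared weight matrix and are regenerated collectively from a noisy cue, rather than read back from an individual storage location.

        import numpy as np

        patterns = np.array([[1, -1, 1, -1, 1, -1],
                             [1, 1, 1, -1, -1, -1]])

        # Hebbian storage: every pattern is superimposed into the same weights.
        W = sum(np.outer(p, p) for p in patterns).astype(float)
        np.fill_diagonal(W, 0)

        def recall(probe, steps=5):
            """Regenerate a stored pattern collectively from a noisy cue."""
            s = probe.copy()
            for _ in range(steps):
                s = np.sign(W @ s)
            return s

        noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern, one unit flipped
        print(recall(noisy))                    # settles back to the first pattern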

    But *how* the information processing happens doesn’t matter. How it differs from the computer on your desk – or even IBM’s Watson or the latest from Cray Research – doesn’t matter. It’s *still* information processing. It *must* follow the rules of information processing (in particular, what information and what resources have to be available at each step in the process to achieve the end – and any accessible intermediate – result). That does not obviate the realities of the substrate… Epstein’s commentary on the decay and reconstruction of memory matches what I’ve read … some articles I’ve seen in passing suggest strongly that memory in the brain is always “read once” … multiple copies can be made (but per NNs they will all be slightly different), and having read one the brain can “rewrite” it – with serious degradation, per the brain being biology rather than 10^18+ MTBF silicon, as well as the NN composite-memory issue. It may be less efficient and informationally “pure” than the idealized processing we try to accomplish in silicon and metal, but it is still information processing and has to follow the laws and pragmas we’ve discerned about information processing.

    Let me emphasize that again. The brain is NOT a computer. It’s vastly less efficient, vastly less reliable at each step and stage (but with *incredible* bandwidth!). But to the extent that what it does is a correct abstraction, interpretation and action based upon that, it IS “IP” as we know it in computer science.

    And “IP” is the only reasonable metaphor to use in its description, because “IP” is very exactly and precisely an intellectual creation specifically derived from an idealized model of how our brains themselves deal with abstraction and manipulation of information (Alan Turing modelling formalized arithmetic – actually using the manuals of actuarial and ballistic-calculation “computers” – human beings – to do so! – and thinking of means of implementing it physically, and subsequently creating a mathematical model of its “purest” form).

    I suppose you could start from scratch at creating a metaphor … what chance do you think there is of doing better in reasonable time? AMT was undoubtedly a real genius. As was JvN. Any candidates to do better?

    Yes, it’s always important to remember that the brain is “not a computer”. It doesn’t work the way our silicon, metal and plastic computers do. I’ve really only seen the incredibly naive make that mistake; otherwise it is always those who don’t understand what the metaphor actually means – those who do not understand that “computer science” is about formal representation and manipulation of abstractions, NOT about computers – who actually think in terms of the brain being “like” a computer.

    But it *is* an information processor, no matter what Epstein, or anyone else says. What else *can* it be?

    — TWZ

  59. jamesbalter says

    “you fail to convince.”

    It’s impossible to convince intellectually dishonest people of that which they don’t want to believe. I could give a formal proof, but you would refuse to accept it.

  60. jamesbalter says

    “I keep trying to stress this and nobody seems to be able to get it.”

    Maybe people are able to get it, but you are just wrong and your arguments are fallacious.

    “Our brains are not computers.”

    This claim has repeatedly been refuted.

    ” They don’t work like computers. ”

    Strawman. Anything else that came from the same source as that statement is worthless and can be ignored.

  61. chigau (違う) says

    jamesbalter
    Doing this
    <blockquote>paste copied text here</blockquote>
    Results in this

    paste copied text here

    <b>bold</b>
    bold

    <i>italic</i>
    italic

  62. EnlightenmentLiberal says

    I have to echo what Infophile said in 57. It is a rather indisputable claim, IMAO, that for a person, there is a program in C, perhaps with the additional input of a true random number generator function, that can perfectly emulate the bodily behavior of that person, including speech. This is true whether the person has a soul or not.

  63. John Morales says

    EnlightenmentLiberal:

    It is a rather indisputable claim, IMAO, that for a person, there is a program in C, perhaps with the additional input of a true random number generator function, that can perfectly emulate the bodily behavior of that person, including speech.

    What? I hereby dispute that claim, thus proving that it is disputable.

    This is true whether the person has a soul or not.

    Zimboes, eh? Heh.

  64. numerobis says

    Finally read the piece. It’s worse than I was led to believe. Not only is Epstein ignorant of information theory, he also rejects empiricism at its base: he just declares much of neuroscience to be false, by fiat, apparently because it offends his sense of self. His one experiment discussed in the piece shows the opposite of what he concludes.

    And this idiot teaches our youth? Terrifying.

  65. shallit says

    TMs have an infinite tape and are only required for modeling infinite processes (that cannot be reduced to finite ones).

    This is completely wrong. TMs and their sister model, the RAM (random access machine), are the basic model for nearly every algorithm that one studies in a basic course in algorithm design and analysis. These algorithms are typically not “infinite processes”.

    I teach this stuff at the university level.

  66. numerobis says

    @shallit: This is one of my favourite quandaries.

    In practice, our machines are bounded, so in practice everything is really just constant time, constant space, constant communication complexity.

    But in practice, this observation is utterly useless.

    Meanwhile, the completely theoretical asymptotic analysis is hugely useful in practice.
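
    A tiny illustration of why the asymptotic view earns its keep even on bounded machines (my own example, in Python): what you feel in practice is the growth rate of the work, long before any finite bound matters.

        def linear_search_steps(xs, target):
            """O(n): comparisons grow in proportion to the list length."""
            for steps, x in enumerate(xs, start=1):
                if x == target:
                    return steps
            return len(xs)

        def binary_search_steps(xs, target):
            """O(log n): comparisons grow with the logarithm of the length."""
            lo, hi, steps = 0, len(xs), 0
            while lo < hi:
                steps += 1
                mid = (lo + hi) // 2
                if xs[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid
            return steps

        for n in (1_000, 1_000_000):
            xs = list(range(n))
            print(n, linear_search_steps(xs, n - 1), binary_search_steps(xs, n - 1))
        # 1000 -> 1000 vs 10 comparisons; 1000000 -> 1000000 vs 20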

    On the flip side, Bob Harper, while teaching us about how to prove that a type system was decidable, was fond of pointing out simultaneously that for a programmer, an undecidable type system that has a fast acceptance in practical cases was not necessarily worse than a decidable type system with no efficient algorithm. At least I think it was Bob; might have been one of his flock.

  67. Zeppelin says

    Even if I *did* have a perfectly detailed, “photographic” recollection of what something looks like, that still wouldn’t mean I’d be able to draw it on a blackboard. Because I’d also have to translate that recollection into motions of my arm and hand and process the image in some way that allows me to convert it into two-dimensional monochrome lines. Which is difficult, as evidenced by the fact that it takes many years of constant practice to become proficient at drawing.

    So this doesn’t seem like a sensible test either way — a computer without an appropriate output device will also give you a lousy picture of a dollar bill.

  68. colonelzen says

    Sorry, no. By what we’ve meant ever since we surrendered people with pencils and accepted AMT, a computer is a Turing machine capable of implementing his UTM. It is by formal (mathematical!) definition a deterministic machine.

    Our brains are not deterministic in the means and mechanisms by which they accomplish things. It’s plainly obvious that the brain has a huge amount of organizational structure and modalities invested in making up for its (concomitant to biology) lack of determinism. Even our language is in many ways an adaptation to increase determinism in the functioning of our brains.

    I have of course argued the inverse … with those who insist that computers can never do what brains do. But that inverse argument only applies at that level of abstraction. Brains are not computers. But YES AND SURELY – per Turing’s prime theorem (not theory – it’s mathematically, not empirically, provable) – everything potentially “meaningful” that our brains can do can be implemented by ANY Turing-complete mechanism … including digital computers. Computers in the end CAN DO what brains do. The brain is not a computer … but it computes. And computers are not brains … but they can do what brains do (including, if we ever agree on definitions and uncover the internal reference structure that such things are, to feel, live, empathize … and covet, and hate).

    — TWZ

  69. colonelzen says

    Er …. ANY Turing complete mechanism … of adequate resource …

    — TWZ

  70. colonelzen says

    And, oh, no a human brain CANNOT solve the halting problem generally.

    In certain cases, yes, it can see heuristics that a general analysis program could not see. That just means that there are more “algorithms” for testing cases available than were incorporated into the analysis program.

    — TWZ

  71. consciousness razor says

    colonelzen, #75:

    Our brains are not deterministic in the means and mechanisms by which they accomplish things. It’s plainly obvious that the brain has a huge amount of organizational structure and modalities invested in making up for its (concomitant to biology) lack of determinism. Even our language is in many ways an adaptation to increase determinism in the functioning of our brains.

    Being chaotic or complex or evolved (or something other than indeterministic) doesn’t imply a thing is indeterministic.

    I have no idea what you mean by “increase determinism”…. Whatever that is, are there other examples of this besides language, that could help me get a grip on what you’re claiming? Or maybe about what you mean by “determinism,” since it looks different from how I’d use the word?

    The brain is not a computer … but it computes.

    Something which computes is a computer, like a sailor is somebody who sails. If sailors do other things with their time besides sailing constantly, they are at least sailors while they’re sailing. But you might say somebody is a good or bad sailor, for instance, which is a reasonable and non-confusing description even while they’re not currently sailing (or not currently “being a sailor” in a more restricted sense). It’s clear what I’m saying there, because it’s easy enough to say it in plain English, and I don’t think it’s terribly controversial.

    Otherwise, if it isn’t about as straightforward as that, it isn’t clear what non-sailing criteria there are for “being a sailor.” (What would definitely not count as a criterion? Are there any gray areas where we can’t say anything very definite? etc.) It’s also not clear why we should think those are in some way better uses of the word (more reflective of the actual world, what people now mean by the word, what it traditionally meant to others, etc.) than the obvious choice of a person who sails. I can’t make out a clear argument for that, partly because it’s not even clear (or it’s not agreed upon) what’s supposed to be at stake in this discussion. How to use a word correctly, some set of facts about human beings and/or computers, or what?

    So, what distinction are you trying to make between a computing thing which “is not a computer” and another computing thing which “is a computer”? If you’re a computing thing which “is not a computer,” what is that like? Is that like being a non-professional sailor, not merely or only a sailor, not a good sailor, not a sailor who loves sailing, not someone who is always sailing or currently sailing, not what most people ordinarily refer to when they talk about sailors, not like the usual sailors you read about in old stories about “real” sailors, not the type of sailor that you’re thinking of right now whatever type that may be because I can read your mind, etc.? Or is there some other analogy that might be more helpful?

  72. colonelzen says

    “Computer” is a word. And a recent one at that. It means what we choose to define it to mean.

    AMT’s formalization is so devastatingly complete and world changing that it is essentially world building. It quite literally created a new realm of intellectual discourse.

    Mechanisms derivative of AMT’s UTM formalism are so common and powerful, so widely implicit in intellectual discussions, and so absorbed by the lay public (even when they don’t understand the implications) that such are the most reasonable, most generally useful meaning of “computer”.

    A UTM is a deterministic machine, however implemented. The brain is not. Certainly even some cellular structures have the logic switching and storage reference for the (very simple, in fact) definition of the UTM. But they cannot do so consistently for the thousands to millions of operations needed to carry out even a simple algorithm. Likewise for larger structures in the brain. The brain is organized to accomplish similar “information processing” despite the non-determinacy of its component construction.

    Now again, I quite agree that so far as what is coherent and meaningful (a lot of the brain process results in stuff that isn’t), any that is so CAN be accomplished mechanistically – by computer. And (aside from some chemical tweaking) such information processing is really all the brain *can* do.

    So it is a mechanism – and in fact the archetype of such mechanisms – for “computing”. But we’ve achieved an abstraction and formalization of “computing” that is massively powerful and useful in its own right. But we’ve since learnt that that formalization is not in fact the way that the brain does the information processing it does.

    Not really a contradiction so much as a historic artifact.

    On the flip side you can define “computing” to mean whatever you want remotely related to information. John Searle once infamously said that paint molecules vibrating in the wall are “computing” thus you can call a wall a computer. Well, that’s a meretricious inversion … the molecules can be represented and thus information processing can simulate them easily enough …. and in fact by physics such molecular vibration *is* in fact information …. so if you really, really want to stretch the point, yes you can call the wall a computer.

    But the wall is profoundly uninteresting as a computer. The brain is not “uninteresting”, but we know enough to know that the way it does the information transforms about the relationship of the outside world to its internal information structuring is not the kind of mechanism we call a “computer”. But such information processing is in fact what computers are “for”.

    So again, brains are not “computers” (as we’ve formalized the word). They do what we do “computing” for … ergo they “compute”. But our machines do in fact have a teleology … we build them “for”. Our brains don’t. They ARE as they are. And it is a massive and interesting sub-discipline of human endeavour now to figure out how brains do the information processing that we’ve built machines to do. Some call it a “mystery” (it isn’t once you release the belief in “self”), but it certainly is a hugely complex and fascinating field that we will learn a lot from. Almost surely some new “tricks” to invest in our formalized computers (but I expect nothing revolutionary so far as physics or chemistry … I am hopeful we may learn a few exotically useful algorithms and mechanisms for computing … neural nets are EXACTLY such a “trick” that we’ve already learnt).

    — TWZ

  73. What a Maroon, living up to the 'nym says

    Something which computes is a computer, like a sailor is somebody who sails.

    Something which opens cans is a can opener. Something which blows snow is a snowblower. Something which shits bulls is a bullshitter.

    Does that mean that because I sometimes use my hands to open cans my hands are can openers? In the most literal sense, perhaps, but the facts of the English language are that we use the term “can opener” to refer to a certain type of tool or machine that is specifically designed to carry out that task; in doing so, it functions in a way that is distinct from the way our hands function.

    For me, at least, the interesting question is not “is the brain a computer?”, because ultimately that rests on how broadly or narrowly you want to define “computer”. A more interesting question is “How are the workings of our brains similar to and different from the workings of the machines we call computers?” As an expert in neither neuroscience nor computer science I don’t have the answer, but I think that would be a more enlightening way to frame the question.

  74. What a Maroon, living up to the 'nym says

    Ack, forgot to close the blockquote. The first sentence in @80 is consciousness razor’s, the rest is mine.

  75. consciousness razor says

    What a Maroon:

    In the most literal sense, perhaps, but the facts of the English language are that we use the term “can opener” to refer to a certain type of tool or machine that is specifically designed to carry out that task; in doing so, it functions in a way that is distinct from the way our hands function.

    Sure, that’s how we use “can opener.” I agree with that, although I wasn’t saying that it was just a matter of how an entire class of English words are formed. It wasn’t meant to be about all words that end with “-er” and “-or.” I just said that the particular case of “computer” is like the particular case of “sailor” in a particular way. I’d like to know where the analogy falls apart (since they always do that), and why that’s so.

    For me, at least, the interesting question is not “is the brain a computer?”, because ultimately that rests on how broadly or narrowly you want to define “computer”. A more interesting question is “How are the workings of our brains similar to and different from the workings of the machines we call computers?” As an expert in neither neuroscience nor computer science I don’t have the answer, but I think that would be a more enlightening way to frame the question.

    A few questions. Is it a necessary component of the meaning of “computer” that it is a purposely-built machine? According to a lot of people with more expertise than either of us, they don’t want to define the word “computer” that way, because that isn’t a very interesting or enlightening way to use it. When they make their arguments clearly in intelligible natural language, instead of a convoluted mess of jargon, I’m usually convinced enough by what they have to say.

    Is it necessary that the intelligently designed machines at issue function in the same way as other things that we might have good reasons to call a “computer” (specifically, brains)?

    It doesn’t even seem to be the case that all of the intelligently designed machines function in the same way as each other. Of course, at some very high level of abstraction, presumably they all do “function the same way”, if for no better reason than because they all consist of matter subject to the same physical laws. Other than trivial points like that though, we’re all (implicitly at least) accepting of a large number of significant differences in how they function. So why are certain differences less interesting than others? What makes certain differences special? How do we know which ones are the special ones, if we know anything like that? Why should we assume the differences between brains and machines are the most interesting ones, or that they’re the ones which warrant some new conceptual category? Maybe it would be more helpful to carve things up at some other place — and besides, if this is not yet violating the Copernican principle, it seems to be getting awfully close. (Notice that it’s not usually brains but human brains that we human beings tend to believe are somehow better or special or more interesting.)

    No one here is claiming that brains are intelligently designed machines, and there doesn’t seem to be any confusion about any point relating to that. (At creationist websites, it would be a different story.) If it’s clear enough here that there are many ways in which brains are not like intelligently designed machines, so that we can basically set that aside as irrelevant, then are there any substantial problems with considering brains to be a type of generalized computer or thing-that-computes (one which isn’t an intelligently designed machine)?

  76. What a Maroon, living up to the 'nym says

    consciousness razor,

    Thanks for the response. I think the questions you pose are far more interesting than the rather simplistic question, “Is the brain a computer?” (or the much less used, but still simplistic, question “Is a computer a brain?”). I do think there’s a disconnect here in how people are using the word “computer”. colonelzen in 79, for example, uses a definition of computer that excludes the brain; as you point out, others use a definition which (they claim) includes the brain (and yes, not just human brains). If the discussion is about whose definition is correct, then, as a linguist of a certain stripe, I find it unenlightening.

  77. colonelzen says

    As implied, the word “computer” is fluid.

    But we who believe that “mind” is instantiable as a purely physical mechanism are put in a linguistic damned-if-you-do, damned-if-you-don’t.

    Use the AMT definition, and critics will say “Aha! See, the way the brain works is not like a computer, see xxxx research!”. Never mind that such does nothing to invalidate the reality that any “meaning” in what the brain does is only “meaning” by the relationships of symbols ultimately referring to inputs (senses) and outputs (actions, including speech) … which is ultimately the definition of a “program” in computer science. Hence any UTM of sufficient resource can do it. But such niceties are lost on the not so technically inclined.

    Use the “anything goes” definition and you get the “Aha … so the wall is a “computer” and you don’t mean anything “real” by “computation”.

    As I tried to express in the first post, Computer Science is not about computers. It’s about how information transforms have to happen, what they cost, etc. It ALL applies in spades to whatever we think, perceive, feel, and whatever we *can* think, etc., no matter how you define “computer”.

    The upshot is that everything the brain CAN do, including all of our internally visible symbols of perception and inclination – our emotions and feelings (I like to say “thinking”, even when solving mathematical equations, is just a particular way of “feeling”) – can be done mechanistically.

    — TWZ

  78. says

    I’m very late to the party. My computer has been having issues related to a negatively shifted environmental context that biases sensory input, self-reference and operations.
    @sigaba 11

    Can a computer, of any configuration or design, hate?

    If we accept that hate is a more intense and longer-lasting version of anger, the answer is yes, since anger is associated with felt states related to destroying obstacles. The felt part of emotions has to do with a simulated metabolic status of the body, so if a computer had the capacity to destroy obstacles and a sense of being in a different state than a state that did not need to destroy an obstacle, I would call that anger.

    Otherwise I think I got the reference.

  79. EnlightenmentLiberal says

    A UTM is a deterministic machine, however implemented. The brain is not.

    For that second part, how do you know that?

    Everything I’ve read suggests the opposite. Sure quantum physics might be true-random, but AFAIK neurons are classical – not quantum – in behavior.

  80. Rob Grigjanis says

    EnlightenmentLiberal @87: Classical doesn’t necessarily mean non-random. See here:

    At one extreme [of randomness in spontaneous firing] lie pacemaker cells, which fire almost periodically in spite of intrinsic sources of noise. At the other are cortical pyramidal cells with highly irregular firing; they almost embody the mathematical notion of a renewal process whose successive time intervals between firings are vanishingly correlated.
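
    For concreteness, a toy sketch of the “renewal process” picture in that quote (my illustration, in Python; the 5 Hz rate is just an assumed number): successive inter-spike intervals are drawn independently, so they carry essentially no correlation from one interval to the next.

        import random

        random.seed(0)
        rate_hz = 5.0  # assumed mean firing rate for the example
        intervals = [random.expovariate(rate_hz) for _ in range(10)]

        t, spike_times = 0.0, []
        for dt in intervals:          # a renewal process: each gap is drawn
            t += dt                   # independently of all the previous ones
            spike_times.append(round(t, 3))

        print(spike_times)  # an irregular spike train; the gaps are uncorrelated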

  81. jack lecou says

    Classical doesn’t necessarily mean non-random.

    But computers can also do random just fine. There are even many algorithms that rely intrinsically on randomness or noise for their function — e.g., Monte Carlo methods.
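
    The classic case (my sketch, in Python): estimating pi by random sampling. The algorithm consumes randomness as an input; whether that randomness comes from a pseudorandom generator or a hardware source doesn’t change the method.

        import random

        def estimate_pi(samples=1_000_000, rng=random.random):
            """Count how many random points in the unit square land inside
            the quarter circle; the fraction approaches pi/4."""
            inside = sum(1 for _ in range(samples)
                         if rng() ** 2 + rng() ** 2 <= 1.0)
            return 4 * inside / samples

        print(estimate_pi())  # roughly 3.14, improving slowly with more samples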

  82. EnlightenmentLiberal says

    To Rob Grigjanis
    Ignoring the reality of actual neurons, of which I admit ignorance, the term “classical” refers to a certain kind of physics: Newtonian physics, and sometimes relativity. It refers specifically to the physics that preceded quantum physics, and the physics that is done today without quantum physics. This physics – this model of physics – is deterministic.

    Further, our lack of knowledge does not mean something is not deterministic. It may be that a system is too complicated for us to ever develop a fully accurate and deterministic model of it, but it may still be deterministic. There may be systems which are deterministic, but for which it would be actually flatly impossible to construct a fully accurate model because there are not enough particles in the universe to construct a computer that large, but it would still be a deterministic system.

  83. firstapproximation says

    jack lecou,

    But computers can also do random just fine. There are even many algorithms that rely intrinsically on randomness or noise for their function — e.g., Monte Carlo methods.

    There are some subtleties here.

    For the computers we use, when you need randomness you either mimic it with pseudorandomness or get the randomness from some noisy process and input it into the computer. The computers we use are very much deterministic machines.
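
    A small demonstration of that distinction (my sketch, in Python): a seeded pseudorandom generator is a deterministic algorithm, so the same seed replays exactly the same “random” sequence, while os.urandom reads the operating system’s entropy pool, which is fed by outside noise.

        import os
        import random

        a = random.Random(42)
        b = random.Random(42)
        print([a.randint(0, 9) for _ in range(5)])  # same seed ...
        print([b.randint(0, 9) for _ in range(5)])  # ... identical sequence

        print(os.urandom(4).hex())  # drawn from the OS entropy pool;
                                    # different on essentially every run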

  84. jack lecou says

    There are some subtleties here.
    For the computers we use, when you need randomness you either mimic it with pseudorandomness or get the randomness from some noisy process and input it into the computer. The computers we use are very much deterministic machines.

    Only if the computer you use doesn’t have a high quality random number generator in it. Pretty sure the one I’m using does, and if not it’d be pretty easy to incorporate one.

    Drawing a distinction based on whether the noise is injected from a peripheral component or produced within cells seems rather arbitrary — it’s already obvious brains and computers use different hardware configurations, that’s not really what the argument is about.

  85. jack lecou says

    I mean, it’d be more correct to say “we *currently* tend to mostly use computers to perform very deterministic operations”, but that wouldn’t really do much work for the brains aren’t computers argument. All that’s saying is that most of the *programs* that we run don’t even try to work like organic brains. Which is true enough.

    But, obviously, if you manage to build a computer/computer program that more closely approximates the responses of a human brain, it’s not going to be programmed like a spreadsheet or an Angry Birds app. By definition, it’s going to involve much more complex algorithms — including as many noise sources and indeterministic or stochastic processes as necessary.

    And of course, what EnlightenmentLiberal was pointing out is that, assuming the brain is basically classical, its behavior is, in fact, deterministic at the appropriate level of analysis.

    In other words, much of this seems to be about making sloppy comparisons by applying the wrong level of analysis. For example, we observe:

    1. The “basic unit” of an electronic computer is the logic gate, and its behavior is (ideally) very simple and deterministic.

    2. The “basic unit” of a brain is a neuron, and its behavior is very complicated and non-deterministic, with many complicated processes/internal noise sources/etc.

    And we conclude “ok, therefore computers are ‘deterministic’ and brains are ‘indeterministic'”.

    But that’s flawed reasoning. The “basic units” we chose aren’t appropriate to compare. If our basic unit is a logic gate, we’d need to compare some much more basic physical unit within a neuron. If we want to compare a neuron to something, we need to compare it to some kind of computational unit consisting of however many layers of circuitry and programming might be necessary to emulate the behavior of a neuron. Some of the components of that unit would naturally be noise sources, if that is indeed necessary to properly mimic the behavior of a neuron.
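
    As a toy sketch of what such a neuron-level unit might look like (my illustration, in Python; not a claim about real neurons, and all the parameters are made up): a leaky integrate-and-fire element with an explicit noise source as one of its components, the same way a hardware RNG would be one component of a larger machine.

        import random

        def lif_spikes(inputs, leak=0.9, threshold=1.0, noise=0.1, seed=1):
            rng = random.Random(seed)
            v, spikes = 0.0, []
            for x in inputs:
                v = leak * v + x + rng.gauss(0.0, noise)  # integrate input + noise
                if v >= threshold:                        # fire and reset
                    spikes.append(1)
                    v = 0.0
                else:
                    spikes.append(0)
            return spikes

        print(lif_spikes([0.3] * 20))  # an irregular-looking spike train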

  86. Nerd of Redhead, Dances OM Trolls says

    If we want to compare a neuron to something, we need to compare it to some kind of computational unit consisting of however many layers of circuitry and programming might be necessary to emulate the behavior of a neuron.

    And it still wouldn’t be the same as the wetware (not hardware) of the brain.

  87. jack lecou says

    And it still wouldn’t be the same as the wetware (not hardware) of the brain.

    No, it wouldn’t be the same. Trivially.

    But the question is whether or not they each perform an analogous function within their respective (hardware/wetware) systems.

  88. Rob Grigjanis says

    EL @90:

    …but it would still be a deterministic system.

    So what? That and $2.75 will buy you a tall caffe latte*. A theory or model can be indeterministic, even if you believe or know that the underlying theory it emerges from is deterministic. For example, classical statistical mechanics. Observing that the ‘system’ is deterministic is both obvious and useless.

    *Haven’t actually checked the price lately.

  89. Rob Grigjanis says

    jack lecou @93:

    And of course, what EnlightenmentLiberal was pointing out is that, assuming the brain is basically classical, its behavior is, in fact, deterministic at the appropriate level of analysis.

    Horse, meet cart. The appropriate level of analysis is the one at which you can do analysis. When you can deterministically emulate the behaviour of a neuron, call Stockholm and demand your prize.

  90. Nerd of Redhead, Dances OM Trolls says

    But the question is whether or not they each perform an analogous function within their respective (hardware/wetware) systems.

    You are thinking the only inputs come from other neurons. Minor details like peptides, steroid hormones, sugar, salt concentration, etc, can affect the output of a neuron given a stimulus from an adjacent neuron. Some folks understand this complexity, and see the problem of scale-up to a billion neurons working together, and changing with time and experiences.

  91. firstapproximation says

    jack lecou,

    Only if the computer you use doesn’t have a high quality random number generator in it

    Depends on the application. In cryptography, true randomness is more secure than pseudorandomness. The fact that hardware random number generators exist should be a big clue that pseudorandom generation isn’t always preferable.

    Drawing a distinction based on whether the noise is injected from a peripheral component or produced within cells seems rather arbitrary

    It’s not arbitrary at all. It’s fundamental and based on the fact that the computers we use are deterministic (I’ll ignore the issue of quantum mechanics and what role it may or may not play in warm, macroscopic environments).

    I’ll quote someone more knowledgeable than myself on the topic:

    “One thing that traditional computer systems aren’t good at is coin flipping,” says Steve Ward, Professor of Computer Science and Engineering at MIT’s Computer Science and Artificial Intelligence Laboratory. “They’re deterministic, which means that if you ask the same question you’ll get the same answer every time. In fact, such machines are specifically and carefully programmed to eliminate randomness in results. They do this by following rules and relying on algorithms when they compute.”

    You can program a machine to generate what can be called “random” numbers, but the machine is always at the mercy of its programming. “On a completely deterministic machine you can’t generate anything you could really call a random sequence of numbers,” says Ward, “because the machine is following the same algorithm to generate them. Typically, that means it starts with a common ‘seed’ number and then follows a pattern.” The results may be sufficiently complex to make the pattern difficult to identify, but because it is ruled by a carefully defined and consistently repeated algorithm, the numbers it produces are not truly random. “They are what we call ‘pseudo-random’ numbers,” Ward says.

    Mark C. Chu-Carroll also wrote a good blog post a while back about creationist misconceptions about randomness. I recommend reading the entire thing, but the relevant quote:

    Computers are deterministic: they suck at randomness. In fact true randomness must come from a data source outside of the computer. Real randomness comes from random external events, like radioactive decay. Building a true random generator is a very non-trivial task.

    But, obviously, if you manage to build a computer/computer program that more closely approximates the responses of a human brain….

    My point was narrow and about randomness in today’s computers. While the relation between brains and computers is interesting, I don’t feel qualified enough to weigh in on the issue.

  92. Rob Grigjanis says

    jack lecou @93: Assuming that the state of the atmosphere is basically classical, is particles bashing against each other the appropriate level of analysis for weather?

  93. EnlightenmentLiberal says

    To Rob Grigjanis
    Regarding “determinism” and “indeterministic”: you’re misusing these words.

  94. says

    @jack lecou

    The “basic unit” of a brain is a neuron, and its behavior is very complicated and non-deterministic, with many complicated processes/internal noise sources/etc.

    While this is true, what a neuron does is actually very simple. It receives an input signal and transmits an action potential. A single pulse of a signal. Off/On/Off.

    The input can be modified in many ways (and dendrites participate in computational details), but ultimately it results in action potentials. These can vary in strength, repetition, speed and pattern, and can be made to occur via specific molecules that produce the same phenomenon, not unlike an electron traveling down a wire.

    How would you (or anyone else here) describe a logic gate in these terms? Because I honestly don’t see how a brain is not a computer in a realistic sense and I’ve spent the last seven years trying to understand how mine works in computational terms. I’m at the point where I can tie parts of my consciousness to anatomy with some precision (particularly parts of the basal ganglia, thalamus and somatosensory cortex).

    “The functional anatomy of Gilles de la Tourette syndrome.”
    http://www.ncbi.nlm.nih.gov/pubmed/23237884
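
    One textbook-style answer to the “describe a logic gate in these terms” question (a sketch of mine, in Python): a McCulloch-Pitts threshold unit, where weighted inputs and a firing threshold reproduce AND and OR.

        def threshold_unit(inputs, weights, threshold):
            """'Fire' (output 1) when the weighted input sum reaches threshold."""
            return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

        def AND(a, b):
            return threshold_unit((a, b), weights=(1, 1), threshold=2)

        def OR(a, b):
            return threshold_unit((a, b), weights=(1, 1), threshold=1)

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, AND(a, b), OR(a, b))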

  95. colonelzen says

    Neural transmission occurs across the synaptic cleft at a scale where the randomness of Brownian motion (ultimately quantum in origin) is huge, making migration time, within broad limits, unpredictable and variable. And the release of neurotransmitters from receptors is also asserted, in at least one article I’ve read, to be essentially a thermal – aka random – event. So the timing of the firing of neurons communicating signals across the brain is pretty much random … and since there are innumerable channels fired in any “thought”, which one “gets there” first is pretty much a matter of chance.

    (Of course the correspondence of pathways to particular subtleties and evolution of thought is speculative … but I – and many others – would be surprised if it weren’t at least quite broadly true).

    So the brain is far, far, far from deterministic in low level operation.

    As written, the brain seems particularly structured to lessen, as much as possible (an aside here: on contemplation, CR is right that “more deterministic” doesn’t make linguistic/semantic sense, so …), the inconsistency of its processing. It seems in many ways a Rube Goldberg jury-rigging to achieve sufficient consistency to hold and evaluate information models that do have reasonable levels of mutual and external correspondence.

    But of course, as I’ve written, it only has genuine meaning to the extent that there is mutual and external correspondence. And actually being deterministic to better than 10**18 ops per element, our Turing derivative mechanisms certainly can transact similar transforms. The only trick is actually mapping them.

    (I agree with our host … Kurzweil and followers are loons; the number of relations needed to do real mapping is many orders of magnitude greater than the “upload”-niks usually quote, and the same quantum-level randomness that makes the brain non-deterministic will make reading meaningful “signal” in any attempt to map the brain at low level well-nigh impossible. We’re not going to be able to map living human brains into cyberspace with any tech that’s gonna happen this century … and probably never. Yes, the logic can, in principle, fit; the real-world messy stuff is just too messy … even, or perhaps especially, looked at on the nanometer scale. There can easily soon be machine minds as complex and intelligent … and almost instantly then, more so … than ours. But they won’t be “ours”.)

    — TWZ

  96. jack lecou says

    Depends on the application. In cryptography, true randomness is more secure than pseudorandomness. The fact that hardware random number generators exist should be a big clue that pseudorandom generation isn’t always preferable.

    I’m perfectly aware of the difference between true RNGs and PRNGs. The Intel processor in my laptop does in fact incorporate a hardware-based true RNG. A number of modern processors and architectures do. VIA C3 processors have had one for more than a decade. (These are all intended primarily for use in cryptography, as you point out. Some folks are worried we can’t trust the manufacturers’ implementation for that, because NSA, for one thing, but that’s not relevant to whether it is — in principle — possible. It is.)

    Again, the quotes you supply only apply to computers WITHOUT a high quality random number generator. I certainly agree that an ideal (digital) computer is designed to be very deterministic, but all bets are off as soon as you add an RNG input. And yet somehow, it is still a computer. And no, I don’t see how *where* the noise is injected particularly matters, as long as it is available within the system (it’s a given that the brain and electronic systems are very different. What matters is whether they have resources to perform similar functions.)

    Note also that even many real world computers are less than ideal, with e.g., random memory glitches resulting from electrical noise, cosmic rays etc. Depending on how critical such errors might be in the computer’s application, it might be designed with elaborate techniques to cope with that fundamental randomness — ECC memory, for example.

    Come to that, transistors themselves are fundamentally noisy. The *digital* computer is basically an elaborate architectural technique to sidestep the impact of that noise and get useful results despite it. But analog *computers* also in fact exist, with somewhat different techniques for making sure that fundamental noisiness doesn’t impact the result too much.

    The brain naturally uses different elements, and has evolved different strategies for producing useful high-level results despite whatever inherent low-level noisiness exists, but that’s neither here nor there for the question of “is it a computer”, as far as I can see.

  97. jack lecou says

    As written, the brain seems particularly structured to lessen, as much as possible (an aside here: on contemplation, CR is right that “more deterministic” doesn’t make linguistic/semantic sense, so …), the inconsistency of its processing. It seems in many ways a Rube Goldberg jury-rigging to achieve sufficient consistency to hold and evaluate information models that do have reasonable levels of mutual and external correspondence.

    As I pointed out above with firstapproximation, the digital and error-correcting techniques of our computers are also basically just strategies to overcome the inherent noisiness and inconsistencies (e.g., slightly different saturation voltage levels) of the transistors of which they’re made. The fact that the brain also uses strategies to coax useful high-level results from potentially unreliable low-level components (elaborate strategies, granted, with the typically Byzantine flair of evolved systems) is hardly a point of difference.

  98. jack lecou says

    jack lecou @93: Assuming that the state of the atmosphere is basically classical, is particles bashing against each other the appropriate level of analysis for weather?

    No. Nor for climate, which is, wait for it… Fairly deterministic.

    As, in fact, is the behavior of brains at the appropriate level of comparison. People don’t actually behave totally randomly, regardless of whatever their neuronal action potentials may be up to.

  99. jack lecou says

    You are thinking the only inputs come from other neurons.

    I don’t see where anything I’ve said implies that I think that.

    Minor details like peptides, steroid hormones, sugar, salt concentration, etc, can affect the output of a neuron given a stimulus from an adjacent neuron. Some folks understand this complexity, and see the problem of scale-up to a billion neurons working together, and changing with time and experiences.

    Complex system is complex. Yes. Not really to the point, though.

    If a computer gets too complex are we required to say it’s not a computer anymore?

  100. jack lecou says

    So what? That and $2.75 will buy you a tall caffe latte*. A theory or model can be indeterministic, even if you believe or know that the underlying theory it emerges from is deterministic. For example, classical statistical mechanics. Observing that the ‘system’ is deterministic is both obvious and useless.
    *Haven’t actually checked the price lately.

    So the whole “deterministic”/”nondeterministic” business is a total red herring.

    On some scales, yes, meat brains probably have noisy processes and variation. So what. On other scales they don’t. And on some scales, silicon devices have noise and variation. In some ways, meat brains might have stochastic processes driving outputs (the first doesn’t imply the second, of course). So what. You can make a good random number generator out of silicon and drive stochastic outputs, ( and the result is still called a computer).

    And then there’s the matter that on the scales that really matter, brains seem pretty deterministic. I know that I can walk into a coffee shop with my money and order a latte, and that what will happen is that the cashier takes my money, and the barista hands me a latte, and the other patrons (who will be dressed in clothes) will sit around quietly chatting or reading or tapping at laptops. I’m not anywhere near a coffee shop right now, but I still know exactly what’s happening there (at a relatively high level – I don’t know, e.g., *what* everyone is drinking or reading).

    There’s lots of other things that *could* happen at a coffee shop, things that are entirely within the hardware capabilities, as it were, of the people in a coffee shop — an almost infinite variety of singing and yelling and spasming and biting — yet somehow that almost never happens to me. It’s almost like brains aren’t indeterministic random number generators.

    “Determinism” is a red herring. If there’s a good argument out there that brains aren’t computers, that’s not it.

  101. Rob Grigjanis says

    jack lecou @109:

    Nor for climate, which is, wait for it… Fairly deterministic.

    What on Earth does “fairly deterministic” mean? Do you mean that the output of our models consists of ranges of values with associated confidence levels? I think most people would call that “probabilistic”.

    @111:

    So the whole “deterministic”/”nondeterministic” business is a total red herring.

    No, but your use of constructions like “fairly deterministic” or “pretty deterministic” certainly confuses matters. We’re talking about models here, right? The models that we use for complicated systems with vast numbers of degrees of freedom tend to be probabilistic, i.e. indeterministic.

    You can make a good random number generator out of silicon and drive stochastic outputs, ( and the result is still called a computer).

    The computer (or our model of the computer) is executing the algorithm deterministically. For example, there’s nothing random about the instruction “move one step to your left when lightning hits that tree”, even though the “output” can appear random.

    A red herring would be your coffee shop anecdote.

    And then there’s the matter that on the scales that really matter, brains seem pretty deterministic.

    Seriously, the “scales that really matter” are the routines of certain individuals? Haven’t come across many papers about that. Most stuff I’ve seen about modelling human behaviour uses expressions like “probabilistic”, “Markov decision process”, “stochastic dynamics” and so forth.

    If there’s a good argument out there that brains aren’t computers, that’s not it.

    I’m not making any such argument. I’m addressing the really slack use of terms like “deterministic”, and you can’t get much slacker than sticking “fairly” or “pretty” in front of it.

  102. jack lecou says

    I’m not making any such argument. I’m addressing the really slack use of terms like “deterministic”, and you can’t get much slacker than sticking “fairly” or “pretty” in front of it.

    That’s fair enough. It doesn’t appear that the word “determinism” is being used in a consistent way by everyone here, probably least of all by myself.

    I don’t believe it affects the argument though. The test I’m applying is roughly, “Take this characteristic you think makes brains not computers. Can we imagine a silicon-and-circuits device that has that characteristic? Yes? Would we not call that device a computer for some reason?”

    I think “determinism” fails that test, whichever definition you use or scale you look at.

    The computer (or our model of the computer) is executing the algorithm deterministically. For example, there’s nothing random about the instruction “move one step to your left when lightning hits that tree”, even though the “output” can appear random.

    Sure. And a neuron that sometimes triggers and sometimes doesn’t depending on the particular combination of surrounding hormone levels and salinity is in fact following a fixed course, despite generating an “output” that can appear random.

    Where, exactly, is the distinction?

    Seriously, the “scales that really matter” are the routines of certain individuals? Haven’t come across many papers about that. Most stuff I’ve seen about modelling human behaviour uses expressions like “probabilistic”, “Markov decision process”, “stochastic dynamics” and so forth.

    By “scales that matter” I’m talking about characterizing the inputs and outputs of the brain as a whole, and the ability to store internal states and base future reactions (output) on them. The brain isn’t really separable from the body – so that means “a person” essentially, and their responses and reactions.

    The point being, it’s trivial that a human being (and her brain) is a not an electronic computer built out of logic gates and digital memory cells. If the question “are brains computers” is to be an interesting one, the answer really can’t just be some triviality about implementation that boils down to “brains aren’t made of ideal silicon transistors, QED”.

  103. Nerd of Redhead, Dances OM Trolls says

    Can we imagine a silicon-and-circuits device that has that characteristic?

    Nope, no way. Presuppositional drivel. Which is why you aren’t getting anywhere.

    Sure. And a neuron that sometimes triggers and sometimes doesn’t depending on the particular combination of surrounding hormone levels and salinity is in fact following a fixed course, despite generating an “output” that can appear random.

    It appears random because we don’t understand things, and you fallaciously believe the neuron is a digital gate instead of perhaps being a combination digital/analog multiple gated object. Your presuppositions about neurons being totally digital get in your way of understanding how they work.

  104. jack lecou says

    Nope, no way. Presuppositional drivel. Which is why you aren’t getting anywhere.

    Um, lolwut?

    You realize that was a generalized argument, for weighing the validity of a variety of different characteristics people might try to use as tests. You’re saying there’s literally *no* characteristic of the brain you could imagine instantiated in a computer-type mechanism? You don’t even have to think about it, or consider some examples? That sounds pretty silly.

    It appears random because we don’t understand things

    Yep. (And speaking of not understanding, it kind of sounds like you didn’t understand the argument you pulled that sentence out of…)

    … and you fallaciously believe the neuron is a digital gate instead of perhaps being a combination digital/analog multiple gated object. Your presuppositions about neurons being totally digital get in your way of understanding how they work.

    WTF. Do you have me confused with someone else? If not, you need to stop trying to stuff straw in my mouth. I’ve never, ever said I believe a neuron is a digital gate (in fact, I don’t!), and I’m not really making any suppositions at all. (Hell, that very sentence you’re quoting is talking about responding to subtle gradients of chemical concentration – how much more analog can you get? And are there even realistic-looking strawmen out there who think brains are totally digital? Who are you arguing with?)

    All I’ve been pointing out here is that some of the suppositions OTHER people are making about neurons and brains are, even if true, not sufficient to demonstrate that we can’t usefully regard a brain as an exotic sort of computer.

  105. jack lecou says

    Nope, no way. Presuppositional drivel. Which is why you aren’t getting anywhere.

    And, presuppositional? Really? How can a general test be presuppositional? What, exactly, is the presupposition? This response makes no sense to me.

  106. firstapproximation says

    I certainly agree that an ideal (digital) computer is designed to be very deterministic, but all bets are off as soon as you add an RNG input. And yet somehow, it is still a computer. And no, I don’t see how *where* the noise is injected particularly matters, as long as it is available within the system (it’s a given that the brain and electronic systems are very different. What matters is whether they have resources to perform similar functions.)

    Let’s leave the messiness of the real world and go into the world of thought experiments.

    So, you have a Turing Machine (TM) with some input (I). However, there is a random number generator (RNG) connected to the TM, and what the TM does depends on the RNG (in addition to I). From my perspective, the machine is TM and the input is I + RNG. TM is deterministic (if RNG happens to produce the same random numbers and I is the same, the output is the same). However, from what I gather, your position is that the input is I and the machine is TM + RNG, which is non-deterministic.

    This could just be a difference in definition/perspective.
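
    To make the two framings concrete (a sketch of mine, in Python, with made-up names): the same computation viewed as a deterministic function of (input, random tape), or bundled with its randomness source so that it looks non-deterministic from outside.

        import random

        def machine(data, random_tape):
            """Deterministic: the output depends only on its two arguments."""
            return [x + r for x, r in zip(data, random_tape)]

        data = [10, 20, 30]
        tape = [random.randint(0, 9) for _ in data]  # randomness supplied as input

        print(machine(data, tape))
        print(machine(data, tape))     # same tape, same result: deterministic

        def machine_with_rng(data):
            """The RNG folded into the machine; looks non-deterministic outside."""
            return machine(data, [random.randint(0, 9) for _ in data])

        print(machine_with_rng(data))  # generally differs from run to run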

  107. colonelzen says

    jl:

    The transistors and gates in our computers typically have a 10^18+ operations-between-failures rating. True, there are millions of such in your computer, but only a fraction of them are in use at any instant.

    Those transistors are built depending on quantum effects to do switching; i.e., it doesn’t happen until the voltage potentials are high enough to begin a cascade. They are binary, with a *large* voltage gap between “on” and “off”, or zero and one.

    Isolated from entropy sources – the internet, other external sources – your desktop really, really is truly deterministic for hundreds, even thousands of hours of operation.

    That is nothing at all happens which is not a direct and specific consequence of the exact prior state. Manage to recreate the machine exactly as it was an hour ago (including clocks etc), and launch it again and in an hour it will be *exactly* in the same state it is in now to each and every zero and one voltage on each and every transistor. And no you don’t have to keep the outside temperature or noise levels or even power line voltages consistent with the prior run so long as they remain within specified values.

    What’s more being not only deterministic but a discrete state machine it genuinely is possible (at great effort) to capture an instant state and restore it later. (As intimated some not so great hardware drivers may make this more difficult than it should be … but it remains in principle, and with enough effort and expense in practice, possible.)

    No neuroscientist – at least by every paper and article on neuroscience I’ve ever read – would claim the same is true of the brain, even if it were remotely possible to snapshot and later restore it.

    (Once again I stress that I very much agree that whatever the brain does that is genuinely meaningful can indeed be accomplished by a computational mechanism of sufficient resources. It just doesn’t happen in the way, or as predictably (or as foreseeably, ahead of time), as deterministic computation would do it.)

    For what it’s worth, I think we both agree with the mechanistic (per the type of mechanism) and “information processing” aspects of what the brain is doing: that it’s not doing anything a(n electronic) computer couldn’t do. It’s simply arguments over words here. But the word “computer” has come to mean a Turing Machine operating deterministically. The brain is Turing complete, but it can’t reliably and consistently exercise the determinacy of a UTM without a *lot* of external corrective feedback.

    — TWZ

  108. jack lecou says

    So, you have a Turing Machine (TM) with some input (I). However, there is a random number generator (RNG) connected to the TM, and what the TM does depends on the RNG (in addition to I). From my perspective, the machine is TM and the input is I + RNG. TM is deterministic (if RNG happens to produce the same random numbers and I is the same, the output is the same). However, from what I gather, your position is that the input is I and the machine is TM + RNG, which is non-deterministic.
    This could just be a difference in definition/perspective.

    That’s more or less it.

    The reason being that the brain, whatever we want to call it, is certainly not an *abstract* Turing machine, so the appropriate “computer” to compare it with is also not a perfect, isolated tape machine floating in space.

    You can’t just put a brain in a jar, so it’s always inseparable from trillions of “inputs” (from the body’s environment, from sense organs, etc.). It’s composed of messy organic tissue, non idealized electronic switching elements, so there might indeed be a lot of noise and funkiness, especially at small scales.

    These things are given, so if we’re comparing it to a computer, it’s important to make an appropriate comparison rather than a trivial one. I.e., to a complete machine that is processing complex input/output, has access to noise (or has noise injected into it) as necessary, etc.

    Such a machine would end up being “non-deterministic” in at least the same sense as the brain, but I submit that we probably wouldn’t let that stop us from calling it a computer.

    Even simple micro-controllers, particularly if badly programmed, can be “non-deterministic” inasmuch as their output might depend on, for example, exactly when an external event occurs and triggers an interrupt. Or consider environments where faults might be more common and/or particularly catastrophic — e.g., in the space program — there you often see elaborate engineering strategies to cope with potentially unreliable processing elements – redundancy, consensus systems, etc.

    We still call systems like those “computers”.

    I’d also argue that we don’t really know how “deterministic” human beings and their brains are. You can assume they are if you want, and I don’t think it makes a difference (see above), but they also might not be. Nobody’s done the experiment. I’d be willing to bet that grad students might be a lot more deterministic than some here might expect, assuming you could erase their memories, reset their hormone levels, and run them through the same experiment over and over with identical conditions.

  109. jack lecou says

    That should be “messy organic tissue, *not* idealized electronic switching elements”.

  110. jack lecou says

    colonelzen-

    Real transistors are definitely not perfect switching elements. They have finite switching times (look up e.g., ‘CMOS eye diagram’), variances from transistor to transistor, etc. That’s why digital computers are designed as you say – e.g., with a wide threshold between ‘off’ and ‘on’. This is a strategy — the whole concept of ‘digital’ really — to coax useful results out of real-world, non-ideal, noisy, components. The brain has presumably evolved strategies to accomplish the same thing (though they are of course likely to be radically different).

    That is nothing at all happens which is not a direct and specific consequence of the exact prior state.

    You have not actually shown that this is not also the case for human brains. (Particularly a brain “isolated from entropy sources [i.e., input]” as you posit in order to get a computer to behave more deterministically.)