What are you going to simulate?


The EU is sinking €1.2bn (and the US is proposing to spend more, $3 billion) into a colossal project to build a supercomputer simulation of the human brain. To which I say, “What the hell? We aren’t even close to building such a thing for a fruit fly brain, and you want to do that for an even more massive and poorly mapped structure? Madness!” It turns out that I’m not the only one thinking this way: European scientists are exasperated with the project.

"The main apparent goal of building the capacity to construct a larger-scale simulation of the human brain is radically premature," Peter Dayan, director of the computational neuroscience unit at UCL, told the Guardian.

"We are left with a project that can’t but fail from a scientific perspective. It is a waste of money, it will suck out funds from valuable neuroscience research, and would leave the public, who fund this work, justifiably upset," he said.

There is a place for Big Science. I’d suggest that when you’re at the preliminary exploratory stage, as we are with human brain function, it’s better to fund many small exploratory parties to map out the terrain, rather than launching a huge invasion with charts that are made out of speculation. We know a computer simulation is going to fail, because we don’t know what it’s going to simulate. So why are they doing this? Maybe it’s a question of who “they” are.

Alexandre Pouget of Geneva University, a signatory of the letter, said that while simulations were valuable, they would not be enough to explain how the brain works. "There is a danger that Europe thinks it is investing in a big neuroscience project here, but it’s not. It’s an IT project," he said. "They need to widen the scope and take advantage of the expertise we have in neuroscience. It’s not too late. We can fix it. It’s up to Europe to make the right decision."

I’ve noticed this, that a lot of gung-ho futurists and computer scientist types have this very naive vision of how the brain works — it’s just another computer. We can build those. Build a big enough computer, and it’ll be just like the brain. Nope. That’s operating on ignorance. And handing ignorant people billions of dollars to implement a glorious model of their ignorance is an exercise in futility.

Comments

  1. PaulBC says

    I’ve noticed this, that a lot of gung-ho futurists and computer scientist types have this very naive vision of how the brain works — it’s just another computer.

    It doesn’t sound like it’s intended as a billion-euro giveaway to “gung-ho futurists” though I’m having trouble figuring out who is going to participate. The article states that “changes sidelined cognitive scientists who study high-level brain functions” and it will be “drawing on more fundamental science, such as studies of individual neurons.” Do they just plan to simulate a big bunch of neurons and hope it does something interesting when they start it up?

    The funding seems backwards. If I understand how, for instance, the NSF works, researchers are supposed to propose a project and apply for funding. This sounds like government is just dangling a big wad of cash and saying “Make it so.” I agree that as described, it is guaranteed to fail, but I’d still like to know: What researcher isn’t boycotting the project? What are they proposing to do with the money?

    I doubt they will have trouble finding takers, but it sounds like they’ll either get someone promising something they cannot possibly deliver, or maybe just wind up diverting the money to many unrelated neuroscience projects that are recast to sound like they are part of this.

  2. chigau (違う) says

    Isn’t this the stuff of science fiction?
    Science fiction from the 1960s?

  3. Gerard O says

    The problem with trying to “simulate” a human brain is that there are 580 million years of evolutionary history that can’t possibly be replicated in a few years, and we’re not even sure what “human” even means — twenty years ago people who claimed that modern humans interbred with Neanderthals were denounced as wackos, today it is accepted as fact.
    Stephen Jay Gould once compared evolutionary history to a kind of tape that, if it were replayed, would almost certainly be different to history as we know it. To claim that you can create that tape is ludicrous.

  4. leftwingfox says

    I can simulate that by taking a hammer to a Nintendo game boy. GIMME MY $3 BILLION!

    Try the National Endowment for the Arts.

    Won’t get 3 billion, but they might reimburse you for the cost of the hammer.

  5. pyrion says

    It’s not the case that they really are going to simulate a human brain. That’s all press voodoo. Here is the actual info:
    https://www.humanbrainproject.eu/de

    In the project bubble they summarize: “Gaining profound insights into what makes us human, developing
    new treatments for brain diseases and building revolutionary new computing technologies.”

    So hold your horses. I think it’s actually a really good project.

  6. Jason Nishiyama says

    Think of how well such a computer could simulate galaxy collisions though!

  7. jonmoles says

    @#4 chigau Yes. Apparently they are taking Heinlein’s The Moon is a Harsh Mistress as a how-to guide and not a work of fiction.

  8. David Chapman says

    7
    pyrion

    It’s not the case that they really are going to simulate a human brain. That’s all press voodoo. Here is the actual info

    In the actual info you linked to, which appears to be the website for the project, it states under “Project Objectives”, number one:

    Simulate the Brain

    Develop ICT tools to generate high-fidelity digital reconstructions and simulations of the mouse brain, and ultimately the human brain.

    In what sense is a plan to simulate the human brain not a plan to simulate the human brain?

  9. pyrion says

    Yes, they will ultimately try to do simulations of the brain. But that’s just the far goal. In the meantime they will try to understand how the brain works, do simulations for specific diseases and so on. And notice that they say “simulations”. It’s not (at least initially) about simulating the whole brain at once, but about simulating parts in order to be able to validate models of how it all works.

  10. Holms says

    They are making the classic mistake of thinking that the computer model of X is X itself. Their ‘research’ on this model will only uncover things that reflect on the assumptions made in the creation of that model.

  11. Jesse A. says

    It would actually be incredibly interesting if it could be proven that the way brains process information can’t be simulated by a computer. I expect that we simply don’t know enough about how the brain works to get there yet, though.

    I’m a PhD student studying Computer Science, though the theory of computation is not my particular subfield. However, most CS folks are aware of an unproven thesis which nevertheless has survived for 60 years called the Church-Turing thesis. To oversimplify it somewhat, it suggests that any information processing system can be simulated by a computer (actually, by a Turing Machine). It came at a time when there were multiple competing models of information processing, and its proponents were able to prove the equivalence of all the existing systems. To my knowledge, nobody has since come up with any system which can process information but which can’t be simulated by computer. So: if the brain turns out to be one such system, that would be revolutionary for Computer Science. It could lead to an entirely separate class of computers with who-knows-what capabilities (enjoying opera?).

    None of that is to say that we’re capable of building such a simulation now, or even building a mathematical model of human cognition which can be simulated in theory but not in practice (which would be good enough to satisfy Church-Turing). But perhaps good science will be motivated by a goal that’s currently out of reach?
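    To make the thesis concrete: the reason computer scientists find Church-Turing so compelling is that a Turing machine simulator is almost trivially small. Here is a minimal sketch (the machine, the function, and all names are invented for illustration; this is just the textbook construction, not anything from the project):

    ```python
    def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
        """Simulate a single-tape Turing machine.

        rules maps (state, symbol) -> (new_state, write_symbol, move),
        with move in {-1, 0, +1}; execution stops in state "halt".
        """
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                out = [cells[i] for i in sorted(cells)]
                return "".join(c for c in out if c != blank)
            symbol = cells.get(head, blank)
            state, write, move = rules[(state, symbol)]
            cells[head] = write
            head += move
        raise RuntimeError("step budget exhausted; the machine may never halt")

    # Example machine: scan right, flipping 0 <-> 1, and halt at the first blank.
    flip = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }

    print(run_tm(flip, "1011"))  # -> 0100
    ```

    Anything expressible as such a rule table can be run on an ordinary computer; the open question is whether the brain’s information processing is expressible that way at all.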

  12. says

    But Markram staunchly defends the project, arguing that it was always about developing technology rather than basic neuroscience. He said its goal was not to churn out more of the data that neuroscientists already produce, but to develop new tools to make sense of the vast data sets coming out of brain sciences.

    What a nutter butter. Who needs to make sense of data? Next thing you know, climatologists will start developing computer models that try to tie together the data that gets collected from weather monitoring stations. It’s just like all those wacky sci-fi stories with weather control! Such a waste.

  13. vereverum says

    Give credit where credit is due: the grant writer(s) snagged over a billion; can’t help but bring them more clients.
    On the positive side, it looks like real science and not an attempt to prove ESP or some such nonsense.
    Serendipity: even when it fails, most likely something good will come out of it: just not what they’re looking for.
    And while it is a great deal of money (1.2 + 3 billions) it wouldn’t even have paid for a week of the Iraq war.
    But it does keep the government in the habit of spending money on science.

  14. nrdo says

    As a computer scientist who works for scientists, I can say that none of my colleagues or professors would like to be lumped in with futurists (who rarely produce original research anyway). The actual function of CS in a big science project should be to focus the requirements so that the resulting tool works on a given problem. The LHC wouldn’t have been possible without some pretty impressive innovations in data processing.

    Obviously, the scientists who won the funding with the sexy press releases are not sharing with the cognitive scientists. I think it would be a totally valid and interesting project to do bottom-up simulations straight from physiology data, but of course it won’t lead to a human brain. Their press materials are misleading.

  15. vereverum says

    @ 16
    It’s not science fiction stories; it’s real. Well…maybe….
    I was listening on the short wave the other night to a person who actually believes that they control the weather already. I don’t recall his name or show title, but it’s on regularly.

  16. nrdo says

    @ Jesse A.

    But perhaps good science will be motivated by a goal that’s currently out of reach

    That’s usually the way it happens. Based on the material on their site, I don’t think they are likely to produce anything that would prove or disprove Church-Turing in the theoretical sense. I think the most likely beneficial outcome of this approach, assuming it goes really well, would be the development of algorithms that enable brain-machine interfaces. Unlike grandiose stuff like human-level AIs, this is a well-defined line of research: a matter of figuring out which neurons/nuclei to listen to and what the electrical signals mean.

    Personally, I think that a success in this area would be 3 or even 10 billion well-spent.

  17. abusedbypenguins says

    This will become either “Skynet” or “Deep Thought” and yes chigau, I hope the mice commissioned this project.

  18. HolyPinkUnicorn says

    @PZ Myers #2:

    I can simulate that by taking a hammer to a Nintendo game boy. GIMME MY $3 BILLION!

    Maybe the Air Force can just start buying hundreds of video game systems (again!) to build another supercomputer.

    Assuming they were to go with another 1,760 PlayStations, at retail price ($400 per PS4), that would still only be a fraction of the cost of this project, though I’m sure they could come up with something to make up the difference. “It’s those damn HDMI cables! We have to buy the really good ones!”

  19. says

    @14
    Holms

    They are making the classic mistake of thinking that the computer model of X is X itself. Their ‘research’ on this model will only uncover things that reflect on the assumptions made in the creation of that model.

    you do realize that scientists have many legitimate uses for simulating their models in research, don’t you?

  20. says

    @5
    Gerard O

    The problem with trying to “simulate” a human brain is that there is 580 million years of evolutionary history that can’t possibly be replicated in a few years,

    They are not doing an evolution simulation (unless I missed that somewhere). They are producing a map of the current brain.

    and we’re not even sure what “human” even means

    *rolls eyes*

  21. moarscienceplz says

    How much you wanna bet the NSA/Homeland Security is involved in this somehow?

  22. moarscienceplz says

    Well, maybe if they start with a a Republican brain….

    BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…BENGHAZI!…

    Mission accomplished. Where is my check?

  23. busterggi says

    pengy @ 22. Its Skynet, the mice would never trust us to build Deep Thought.

  24. David Chapman says

    12
    pyrion

    Yes, they will ultimately try to do simulations of the brain. But that’s just the far goal. In the meantime they will try to understand how the brain works, do simulations for specific diseases and so on. And notice that they say “simulations”. It’s not (at least initially) about simulating the whole brain at once, but about simulating parts in order to be able to validate models of how it all works.

    In other words, the project, or part of it, is to simulate the human brain. The whole human brain, otherwise you wouldn’t have added in that “(at least initially)” fudge factor. Can we just accept that the project is what it says it is, please? You are intent on circumventing an admission that PZ Myers’s description was right in the first place.
    Speaking as a confused layman, I don’t know whether Professor Myers is right about this project or not. If you want to try to explain why he’s wrong in disapproving of the scheme, please do so; I’d be interested to read what you say. But you’re trying to obfuscate the nature of the project, and that’s irritating.

  25. PaulBC says

    Jesse A. #15

    To my knowledge, nobody has since come up with any system which can process information but which can’t be simulated by computer.

    The only non-magical suggestion I have heard for cognition that isn’t Turing equivalent is some speculation by Penrose that human thought arises from “quantum tubules” in neurons, and I’m being generous about the “non-magical” part. Even quantum effects could be simulated to any desired approximation, though the slowdown might eliminate any kind of practical utility, so I’m not sure if your question is well defined.

    I also don’t think that trying to build a simulated brain would help very much in answering this, except in the unlikely event that your computer simulation worked just like a real brain. If it doesn’t, then any failure can be explained by the fact that we probably just don’t understand the brain well enough to make the simulation work. I don’t see how to reach a negative conclusion that there are things the brain does that computers can’t.

    Actually, there is a lot of active research into quantum computation, so we could eventually be able to say that a quantum system does more computation in a certain amount of time than a conventional digital simulation possibly could. Even in this case, the model is still Turing-equivalent, just accelerated by a restricted form of massive parallelism.

  26. Usernames are smart says

    The funding seems backwards. […] This sounds like government is just dangling a big wad of cash and saying “Make it so.” I agree that as described, it is guaranteed to fail […]
    — PaulBC (#3)

    Sounds like some high-up idiot read something in a supermarket tabloid and said, “WE NEED THIS!”

    That’s how St. Reagan, who knew nothing about science and/or technology, got a boner for SDI (“Star Wars”) without any of the requisite knowledge. Thanks for the $50 billion (adjusted from $30 billion in 1993 dollars) boondoggle, Zombie Reagan!

  27. PaulBC says

    moarscienceplz #27

    BENGHAZI!…BENGHAZI!…BENGHAZI!…

    Thanks! Now I know how to win the next Turing Test contest.

  28. PaulBC says

    HolyPinkUnicorn #23
    Yeah, but at the price the Pentagon pays for hammers, they’ll be way over budget before they have the first prototype.

  29. gabrielcosta says

    I know a good dose of skepticism sounds appropriate, but after seeing a presentation here in Portugal by Henry Markram, the main researcher behind the project, I became much more confident that the intentions are valid.
    This project is actually less about reaching a human brain simulation and more about checking whether all the information and results coming from distinct fields of the neurosciences are accurate in terms of producing a functional brain.

    During the presentation Markram several times highlighted that the most important aspect would be not to produce new data but to make a huge effort on building simulations and frameworks where the knowledge from already published work could be put to a test. He even mentioned the amount of work that is not reproducible, even excluding fraudulent data, and that thus should be identified – mainly by checking whether a functional brain/cortex patch works when simulated under the parameters/assumptions of these papers – and put aside. Sort of a way to sieve out the more accurate and reliable results.

    It still sounds very ambitious, but the PR campaign and the press releases are awfully misleading.

  30. bortedwards says

    1. It seems like a lot of money.
    2. Money that could be put to more immediate use elsewhere.
    3. But who knows what intended and unintended benefits might come of it.
    4. As a broad fan of blue-sky funding in science, and in the vague/vain hope it may benefit my own research one day, I guess I can’t object.
    Roundabout self interest FTW.

  31. badgersdaughter says

    If we could build a simulation of the human brain accurate enough to pass for a human brain, it would for all intents and purposes be human. To be an accurate simulation of a brain older than a newborn’s, it would have to simulate or reproduce the effects of development and experience. To give it less than the best education, enrichment, and care we know how to provide would in effect be child abuse.

  32. David Chapman says

    35
    gabrielcosta

    I know a good dose of skepticism sounds appropriate

    What does that mean??

    This project is actually less about reaching a human brain simulation and more about checking whether all the information and results coming from distinct fields of the neurosciences are accurate in terms of producing a functional brain.

    What other way is there for them to be accurate? Either the information and results coming from etc etc are accurate about what a living, functioning brain is and does, or they’re just poor, inaccurate results, I would have thought. Aren’t you just trying to find excuses for the project website cheerfully announcing that they’re going to try to simulate the human brain? (Mice first, of course.)

    It still sounds very ambitious, but the PR campaign and the press releases are awfully misleading.

    They’re not off to a very good start then are they? But if the official publication is saying one thing about this undertaking and the head scientist is saying another, how do you know which one to trust? How do you know he isn’t just trying to fend off the inevitable criticisms that are going to be attracted by such an extreme, sensational claim?
    ( These questions are in earnest, in case there’s any doubt about that. )

  33. alexanderz says

    I’m not a particularly smart layperson, but I happen to live not far from one of the scientists who is working on this project. From my conversation with him on the subject several years ago (when he was very aggressively advocating for the project) I can confirm what gabrielcosta #35 has said – they don’t think they can create a human brain. That’s just the sales pitch. What they are trying to do is make the best use of all available models, figure out where they work and where they fail, readjust the models based on empirical research, run the re-adjusted models, and see whether the changes made to the initial models have any analogues in a real brain – which would lead to a better idea of where to direct future research and a powerful way to test said research.
    He was aware that this method isn’t very precise, but he believed it to be the best method for figuring out where to look for the next breakthrough. His analogy was to “whole-genome shotgun sequencing”.

    PaulBC #3:

    The funding seems backwards. If I understand how, for instance, the NSF works, researchers are supposed to propose a project and apply for funding. This sounds like government is just dangling a big wad of cash and saying “Make it so.”

    That’s not how it happened. There were several scientific committees that proposed large-scale projects that could not be completed through regular funding channels, only through the united power of the EU. Something like the LHC. Then there were several years of debate over which project was the best endeavor, and the brain thingy won.

    David Chapman #30:

    Can we just accept that the project is what it says it is please?

    You can if you’re extremely naive and have never seen the EU in action. Due to its cost and nature this project is very political and the scientists (or at least the one I’ve talked to) know it. They know they need to appeal to popular views to get the funding and they know that once they do get the funding the project is very unlikely to be stopped.
    Their hope is that once the project does start they can direct it in a less sensational and more fruitful direction. Something similar to (though not as cynical as) what PaulBC #3 said: “wind up diverting the money to many unrelated neuroscience projects that are recast to sound like they are part of this.”

  34. Azkyroth Drinked the Grammar Too :) says

    I think the most likely beneficial outcome of this approach, assuming it goes really well, would be the development of algorithms that enable brain-machine interfaces. Unlike grandiose stuff like human-level AIs, this is a well-defined line of research: a matter of figuring out which neurons/nuclei to listen to and what the electrical signals mean.

    Personally, I think that a success in this area would be 3 or even 10 billion well-spent.

    We could finally have a web forum that wouldn’t get right-wing trolls :D

  35. cubist says

    sez chigau:

    Isn’t this the stuff of science fiction?
    Science fiction from the 1960s?

    sez jonmoles:

    Apparently they are taking Heinlein’s The Moon is a Harsh Mistress as a how-to guide and not a work of fiction.

    It’s highly unlikely that they took Heinlein’s novel as a how-to guide. The narrator explicitly described Mike-the-computer’s sentience as an unplanned, accidental side-effect of the Lunar Authority’s extensive series of capacity upgrades to the computer that ran the penal colony on the Moon, and that’s about as far from what the EU is doing as one can decently get. If the EU actually is taking 1960s SF as their inspiration, Colossus (1966 novel, basis for the 1970 film Colossus: the Forbin Project) is a better candidate.

    Perhaps the EU is actually taking NASA’s Apollo program as their model: Spend mass quantities of cash on an extravagantly ambitious goal, on the theory that the new technologies they’ll need to develop along the way will, in their own right, be sufficiently worthwhile to justify the project, even if the project fails to achieve its ‘official’ goal.

  36. PaulBC says

    alexanderz #39
    I can believe your description of the project, though it is worth noting that it is controversial enough to be opposed in an open letter by hundreds of researchers. (The open letter actually calls for specific changes in the organization, rather than canceling it outright.)

    PZ’s comment about “gung-ho futurists” seemed to be associating this effort with something like “brain builder” Hugo de Garis, who had web pages back in the 1990s claiming to be on the verge of developing brains by evolving neural networks using genetic algorithms. I could never figure out if he was a serious researcher of any kind or a pure crackpot, but he seemed to be able to get more funding than your garden variety crank.

    This project is clearly very different, but at the very least, they need to revise their PR strategy to avoid such associations.

  37. nrdo says

    @ PaulBC

    Yeah, the Penrose idea is interesting, but it’s been pretty conclusively disproved in the original formulation that was presented. The quantum states collapse too readily in the noisy, energetic environment of the brain.

    Quantum models of brain function do have some prior-plausibility in the sense that there are biomolecules (like chlorophyll) that are known to “use” quantum mechanics. Not intelligently; just in the same sense that the ribosome “uses” electromagnetism to figure out which tRNA to install in a protein. So it could conceivably be the case that brains use quantum effects and a simulation would have to account for that.

    Ultimately though, consciousness is still a process occurring through the interaction of physical elements that behave according to knowable laws.

  38. gmacs says

    I can simulate [a Republican brain] by taking a hammer to a Nintendo game boy. GIMME MY $3 BILLION!

    Um… Do you mean to simulate the impenetrability of such a brain? Because you ought to know that any pre-Game-Cube Nintendo is fucking indestructible.

    Many have theorized they were made with a secret element (or alloy?) called Nintendium. Adamantium ain’t got shit on Nintendium.

  39. says

    @38
    David Chapman

    This project is actually less about reaching a human brain simulation and more about checking whether all the information and results coming from distinct fields of the neurosciences are accurate in terms of producing a functional brain.

    What other way is there for them to be accurate? Either the information and results coming from etc etc are accurate about what a living, functioning brain is and does, or they’re just poor, inaccurate results.

    Your objection is similar to a lot of anti-science nonsense. Please grasp that models don’t come in just the two varieties you mention: 1) the all-secrets-revealed correct model and 2) the too-incorrect-to-be-of-any-use model.

    There isn’t a dichotomy of accuracy, there is degree.

    When the universe is simulated, we see similarities and dissimilarities to our own universe. This is useful.

    But also it seems that maybe you think there is some kind of discontinuity. As if any simulation will be useless unless it is that PERFECT model. As if we cannot approach this along a gradient from inaccurate to more accurate, and as if having an inaccurate model is of no help in the task of making the model more accurate.

    Which is kind of the presuppositionalist christian position. We can’t know anything unless we get it from a source of omniscience. This kind of thinking is the same. In reality, we actually can use incomplete information, our inaccurate maps and models get better over time the more we systematically and clearly work with them. This is a big part of science itself.

  40. PaulBC says

    cubist #41

    Perhaps the EU is actually taking NASA’s Apollo program as their model: Spend mass quantities of cash on an extravagantly ambitious goal, on the theory that the new technologies they’ll need to develop along the way will, in their own right, be sufficiently worthwhile to justify the project, even if the project fails to achieve its ‘official’ goal.

    I think the Apollo mission turned out exactly the opposite. At great expense and effort, we did manage to send astronauts to the moon. However, the claimed spinoffs have never been substantiated. I consider the moon landing a great achievement. But it’s an illustration that you are more likely to accomplish what you set out to do than some other unknown thing that might be even better.

  41. Corey Fisher says

    @Jesse A.

    I’m a PhD student studying Computer Science, though the theory of computation is not my particular subfield. However, most CS folks are aware of an unproven thesis which nevertheless has survived for 60 years called the Church-Turing thesis. To oversimplify it somewhat, it suggests that any information processing system can be simulated by a computer (actually, by a Turing Machine). It came at a time when there were multiple competing models of information processing, and its proponents were able to prove the equivalence of all the existing systems.

    While I’m not a PhD student quite yet, the theory of computation is my subfield, and I’ve also got a bit of background in philosophy of mind, so lemme see if I can shed some light on this.

    The Church-Turing thesis isn’t exactly about information processing – it’s about algorithms. So, if there’s a well-defined, finite series of steps (algorithm) that you take to do something on a particular piece of hardware, then you can make an equivalent series of steps that would do the same thing on a Turing machine or any system with as much power as a Turing machine. It might be a much longer and harder series of steps, but there’s a series of steps and it does the same thing.

    Church-Turing is pretty damn well-supported, even if it’s not proven, so I don’t think most people are too concerned about brains overturning it. However, it may turn out to be possible to have an information processing system for which no algorithm exists, which, if the brain was one, would mean we couldn’t simulate it.

    This is kinda relevant because algorithms are in a sense hardware-independent – any given PC will be able to run the AI algorithm should one be created. Not only that, but you could run it with a game of Minesweeper or Magic: The Gathering – both systems that have been argued to be Turing-complete (Minesweeper in its infinite-board generalization).

    In a way, the hard problem of AI is about whether human intelligence (particularly the bits where we feel emotions and have experiences* and whatnot) is algorithmic, which is why it’s so contentious – it seems like we should be able to model and replicate a physical system, but the implication from there that you can make an intelligent game of Magic is mind-bogglingly absurd. And while I don’t think we’ve ever shown a physical system to have no algorithm, there are things for which it is proven no algorithm could ever exist – such as the Halting Problem**. So the brain not being algorithmic might be possible.

    * http://en.wikipedia.org/wiki/Qualia
    ** http://en.wikipedia.org/wiki/Halting_problem
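    The asymmetry behind footnote ** can be made runnable. Halting is semi-decidable: you can confirm that a program halts by running it, but no bounded search can ever certify that it doesn’t. A toy sketch (all names invented here; “programs” are modeled as Python generators, one yield per step):

    ```python
    def halts_within(prog, arg, budget):
        """Run prog(arg) for at most `budget` steps.

        Returns True if it halted within the budget, or None (no verdict)
        if the budget ran out -- a bounded run can confirm halting but
        can never certify non-halting.
        """
        steps = prog(arg)  # a "program" here is a generator; one yield = one step
        for _ in range(budget):
            try:
                next(steps)
            except StopIteration:
                return True
        return None  # unknown: maybe it halts at step budget + 1, maybe never

    def countdown(n):      # halts after n steps
        while n > 0:
            yield
            n -= 1

    def loop_forever(n):   # never halts
        while True:
            yield

    print(halts_within(countdown, 5, budget=100))     # -> True
    print(halts_within(loop_forever, 5, budget=100))  # -> None, not False
    ```

    Turing’s theorem says this is the best any checker can do in general: no amount of cleverness upgrades that None to a guaranteed True/False for every program.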

  42. David Marjanović says

    Develop ICT tools to generate high-fidelity digital reconstructions and simulations of the mouse brain, and ultimately the human brain.

    “Ultimately”?!? If you can simulate a mouse brain, you’re almost there! It’s practically the same thing except for size!

    “If you want to bake a birthday cake, you should not begin by putting the candles into the eggs. That is too early!” – this horror movie

    I really hope that’s just the press release and not a reflection of what the people who are involved actually believe.

    Yes, they will ultimately try do simulations of the brain. But that’s just the far goal. In the mean time they will try to understand how the brain works, do simulations for specific diseases and so on.

    …What kind of disease is there that requires a huge simulation to understand, but at the same time does not require a simulation of the whole brain?

    The only non-magical suggestion I have heard for cognition that isn’t Turing equivalent is some speculation by Penrose that human thought arises from “quantum tubules” in neurons, and I’m being generous about the “non-magical” part.

    Penrose proposed quantum state superpositions inside the microtubules that occur in every eukaryotic cell. Trouble is: 1) microtubules are huge – there’s water in them; 2) even apart from that, at body temperature, they can’t shield their contents from interactions with other stuff including themselves (that’s the “observation” that collapses the superposition); 3) even apart from all the above, they’d just generate randomness (true randomness), something you can replicate by giving your simulation input from a Geiger counter.
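As a sketch of that last point: feeding a simulation “true” random bits is routine. The helper below stands in for the Geiger-counter feed using the operating system’s entropy pool via `os.urandom` (a real counter would be wired in through a serial device instead; that part is hypothetical).

```python
import os

def true_random_bit():
    """One bit of hardware-derived entropy, standing in for a Geiger-counter feed.
    os.urandom draws on OS entropy sources (interrupt timings, etc.)."""
    return os.urandom(1)[0] & 1

# A simulation could consume these bits wherever quantum randomness is claimed:
sample = [true_random_bit() for _ in range(8)]
print(sample)  # e.g. [1, 0, 0, 1, 1, 0, 1, 0] -- unpredictable by design
```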

    If we could build a simulation of the human brain accurate enough to pass for a human brain, it would for all intents and purposes be human.

    And indeed, Data has a Starfleet rank.

    But if the official publication is saying one thing about this undertaking and the head scientist is saying another, how do you know which one to trust? How do you know he isn’t just trying to fend off the inevitable criticisms that are going to be attracted by such an extreme, sensational claim?

    Check if the publication was written by a scientist in the field. If not, you should treat it as a press release – as something written by somebody who has no clue whatsoever.

  43. Corey Fisher says

    @badgersdaughter

    If we could build a simulation of the human brain accurate enough to pass for a human brain, it would for all intents and purposes be human.

    Arguable (and, in fact, has been very extensively argued) – that’s sort of a Turing Test definition of the problem. While it’s definitely a definition that’s been defended, it’s arguably moving the goalposts, because if it’s outputting a clever replica of a human rather than a human, it may not have actual experiences, emotions, etc.

    Of course, if we get something indistinguishable from a human and we can’t actually tell if it has experiences, treating it as a human should definitely be where we go from there. Because “We thought it might not be sentient” isn’t much of an excuse for doing horrible things to someone if it turns out they are.

  44. gmacs says

    Also, as someone futzing around on the periphery, I’d like to point out that one of the major functions of the brain is to process sensory information, some of which has already been processed by the spinal cord. In order to have a proper simulation of the brain, you would need to factor in this input.

    It is not simple.

    My area is in the primary sensory cells, almost all of which project directly to the CNS. We’ve known some basic circuitry for a while, but there is much more to it than that. We are still discerning different cell types and how they fit into those circuits. For example, pain information and itch information most likely use different fibers, though they are similar in morphology and chemical sensitivity. Activation of nociceptors (pain cells) appears to block the signaling of pruriceptors (itch cells). This is pertinent to the topic because it is not yet clear where this inhibition takes place, whether in the spinal cord or the brain.

    Furthermore, when permanent damage happens that affects movement, an individual’s perception of the position and condition of his/her limbs or torso has been known to change. Neuropathic pain can also develop from various pathologies. Again, it is not entirely clear what changes are happening or where.

    To properly simulate a brain, you would need to simulate input from the sensory organs and the spinal cord. I don’t see how one could simulate something completely, unless it were a system capable of functioning independently from outside input.

    Hey, maybe* they’ll be able to simulate the enteric nervous system in a few years.

    Sorry if I’m rambling.

    *maaaaaaaayybe

  45. David Chapman says

    39
    alexanderz

    David Chapman #30:

    Can we just accept that the project is what it says it is please?

    You can if you’re extremely naive and have never seen the EU in action.

    I’m Irish, I’ve seen the EU in action. To my regret.

    They know they need to appeal to popular views to get the funding, and they know that once they do get the funding the project is very unlikely to be stopped. Their hope is that once the project does start they can direct it in a less sensational and more fruitful direction.

    But as things stand, the intended target is ( among other things ) the creation of a synthetic brain; that’s what the funding is allocated for and that’s what I was referring to: what the project is officially about; which may indeed turn out to be what it is in actuality.
    For whereas we might perhaps wish the scientists involved good luck in derailing, or redirecting the whole schemozzle, if it’s all in a good cause, as you imply, they are not necessarily the ones who are in control of its fate. So I question your interpretation of what the project really is. According to the picture you delineate, what it is is going to depend on who wins, — who gets to decide the direction this thing goes in.
    However, apart from this, as soon as the theme is raised: “Aha, well, what we tell the politicians and the public is one thing; in fact the reality will be run by us and be completely different!”, that immediately prompts the question: well then, what will it be like?
    As I said in my previous post, if the people involved are telling the politicians they can re-invent the brain, and they are telling less gullible folks that they aren’t really wasting time on that sci-fi stuff, that they want to do real nuts and bolts research, how do you tell which is the cover story and which is their real ambition? We’re apparently discussing fairly sophisticated operators here. How do you know they don’t tell scientists: “Oh no no, of course we’re not going to try to build a digital brain. We’re just after funding for proper research.” This is what they would say to prevent professional derision and raspberries, such as Professor Myers’ post above, from breaking out in all academic quarters, which would inevitably threaten the whole scheme.
    Of course the funding process must be largely to blame for all this one way or another, but this is a problem for the whole of our society. Obviously, we should be able to do these sorts of things better.
    One might also worry, for instance, that if these researchers are good at blowing smoke up the arses of journalists, politicians and civil servants, then such a well-funded project could wind up being just a scientific gravy train keeping scientists and computer programmers well-nourished and purring for the next ten years. Let me stress that I don’t particularly suspect any such thing, and I possess no information about the project or the people involved that would lead me to suspect that. (All I know about it derives from this blog and the project website.)
    My meaning is that when we have a culture that obliges scientists to behave like politicians and spin doctors and PR merchants, it raises all sorts of potential doubts and cynicisms in people’s minds that threaten to erode people’s confidence in the scientific process.
    Therefore, whereas most of your post was interesting, I don’t think it contributed much of worth to imply that I was being naive ( extremely naive, forsooth! ) for pointing out explicitly what this EU project is explicitly about. ( Apparently, by extension Professor Myers is “extremely naive” as well. )
    It may be virtuously metamorphosed into something more worthwhile, as you suggest, but the task of nailing down what the EU’s relationship with science actually is remains necessary and important; not least for us who live beneath its brooding penumbra. And that’s what I’ve been attempting to facilitate. It’s not just the allocation of research funds that is of importance here, obviously, but the way our society and our science live together and co-operate with one another.

  46. Lofty says

    If the project results in a working human brain simulation a small tribe of republicans will worship it as Microsoft Jesus SP2.

  47. george gonzalez says

    As Deep Throat said, “Follow the money”, and it all makes a whole lot more sense. Boondoggles like this, “Star Wars” missile defense, and dozens of others, one could surmise, are just a form of genteel corporate welfare. Rather than the dollars going under the table in big suitcases, they go through a quasi-open process through the Pentagon, NSF, NIH and the other agencies, where there is a modicum of process and review, and then checks get printed out all legal-like and get mailed to the usual beneficiaries, almost always the usual suspects.

  48. razzlefrog says

    Oh, God! Futurists! I met a guy of their lot to whom I simply could not explain that “head transplants” were an imbecilic idea. He called me close-minded and pointed to that idiot Robert J. White who did the monkey transplants decades ago that just resulted in senselessly slaughtered monkeys.

    Those futurists are some sci-fi-drunk, pretentious fucks. They think that because the subject of their fascination is science and technology (really, they’re basically masturbating to it), it categorically places them among some group of glorious, freethinking, rogue internet intellectuals.

  49. says

    @PZ: Knowing about your atheist background (which I share) I find this statement of yours quite interesting:

    “…and computer scientist types have this very naive vision of how the brain works — it’s just another computer. We can build those…”

    If I didn’t know better, I would think you are a dualist. If the brain isn’t “just another computer”, what is it then? I agree that we don’t really know. But so far, medicine has made pathetically little progress on this subject, especially considering the great progress that has been made in artificial intelligence in a much shorter timeframe. Maybe it IS time to give the computer scientist types a bit of quality time with the subject.

    Who says that simulating the interaction of a great many neurons (with admittedly simplified input/output and unknown initial states) will not lead to insights into the causes of the macro-effects we observe in the human brain? Examples of where this might be useful are migraine, epilepsy and mental disorders, which are mostly described with very vague analogies by neuroscientists.

    Small-scale simulation of neural networks has already led to progress in the realms of image and speech recognition, logistics planning, computer gaming, self-driving cars and adaptive artificial limbs, to name but a few.

    If nothing else comes out of this, building a machine of this magnitude will create great pressure on chip makers to come up with more powerful CPU architectures. This is something that everyone will benefit from. With that perspective, 3B USD is a bargain. Many big companies spend more than that on IT consultants who do very little good in this world. Axing a bit of funding to neuroscience seems like a pretty worthy sacrifice. I find it hard not to have some sympathy for cutting down neuroscience funding, as the most common thing heard from that field is complaints about how little we know about the brain, with very few theories on how to approach knowing more.

    Disclaimer: I am a computer scientist type, though not a futurist.

  50. says

    Simple, it’s just an Excel spreadsheet with 100 billion rows and 100 billion columns: 10^22 weights to calculate. Any mouse could do it. However, I think the answer you’ll find is 42.
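The joke’s arithmetic checks out, and it is worth spelling out just how absurd the all-to-all “spreadsheet” would be. A back-of-envelope sketch (the 4-bytes-per-weight figure is my illustrative assumption, and real brains have roughly 10^14–10^15 synapses, far fewer than this fully connected upper bound):

```python
rows = cols = 100_000_000_000          # 10**11 neurons, per the comment's "spreadsheet"
weights = rows * cols                  # one weight per ordered pair of neurons
assert weights == 10**22

bytes_per_weight = 4                   # a 32-bit float each (illustrative choice)
zettabytes = weights * bytes_per_weight / 10**21
print(f"{weights:.0e} weights ~= {zettabytes:.0f} ZB at 4 bytes each")
```

Forty zettabytes is tens of thousands of times the storage of today’s largest supercomputers, before a single weight is computed.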

  51. David Chapman says

    45
    brianpansky

    @38
    David Chapman

    This project is actually less about reaching a human brain simulation and more about checking whether all the information and results coming from distinct fields of the neurosciences are accurate in terms of producing a functional brain.

    What other way is there for them to be accurate? Either the information and results coming from etc etc are accurate about what a living functioning brain is and does, or they’re just poor, inaccurate results.

    Your objection is similar to a lot of anti-science nonsense. Please grasp that models don’t come in just the two varieties you mention, 1) all secrets revealed correct model and 2) too incorrect to be of any use model.

    There isn’t a dichotomy of accuracy, there is degree.

    I totally concur! Perhaps my prose wasn’t as accurate as it should have been…. I was making a completely different point. What I intended would have come out better something like this:

    What sort of sense does this phrase of Gabrielcosta’s make: “accurate in terms of producing a functional brain?”
    What other way is there to be accurate about the action and interaction of cells and other elements in the brain except an accurate description of a functional brain? What else is there to study or describe? What else is there to attempt to be accurate about?

    The only other kind of such study I can think of is if someone wanted to analyse the way the brain of a corpse decomposed. ( Yecchh. )
    I was questioning whether he meant anything particularly coherent at all, since I thought his intention was to obscure the truth, unfortunately.
    Now I think he simply expressed himself awkwardly; and it would seem I’m not in a position to condemn him for that…
    Whereas like I say perhaps I could have been clearer, I didn’t actually write anything erroneous. I didn’t say or imply that inaccurate results were scientifically useless. ( If they were, of course, science could never get started. ) Unfortunately I managed to give you that impression. What you read me as saying was something like:

    What other way is there for them to be accurate, except to be absolutely, unerringly, and indefeasibly accurate? What other way is there to be accurate? That’s what the word ‘accurate’ means!! Either the information and results coming from etc etc are accurate about what a living functioning brain is and does or they’re just poor, inaccurate results, and therefore completely useless to scientists!! Science demands absolute Godlike omniscient precision at all stages, otherwise it’s not science!!

    — which was not at all my intention.

  52. PaulBC says

    Thomas Kejser #55

    If I didn’t know better, I would think you are dualist. If the brain isn’t “just another computer”, what is it then?

    It’s a brain.

    It has nothing like the architecture of a typical programmable computer. Our understanding of how the brain works is incomplete. There is a certain amount of speculation about how parts work, but no consensus about the whole picture. The closest model from computer science is a “neural network” (weighted threshold gates). This is a poor approximation of actual neurons, and only used in some applications (generally simulated on a more conventional von Neumann architecture).
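The “weighted threshold gate” model mentioned above fits in a few lines, which is itself a hint at how drastic an abstraction of a real neuron it is. A minimal sketch of my own, with hand-picked (not learned) weights wiring a single artificial neuron to compute logical AND:

```python
def threshold_gate(weights, bias, inputs):
    """Fire (output 1) iff the weighted sum of inputs crosses the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

# A two-input "neuron" wired to compute logical AND:
AND = lambda a, b: threshold_gate([1.0, 1.0], -1.5, [a, b])
print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```

Real neural-network applications stack thousands of these gates and learn the weights from data, but the unit itself is this simple; actual neurons, with their spike timing, neurotransmitter chemistry and plasticity, are not.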

    I agree that the brain can compute some things, but a soap film can compute some things too (locally minimal surfaces). If a soap film is not “just another computer” then what is it? I’m a computer scientist (seriously) so I guess now I’m an expert on soap films too. Just about any complex system can compute something non-trivial, so I guess being a computer scientist means I’m an expert on everything. Protein folding? Back off, man, I’m a computer scientist.

    The main point is that the brain does something very complex, and there is no particular reason to believe that you will know how to simulate it on a computer even if you had the raw computational power to do it.

    Note that the brain/computer analogy fails in the opposite direction as well. I can program a computer to do a breadth first search of a graph of 1000 nodes or (a lot) more, and have done it more times than I remember. Could I program my brain to do it? Of course, I could carry out some lengthy computation with pencil and paper, but then the whole system is the computer (like a Turing machine with my brain as the finite state control). In fact, I doubt that there is even a way to get a brain by itself to carry out a task that requires that many error-free associations of discrete symbols (the kind of thing computers are built to do).
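For scale, the breadth-first search described above is a few lines of code and finishes on 1000 nodes in well under a millisecond; the graph here is an arbitrary illustrative choice (a simple path):

```python
from collections import deque

def bfs_distances(adj, start):
    """Breadth-first search: distance (in edges) from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# A 1000-node path graph: 0 - 1 - 2 - ... - 999
n = 1000
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
dist = bfs_distances(adj, 0)
print(len(dist), dist[999])  # 1000 999
```

This is exactly the kind of error-free bookkeeping over thousands of discrete symbols that computers are built for and unaided brains are not.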

    I feel safe in saying PZ does not mean that there is any magic going on, just that whatever is going on in the brain isn’t going to happen in a computer unless we understand what it is first, and we’re not even close.

  53. David Marjanović says

    Axing a bit of funding to neuroscience seems like a pretty worthy sacrifice. I find it hard not to have some sympathy for cutting down neuroscience funding, as the most common thing heard from that field is complaints about how little we know about the brain, with very few theories on how to approach knowing more.

    …This… makes no sense.

  54. alexanderz says

    David Chapman #51:

    You make very good points. Nevertheless, given the economic and political climate of the EU, I don’t see how any form of large-scale research could take place without crippling backlash. That said, I don’t see why the EUcrats would deceive the scientists. This project actually removes money and power from the EU, since some of the scientists, partners and institutions aren’t part of the EU. Furthermore, there is no evidence of any corporate corruption like george gonzalez #53 suggests, nor does there seem to be an effective way to install it later on.
    Mind you, regardless of intentions, for laypeople and the general media the difference between a HUMAN BRAIN (cue horror music) and a bunch of computers running the best neurological models there are isn’t that big. After all, that’s the same media that shouts “LIFE ON MARS” every time the topic comes up. My point is that there might not even be any deception here, at least not from the point of view of the EUcrats; they may honestly believe that while simulating the human brain isn’t currently feasible, a kind of human brain simulation is a worthy enough goal.

    Naturally, research shouldn’t be based on belief. Nor should funding allocation be based on deception or perceived deception. It’s just that currently things are as they are, and I don’t see a reason to fault this particular project.

    Apparently, by extension Professor Myers is “extremely naive” as well.

    Well of course. He proved that many times over the years. Not that that’s particularly bad – naivete (in the innocent sense) is often a companion to kindness.

  55. PaulBC says

    David Marjanović #59
    In other words, the defunding will continue until we see signs of progress.

    Works for me.

  56. consciousness razor says

    David Chapman, #57:

    [Discussing this from gabrielcosta, #35):]

    This project is actually less about reaching a human brain simulation and more about checking whether all the information and results coming from distinct fields of the neurosciences are accurate in terms of producing a functional brain.

    What sort of sense does this phrase of Gabrielcosta’s make: “accurate in terms of producing a functional brain?”
    What other way is there to be accurate about the action and interaction of cells and other elements in the brain except an accurate description of a functional brain? What else is there to study or describe? What else is there to attempt to be accurate about?

    This is just my interpretation (maybe gabrielcosta will return to elucidate what he meant), but I’m going to guess that you’re not parsing the sentence correctly. Look at how it’s structured: less about X, more about Y. Instead of picking apart how Y is being phrased internally (and for what it’s worth, I don’t think it’s implying anything problematic), you can look at the whole thing to see what kind of distinction it’s making.

    I’ll start with an analogy. Suppose somebody said “I’ve got a computer simulation of the universe.” You’d probably agree with me that they are not claiming to have made a universe. They have a model (a scientific model/theory) of the universe, which they’ve put into a computer to derive predictions based on it. If the model’s already known to be just plain wrong, it will not produce something that looks like the universe we already know about; and whatever predictions it makes need to be checked (if they’re actually predictions, not “retrodictions”) with some kind of physical observation of the universe itself.

    An ambiguity arises here because people have very good reason to believe that a brain is more-or-less a computer, and consciousness is more-or-less a “simulation” the brain makes for itself of what it’s doing. So, when somebody says “computer simulation of a brain,” they immediately jump to an idea like “artificially intelligent/sentient computer” instead of an idea like “model-testing simulation.” That is, at least, an idea that readily finds its way into my head.

    But one point which ought to be stressed here is that if you have any theory of the brain whatsoever, that almost certainly means it can be “model-testing simulated” at least to some extent. (If it’s not completely algorithmic, then nothing you can do in a computer, or with basically anything resembling mathematics as we know it, will be completely sufficient.) Perhaps it’s a really awful simulation, because you just don’t have enough data to support your theory yet, or simply because it turns out your theory is no good after all. Anyway, conceptually it is a little tricky, but there is some kind of a distinction to be made between literally “making a brain” (or “reaching” that goal) and “checking theories to see if they’re accurate” (in terms of what makes a brain a brain).
    ———
    Paul BC, #58:

    I agree that the brain can compute some things, but a soap film can compute some things too (locally minimal surfaces).

    No, it can’t, because it’s not doing any “information processing.” That is, unless you as a computer scientist have a very good argument about why it’s valid to conflate any physical process whatsoever with information processing. Do we live in the Matrix or something? Is that equivalent to (or even in the same ballpark as) the claim that brains do information processing?

    But since you don’t seem to actually believe that anyway, what’s the point supposed to be? Your argument looks like this: “the brain can compute some things, but false statement is true (but actually false).” All I get out of that is that the brain can compute some things. So what?

    Instead of going deeper into this elaborate nonsense, you could simply interpret “just another computer” (PZ’s phrase) as meaning “another one of those contemporary computers which aren’t anything close to what brains are capable of doing, and which aren’t structured the way brains are structured” (however, working on the latter in particular seems to be an explicit goal of the project, so that’s not much of an argument). Saying that it isn’t just another one usually doesn’t imply that it’s not one, but that it’s a special kind of one instead of “just any old arbitrary one.”

  57. PaulBC says

    consciousness razor #62

    No, [soap film] can’t, because it’s not doing any “information processing.”

    I can calculate minimal (not necessarily minimum) Steiner trees using soap films (e.g. http://arxiv.org/pdf/0806.1340.pdf). Of course, the information still needs to be interpreted, but if the nodes (as pins between parallel plates) are placed against a Cartesian grid, such that you can read the film surface as line segments between grid coordinates, it sure looks like information processing to me. I can even add some image analysis to convert the results directly to numbers. Of course, this is “digital”, but the actual computation is happening in the soap film, not the I/O device.

    Am I doing information processing if I use another analog computer such as a slide rule? I thought so. Do you have a more specific definition of “information processing”? If I can’t do it with soap films, I might be able to suggest another analog computer that qualifies.

    My point is that while the brain appears to do “information processing” among many other things, understanding how a von Neumann architecture does the same thing gives you almost no insight into how the brain does it. So to stretch an admittedly circuitous analogy, if I know how to write an algorithm to approximate Steiner trees on a digital computer, this will not make me any kind of an authority on how this happens in a soap film. I will still need to understand at least a little about differential equations, at least enough to write a program to solve those numerically.

    In fact, I think the brain does a lot more than information processing (by which I do not mean magic, but physics). E.g., it produces activity measurable as an EEG. Is this important to understanding human cognition? I don’t know. But understanding computers (and again, mostly understanding stored-program von Neumann architecture machines) doesn’t put me in any privileged position to rule these things out.

    Computers are useful in simulating things, but you need to have some understanding of what you are simulating (not necessarily perfect understanding). I think PZ was merely offering the opinion that our current understanding is insufficient to make much progress this way, and many people really do think that “It’s just a computer” is the key to understanding the brain.

  58. consciousness razor says

    PaulBC:

    I can calculate […]

    Yes, you can. Computers can. Soap can’t. You notice the difference, right? It sure as fuck isn’t the difference between digital/analog. And I would hope a computer scientist wouldn’t even need this much spelled out, but then again I know there are some awfully wacky computer scientists out there.

    I think PZ was merely offering the opinion that our current understanding is insufficient to make much progress this way, and many people really do think that “It’s just a computer” is the key to understanding the brain.

    Well, I agree that the latter sounds hopelessly simplistic and naive. I suppose there could be a whole lot buried in the word “computer” for some such naive person, but I doubt it’s even a meaningful claim with some substance to it. This is their “goddidit” because they’re ready to stop thinking about the subject.

    The former doesn’t make any sense, given my other remarks above. Our current understanding supplies something in the way of a simulation, or else we’ve got no fucking idea at all of what cognitive science is even about. I think we’re well beyond that stage, so whatever the state of the art is now will be enough for making some kind of progress. I don’t think anyone believes computer simulations are ever going to be “sufficient” for such research (i.e., nothing else is necessary), but that they are a necessary component for making certain kinds of progress that you can’t (ethically) get any other way.

  59. gmacs says

    In other words, the defunding will continue until we see signs of progress.

    Funding for something that is far beyond what we’re capable of is wasted funding. Resources put into a simulation that will almost certainly be useless could be better spent elsewhere.

    Examples: understanding the cellular nature of the nervous system; circuit-level work and connectomics (note: a full map of the brain will probably not happen because at a certain point, they will all probably be different); synaptic plasticity; development and cell differentiation.

    Hell, for the comp sci folks, who said AI had to work exactly like the human brain? If you’re so excited about AI, why wait until we figure out how this brain works?

  60. PaulBC says

    consciousness razor #64

    Yes, you can. Computers can. Soap can’t. You notice the difference, right? It sure as fuck isn’t the difference between digital/analog.

    I agree I expressed myself badly and fell for a gotcha. In another context, I might say “I can calculate an approximate Steiner tree with a computer.” and in this context should have said “The soap film calculates an approximate Steiner tree.” I would set up the digital computer by inputting a program and some data. I would set up the soap film analog by moving pins between parallel plates. Certainly, neither the computer nor the soap film conceives of the problem by itself. One day, a computer might. Probably not the soap film.

    But your earlier point was that the soap film was not doing information processing. I don’t think you’ve established that. It’s not doing information retrieval or symbolic processing, both of which are available to the brain and computers, so there are some limits to my analogy, and maybe that’s what you meant. I’m not sure what kind of information processing you mean or why this is relevant to a rhetorical question like “What is a brain if not a computer?” Well, “What is a soap film on some pins between parallel plates if not a computer?”

    To me the simplest answers are “A brain is a brain. A soap film Steiner tree analog is a soap film Steiner tree analog. A computer is a computer.” These are three distinct things. Understanding any one of them does not give particular insight into the other. It doesn’t seem incredibly wacky to me. I understand computers a lot better than I understand the other two.

    Lumping these together doesn’t strike me as helpful in understanding them. For most people, a computer is a purpose-built device that is programmable in a fairly straightforward way and is Turing-equivalent. The human brain does things we don’t know how to program yet (like learning the semantics of spoken languages by example) and will probably never do things that are easily programmed (like finding the shortest path in a graph of 1000 or more nodes, without pencil and paper). In the latter case, even if it is possible, there is no programming language. So the statement “A brain is a computer”, while true modulo some definition of “is” and “computer”, doesn’t strike me as a helpful way of looking at things.

  61. PaulBC says

    consciousness razor #64

    I don’t think anyone believes computer simulations are ever going to be “sufficient” for such research (i.e., nothing else is necessary), but that they are a necessary component for making certain kinds of progress that you can’t (ethically) get any other way.

    Sorry, I didn’t catch this above. I never suggested that computer simulations aren’t useful in the study of brains or soap films for that matter. Only that until you understand what you are going to simulate, the computer itself is no more relevant than some other piece of lab equipment. I also never suggested that you need to understand everything about the brain. Some level of abstraction will suffice. It’s just that this project as announced appears to assume more understanding than currently available. The same project as actually planned by those sneaky European scientists may turn out to be great science. I don’t have enough information from either this page or the linked article to know.

  62. PaulBC says

    gmacs #65

    Hell, for the comp sci folks, who said AI had to work exactly like the human brain? If you’re so excited about AI, why wait to figure out how this brain works?

    This is what AI research has been doing for about 60 years. Aside from a few areas like neural nets, most of it is not based on any theory of how the brain solves problems. It has also been a victim of its own success (“Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence” http://en.wikipedia.org/wiki/Artificial_intelligence)

    But I think this brain project is intended as neuroscience, not AI, and is mainly interesting if it sheds light on how brains actually work, not how we can solve the same problems with digital computers.

  63. consciousness razor says

    I’m not sure what kind of information processing you mean or why this is relevant to a rhetorical question like “What is a brain if not a computer?” Well, “What is a soap film on some pins between parallel plates if not a computer?”

    If you can make a computer out of soap film, that’s great. I don’t think that’s more interesting than making a computer out of anything else, nor do I get why that something else ought to get the special designation “computer” because it’s a particular type of physical system. This sounds nearly like what you were saying. But just bare soap film — not doing any of the very specific computational work you set it up to do in your very specific conditions, but only being plain old soap film — that itself is not doing any “computing” by any reasonable definition. I mean, that is, unless you’re really going to join the Matrix crowd, or the Mathematical Universe crowd, or the It-from-Bit crowd, or whatever they call themselves these days. But I wouldn’t call that reasonable.
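For what it’s worth, the quantity a soap film stretched between pins “computes” can also be computed the boring way: for three pins, the film’s minimum-total-length junction is the Fermat point, which falls out of a few lines of Weiszfeld iteration. A sketch under the assumption of three pins in a plane; the coordinates are made up:

```python
import math

def fermat_point(pins, iters=200):
    """Weiszfeld iteration: converges to the point minimizing total distance
    to the pins -- the same total-length minimum a soap film relaxes toward."""
    # Start from the centroid.
    x = sum(p[0] for p in pins) / len(pins)
    y = sum(p[1] for p in pins) / len(pins)
    for _ in range(iters):
        wx = wy = wsum = 0.0
        for px, py in pins:
            d = math.hypot(x - px, y - py) or 1e-12  # guard against division by zero
            wx += px / d
            wy += py / d
            wsum += 1.0 / d
        x, y = wx / wsum, wy / wsum
    return x, y

# Isosceles triangle of pins; the junction lands on the axis of symmetry
# at the point where each pair of pins subtends 120 degrees.
p = fermat_point([(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)])
```

Which is rather the point: the film is one physical system that relaxes to this minimum, and a dozen lines of ordinary code are another. Neither fact makes “computer” a useful label for soap.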

    In any case, I still can’t figure out how I’m supposed to follow your argument. It requires an extremely uncharitable reading of PZ to think that he meant (if only implicitly) that brains are magic or non-physical. If anything physical can (in principle) be a computer, I really don’t think that’s going to address the issue he was trying to raise.

    It’s just that this project as announced appears to assume more understanding than currently available.

    Like what, for example? All of their talk about getting/consolidating data, developing theories based on that — that’s not being assumed. What’s assumed? I sincerely want to know.

    I never suggested that computer simulations aren’t useful in the study of brains or soap films for that matter.

    I was responding to what you said you think PZ’s opinion is. Now, you did say insufficient for making “much progress,” not “any progress,” so there’s even some room for him to wiggle out of this if you’re right. I think PZ, as a biologist, probably doesn’t take simulations quite as seriously as someone else who’s coming from a computer science (or mathematics or even physics) angle. Maybe that’s justified, maybe not; and anyway, I don’t actually know what “much progress” (or progress about what) is supposed to mean. But just saying there’s also all of this other stuff you have to do (especially when these people are saying the same thing) doesn’t seem like a helpful criticism.

  64. PaulBC says

    consciousness razor #69

    In any case, I still can’t figure out how I’m supposed to follow your argument. It requires an extremely uncharitable reading of PZ to think that he meant (if only implicitly) that brains are magic or non-physical.

    And indeed, that is not how I read PZ, whose main point was consistent with mine, if not wholly identical. In fact, “What are you going to simulate?” sums up my thoughts pretty well.

    I was responding to

    Thomas Kejser #55

    If I didn’t know better, I would think you are dualist. If the brain isn’t “just another computer”, what is it then?

    Which to me looks close enough to your “uncharitable reading” of PZ to explain why I thought it merited some response. The brain is not “just another computer.” I was trying to illustrate why I think this, but I probably got carried away with my analogy. Understanding computers doesn’t help you very much in understanding the brain.

  65. PaulBC says

    @consciousness razor
    Finally, if you get a chance, can you reread my #58 in light of the fact that I was mostly affirming my agreement with PZ while attempting to refute what I thought was a self-evidently silly statement ‘If the brain isn’t “just another computer”, what is it then?’ I just reread #58, and I still think it makes sense. Skip the paragraph about soap film. (I still like it, but it can be omitted without changing the point.)

    I don’t expect you to agree with the whole thing, but I think the last part makes it clear that I did not engage in your “uncharitable reading” of the original post, specifically:

    I feel safe in saying PZ does not mean that there is any magic going on,

  66. says

    soap bubbles, “computer”, brain.

    one of these things is not like the other.

    “computer” from wikipedia:

    Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices allow information to be retrieved from an external source, and the result of operations saved and retrieved.

    I’ll let you all figure out which one is not like the others, or else establish that they all fit this description, or else establish that only one fits this description.

  67. PaulBC says

    brianpansky #72
    I would answer that a computer is a computer and the other two are not, like I said in #66 “A brain is a brain. A soap film Steiner tree analog is a soap film Steiner tree analog. A computer is a computer.”

    Or am I missing something?

  68. says

    ah, but then further down wikipedia it also says:

    “Any device which processes information qualifies as a computer, especially if the processing is purposeful.”

    So I suppose if you are USING soap bubbles as a computer, perhaps that usage of the word is indeed meaningful. Similar to the word “tool”.

  69. says

    @73
    PaulBC

    brianpansky #72
    I would answer that a computer is a computer and the other two are not, like I said in #66 “A brain is a brain. A soap film Steiner tree analog is a soap film Steiner tree analog. A computer is a computer.”

    Or am I missing something?

    Things can be themselves and also members of conceptual groups (like “tool”) that other things also belong to. I was working on the assumption that “computer” is one such category.

  70. consciousness razor says

    PaulBC:

    I don’t expect you to agree with the whole thing, but I think the last part makes it clear that I did not engage in your “uncharitable reading” of the original post, specifically:

    I feel safe in saying PZ does not mean that there is any magic going on,

    Right, I get that. I didn’t mean to imply you were the one reading PZ uncharitably on that count (although I’m sorry my comment does sort of read that way). I meant that your rebuttal to that was going in a lot of weird directions that aren’t relevant. You could just as well say we can make computers out of billiard balls, aluminum foil, peppermint gum, and maybe some kind of lubricant — I’m sure MacGyver could do it somehow, given enough time.

    But my point is that in the process of trying to say the brain is “not ‘just’ a computer” (in the sense of desktop PCs or even current supercomputers), you shouldn’t make “computation” in general a totally vacuous term (or suggest that’s the only argument your opponent has at their disposal). It does mean something more than just any physical process in any physical system whatsoever. If my balls/foil/gum/lube setup is doing what a desktop does, then a reasonable person would have to agree that it’s “just a computer.” But if that’s doing what brains do, structured in the way they are (i.e., not much like a desktop computer), then they’re still physical objects, not magic, and they are indeed “computing.” However, they are not the mundane variety of “computer” that most people will have in mind. That doesn’t mean there’s anything particularly special about the ordinary word “computer” that somehow ordinary people have latched onto. It’s just a word that refers to a particular type of computer, in ordinary language.

    So, you don’t need to invoke any mystery about the brain (or go in the other direction and make these vaguely panpsychist/dualist claims about everything being “information” or “computation”) just to make the very trivial point that our current computers don’t match what a brain does. That’s all the statement meant, not anything more fundamental about the nature of computation. But you certainly seemed like you were taking that route anyway. In #58, you claimed this:

    The main point is that the brain does something very complex, and there is no particular reason to believe that you will know how to simulate it on a computer even if you had the raw computational power to do it.

    I think there’s plenty of reason to think we could simulate one on a computer. If you’re giving me all the computational power I could ask for, why wouldn’t we know how to do a simulation? A simulation doesn’t mean you’re going to make sentient AI, after all. It just means modeling the damned thing — with numbers and shit. If that’s really asking for too much at this point, I have no idea why. You’re going to pick interesting features (and known/predicted features) to model, and you leave out the junk you don’t know about or don’t care about. Because if you had to “model” every last detail, it wouldn’t be a model anymore. And most of it won’t be relevant to any particular thing that you’d actually want to study anyway.
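For a sense of what “modeling the damned thing — with numbers” can mean at the lowest level, here is a leaky integrate-and-fire unit, one of the standard simplified neuron models in computational neuroscience. The parameter values are textbook-style illustrations, not anything from the Human Brain Project:

```python
def simulate_lif(current, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron: membrane voltage (mV) decays toward
    rest, integrates injected current, and 'spikes' on crossing threshold."""
    v = v_rest
    spikes, trace = [], []
    for step, i_in in enumerate(current):
        dv = (-(v - v_rest) + i_in) / tau  # leak toward rest + input drive
        v += dv * dt
        if v >= v_thresh:
            spikes.append(step * dt)  # record spike time (ms)
            v = v_reset               # reset after the spike
        trace.append(v)
    return spikes, trace

# Constant drive strong enough to push the cell over threshold repeatedly.
spikes, trace = simulate_lif([20.0] * 2000)
```

Everything biologically interesting about a real neuron (ion channels, dendritic geometry, neurotransmitters) is deliberately left out; the modeling question is precisely which of those omissions matter for the phenomenon you want to study.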

    Then you said we’re “not even close” to understanding what a brain is. Pure silliness:

    I feel safe in saying PZ does not mean that there is any magic going on, just that whatever is going on in the brain isn’t going to happen in a computer unless we understand what it is first, and we’re not even close.

    Of course we’re “close” to understanding that — a whole lot closer than total mystification, as you seem to be suggesting. But again, you’re apparently assuming that a simulation means it’s “going to happen in a computer,” as if the computer itself needs to be sentient or something, instead of simply simulating for the purposes of making scientific predictions. That does require some fundamental theoretical structure to get the project off the ground, but (1) we already have that, (2) these people themselves are saying they’re going to look for more evidence to develop this even further, and (3) there’s nothing stopping us from doing that.

    I also don’t get what the point is of saying that, for some reason or another, we would be able to make a computer brain, but only as soon as we “understand what [a brain] is” (presumably at some super-deep level), which could only happen if it is in fact (regardless of what we know) a kind of computer doing computable things. How else are we supposed to come to that “understanding,” if not by trying to actually do that, one step at a time?

  71. pyrion says

    Saying that we should not try to simulate the brain because there is still so much we don’t know about it is about as sensible as saying we should not do weather or climate simulations. Or big bang simulations, for that matter. Those simulations will never be absolutely accurate, but that’s not the point. The point is to verify or dismiss models and to create new models based on your simulation data. Simulating the brain (or parts of it) will of course not give you a human-identical brain in a short time span. The accuracy of the simulations will improve… but ONLY if we start doing them at all. Simulations in science are usually not started at the point where you basically know everything there is to know about the subject. It’s quite the opposite. They are used to find out about things.

  72. says

    @PaulBC: I am afraid I don’t follow your logic – or you may have managed to confuse both me and yourself.

    You have to distinguish between the medium of computation (brain, Intel CPU, radio tubes, soap film) and the computation that is actually run on that medium. It is an open question whether certain types of media allow “special” programs to be run. As a previous poster mentioned, the Church-Turing thesis has not been proven and may never be. However, so far computer science has not found any computation that cannot be run on a Turing machine (not to be confused with the Turing test), and this gives hope that ANY computation can be run on “just a computer” (like the ones we run today and your laptop, but more powerful). If neuroscience claims to have an approximation of a working model of the brain, then we should be able to run a program on an advanced, but not at all mushy, “brain-like” machine that simulates this model. We now have three levels of abstraction to distinguish between: what the program does (think, feel), how the program runs (communication between neurons), and what the program runs on (brain / computer).

    I think it would be extremely unlikely that this project creates a program that thinks (what the program does). But it would be rather likely that we learn a lot about how programs run in brain-like computers. For example, migraine is often described to me by doctors as a “storm in the brain neurons”, and there is evidence that jabbing some light electrical pulses into the brain helps patients. With a computer simulation of how neurons interact at a grand scale when they fire, we might get insight into why this misfiring sometimes happens.

  73. gabrielcosta says

    #57
    David Chapman

    I was questioning whether he meant anything particularly coherent at all, since I thought his intention was to obscure the truth, unfortunately.
    Now I think he simply expressed himself awkwardly;

    Yes, you’re right, I might have expressed myself awkwardly (maybe due to English not being my first language, or maybe I was talking in a shallow way). Let’s see if I can do a better job:
    I think if you hear anything about simulating a complex brain, especially the most complex of all, you should raise an eyebrow and get ready to ask the project’s proponents some hard questions.

    That is what I was thinking, and what I saw, when I went to Henry Markram’s presentation, but eventually I came away with a sense that the guy actually has a handle on how to best use the funding in a “useful” way, even if not producing the ultimate human AI in the end.

    #57
    David Chapman

    What other way is there to be accurate about the action and interaction of cells and other elements in the brain except an accurate description of a functional brain?

    An accurate description of a functional brain is not necessarily a complete description AND translation of the human brain. Sorry, I don’t know any other way to put it.
    I work on EEG, and for me, if you have, for example, a simulation of a hippocampus that produces the theta waves prominent in that region, or a simulation of the visual cortex that includes the neurons known to compute different visual features in the striate and extrastriate cortex (e.g. movement, contour), and not only do you get proper processing of the information but you also see behaviour similar to that measured by electrophysiology (i.e. significant modulation of alpha waves given certain conditions, and neuronal firings modulated accordingly), then I think you have an accurate model.

    The head of this project has actually simulated a volume (1 cubic mm, I think) of the barrel cortex of mice, and even though they don’t have an understanding of what it is doing, they have a model that represents its behaviour (e.g. mean number of firings).

    Sorry if I sound too optimistic, but in terms of the specific goals of the project I think it is relevant and deserves a chance – and I was very skeptical when I first heard of it. Now if you want to say that the selling of it might ruin further efforts like this, or that the project’s leaders are not going to fulfil even the most grounded goals, that is a different question and I’m all ears.

    Thanks for giving me some credit; I really didn’t intend to muddy the discussion.

  75. says

    @David: As per this article: http://www.theguardian.com/science/2014/jul/07/human-brain-project-researchers-threaten-boycott the neuroscientists are apparently worried that this will “suck out funds from valuable neuroscience research”. What I question here is: what exactly is that valuable research?

    As the discussion here clearly shows, it is very easy to get confused about what is computed, how it is computed, and what it is computed on. We have to start somewhere to understand the brain, and so far the “outside in” study of neuroscience has not yielded much progress in terms of curing brain diseases, easing epilepsy/headache, and generally coming up with a reasonable model of how humans actually think. Compare with computer science, a less than 100-year-old discipline. Neural network research and genetic programming alone have yielded incredible insights into how neurons learn. This has led to real-world uses. Methods like fMRI would not even be possible if progress in computer science had followed a trajectory similar to medicine’s.

    Yes, humans are very complex machines, and it is not entirely fair to compare progress on them with progress in computers. But as long as we are not even willing to think of humans and their brains as advanced machines, I am afraid we will not even begin to understand them.

  76. says

    Let me just make it clear that when I say neuroscience has made little progress – I am referring to the top down approach of understanding the brain. Which, if I understand the new proposals correctly, is where the funding is getting removed.

    In order to understand the structure of the brain and how it computes, we of course need some reasonable model of how neurons and synapses work. This requires input from a neuroscience study from the bottom up. Previous posters mentioned that we don’t fully understand the physics and chemistry of the neurons yet and that neural networks are highly simplified. Yes, and so what? Modeling has to start at some level and make some basic assumptions. We don’t simulate the weather by simulating every single atom in the air; we rely on higher-level behaviours. Similarly for a simulation of the brain: just because we don’t understand the complete details doesn’t mean we can’t simulate by making simplifying assumptions (such as those made by neural networks in AI).
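The simplifying assumption behind AI-style neural networks is exactly this kind of move: reduce a neuron to a weighted sum and a squashing function, and abstract away all the chemistry. A minimal sketch (the weights here are hand-picked illustrations that make the unit behave as a soft AND gate):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The classic abstraction of a neuron: ignore ion channels, dendrite
    geometry and neurotransmitters; keep a weighted sum and a logistic
    squashing function standing in for a firing rate."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Two inputs wired as a soft AND gate: high output only when both are on.
w, b = [4.0, 4.0], -6.0
high = artificial_neuron([1, 1], w, b)  # well above 0.5
low = artificial_neuron([0, 1], w, b)   # well below 0.5
```

Whether this level of abstraction preserves what matters about real neurons is, of course, the whole empirical question the preceding comments are arguing about.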

  77. Holms says

    you do realize that scientists have many legitimate uses for simulating their models in research, don’t you?

    Yes. I also realise that those models are decreasingly useful when the parameters are imprecisely known, and that these uncertainties are sucking up US$4,631,000,000 of science funding.

  78. Burkhard Neidecker-Lutz says

    Hello, somewhat late, but better late than never. Long term lurker, first time poster, since this time I figure I can actually speak to the topic at hand.

    I am with one of the teams that is actually part of the project (in subproject 9, “neuromorphic computing”).

    This is not something that was put together by ignoramuses or people who have no idea about neuroscience. For a more detailed overview of what we do, see:

    https://www.humanbrainproject.eu/discover/the-project/sub-projects

    As to “what are you going to simulate”: many things, at many levels. Part of the desire for a giant supercomputer for “full brain” simulation is that we sometimes need to validate “higher” level models at large scales (i.e. we can do a very detailed model of various neuron types, even ones detailed enough to correctly capture the effects of different pharmaceuticals at that level), but it becomes very inefficient to do this at larger scales unless you can move to simpler, more abstract models.

    The trouble, of course, is that we don’t yet know enough to know what we can safely abstract away when climbing the ladder of abstractions; hence the need for “big hammers” to simulate models in much more detail and then compare them with various more abstracted versions that are easier to compute.

    So the goal is mostly to take existing data/theories/models from brain chemistry, neuroscience, learning theory, etc. and plug them into models at various levels of granularity and see whether assumptions of one layer can be validated with data from other layers.

    The US effort, by the way, would be an ideal complement, as their proposal is to develop much better mapping technologies to discover actual brain structure and function (without necessarily understanding how it works, just what happens). That kind of data could be plugged into the EU models, and one could study things “in silico”, including experiments that would be difficult, impractical or unethical in live subjects.

    The “IT” part of the project actually has two pillars. Subproject 7 goes the route of general purpose supercomputer to be able to do flexible simulation of interlocking models at all scales, but simulation speed will be at best 1/1000th to some small integer fraction of realtime.

    Subproject 9 is actually about developing chips and hardware that (if we abstract it the “right” way) should still be able to simulate the higher brain functions despite not being “anatomically correct”, and has the potential to be much, much faster than realtime, allowing us to study long-term brain processes like learning (i.e. try learning something 10000 lifetimes over in different ways to see which model matches reality).

    So yes, it is a huge government-sponsored project with all the potential for internal waste and bureaucracy that comes with large efforts, but it is not some crackpots with no idea what they are talking about. The list of participating institutions is:

    https://www.humanbrainproject.eu/discover/the-community/partners

  79. PaulBC says

    consciousness razor #76
    I don’t want to spend much more time on this. Here’s a concise statement of a fallacy that I was trying to identify:

    I understand some kind of computer. “System A” computes something. Therefore, I understand “System A.”

    Can we at least agree that the above is a fallacy? We don’t have to agree that anyone has ever used that fallacy or that “System A” refers to anything in particular.

    The rest, human communication being what it is, is largely speculation on my part. Maybe nobody has ever used that fallacy, but going back to PZ’s original post:

    I’ve noticed this, that a lot of gung-ho futurists and computer scientist types have this very naive vision of how the brain works — it’s just another computer. We can build those. Build a big enough computer, and it’ll be just like the brain. Nope. That’s operating on ignorance. And handing ignorant people billions of dollars to implement a glorious model of their ignorance is an exercise in futility.

    Possibly this is a strawman, but I think I know what PZ means (I also don’t think it describes this particular EU project and said this in #3). It does describe some things I’ve seen on the web over the years, notably the work of Hugo de Garis, who was trying to build some kind of “artificial brain” using genetic algorithms. It would have been exciting if he had ever succeeded, but I admit I’m not surprised that he failed. So when PZ says “gung-ho futurists” I think I know what he means, and I think the operating fallacy is the one I noted above.

    I don’t think we understand the brain very well, but I’m not a neuroscientist, so maybe I’m wrong. I certainly know that I don’t understand the brain very well. I wasn’t disputing the value of simulation for just about any field including neuroscience. You just need to understand the science first. Hopefully, PZ is wrong and the researchers on this project understand the brain a lot better than he thinks they do.

  80. PaulBC says

    Burkhard Neidecker-Lutz #84
    Thanks for the details. The main thing that strikes me as a red flag is the open letter with over 100 signatures (seemingly from researchers, but I don’t know the field well enough to evaluate). Is there a good basis for criticizing this project as science, or is the source of the disagreement something else?

  81. Burkhard Neidecker-Lutz says

    Re PaulBC #86

    Being a computer science guy myself (and my team being not in the core science but rather in the “what else useful can we do more short-term with the neuromorphic hardware being built” crowd), I may be biased, but I think there are two things going on:

    1. Henry Markram (the project’s godfather) is a very opinionated and driven personality. He has put a lot of effort into the aggressive marketing that was needed to get an initiative of this magnitude funded. That does not necessarily make him a very conciliatory and perfectly even-handed integrator of other scientists. :-/

    2. Money. The trigger seems to be the “skinning of the beast” for the next phase, which saw the wholesale elimination of subproject 3 (I am not privy to what led to that).

    While the headlines always say “€1 billion project”, the way this actually works is that there is a large planning pre-project (which has been running for roughly a year) that lays some of the foundations and gets around 7% of that amount, followed by 2 rounds (4-5 years apart) in which the major funding is actually doled out. What happens now is the fine planning for that, and the contractual issues of who gets what and has what rights/obligations for the first of these large chunks of money.

    So if you read the letter, it basically says:

    – We consider putting all eggs in this one basket very risky (now that we are no longer in that basket)
    – Can someone from the outside please check the decisions and decision making procedures being used

    So, all in all to me it is (big science) business-as-usual. Like sausage making, not too pretty if you look at the actual process of how it is made.

  82. chrisreynolds says

    The research seems to be working on the assumption that if we knew all the connections we would automatically understand how the brain works and what makes us different to animals. I would suggest that the best way to understand how the brain works may well be to start at the animal end and consider the possible evolutionary pathways. For this reason I agree with P.Z. when he says

    “What the hell? We aren’t even close to building such a thing for a fruit fly brain, and you want to do that for an even more massive and poorly mapped structure? Madness!” It turns out that I’m not the only one thinking this way: European scientists are exasperated with the project.

    I am working on an evolutionary model of how a network of neurons could meaningfully communicate by asking “What is the simplest decision making model of a brain that could be sufficient to help an animal survive.” The model has three basic operations – RECOGNISE known pattern, COMPLETE an incomplete pattern, and REMEMBER a new pattern. In addition there is an OPTIMIZE function which increases the importance of useful patterns and forgets redundant patterns, but is not directly involved in the decision making process. At this level the model predicts the existence of concept cells and mirror neurons.
    Of course this model is really crude in terms of formal mathematical logic and does not even support the idea of an explicit NOT. Many years ago the AI guru Minsky pointed out why, in mathematical terms, such models were not logically powerful enough, but any ten year old who has been taught about Venn diagrams could point out the flaws. So it would appear obvious that research in this direction would be a waste of time. After all we know we are ever so intelligent, and therefore there must be some kind of philosopher’s stone of intelligence, possibly in the form of some special genetic mutation, that makes our brain different to a primitive animal brain.
    As far as I can determine everyone has assumed that our marvellous human brain could not have such an appallingly crude driving mechanism at the heart of it – and in making this assumption we have forgotten how evolution can be very good at getting the best out of unpromising material. As a result I am looking at how such a crude system might evolve into a human brain.
    An important resource in this process is the archive of mainly unpublished research findings relating to an attempt to design a “white box” information processing system to help the human user to handle incompletely defined information processing tasks. The planned white box system handled recursively defined associatively addressed set names in a bottom up manner while the conventional “black box” computer is a top down rule based system that processes numbers in a numerically addressed linear store. The research showed that the approach was able to handle a wide range of non-numerical tasks, including A.I. style problem solving, but for non-technical reasons the work was abandoned over 25 years ago.
    The original white box proposals included a number of conventional computing type features, such as the ability to do arithmetic and to drive a computer terminal, but if these frills are stripped off the inner workings can be mapped onto the crude brain model. A comparison demonstrates that such a simple approach could handle large quantities of poorly structured information – could support at least a simple language, and even morph into something like a programming language if required to handle complex sequential tasks.
    Once it is realised that the primitive brain model can do significant useful work, a consideration of evolutionary pressures suggests that there is a barrier to animals becoming more intelligent, set by the amount of useful information that can be learnt in a lifetime. However, once a primitive language becomes an efficient way of transferring reliable cultural information between generations, the barrier falls away and cultural evolution takes off like a rocket. As language is a tool that can be learnt and improved, the language will rapidly become more powerful, allowing information to be transferred even faster. A minor genetic change in the learning mechanism would speed the process even more – but make it more likely that people would follow charismatic leaders without question (i.e. religion?). In addition it seems very likely that information learnt in abstract terms via language would use the neural network more efficiently, increasing the brain’s knowledge capacity with no increase in physical size. In modern humans the cultural information serves to hide some of the limitations of the crude internal workings – but some important human brain failings, such as confirmation bias and unreliable long-term memory, are predicted by the model.
    Basically the model predicts that there is no major difference between the way our brain works and the way an animal’s brain works. The difference is that by using language we have greatly increased our rate of learning, and that our intelligence is almost entirely due to cultural knowledge.

  83. Burkhard Neidecker-Lutz says

    Re. chrisreynolds @ #88

    > The research seems to be working on the assumption that if we knew all the connections we would
    > automatically understand how the brain works and what makes us different to animals.

    No. But it is working on the assumption that if we knew all the connections (and signalling and signal processing and whatever else turns out to be important for the computation the brain performs), i.e. we had a sufficiently detailed simulation of whatever the low-level “hardware” in the brain does, then the higher-order functions such as reflexes, emotions, reasoning and eventually consciousness should emerge. (And of course the assumption is that the brain’s primary function is information processing that can be mapped to a different medium.)

    We might still not *understand* how that works, but once we have an “in silico” simulation of the real thing, then we can start to observe and tweak it in ways that are very difficult or impossible to do in a living organism (such as slow-motion, exact replay, etc.).

    The hope is that eventually such a capability will also lead to understanding, which to me is quite defensible given the success of such approaches in other areas of science.

    And apart from the biological understanding (i.e. the basic science, with most of the potential benefits in basic neuroscience and maybe some applications for brain diseases) the other part is an explicit focus on creating homologous implementations of certain parts as alternative solutions in robotics and other areas of computer science, quite independent of whether the biologically inspired hardware and algorithms in the end are accurate representations of the brain.

  84. chrisreynolds says

    I find Burkhard Neidecker-Lutz’s comment very interesting:

    We might still not *understand* how that works, but once we have an “in silico” simulation of the real thing, then we can start to observe and tweak it in ways that are very difficult or impossible to do in a living organism (such as slow-motion, exact replay, etc.).

    I think we would both agree that there is no satisfactory published model of how electrical activity at the neuron level is converted to human intelligence.

    I am working on a model “brain symbolic assembly language” which assumes that we share with animals a very crude logical system. Starting with this model and looking at how it might evolve, I believe I have found a pathway that will link together mirror neurons, the ability to process significant quantities of pretty messy (i.e. real world) information, support a pretty powerful language, and explain human mental failings such as confirmation bias. I briefly mentioned this in a post commenting on whether two massively funded brain projects are going in the wrong direction.

    Now at the heart of all science is curiosity, and if the approach I am suggesting has any value, it should be of considerable interest to Burkhard. But he is apparently not interested in novel ideas but more interested in repeating information about the European project which I already knew.

    In fact his reply highlights what is going wrong with much scientific research at present. Blue sky research outside the major centres is impossible because, for most scientists and the institutions that employ them, research money and careers come first. Scientists, like other human beings, have realised that normally the best way to get on is to play follow-my-leader and not rock the establishment boat. There is an almost unlimited number of very interesting, and often useful, projects where modern technology allows one to collect data that was previously impossible. In such circumstances why waste time thinking outside the box when it may well set you against the experts on whose support your career depends? And even the more enlightened experts don’t feel it worth spending any time on unconventional ideas unless they come from a “respectable” establishment.

    Of course this is nothing new – read the 1988 New Scientist article “Why Genius is nipped in the bud”.

  85. Burkhard Neidecker-Lutz says

    This is going to be three replies in a row (not that anyone cares but me….)

    The first one (this one) will be on PZs post.

    The second one on “no thinking out of the box by career scientists”

    The third one will try to dig a little bit into brain theory.

    So, to PZs post:

    The signatories to the letter make it sound as if this is a neuroscience project that got hijacked by evil IT people and diverts their precious neuroscience money. That somewhat misses the mark of where this money comes from and what it potentially was allocated for.

    The EU has several programmes for pushing science and technology. One of these is called FET (Future and Emerging Technologies), and part of it is a set of “flagships”. Their mission is:

    FET Flagships are ambitious large-scale, science-driven, research initiatives that aim to achieve a visionary goal. The scientific advance should provide a strong and broad basis for future technological innovation and economic exploitation in a variety of areas, as well as novel benefits for society.

    (taken from http://cordis.europa.eu/fp7/ict/programme/fet_en.html)

    Note, nothing of neuroscience specifically here.

    In 2011 they put out a call to propose “big things” and in 2013 selected 2 of the 6 proposals. The six candidates were:

    FuturICT – a sort of big-data/earth observation/global sensor-net thing

    Graphene – research the material and what you do with it

    Guardian Angels – sort of wearable, live-long digital assistants

    Human Brain

    IT Future of Medicine – big data for medical research and treatment

    RoboCom – sort of an embodied version of the guardian angels

    ( All this taken from http://cordis.europa.eu/fp7/ict/programme/fet/flagship/6pilots_en.html, where you can also find the exhaustive/exhausting descriptions in full of what the things are about).

    In 2013, Graphene and Brain were selected. So, if anything, all of this funding could have gone to entirely different areas; it is a completely separate pot.

    One can certainly be upset with the management of the project and can quibble over decision procedures, but they should be honest that the project always was set up outside of that realm with a much more interdisciplinary focus. And yes, it takes a big gamble, but that is exactly what FET is for.

  86. Burkhard Neidecker-Lutz says

    Second post about “career scientists opposed to out of the box thinking”.

    Re. #90 chrisreynolds

    In fact his reply highlights what is going wrong with much scientific research at present. Blue sky research outside the major centres is impossible because for most scientists and the institutions that employ them research money and careers come first.

    Mmh. You may want to reread the articles and then think about who is in institutions and careers, which of the approaches is more “blue sky” and what they are actually saying (apart from the management criticisms and fear of tarnished reputations if the project fails).

    As for me personally, neither my career nor research money in any way depends on the project. The tough sell to management actually was to show them why it might be a good idea to participate despite it being quite a bit removed from what my company does for a living. But that’s just an aside.

    If you want to argue against “big science”, fine. If you can demonstrate convincingly that you can understand (or even recreate without understanding) an emergent phenomenon like animal or human minds without a pretty big infrastructure, the more power to you.

    As far as we can tell, there is no fundamental difference in the basic functions of animal brains and the human brain; the differences are mostly minor organizational ones plus pretty massive differences in size and numbers. So, if strength comes (at least partly) from the sheer number of low-level elements interacting in the brain, we unfortunately are stuck with needing the equivalent of the LHC. (And one could argue with equal validity that putting that much money into a single-purpose machine for finding just one particle was depriving all of physics of funding.)

  87. Burkhard Neidecker-Lutz says

    Re: chrisreynolds at #88 and #90

    Not that anybody cares, but this whole thing is an area I am passionate about
    and I fought to get our company to participate in the HBP. So there it goes.

    Third: My apparent lack of curiosity and Chris’s theory.

    Now at the heart of all science is curiosity, and if the approach I am suggesting has any value, it should be of considerable interest to Burkhard. But he is apparently not interested in novel ideas but more interested in repeating information about the European project which I already knew.

    You may be surprised by what I can be interested in :-). And yes, curiosity is
    a very useful emotion to keep oneself motivated. But it is not even half what
    you need for doing science. So onto your approach.

    I think we would both agree that there is no satisfactory published model of how electrical activity at the neuron level is converted to human intelligence.

    Short summary: no, I would disagree (with lots of caveats).

    Long version. Seriously long.

    There is an awful amount of stuff packed into that sentence of yours. First
    the part where we agree.

    There is electrical activity at the neuron level, plenty of wiring and
    connections, and at the highest level we agree that the whole system
    exhibits a stunning number of behaviours we call “intelligent”.

    If you mean by your statement that we do not have a concise, simple unifying
    theory on how you get from the former to the (ill-defined, let’s do that
    below) latter, then, yes, I agree.

    If you mean by your statement that we do not have a clue how the system
    operates at some levels and that there aren’t plenty of published theories
    (many of them unproven) about mechanisms at various levels of abstraction,
    then the answer is emphatically no, we have plenty.

    I would even go so far as to claim that we have an outline of what a set of
    theories probably will look like in the end. Think a little bit like the
    “Theory of Religion” that forms the bulk of Daniel Dennett’s “Breaking the
    Spell” (an excellent book in my opinion; if you haven’t read it yet, stop
    right here and do that). There he gives an elaborate sketch of what a theory
    probably will end up looking like, without having enough data and
    experiments to prove that the “final” theory will be exactly like that.

    I’ll start with your suggested theory (as far as I understand it from the
    brief sketch) and then show you my sketch. Hopefully you then understand why I
    think your theory is not on the right track and why I felt justified
    in ignoring it.

    I am working on an evolutionary model of how a network of neurons could
    meaningfully communicate by asking “What is the simplest decision making model
    of a brain that could be sufficient to help an animal survive.”
    The model has three basic operations
    RECOGNISE known pattern,
    COMPLETE an incomplete pattern, and
    REMEMBER a new pattern. In addition there is an
    OPTIMIZE function which increases the importance of useful patterns and forgets redundant patterns, but is not directly involved in the decision making process.
    At this level the model predicts the existence of concept cells and mirror neurons. Of course this model is really crude in terms of formal mathematical logic and does not even support the idea of an explicit NOT.
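    For concreteness, my reading of those three decision operations (only a guess at what is meant; the actual mechanism may differ) would be something like a nearest-match pattern store, with OPTIMIZE omitted:

```python
# Loose sketch of the quoted RECOGNISE / COMPLETE / REMEMBER operations as a
# nearest-match store of feature tuples. Everything here is an illustrative
# assumption, not the model's actual design; OPTIMIZE is left out.

memory = []  # each entry is a tuple of 0/1 features

def similarity(a, b):
    """Count positions where a known (non-None) feature matches."""
    return sum(1 for x, y in zip(a, b) if x is not None and x == y)

def remember(pattern):
    memory.append(pattern)

def recognise(pattern):
    """Return the best-matching stored pattern, or None if memory is empty."""
    return max(memory, key=lambda m: similarity(pattern, m), default=None)

def complete(partial):
    """Fill unknown (None) slots from the best-matching stored pattern."""
    best = recognise(partial)
    return tuple(b if p is None else p for p, b in zip(partial, best))

remember((1, 0, 1, 1))
remember((0, 1, 0, 0))
print(complete((1, None, 1, None)))  # -> (1, 0, 1, 1)
```

    Even this toy version shows the NOT problem: there is no way to store or retrieve "anything that is *not* this pattern".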

    There are several things I at least don’t understand about that model, or
    which are directly contradicted by evidence we already have about brains
    (human or animal). When you say “evolutionary model”, do you mean “how did
    the whole system evolve” (over evolutionary history), or do you mean “how
    does the system get built and how does it learn” (i.e. the interaction of
    genes, the developmental processes as the organism grows up, and the
    “programming”, aka learning, phases)? Note that the HBP is mostly
    disinterested in the first and only interested in the latter, as far as the
    actual learning is impacted or as far as errors in development lead to
    brain impairments later.

    As far as the “assembly language” and “formal mathematical logic” go, that
    is completely mired in digital-computer terms and has nothing to do with
    how computation in the brain works (more on that below). And “concept
    cells” and “mirror neurons” are mostly a measurement artifact that, while
    real in some sense, obscures the real picture.

    But then I have too little detail about your theory to understand in what
    way your discrete basic operations predict these. So by all means, if you
    have a better description of your ideas, I am mildly interested, but I
    think that you are fundamentally unaware of quite a bit of existing
    understanding.

    And fundamentally I don’t understand whether you are only trying to explain
    the parts that make human brains different from other animals or all
    of them. Lots of the biases you talk about are shared features between humans
    and at least mammals.

    So here goes the sketch of what the proto-theory entails.

    First some basics:

    – No magic. Everything needs to be reducible to lower-level descriptions at
    some point, and there must be realistic mechanisms for how stuff works and
    how it maps onto biology and chemistry.

    – While the brain has many interesting properties, here I am only
    interested in how it actually does information processing (i.e. turning
    input data from senses into perception that can be manipulated and reacted
    upon, i.e. finally leading to action). Note that in my terminology the
    brain itself has a “sense of itself”: you do not just hear, see, smell,
    touch, balance and feel pain (all of which are ultimately relayed as
    electrical signals from the various sense organs outside the brain); there
    are also internal “feedback” loops.

    – Most of the highly structured parts of the brain are very old and shared
    with animals. So for a lot of the lower levels of the brain, both at the
    circuit level as well as overall brain organisation, it is surprisingly
    difficult to tell whether you have a mouse or a human brain (other than
    size differences). While one may be able to build a simulator that can do
    surprisingly intelligent things using just the neocortex structures and
    learning algorithms (and people already do that, using close cousins of
    what the brain algorithms likely are), you don’t get a human unless you
    put in all the “emotional brain” parts that are much older. And the whole
    machinery doesn’t do squat unless you drive it with, well, “drives”. And
    yes, “curiosity” is just such a drive.

    The brain needs to solve a sensing and control problem for the organism it
    inhabits. Depending on how complex the organism (both in sensory input as well
    as body complexity that needs controlling/actuating) is, you will usually end
    up with a “central processing unit”, the brain. So first, all the processing
    is dynamic and time dependent, because it originated in (motion) control
    problems.

    We do know that the implementation of these control structures is through
    interconnected networks of neurons. While there are a couple hundred different
    neuron types, their basic mode of operation is always the same. All they can
    do is perform sums of multiple inputs (either spatially or temporally) and
    then signal them (either as electrotonic or action potentials), mostly
    dependent on what distance the outputs are supposed to travel.

    How do you make such an assembly compute anything? By choosing what to
    connect to what, and with how much strength. If you choose the right
    inputs, connection pattern and strengths (and timing, in spiking networks),
    then the desired outputs emerge. Very fast, approximate computation on
    slow, noisy hardware. The trouble is, how do you find the appropriate
    number of computing elements, their connection pattern and strengths?
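    As a toy illustration of that point (the weights and threshold are entirely made up, and real neurons are far messier), the whole "computation lives in the connection strengths" idea fits in a few lines:

```python
# Minimal sketch of the "weighted sum, then signal" operation described
# above. Weights and threshold are illustrative assumptions, not biology.

def neuron_output(inputs, weights, threshold=1.0):
    """Sum the weighted inputs; fire (1) if the total reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The same summing element behaves like AND or OR depending only on the
# strengths of its connections:
print(neuron_output([1, 1], [0.6, 0.6]))  # AND-like: fires only when both inputs are on
print(neuron_output([1, 0], [1.0, 1.0]))  # OR-like: fires when either input is on
```

    The hard problem, as said above, is not this element but finding the right weights for billions of them.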

    You can try to code it somehow in the genome and build the whole shebang as
    part of the development of the organism. This is likely what happens in
    nervous systems as simple as that of C. elegans (a nematode worm): 302
    neurons, 6393 connections and a couple hundred other elements. So no
    learning is required to “program” that thing. Does it control the
    behaviour of the worm? You bet. Does knowing the entire connectome (all
    connections, etc.) allow you to statically predict the behaviour?

    Not at all. You initially have no clue what it does unless you run it.

    Does it have a “this is the neuron for movement” or a “this is the neuron
    for controlling mating behaviour”? And do you now understand what the
    worm’s behaviour is? No. Wrong expectation, and the wrong way to think
    about how the control structure is actually working.

    While this is frustrating at first, it gets worse. In anything bigger than
    a worm you quickly figure out that the genome cannot possibly contain the
    information required to wire up the whole shebang, so something else has
    to construct and adjust all the myriad connections and weights so that
    they just happen to do the right thing (i.e. development and learning need
    to interact).

    So what do we know about how these processes work? We know that “concept”
    or “grandmother neurons” are somewhat of a fluke, both because there is no
    measurable difference between neurons that would let them encode any
    information, and because of the decided lack of robustness of the
    approach. So instead everything is encoded as a distributed pattern across
    thousands of neurons and their connections. And, importantly, there is no
    fixed alphabet or vocabulary. The encoding for any given sequence or
    concept is chosen randomly! (I leave the problem of how to ground the
    random stuff in any meaning for another time.) But the rest is a pretty
    straightforward algorithm that from any random starting pattern can
    converge on a particular solution (there is no “the” solution) for how to
    choose and adjust connections across the ensemble to make any desired
    computation happen (well, not any, as biases and cognitive errors show,
    but all the important ones originally needed for the control problems the
    animal faced).
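    That converge-from-a-random-start idea can be caricatured with a tiny delta-rule learner (the task, learning rate and seed are made up for illustration; this is emphatically not a brain model):

```python
import random

# Hedged sketch: starting from random connection strengths, repeatedly
# nudging them toward desired outputs converges on *a* solution -- and a
# different random start reaches a different, equally valid one.

random.seed(1)
patterns = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias >= 0 else 0

for _ in range(100):  # nudge weights toward the desired outputs
    for x, target in patterns:
        error = target - predict(x)
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

# After training every pattern maps to its target, but the particular
# weights reached depend entirely on the random starting point.
```

    The specific weights at the end are an accident of initialisation, which is the point: there is nothing to "read off" from any single connection.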

    In humans, there is a fairly large amount of initially unspecialized
    neural tissue, running the same kind of algorithm on pretty much all
    inputs. For a sketch (probably wrong in most details, but close enough to
    give you an idea what the structure of the algorithms is), see Jeff
    Hawkins’s On Intelligence and a sample algorithm.

    The one thing that seems to lift humans above other animals is mostly the
    sheer amount of undifferentiated “thinking tissue”, plus the one trick of
    putting the internal symbols represented by the encoding above into
    (grammatically) structured sequences and locally feeding them back within
    the brain. You can call that language if you like. It is actually both a
    blessing and a curse, since almost all of our “explicit” thinking is
    language based and hence discrete, whereas a lot of the real world does
    not have the sharp boundaries required for the discretization enforced by
    language. If you want to know one of the roots of essentialism: we human
    animals just can’t help it.

    Ok, sorry, long meandering rant, but no, there is no discrete “language of
    the brain” with a fixed number of operators. It is one integrated set of
    input, process, actuate and learn. Almost all of the specifics you can see
    in a physical embodiment are random and, much like the halting problem,
    there is no “closed form” theory with which you can calculate outcomes for
    a given network (hence the insistence on large-scale simulation
    facilities).

    I’m sufficiently off-post now, but somehow PZ’s comment struck me as “it
    is so complicated, we will never understand it (at least with a map)”. The
    actual state is that we for the most part know that there can be no
    (single) map, but rather can only hope to figure out what the relevant
    mechanisms are.

    P.S: I can’t seem to figure out how to get line breaks where I want them…

  88. chrisreynolds says

    Commenting on my comments #88 and #90 Burkhard Neidecker-Lutz #93 says

    You may be surprised by what I can be interested in :-). And yes, curiosity is a very useful emotion to keep oneself motivated. But it is not even half what you need for doing science. So onto your approach.

    I think we would both agree that there is no satisfactory published model of how electrical activity at the neuron level is converted to human intelligence.

    Short summary: no, I would disagree (with lots of caveats).
    Long version. Seriously long.
    There is an awful amount of stuff packed into that sentence of yours. First the part where we agree.
    There is electrical activity at the neuron level, plenty of wiring and connections and at the highest level we agree that the whole system exhibits a stunning number of behaviours we call “intelligent”.
    If you mean by your statement that we do not have a concise, simple unifying theory on how you get from the former to the (ill-defined, let’s do that below) latter, then, yes, I agree.

    I was talking very specifically about your last meaning – and your reference to the book by Jeff Hawkins On Intelligence is very helpful. I had not read it before, although I had picked up quite a few of the ideas from other sources. The following is my reaction, after a quick scan, and I will be re-reading his book in more detail over the next week or so.

    You and Jeff are looking at the same problem that I am looking at, but from a completely different viewpoint. Jeff is starting from the physical brain and asking how its components are constructed and how they work. I fully appreciate that some very detailed and sophisticated work has been done in this area.

    My primary interest for more than 50 years is in communicating and processing information and I am asking the top down question – “How must neurons function in order that we think in the way we do?” What is important to me is the meaning of the information being processed and not the finer detail of the biological system that does the processing.

    Other researchers who have tried a top-down approach have started by looking at natural language, and there have been major disputes, for instance around the work of Chomsky; they don’t seem to have reached anything like an agreed conclusion. Other researchers (at least in the 1970s/80s) studied sophisticated logical puzzles of the type that amuse mathematics undergraduates and called it Artificial Intelligence. This period of research is now considered by many to have not been very fruitful.

    My research started accidentally in 1967, after I had worked with very complex manual and computer information processing systems. The first steps were made when I was examining the human interfacing problems of a working commercial system which priced orders for about 250,000 customers buying any of about 5,000 products. This led to the idea of a “white box” computer which could work symbiotically with humans on large and open-ended non-mathematical problems. In effect the system is a pattern recognition system, rather than the rule-based approach of the conventional “black box” computer.

    What is clear to me is that Jeff’s model of what neurons can actually do, and my model of what they need to be able to do to handle complex real-world information problems, are very similar. If it is agreed that we are both modelling the same thing, it means that in 1970 I actually had a crude working model of how humans process concepts (but not down to the neuron interface level). However that was the year I was declared redundant, because the work was not compatible with the way my employer thought computers should be going.

    I moved to a university and by 1988 I had a very much more powerful model, but was reluctant to start shouting “Eureka” because I knew I still had many issues to solve – and I am naturally a quiet backroom-boy type of scientist who was not interested in being in the limelight. At that time I appeared to be close to a breakthrough, with a working package being trial-marketed and attracting rave reviews, and a paper accepted in the top UK computing journal. However I was getting exhausted from banging my head against the computer-establishment brick wall for years. At the same time a new head of department made it repeatedly very clear he thought I was grossly incompetent because I had not brought any research money into the department in recent years. (This was because for some years my research had been seriously disrupted by my daughter’s illness and eventual death.) Basically I folded and allowed myself to be declared redundant again (but this time with a pension) and decided to abandon academic life to do voluntary work helping the mentally ill.

    Many years later and now very much an old age pensioner my son sensibly asked what he should do about the piles of papers (which include everything from the research project) should he find himself having to clear the house. Reviewing the options he mentioned the word “skip.” As a result I decided that I should look online to see what had happened since 1988 and realised that my research might still be of interest. I set up a blog http://www.trapped-by-the-box.blogspot.co.uk and uploaded some of the key publications online. I also started to blog my ideas out loud and quickly realised that what I had been doing could be relevant to brain research. Comparatively recently I have worked out how CODIL (the symbolic assembly language of the “white box” computer I was working on) could be re-interpreted in a form that could work on a neural network, and I have also looked at the evolutionary implications of the model.

    Clearly this string of comments is not the ideal place to discuss the matter, but if you are interested have a look at my blog (a list of the most relevant links is in the right hand margin) and contact me through my blog.

  89. chrisreynolds says

    Burkhard Neidecker-Lutz #93 suggested that I read the book “On Intelligence” by Jeff Hawkins.

    So I have, and my views on what I found are posted at:
    http://trapped-by-the-box.blogspot.co.uk/2014/07/comments-on-jeff-hawkins-book-on.html.
    If such a badly written and woolly-minded book is the best explanation of their work that the neuroscience community can produce to support their case, the sooner the E.U. project is radically changed, or actually cancelled, the better.

    I have also published a detailed note on lessons we can learn from historical scientific research programmes which followed blind alleys, at:
    http://trapped-by-the-box.blogspot.co.uk/2014/07/the-trouble-with-brain-science-history.html
    This ends with the following warning.

    Whatever your views on the E.U. proposals, remember that this generation of scientists is not guaranteed to be right all the time, any more than previous generations were. The fact that Richard Dawkins called one of his books on evolution “Climbing Mount Improbable” should be warning enough: if we find ourselves facing a major scientific brick wall we should be prepared to step to the side and look for an out-of-the-box way round, rather than waste money trying to scale impossible heights.

  90. chrisreynolds says

    My previous post triggered a response on my web site, and the sender apologised for not leaving his id – but the one he uses on this blog does not work. This means I cannot send an appropriate reply to his comments – and as the points he raises could be of wider interest I am copying my reply here.

    Good Science involves the open and free exchange of ideas – and can be particularly valuable when opinions differ. My only contacts with B N-L have been in this comment stream and I have found his comments useful. I assume he suggested Jeff’s book because he believed it would be a useful introduction to the subject, although he did warn that he did not agree with everything it said. Because my interests in the brain span many very different disciplines I cannot possibly be an expert in all of them, so such advice is always helpful.
    Now my first job was as an information scientist, which meant reading and reporting on scientific matters from many different disciplines, no matter what the subject. A good test of a document was whether it convinced me that it was logically consistent and clearly worded, and not just using jargon terms to muddy the waters so that you cannot tell what is based on reliable research findings and what is no more than pure speculation. Of course I have no objection to speculation, as long as it is made clear that it is speculation.
    Jeff’s book “On Intelligence” fails the test. When I read any science book I often review it for my own benefit, and if appropriate I post a tidied-up version on my blog. By commenting on Jeff’s book I was, in effect, warning B N-L that the book is not suitable to recommend to people who disagree with his case, because it may have the opposite effect to the one he intended.
    You say “Your post also reminded me of the statement ‘You can only believe what your language allows you to believe.’” In fact this is virtually the theme of my blog. We are all trapped by “boxes” – whether through our upbringing, the culture we live in, the technology we use, and ultimately the planet we live on. From some boxes there is no escape, while it is sometimes possible to climb out of a box – only to find that all you have done is move to another box, although hopefully a bigger one.
    In fact I suspect that Jeff’s approach, and his discussions about invariant forms, was adversely affected by his extensive experience with computers and computer programming. In 1971 Gerald Weinberg wrote the book “The Psychology of Computer Programming” and discovered that the first programming language you learnt restricted the ways you used the second programming language. My own research involves a bottom-up pattern recognition system bootstrapping itself up from nothing, with no global model of the task in hand. The stored program approach involves a top-down rule-based system which requires a pre-defined global algorithm to drive it. The conceptual difference between the two approaches is very much greater than, for example, the differences between COBOL, Fortran and PL/1, which Weinberg was considering.
    What I found was that if someone has been taught a conventional computer language they find it very difficult to relax, throw away the need for a global model of the task, and take advantage of the far more relaxed and open-ended approach inherent in a pattern-based language system. In fact over the years almost every advance in my research has been characterised by finding, and eliminating, another example of “stored program computer think” which had somehow got incorporated into my research software. I get the impression that virtually all the current generation of brain-related scientists were exposed to the “computer think” virus as school children, and fail to realise that one thing that characterises the brain is that when it is “born” it has no pre-programmed model of the environment that the hosting body lives in.