Artificial evolution looks an awful lot like the natural kind


What properties should we expect from an evolved system rather than a designed one? Complexity is one; surprise is another. We should see features that baffle us, features that don’t make sense from a purely functional and logical standpoint.

That’s also exactly what we see in systems designed by processes of artificial evolution. Adrian Thompson loaded randomized binary data onto Field-Programmable Gate Arrays (FPGAs), then selected for chips that could recognize tones played into them. After several thousand generations, he had FPGAs that would discriminate between two tones, or respond to the words “stop” and “go”, by producing 0 or 5 volts. Then came the fun part: trying to figure out how the best-performing chip worked:

Dr. Thompson peered inside his perfect offspring to gain insight into its methods, but what he found inside was baffling. The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest — with no pathways that would allow them to influence the output — yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.

That looks a lot like what we see in developmental networks in living organisms — unpredictable results when pieces are “disconnected” or mutated, lots of odd feedback loops everywhere, and sensitivity to specific conditions (although we also see selection for fidelity from generation to generation, more so than occurred in this exercise, I think). This is exactly what evolution does, producing functional complexity from random input.
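For anyone who wants to see the shape of the procedure, here is a minimal sketch of the kind of generate-mutate-select loop used in this sort of hardware evolution. It is not Thompson’s actual code: the bitstring length and population size are the figures cited in the comments below, while the mutation rate and the fitness function are illustrative stand-ins (in the real experiment, fitness came from loading each configuration onto the chip and measuring how well its output separated the two tones).

    import random

    BITS = 1800            # bits per FPGA configuration (figure cited in the comments below)
    POP_SIZE = 50          # individuals per generation (figure cited in the comments below)
    MUTATION_RATE = 0.002  # illustrative assumption

    def fitness(bits):
        # Stand-in so the sketch runs end to end; the real step loads the bits
        # onto the chip and scores how cleanly the output separates the two tones.
        return sum(bits)

    def mutate(bits):
        # Flip each bit with a small probability.
        return [b ^ (random.random() < MUTATION_RATE) for b in bits]

    def crossover(a, b):
        # Single-point crossover between two parent bitstrings.
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP_SIZE)]

    for generation in range(100):            # the real runs went for thousands of generations
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 2]     # selection: keep the better half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children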

I suppose it’s possible, though, that Michael Behe’s God also tinkers with electronics as a hobby, and applied his ineffably l33t hacks to the chips when Thompson wasn’t looking.

Comments

  1. Caledonian says

    When NASA applied evolutionary algorithms to the problem of efficient antenna design, it not only produced structures that took up far less space than existing models and worked just as well, but also took far less time to design and implement.

    Forget artificial biologies, nuclear power, and nanotechnology. Evolutionary processes are the real power – by harnessing them, we can become as gods.

    Isn’t that really why all those fundies hate evolution so much? They worship a false god, and on some level they know that. Science studies the true creative process responsible for the structure of the universe, and within limits, grants power over it. It IS, what all the religions merely claim to be.

  2. Ric says

    Well, as Dodgey Gil says, the facts speak for themselves. The guy couldn’t figure out how the chips did it, so the only possible answer is god.

  3. lockean says

    I think Behe’s position would be that a Designer must be presumed to have set up the initial parameters, as in fact Adrian Thompson did.

    It’s been many years since I read Black Box and that was plenty of ID for me, but I think Thompson’s experiment fits pretty well how Behe imagined nature working. A designer builds a system. The system then runs itself.

    One really can’t illustrate the errors of ID with any experiment, since experiments (at least deliberate ones) must have Designers.

  4. says

    Applied models. Design. Etc.

    Equivocation, false analogy, the atheist staples.

    Look, this is all about atheism, as some people like Red State Rabble are already starting to admit.

    Just come out with it.

  5. Caledonian says

    A designer builds a system. The system then runs itself.

    But this is something completely different. There is no designer. The experimenter sets up parameters for evaluation and lets the system design itself.

  6. gsb says

    Look, this is all about atheism

    Riiight. Thompson spent all that time and effort doing this impressive and interesting work with FPGAs just to promote atheism, not because he was interested in advancing any science or because he thought this had some potential for practical application in industry or research. Please tell me you were being sarcastic.

  7. TheJerrylander says

    If the l33t h4xx0r dude in the sky really primed evolution, and kinda set up the system to run through its evolutionary processes, he obviously was not omniscient, if his goal was the creation of, well, creation in a guided process. Evolutionary or genetic algorithms are a process for deriving solutions that are in general sub-optimal; they are just more efficient at producing good solutions than, say, randomly picking configurations. So, humans are not the pinnacle of creation, but an alright approximation :p
    If you have complete knowledge of the problem field, or at least a very good understanding of it, there are more efficient ways to solve problems than employing methods of soft computing.
    So, while in the experiment there was a designer who set up the experiment, and had knowledge of the experiment’s architecture, that which evolved within the scope of the experiment is self-evolving under environmental conditions. The results of the experiment are not easily expressible, and not necessarily fully understandable by the scientist who initiated the process.
    Thus, my conclusion: god was just a big, dumb chump like me.

    You may start worshipping me now, Jerrylander-TheSoftComputer :P

  8. Arnaud says

    But this is something completely different. There is no designer. The experimenter sets up parameters for evaluation and lets the system design itself.
    Yes, but the “selection” was toward a specific goal which was input by the designer and checked for at each generation. Quite different from natural selection, where the only “goal” is to increase an individual’s ability to pass on their genes. (In this experiment, it took more than 1400 generations to produce a chip with something approaching “fitness”.)
    Somebody like Behe could very well argue that the goal of a god-driven evolution is the production of a conscious humanoid mirroring its creator…

  9. lockean says

    Cal,

    If I’m accurately remembering a book I read many years ago, Thompson’s role in the experiment above matches pretty closely how Behe described his Designer working.

    Designer Thompson said, Let there be purchased a computer and there was a computer. And Designer Thompson saw that it was good.

    And on the second day Designer Thompson said, Let it be programmed to run an experiment. And it was programmed to run an experiment. And Designer Thompson saw that it was good.

    And on the third day Designer Thompson rested and the experiment ran itself.

    Behe doesn’t dispute that stuff after the second day matches what we in the reality-based community know/presume/imagine. He’s disputing whether nature can set up its own initial parameters. (Again, if the gin hasn’t ruined my memory.)

    For Behe an experiment implies the existence of an Experimentrist.

  10. Russ says

    Yeah, baby! It’s stuff like this that keeps me reading Pharyngula. I’ve seen some of the superlatively functioning mechanical systems that result from evolutionary selection processes, and from their counterintuitive final forms (we might go so far as to say “bizarre”) it’s clear that human engineers would not have arrived at them, since humans tend to bias design toward their own “common sense” notions of form following function. But, of course, as the form of these devices shows yet again, the behaviors of natural processes as revealed by these evolutionary techniques are often quite distinct from human thoughts and intuitions about them. Thanks, PZ.

    The place I’d love to see evolutionary techniques applied to a gnarly engineering problem is cosmic ray shielding for long-term space missions. Current “common sense” conceptions of what is needed result in solutions so physically large, heavy, and expensive that they are impractical. I’m not suggesting that “common sense” is necessarily wrong here, but affordable, reliable shielding is a must if manned interplanetary space travel is to be carried out by anyone who wants to live to tell about it. Evolving cosmic ray shielding instead of designing it could show us a different way to do it, by actually trying out the counterintuitive, non-common-sensical plans humans might reject, and in doing so once again underscore that our thought processes evolved to solve very different problems.

  11. Caledonian says

    1) The initial parameters aren’t the important part.

    2) Behe believes intervention was necessary – that aspects of the world could NOT have evolved.

    So you’re completely wrong.

  12. Torbjörn Larsson, OM says

    Interesting how the single-chip experiment directly showed that evolution over a population is needed to avoid getting stuck in overly constrained solutions. It is no secret that digital ICs are individuals just as analogue chips are (that variability is one reason digital techniques are preferred), but that these sometimes small differences would affect the result is still cool IMO.

    One really can’t illustrate the errors of ID with any experiment, since experiments (at least deliberate ones) must have Designers.

    That is the illustration right there, ‘design’ is not a scientific theory because it can’t be falsified. Their idea is…, um, “designed” to not make predictions.

    On another note, one can make it pretty hard on creationists by letting the experiment choose random targets and constraints as well. (Rather like biological evolution, btw, where increased fitness ‘targets’ may be short-term and contingent.) Their last resort then is to make the handwaving you noted above.

    But no amount of handwaving will get that idea to fly.

  14. Luna_the_cat says

    Although I don’t have any references to hand to support my memory, what I seem to remember is that continuing investigation revealed that some of the “unconnected” cells were actually exploiting quantum effects to influence and be influenced by neighbouring feedback loops, in precisely the way that most chip designers try to avoid and insulate against.

    This is another mark of organic evolution, to me: error and “undesirable” physical conditions are still part of the environment, and the system doesn’t make moral judgements about them, it just uses them or adapts, because it must — they’re there.

  15. AJS says

    I’ve already seen this one, and I have to admit I thought it was pretty cool. The evolutionary process allows for things that a human designer either just wouldn’t think of or would artificially exclude, like using logic gates as linear amplifiers and relying on parasitic coupling.

    It isn’t going to cut any ice with the IDiots precisely because there is a pre-stated goal. (Real evolution doesn’t work towards anything; mutations just grind to a halt when there are no more easy ones with obvious survival advantages in the existing environment. Of course, a change of environment can alter the relative usefulness of mutations. Being only 20 cm high and having floor-length fur wouldn’t do much good for a wolf in the wild, but is quite acceptable for a Yorkshire terrier with an old lady around to feed it and groom it.)

    I think even if some scientist managed to create actual artificial life in a test-tube, and sat and watched it evolve over several generations into something quite different (perhaps two distinct species [one of which eats the other?], both exhibiting irreducibly complex features), the ID brigade would still be taking it as proof that life requires an intelligent designer — their principal objection would be that the initial conditions were pre-specified, and it hadn’t all happened spontaneously.

  16. raven says

    Blairs Team AKA Kansas Troll:

    Look, this is all about atheism, as some people like Red State Rabble are already starting to admit.

    No, it is all about the truth, common sense, reality, and creative thought. Everything your cult nonsense isn’t.

    We know the drill. Your standard falsehood.
    Science=evolution=atheism=communism=mass murder.

    Now that your lie has been posted, take the rest of the day off.

  17. Caledonian says

    It’s not about initial conditions, it’s about selection parameters. That’s what you mean to discuss but keep using the wrong term to refer to.

  18. lockean says

    No Cal,

    Behe–at least in Darwin’s Black Box, the only thing of his I’ve read–describes nature as working like the experiment described above. It is a repetitive book, making the analogy over and over and over, so although I don’t remember the book perfectly, I don’t see how anyone can miss the basic point.

    Either you don’t remember the book, or you read it with such disgust that you misunderstood it, or you are deliberately mangling its already flimsy arguments, or you never read it.

    Adrian Thompson’s experiment had, in fact, a Designer. Adrian Thompson.

  19. Caledonian says

    Michael Behe? Michael “malarial drug resistance is beyond the capacity of evolution to generate” Behe?

    I don’t care what you’ve read or thought you’ve read. Michael Behe, judging by his latest arguments, believes that life has needed special fiddling to develop in the way that it has. It’s not a matter of an intelligence controlling initial conditions or establishing selection criteria.

  20. Arnaud says

    As Caledonian said, these are not initial parameters. They are repeated and reinforced at every generation. They are in fact responsible for the only selective pressure applied to the chip. The chip is not allowed to become a better toaster or digital watch.
    Although “establishing selection criteria” seems to me to be fiddling with evolution.

  21. says

    It is remarkable that people keep trying to use the “but, but, but, the experiment had a designer” canard. Quite the equivocation that is.

    Let’s not lose sight of the fundamental argument IDers/creationists make: that genetic variation matched with a simple selection mechanism cannot create highly complex, specialized items. Yet that is exactly what is done with these kinds of experiments over and over again. EAs are the death knell for these arguments, for anyone who actually understands how they work.

  22. TheJerrylander says

    Unfortunately, I would have to disagree with the Science Avenger. It would be great if EAs were sufficient to make IDiots see their folly, but I fear they will just gloss stuff over.
    The EA will, at the end of its run, result in something that is relatively well adapted given the defined fitness function, which will just end up with the IDiots screaming: “Well, we believe in micro-evolution alright, but…”.

    Some common sense should be the death knell to creationist arguments, or a look at cellular automata to see how complex patterns can emerge from almost ridiculously simple rules, but, alas, this doesn’t work either, as we have seen. De-programming the densely faithful and making them see that their arguments are inane is probably beyond the scope of EAs :(

  23. lockean says

    Cal,

    I don’t see the point in creating a straw man version of an argument that’s already straw. Behe’s dubious claim to fame is Intelligent Design. The thrust of Intelligent Design is the DESIGN. Hence the name.

    I don’t think either of us knows (or cares) that much about Behe, but claiming that I’m ‘completely wrong’ when I’m the only one of the two of us who’s actually made it through his magnum opus is pretty obnoxious, don’t you think?

    And I thought we were friends.

  24. Caledonian says

    I’m the only one of the two of us who’s actually made it through his magnum opus

    You presume a great deal.

  25. SWT says

    Behe has repeatedly asserted that without intervention, the processes of random variation and selection are inherently incapable of producing irreducibly complex systems.

    Let’s look at Thompson’s results. His last-generation FPGAs clearly are, or at minimum contain, systems that meet Behe’s requirements for irreducible complexity: “A single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.” In this case, the basic parts are the elements of the FPGA in question, and the source article shows that there are key elements in these systems the removal (reprogramming, in this case) of which causes the system to cease functioning. Since Thompson did not intervene to make specific alterations in the FPGA programming, Thompson’s results provide a direct counterexample to Behe’s core thesis unless Behe wishes to argue that the designer intervened in the experiment.

    Thompson’s intent is irrelevant to the validity of Behe’s assertions about the sufficiency of random variation and selection.
    ______________________

    Also, just for fun, let’s run these results through the explanatory filter, ID-style. (Except that the ID guys never seem to bother applying the filter to … oh, what do you call it … oh, yeah, data.)

    (1) Law. We’re assured that evolutionary processes are incapable of producing systems of this complexity; thus, we can rule out Thompson’s results as the operation of a law.

    (2) Chance. We are further assured that evolutionary algorithms are no better than blind search. I took a quick peek at the Wikipedia page for FPGAs. It is apparently pretty common for each gate in the FPGA to have four inputs and a lookup table to define the output in terms of the inputs. This lookup table for each gate must therefore be at minimum 16 bits. If we further assume that chip architecture limits each gate to receiving inputs from four other gates, encoding the inputs must require 4 inputs * 2 bits/input = 8 bits. Thus, each gate requires, at minimum, 24 bits, since input selection and the lookup table are independent choices. For a 100-element array, the program for the FPGA must be at least 2400 bits, so the number of possible programs is 2^2400.

    Thompson’s experiment took 4000 generations with 50 individuals/generation, and so examined only 20,000 programs. The probability of obtaining his outcome by blind luck is 20,000/2^2400 ~ 10^-719 … much less than the “universal probability bound” of 10^-150.

    (3) Design. Since Thompson’s results, by Dembski’s rules, are not the result of either law or chance, the explanatory filter leads us to conclude that Thompson’s results were the product of design.

    Or, on the other hand, we can conclude that the explanatory filter is bunk.

  26. wjv says

    Aaaah yes, the Thompson experiment. It has always been one of my favourites.

    I’m an engineer by training, and my Masters project involved algorithmic work on FPGAs. (This was a few years before the paper under discussion.)

    For the past 7+ years I’ve been working in bioinformatics, though, and I’ve always found that the Thompson experiment remains the best way of explaining evolution to my old engineering buddies.

    Recently I “rediscovered” Thompson’s paper after not thinking about it for a while, and posted it to reddit where it received a fair number of upmods.

    A few links:

    There was a nice article about the experiment in New Scientist. If you’re not a subscriber, you can (shhh!) be naughty and get the full text here.

    A report in Science gives some broader background. Essentially, Thompson wasn’t the first to implement algorithmic circuits on FPGAs (As I said, we were doing it some years earlier.) nor even the first to mess around with genetic algorithms on the same technology. The idea of seeing what would happen if you try to evolve a digital circuit without using compound digital building blocks… that was genius.

    Oh, last but not least, the actual paper on CiteSeer.

  27. JJR says

    Still running these scenes in my mind:

    God: N00B!
    Adam: yes, Lord

    God: Yo, Abe, your foreskins are PWND!
    Abraham: thy will be done.

  28. travc says

    Evolution in hardware… not the first time, but still fun (and a lot of work to actually get results).

    A few take-home results.

    1) Hardware != simulation… even “digital” hardware isn’t really digital. Anyone who has designed even a mildly complex circuit knows this.
    corollary: The physical world is complex, and it is the source of the complexity evolution has incorporated into living systems.

    2) Artificial evolution which has a “functional” fitness criterion is very different from cases where “closeness to a particular goal state” is the fitness function.

    3) As PZ points out, things which go through actual replication have additional implicit fitness components (such as mutational robustness) that are not present in experiments like this one or in standard GAs.

  29. says

    The dependence on the particular part is interesting. It could be eliminated by using a simulator, but then the result would (take a lot longer to find and) depend on any bugs or quirks of the simulator and may not work on real hardware. :)

    This post says what I was trying to say in this /. thread, but I couldn’t find the right citation.

  30. SWT says

    wjv, thanks for the link to the paper.

    In case anyone cares, my previous post needs a correction and a couple of amendments. The correction is that I managed to multiply 4000 by 50 and get 20,000 … my bad!

    The amendments are based on Thompson’s paper.

    1) Per Thompson’s paper, the system stabilized at 4100 generations, resulting in a total of 205,000 programs.

    2) Per Thompson’s paper, each program was 1800 bits, not 2400 as I had guesstimated.

    Thus, probability of getting the final result by blind trial and error would be 205,000/2^1800 ~ 10^-536 … not as improbable as before, but still less than Dembski’s probability bound.
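    For anyone who wants to redo the log-space arithmetic, a quick sketch (the generation count, population size, and bit length are the figures just cited; the 10^-150 bound is Dembski’s, as mentioned above):

        import math

        programs_examined = 4100 * 50                  # 205,000 configurations actually tried
        log10_space = 1800 * math.log10(2)             # log10(2^1800), about 541.9
        log10_p = math.log10(programs_examined) - log10_space

        print(f"P(blind search) ~ 10^{log10_p:.1f}")        # about 10^-536.5
        print("below the 10^-150 bound:", log10_p < -150)   # True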

  31. Jon H says

    The non-transferability of the program lends support to the idea that the disconnected units were taking advantage of… um.. quantum stuff.

    The evolution process would wind up accommodating or making use of minute variations in that FPGA’s particular atomic structure in a way that would not happen normally.

    In order to transfer the resulting program to another chip, I expect the recipient’s silicon would need to be an atom-by-atom duplicate of the donor. Or even a duplicate down to the level of electrons and holes.

  32. Caledonian says

    The non-transferability of the program lends support to the idea that the disconnected units were taking advantage of… um.. quantum stuff.

    Not really. No two chips are truly identical – there are idiosyncratic differences, far above the level of quantum effects, between one chip and another. An evolved system that took advantage of those unique qualities might not function properly when loaded onto a slightly different chip.

  33. says

    Erm, this is pretty old news, folks. I had a thread four years ago on ARN about Thompson’s (and others’) hardware evolution work, referencing his 1996 dissertation. Why’s it popping up around the science blogosphere now?

    RBH

  34. Kagehi says

    From a usability standpoint Thompson’s system has a fatal flaw. For that matter, in a biological system it would be a fatal flaw too. In his case, he has one chip figuring out how to do X, instead of several competing chips trying to find X, with the basic requirement that the means to get there “must” function on any one of the chips. In biology, this would be the equivalent of a single individual in a species that could see *slightly* into the infra-red, so could see better in the dark, but which **lives** in an area where the creature’s species is “rarely” active at night, and seeing in the IR spectrum during the day is actually detrimental. That it works better at seeing at night is irrelevant if the *species* never uses the trait in question for anything.

    Well, maybe not the best example, but I think you get my point. Just because *one* member of a species manages to “solve” a problem successfully with a mutation doesn’t mean it’s useful for any other member, if that mutation is so specific or unique to that member’s biology or circumstance that it becomes useless to anyone else that has it. In this case FPGAs are the “species” and his successful version is an individual, with a non-interchangeable design feature. Kind of like transplanting a brain from one person’s body to another (assuming you could manage it) and expecting it to work, when there is no direct correlation in how many, where, or how the nerves were wired. With something like a brain, there is enough plasticity that this is not insurmountable. Assuming you got all the wiring “close” to where it needed to be, promoted nerve growth *and* kept both alive, there is at least some evidence that the brain could “rewire” itself to work with the new “hardware”. Not so if you tried to offload the “program”, if you will, and drop it into a new brain. You want universal, you need to *start* with the premise that you are going to have multiple FPGAs involved and that one of the “conditions” you look for “is” that it will function when shifted out of one into the next.

    Sort of run 3-4 in parallel or something, then every “generation” you dump the code from A to B, B to C and C to A. Only code that runs on “all” of them survives. As I understand it, the problem was that it depended on transient issues, like electron bleed between transistors, or other similar flaws unique to each chip.
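    A minimal sketch of that selection rule, in case it helps (hypothetical: the per-chip evaluators here are toy stand-ins for loading the configuration onto each physical chip and scoring it):

        import random

        def robust_fitness(bitstring, chip_evaluators):
            # Score a configuration by its *worst* performance across several chips,
            # so code that only works by exploiting one chip's quirks ranks poorly.
            return min(evaluate(bitstring) for evaluate in chip_evaluators)

        # Toy usage: three "chips" that each weight the bits a little differently.
        chips = [lambda bits, w=random.uniform(0.9, 1.1): w * sum(bits) for _ in range(3)]
        candidate = [random.randint(0, 1) for _ in range(1800)]
        print(robust_fitness(candidate, chips))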

  35. Ramblindude says

    I just read the article and I find all this absolutely fascinating.

    I remember seeing a NOVA(?) program years ago where a guy instructed several identical computer programs to evolve solutions to achieve specific goals. The results were unpredictable and thoroughly random, and many of them seemingly ‘ingenious’.
    This is the same thing but with actual hardware.

    Kagehi: Good point. ‘Competing chips’ and transference compatibility during the evolutionary process would seem to be the next logical step.

    On the other hand, figuring out why the code works on one, and not on the others, also leads to breakthroughs.

  36. SWT says

    Kagehi,

    From the standpoint of using Thompson’s experiments as a test of the power of random variation and selection, chip specificity is not a fatal flaw. One can look at the chip as a “habitat” for a “species” of code. Then, chip-specific code is analogous to a species with very specific habitat needs … when such a species is moved to another habitat or the habitat changes in some critical way, previously useful adaptations may be unhelpful or detrimental.

    I do agree that the procedure you describe (where survivors must demonstrate viability in multiple habitats [on multiple chips]) would indeed produce more robust code. That does not invalidate the observation that an “irreducibly complex” system was evolved through random mutation and fitness selection with an incredibly rapid search through a large search space.

  37. says

    Anyone want to bet that someone will claim that these arrays were “frontloaded”

    Sure somebody will. I am waiting for somebody at UD to claim that FPGAs are fine-tuned and possess the best overall settings for scientific discovery.

  38. David Marjanović says

    Somebody like Behe could very well argue that the goal of a god-driven evolution is the production of a conscious humanoid mirroring its creator…

    Then why are there so many byproducts? Like, the whole rest of the world?

    “An inordinate fondness for beetles”?

    mutations just grind to a halt when there are no more easy ones with obvious survival advantages in the existing environment.

    Mutations never grind to a halt. You’re talking about directional selection becoming stabilizing selection (and even this doesn’t work on neutral mutations — there’s still drift).

  40. says

    Silly me.

    It seems it was the same experiment.

    Why is this just now coming up? As I said, I read about it over a year and a half ago.

  41. says

    from comment #18

    Adrian Thompson’s experiment had, in fact, a Designer. Adrian Thompson.

    well, duh.

    This has been said before, but it bears repeating.
    Things designed by humans are evidence that humans design things. This does not in any way imply a “Designer” for the universe.

  42. wjv says

    The non-transferability of the program lends support to the idea that the disconnected units were taking advantage of… um.. quantum stuff.

    Not really. Just good old Maxwell. Cross-channel leakage. Various high-frequency effects. That kind of stuff. There’s a lot going on in a semiconductor well above the quantum level.

  43. says

    Yeah, evolution is quite often either the stupidest thing ever or absolutely brilliant. Basically, our brains work out a solution to the same problem, and if evolution’s solution is better it’s “brilliant”; if not, it’s “stupid”. To test some of the functionality of an evolution algorithm, I set it up to rediscover the Pythagorean theorem. It managed to do this successfully a number of times; the unsuccessful runs are fascinating, though. It would miss the solution and go rather nutty, but in ways that are sometimes slightly understandable.

    (A + B) / (sqrt(2)), this is exactly right when A = B, and A=B when the angles are 45 degrees each. And on average the angles are 45 degrees. So on average, this is a good solution. Go ahead and try it for yourself. A 3-4-5 triangle roughs out to a 3-4-4.9497 triangle. 5-12-13 becomes 5-12-12.

    A + ((B^2 – B) / (sqrt(A) + A + B)), no clue. 3-4-(4.37 or 4.67 depending on order) and 5-12-(11.8 or 12.977 depending on order)

    A + (B / (sqrt(sqrt(2)) + (A / (B – 2)))), what the hell?

    — They actually do work pretty well, I have no clue why except in the first case. Some work better near the fringes where A is as far from B as possible, others work better when they are close. And what’s more the more I expand the limits of the numbers I check (these were found with A and B between 1 and 1000), the more complex the equations become in their attempt to match the scaling of the Pythagorean theorem.

    It makes me think of the mammal eye, and all the effort and work and adaptations which go into trying to minimize the downside to a backwards retina. What should be about 7 characters spirals into the hundreds in an attempt to best approximate the solution.
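    For anyone who wants to check the evolved formulas above against the true sqrt(A^2 + B^2), a quick sketch (it just recomputes the examples given; nothing beyond the standard math module is assumed):

        import math

        def approx1(a, b):   # (A + B) / sqrt(2): exact when A == B
            return (a + b) / math.sqrt(2)

        def approx2(a, b):   # A + ((B^2 - B) / (sqrt(A) + A + B))
            return a + (b**2 - b) / (math.sqrt(a) + a + b)

        for a, b in [(3, 4), (5, 12), (7, 7)]:
            print(a, b, round(math.hypot(a, b), 3),      # true hypotenuse
                  round(approx1(a, b), 3), round(approx2(a, b), 3))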

  44. tsg says

    I remember reading about this a couple of years ago. What I found most interesting was that the optimal program would not only fail when placed on another chip, it failed when the temperature in the room was changed by a couple of degrees either way.

    What it highlights is not only the traits that were important for a program to survive and reproduce, but the traits that weren’t. Had the FPGAs been subjected to changing temperatures and operation on multiple chips during their evolution, those problems wouldn’t have shown up in the optimal program.

  45. JD says

    But, but if you removed one of the non-connected gates the circuit would fail. This system is clearly irreducibly complex and could not possibly have evolved in any way. God is clearly into tinkering with FPGAs under experimenters’ noses.