Reader Ted sent along a rebuttal to that pathetic evolution “simulation” written by a creationist—it’s a much better simulation, presented at TAM5, called IC Evolver. The simulation plays a simple game whose strategy can be encoded as a string; it starts with a set of randomized strategies, which it then uses and modifies, generation after generation, to maximize the score. Two cool things about it: one is that it modifies the strategies with common genetic operations (insertions, deletions, point mutations, and recombination) and displays them graphically so you can see what’s happening. The other is that it tests for irreducible complexity: when an individual has 5 components and removing any one of them reduces the score it can get to 0, it flags it.
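The genetic operations mentioned above can be sketched on plain strings like this (a hypothetical illustration, not the applet’s actual Java code; the four-symbol alphabet `ALRS` is my assumption):

```python
import random

ALPHABET = "ALRS"  # assumed symbol set for strategy strings

def point_mutation(genome: str) -> str:
    """Replace one randomly chosen symbol with a random one (possibly the same)."""
    if not genome:
        return genome
    i = random.randrange(len(genome))
    return genome[:i] + random.choice(ALPHABET) + genome[i + 1:]

def insertion(genome: str) -> str:
    """Insert a random symbol at a random position."""
    i = random.randrange(len(genome) + 1)
    return genome[:i] + random.choice(ALPHABET) + genome[i:]

def deletion(genome: str) -> str:
    """Delete one randomly chosen symbol."""
    if not genome:
        return genome
    i = random.randrange(len(genome))
    return genome[:i] + genome[i + 1:]

def recombination(a: str, b: str) -> str:
    """Single-point crossover between two parent genomes."""
    i = random.randrange(min(len(a), len(b)) + 1)
    return a[:i] + b[i:]
```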
You can see the scores steadily improve, and you can also see irreducible complexity evolve.
Turn it loose and let it run in the background. It works fairly quickly.
See, the thing is that the original creationist’s simulation wanted to see entire, complete words turn into other, complete words. Applied to evolution, it basically demonstrated that dogs don’t turn into cats, and we all know that’s what evolution really says.
I watched the simulation for 400-some-odd generations. Why was the frequency of S so high? Other than specifically malicious mutations, like an ‘L’ right in front of the end gate, S is the worst trait the organism could have, so with all that selective pressure for high point totals, why would there still be so many ‘S’s after that many generations?
Yeah, scratch that, forget I said anything…despite staring at it for about 10 minutes, I thought that s was subtract, not split, just long enough to post a stupid question.
Could someone figure out mathematically what is the optimum strategy in those circumstances? Is it even possible to figure it out? But if it is, does the evolution of strategies tend to the “ultimate” strategy? Any ideas?
There are a few obvious strategies that the structures seem to have trouble producing. A straight downward line of ‘A’s before the first ‘S’s would seem to be obvious, as is putting an ‘A’ right before the exit. Currently the highest score I’ve found is 29770, yet the pattern hasn’t yet found such obvious ways of increasing its score.
Scratch that – the stable 29770 pattern which had lasted for more than a hundred generations just gave way to 34640. And still rising…
Yeah, I was thinking about the optimum too. It’d be nice to have another version that let you manipulate your own boards. I think the best would be something like you said: a long string of As (zigzagging from side to side with Ls and Rs) before some optimal number of Ss, with routing to get all, or most, of those splits into the gate.
Eric Davison says
This is actually pretty fun. I’m sitting here with 5 of them running simultaneously, and every once in a while I’ll restart one – they run into dead ends (one got stuck at 31, without threshold, and didn’t go up at all for 4000 generations), and some go extinct (generation after generation of 0 fitness level).
It’d be nice to have another version that let you manipulate your own boards.
Would that be the ID version of the simulator?
I gave up. Dozens of generations yet not one asteroid collision.
Eric Davison says
The other thing I noticed is that non-threshold setups tend to evolve more splits while threshold setups favor adds – which makes sense, since splits give you 15 every time in non-threshold but they might give you -10 with the threshold turned on.
Try going a couple of hundred generations with the threshold button off, then turn it on. You will see extinction in a very short period of time.
There is your asteroid.
Yeah, I like toggling the threshold function, as it puts on, or removes, selective pressure. If you let it evolve without the threshold, you’ll, like Eric said, get many Ss and no, or few, As. If you turn on the threshold after a few hundred generations, you’ll start to force As into the organism. I’d really like to show this to some of my non-molecular-geneticist/computer-scientist friends and see how little explaining I have to do for them to reach the same conclusions.
I hate to only appear when it’s a post related to something I can generally speak about with some level of elegance, or if I need help, but I saw a question on a website, and I’m having trouble seeking out a rebuttal for it. I know there’s something wrong with it, but I can’t figure out what.
why does the evolutionary theory require a shorter half-life for Carbon-14 than is measured today and yet insists on the “cold hard fact” behind radioisotope dating?
There’s a good paper on this at: http://www.grisda.org/origins/24050.htm it obviously is biased towards creation, but it does _seem_ to present the other views accurately. The curiosity as far as I’m concerned is that the “uniformitarian” model of carbon-14 dating uses a half-life of 5568 years where the correct half-life is 5730. Seems kinda strange, IMO.
Like I said, I hate to only de-lurk when I’m stumped, but I can’t let this go unanswered.
Spam Sink says
There is an unpleasant flaw in this simulation. What is shown as an I.C. individual is usually a result of a fitness-reducing mutation.
I’m no expert but since I don’t see an answer to your question yet I’ll give it a try.
Who says “evolutionary theory requires a shorter half-life for Carbon-14”? As I understand it, radio-carbon dating is only useful for a range of about 50,000 years. For times prior to that, long-lived isotopes are used.
Maybe there was an error (3%) in the C-14 half-life value used in some estimates of the ages of recent fossils, and maybe those estimates were off by 5000 years or so – why would that be a big deal?
Or maybe there’s a fudge factor to account for different carbon-14 concentrations in the atmosphere in pre-historic times (just guessing there).
I asume you tried “http://www.talkorigins.org/origins/faqs-index.html”?
I did preview that comment #16, but somehow missed “asume” [assume], and meant to type “longer-lived” instead of “long-lived”. Sorry.
PZ Myers says
The half-life of C-14 was adjusted to the more accurate value of 5730±40 years in the 1950s or 60s (the “Cambridge half-life”). Older date measurements used the 5568 figure (the “Libby half-life”) and need to be corrected. So it’s really just a historical difference in the value of the half-life.
I think that about the time the switch was made, there were also some more sophisticated calibration adjustments made (atmospheric C-14 isn’t constant, so the measured values have to be calibrated against known data sets, like tree ring data).
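Since a radiocarbon age is directly proportional to the half-life used (age = half-life × log₂(original/remaining C-14)), a date computed with the old Libby value can be rescaled exactly. A minimal sketch (the function name is mine):

```python
LIBBY_HALF_LIFE = 5568.0      # years, historical value
CAMBRIDGE_HALF_LIFE = 5730.0  # years, modern value

def libby_to_cambridge(age_years: float) -> float:
    """Rescale a Libby-based radiocarbon age to the Cambridge half-life.

    Exact because age = half_life * log2(N0/N) is proportional to the
    half-life; the logarithm term is the same either way.
    """
    return age_years * CAMBRIDGE_HALF_LIFE / LIBBY_HALF_LIFE

# Eight half-lives: 8 * 5568 = 44544 years rescales to 8 * 5730 = 45840 years.
print(libby_to_cambridge(44544.0))  # 45840.0
```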
There’s a gentleman on a blog site I write on who says he’ll take on anyone regarding evolution. As PZ has shown, it can take many people to correct the mistakes of just one. The only thing I’ve been able to locate at TalkOrigins relates to a scientist named Libby; the number he cites as being incorrect apparently hasn’t been used in over 50 years, but I want to have a response more substantial than “you’re wrong.”
sez Spam Sink: “There is an unpleasant flaw in this simulation. What is shown as an I.C. individual is usually a result of a fitness-reducing mutation.”
Why is this a flaw? The ID argument is that evolution *cannot* produce irreducibly complex systems *at all*, by *any* means, no way nuh-uh, never nein nyet, *end of discussion*. The simulation shows that evolutionary processes *can* generate irreducibly complex systems; the nature of the mutations by which evolutionary processes do this thing does *not* alter the fact that evolutionary processes *do* this thing. And seeing as how IC systems are ultimately brittle and non-robust — by definition, an IC system breaks if *any* of its pieces stops working — it seems to me that IC, itself, is a reduction in fitness, hence *any* mutation that produces IC is a fitness-reducing mutation.
Near the end of the range where C-14 dating is used, at eight half-lives, you would date a fossil at 44544 years with the earlier estimate, and 45840 years with the current (mean) estimate. I think the burden of proof is on the creationist to show how this brings the ToE to its knees.
(Thanks to PZ for the C-14 information and for pointing us to the ICE simulation.)
I feel stupid asking, but I feel I must. Does the time change due to the fact that there’s less carbon?
Thanks again, by the way. :-)
Re comment #22, if you’re asking me, I think the process is:
1) Measure the ratio of C-14 to C-12 in the fossil which is to be dated.
2) Compare that to the ratio in living organisms (in which C-14 is replenished from the atmosphere), to determine how much C-14 has decayed.
3) Estimate the age based on the number of half-lives it would take to reduce the original amount of C-14 to the amount remaining. For example, if 7/8 of the C-14 has decayed, leaving 1/8, that is three half-lives, so the age is three times the half-life.
The process can be further refined to take into account how the concentration of C-14 in the atmosphere has changed over time (as PZ mentioned above), and to account for naturally-occurring radiation (from surrounding rocks).
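The three steps above boil down to a single logarithm; a minimal sketch (Cambridge half-life assumed, function name mine):

```python
import math

CAMBRIDGE_HALF_LIFE = 5730.0  # years

def c14_age(fraction_remaining: float,
            half_life: float = CAMBRIDGE_HALF_LIFE) -> float:
    """Age in years from the fraction of the original C-14 still present.

    Elapsed half-lives = log2(1 / fraction_remaining); age is that
    count times the half-life.
    """
    return half_life * math.log2(1.0 / fraction_remaining)

# 7/8 of the C-14 decayed, 1/8 remaining -> three half-lives:
print(c14_age(1 / 8))  # 17190.0
```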
There is more and better information on radio-isotope dating at Talk Origins.
The roughly 200-year difference is due to a quirk of history: an older value that was less accurate than what we now know to be the proper value. But if you figure that a 3.5% error really means the 14-billion-year-old universe is 10,000 years old… you are crazy. That’s a 99.9999% error.
Whereas the scientists have switched the data over to an improved, more accurate value, the religionists are still off by a factor of a million.
What’s wrong with calling a spade a spade? Their comment is completely inaccurate and misrepresents the truth. There isn’t a shifting carbon-14 value; there is improving accuracy. What used to be off by 3.5% is now off by, give or take, 0.5%.
Spam Sink: “There is an unpleasant flaw in this simulation. What is shown as an I.C. individual is usually a result of a fitness-reducing mutation.”
This is not a flaw. A mutation could easily reduce fitness and still result in a product where removing any part reduces fitness to zero. Oddly enough, what is wrong here is your understanding (you’re thinking like a creationist). You assume that the original form was also I.C. This does not need to be the case. The idea that if an organism is in an I.C. state, and fitness had to drop to get there, then the original must have been I.C. too — that is wrong. The original could very well have had a removable part whose loss was neutral then; after that fitness-reducing change, losing the other parts is no longer neutral.
Let me give you an example. An arch is irreducibly complex: you may not remove any stone without the thing falling down. However, let’s look at the state prior to this one (with the understanding that we are selecting against the arch’s chance of falling down). We now have an arch with full scaffolding. Is this irreducibly complex? No. We may remove the top brick with no real harm; with the scaffolding there it certainly won’t fall down, with or without that brick. Removing the scaffolding is a fitness-reducing change (the arch is a lot more likely to fall without it). However, without the scaffolding, that top brick becomes extremely critical.
Removing one part can change the fitness of the other parts.
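The arch-and-scaffold argument can be captured in a toy model (entirely made-up numbers and function names, just to make the logic concrete):

```python
def stands(stones_present: int, total_stones: int, scaffold: bool) -> float:
    """Toy fitness: probability the structure stays up (invented values)."""
    if scaffold:
        return 1.0          # scaffolding holds everything up regardless
    if stones_present == total_stones:
        return 0.9          # a finished arch mostly holds itself
    return 0.0              # incomplete arch, no scaffold: collapse

def is_ic(stones: int, scaffold: bool) -> bool:
    """IC test: does removing any single part drop fitness to zero?"""
    if stands(stones, stones, scaffold) == 0.0:
        return False        # not functional to begin with
    removals = [stands(stones - 1, stones, scaffold)]   # knock out one stone
    if scaffold:
        removals.append(stands(stones, stones, False))  # knock out the scaffold
    return all(f == 0.0 for f in removals)

print(is_ic(5, scaffold=True))   # False: the top stone is still removable
print(is_ic(5, scaffold=False))  # True: every remaining part is now essential
```

Note that removing the scaffold drops fitness from 1.0 to 0.9 (a fitness-reducing change), and it is exactly that step which leaves the system irreducibly complex.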
“Yes Virginia, you can evolve a bacterial flagellum!”
Bronze Dog says
…Where’d the last four hours go?
Anyway, yeah, I noticed some of the previously pointed-out trends: S’s are favored early on, and IC examples tended to be lower fitness. I got a few extinctions, and I was kind of hoping that one critter would be able to rise up, but it doesn’t work out that way.
steve s says
This is really cool. I tend to have to see visual models to understand things. This is the best model I’ve come across to help me understand evolution and all that jazz.
I never studied biology in school. I’m not sure how I got away with that. I’m beginning to really understand it now. This model is just perfect.
I’m also not much of a mathematician, but I wonder what the total number of possible strategies would be here?
This is just too much.
Eric Davison says
I noticed that most of my IC “organisms” were also a reduction in fitness and very quickly disappeared – it usually just paused once and then kept going, which indicated that the IC individual had not been selected.
However, in one of my simulations, it paused over and over and over again, and I figured out that the IC individual had taken over the population. Not sure exactly what mutation caused that one (shoulda screen-shotted it perhaps), but it was obvious that the IC individual was an improvement in fitness.
“There are a few obvious strategies that the structures seem to have trouble producing. A straight downward line of ‘A’s before the first ‘S’s would seem to be obvious, as is putting an ‘A’ right before the exit.”
I’ve got one which does exactly that. What’s interesting is how it got there – first, within a couple of generations, it more than halved the genetic information. Then it slowly began to add more, which eventually ended in a straight line, with an S and an L at the end.
Apparently it is easier to get to a simpler solution when starting from scratch…
Lee Graham says
I’m the fellow who programmed that thing :) Thanks for the comments!
I also noticed the “flaw” about low-fitness IC solutions, and while it nevertheless shows IC systems being produced by a process of Darwinian evolution, it is just that tiny bit less impressive because of it. If you keep pressing “play” when it flags IC solutions, it will sometimes go on to produce IC systems that dominate the population, but not that often. I have a more beefed-up version of the program, written in C++, with a larger population, a larger board, and with ever-so-slightly different rules. It produces IC systems that almost always become dominant in the population. I’m working on publishing something on that right now, and I’ll eventually add info about it to the site too.
As for changes to the program, I agree that it would be nice to be able to do the ID version (try your own solutions and see how they do). I also think it would be nice to see the irreducible complexity checking routine animated as well (visually see all the knock-out mutants fail to attain any points), maybe via a little “prove it” button that you can press or ignore depending on the amount of detail you’re interested in. At the moment the program checks for IC in the background and simply lets you know when it finds an IC system. I’m far too busy in the near future to get those changes done, but the code is there on the web if anyone has the know-how, the time, the patience to read my code, and the interest. If you’re a programmer keen to enhance my program, go for it, and let me know if you make an improved version.
One other observation that has been made, and which I don’t describe on the site (yet) is that population initialization is not entirely random. You’ll notice that the placement of boxes is restricted to a certain area of the board. Also, each random individual is really the best of 6 randomly-generated individuals (you can’t see that unless you peek into the code). These things were done just to make sure that the initial population has at least a few viable individuals. I’d like to point out that this isn’t really a cheat or a flaw. The same could be accomplished by just having a larger population. I had to compromise because I wanted a small enough population to fit the nature of the applet. A really large population in the applet just wouldn’t suit its purposes as well as the small 20-individual population that fits on the screen.
Bronze Dog says
Well, I’m looking forward to a beefier version.
One annoyance that occurred just on my most recent run: Got stuck at equilibrium because apparently I hit a maximum genome size: Adding one more A at the top center would have caused a great increase in the score after getting multiplied, but it never happened.
Someone already gave an answer to this, but I want to add something. The evaluation system **is** flawed. Nature itself doesn’t “evaluate” the fitness of a gene sequence based on whether *everything* in the genome works; it does so based on what is *actually used*. Having run this thing for a while, it seems to take unused “genes” into account, or, at a bare minimum, may be checking for IC by looking at *all* sets of possible instructions, not just the ones that execute.
Put simply, there is junk DNA in these things, which could be considered junk because the “coding” places that DNA where it will never be used at all, short of some major mutations (which, in this system, can never happen, due to how fitness is evaluated). Yet the IC check and the fitness calculation *both* seem to take this “junk” into consideration. If we did that with real genomes, nothing would be IC in the real world either.
Oh, and it also shows another flaw, which is only partly a result of how evaluation takes place. Since there is only one “output”, and no variation in the system, eventually the only “fit” candidates will all look alike, since novel mutations are pretty much impossible to retain long enough to create “new” solutions.
This is, after all, a “simple” example of genetic algorithms. The Avida people figured out the flaw with lack of variation early on. Too little variation and you get *one* universal species, with no uniqueness. Too much and maybe you get nothing at all, or, again, one species. There is a range in which the complexity of the system creates both sufficient instability to promote genetic variation *and* sufficient isolation to keep the results from crossbreeding into a *super species* that uses all resources near-optimally, shoving everyone else out of any possible fitness test you could come up with.
Hmm. However, I just noticed that since it’s not possible to “keep” a working copy of an existing species, this simulation can also, given enough time, develop a “fatal” genetic flaw. I’m at generation 650 or so and everything has a fitness of 0, due to, I presume, the simple accident of all of them developing a similar, but fatal, flaw at the same time. I wasn’t watching, so I have no clue what happened to cause it.
Yeah, really you need an increased genome size and to avoid one dominant species. Oddly enough, the Avida team managed to speed up evolution by about a factor of 6 by introducing better competition and multiple species. As for the ID-side idea, it almost never works. I deliberately tried to force a few great programs onto a genetic program I had written. Usually there were heavy selection pressures I didn’t consider, which resulted in my well-coded organism being brushed aside. I didn’t even get one gene in the damned thing to work.
Genome size limit = bad.
Gene pool size limit = bad. (this makes a huge difference)
Limited number of species = bad.
ID = doesn’t work.
Having the selection pressure from nature outweigh the game = bad.
Difficulty making a mutation result in a phenotype change = bad.
My genetic bot plays poker. It’s a little less deterministic, but nature isn’t deterministic either. With a gene pool of 10, I would kill off the player who went broke and make a sexual offspring from the first- and second-best players (mutating along the way, of course). After an initial setup which I thought was too slow (and which I am going to go back to), I made it just play from the starter cards (as a test of concepts): if it has those cards, it raises, calls, or folds every time. I wanted it to go faster, so I gave it a couple of genes to raise AA and raise KK. I came back in the morning to find my raise-AA and raise-KK genes dead and a fold-AA gene in very high frequency. I looked around, and it turned out every raise gene was dead. Apparently the whole you-die-if-you-lose thing was enough to breed mediocrity.
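The selection scheme described above (worst individual dies, the two best breed with mutation) can be sketched generically like this (a toy sketch, not the actual poker bot; all names are mine):

```python
import random

def evolve_step(pool, fitness, crossover, mutate):
    """One generation: rank the pool, replace the worst individual
    with a mutated offspring of the two best."""
    ranked = sorted(pool, key=fitness, reverse=True)
    child = mutate(crossover(ranked[0], ranked[1]))
    ranked[-1] = child  # the "broke" player dies and is replaced
    return ranked

# Toy usage: genomes are plain numbers, fitness is the value itself.
random.seed(1)
pool = [3, 7, 1, 9, 5]
pool = evolve_step(
    pool,
    fitness=lambda g: g,
    crossover=lambda a, b: (a + b) / 2,          # blend the two best
    mutate=lambda g: g + random.uniform(-1, 1),  # small random tweak
)
```

The failure mode described above follows from this scheme: if the fitness test punishes variance more than it rewards skill, the child of two cautious parents is cautious too, and aggression breeds itself out of the pool.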
Lee Graham says
I chose the length limit only to make the chromosomes fit on the screen.
You may be right. It’s been a couple years since I wrote it. I’ll go back into the code sometime and have a look. If the IC check doesn’t concern itself just with the pieces on the board, I’ll make sure it does. Thanks.
You’re right, but it was never my purpose to do what was done with Avida. I only want a process of evolution producing irreducible complexity. I’m not concerned about variation, efficiency, speciation, stability, population isolation, optimality, or any of those things.
Indeed. That’s a good feature request: logging, saving, and reloading. If anyone would like to add that feature, be my guest!
Again, my goals aren’t those of Adami’s Avida.
The folks at Chez Dembski have discovered this simulation, and are discussing it.
My favorite comment is this one:
How silly of me not to have noticed the glaring differences! Behe defines IC as a system that requires all its parts to function, while the IC simulator… um… er… well, it’s just not the same thing, okay?!
And by the way, what the program simulates isn’t evolution, it’s just allele frequency change in a population over generations!
arensb, this is the same as IC. The removal of one “part” of an IC individual in this simulation causes the individual to have 0 fitness, aka to “cease functioning”.
Kagehi, an IC individual could not have a junk gene because the removal of that gene would not cause the fitness to fall to 0 (if it did it would be a useful gene). So you don’t need to worry about junk genes being evaluated or not. If an IC individual is found, it won’t have any junk genes to evaluate!
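The knock-out test being discussed here can be sketched like this (a sketch of the idea, not Lee’s actual Java routine; the `board`/`fitness` representation is my assumption):

```python
def is_irreducibly_complex(board, fitness):
    """Flag an individual as IC when it scores points but every
    single-piece deletion scores zero ("ceases functioning")."""
    if fitness(board) == 0:
        return False  # a dead individual can't be IC
    for i in range(len(board)):
        knockout = board[:i] + board[i + 1:]
        if fitness(knockout) != 0:
            return False  # this piece is dispensable (a "junk" gene)
    return True

# Toy fitness: a board scores only if it contains both an 'A' and an 'S'.
toy = lambda b: 10 if "A" in b and "S" in b else 0
print(is_irreducibly_complex(["A", "S"], toy))       # True
print(is_irreducibly_complex(["A", "S", "S"], toy))  # False
```

This also illustrates the point about junk genes: the extra ‘S’ in the second board is exactly the kind of dispensable piece whose presence prevents the IC flag.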