Python Picture Evolution part 2, finally


So I’ve finally gotten around to working on my picture evolution project again (part 1 is here). You can feed it a sample image, tweak a bunch of parameters (at the top of the file), then sit back and watch as the fitness improves sharply at first and then plateaus. To recap, my initial hypothesis for this model is that, given a population of digital “creatures” with undirected mutation, and selection criteria based on how closely each creature resembles a target image (the “environment”), it would take very little time to reach something closely approximating the original image, even if the environment and selection criteria were modeled more closely on reality.

My first version duplicated the EvoLisa experiment pretty closely, and I found you could get some pretty high-fidelity representations of the target image by keeping only those mutations that were “advantageous” (i.e. only those that produced a marked improvement) and always throwing out the disadvantageous ones.
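That part-1 selection rule is plain hill climbing: mutate, score, and keep the child only if it scores strictly better. Here’s a minimal sketch of that loop; the `fitness` and `mutate` functions below are toy stand-ins working on a list of numbers, not the project’s actual image-based code:

```python
import random

TARGET = [10, 20, 30, 40]  # stand-in "environment"; the real target is an image

def fitness(candidate):
    # Lower is better: summed distance from the target.
    return sum(abs(c - t) for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Undirected mutation: nudge one randomly chosen gene either way.
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def hill_climb(start, steps=5000):
    best, best_fit = start, fitness(start)
    for _ in range(steps):
        child = mutate(best)
        child_fit = fitness(child)
        if child_fit < best_fit:   # keep only advantageous mutations
            best, best_fit = child, child_fit
    return best, best_fit
```

The real version mutates polygon colors and vertices and scores the rendered frame against the target picture, but the accept-only-improvements loop has the same shape.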

And now, a mere four months after the blog fad has faded completely, I’ve come out with my next release. This newer version sports a configurable population size, sexual reproduction, variable mutation rates, and selection criteria that, while still heavily favoring the better-fit creatures, allow any creature, no matter how well adapted, to die: either by being spotted, however unlikely that is, or through sheer chance. It also outputs some extremely verbose debugging info if you choose, and even dumps the most relevant stats to a .csv file so you can play with them in OpenOffice Calc (or Excel, for you Microsofties).
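The looser selection rule and the sexual reproduction step can be sketched like this. Note that `SPOT_CHANCE` and the linear fitness weighting are illustrative values I’ve made up for the sketch, not the parameters the project actually uses:

```python
import random

SPOT_CHANCE = 0.02  # flat chance of being "spotted", regardless of fitness

def survives(fit, max_fit, rng=random):
    # fit is in [0, max_fit], higher is better. Survival is weighted
    # toward fitter creatures, but nobody is guaranteed to live.
    if rng.random() < SPOT_CHANCE:
        return False                      # bad luck strikes even the fittest
    return rng.random() < fit / max_fit   # fitness-proportional survival

def crossover(mom, dad, rng=random):
    # Sexual reproduction as one-point crossover: the child takes a
    # prefix of one parent's genes and the suffix of the other's.
    cut = rng.randrange(1, len(mom))
    return mom[:cut] + dad[cut:]
```

The key property is that even a creature at maximum fitness only survives with probability just under 1, so a long run can still lose its best individual to chance.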

In my trial runs, I have found that the randomized initial population, which bears absolutely no resemblance to the target picture, very rapidly selects for some fair-to-middling features (the idea being that the really obviously wrong creatures would stick out like a sore thumb and get eaten by predators almost immediately). Most of the population ends up with the background and foreground colors mostly correct, but almost none of the shape, and very little chance of any of the fine details. Once it reaches this state of equilibrium, the average fitness stays roughly flat for a while; how long, I can’t say, since every test I run on this laptop takes about a second to process each generation, which is absolutely abysmal, and which I suspect is almost entirely due to my relying on Pygame for the image processing.
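The fitness score driving all of this is just a per-pixel distance between a creature’s rendered image and the target. The project computes it from Pygame surfaces; here is the same idea as a pure-Python sketch, treating an image as a flat list of (r, g, b) tuples:

```python
def image_fitness(candidate, target):
    # Summed per-channel absolute difference across every pixel.
    # 0 means a perfect match; larger means a worse fit.
    return sum(abs(cc - tc)
               for cpix, tpix in zip(candidate, target)
               for cc, tc in zip(cpix, tpix))
```

With a score like this, “colors mostly correct, shape mostly wrong” is exactly the cheap win you’d expect first: getting large flat regions near the right color removes most of the error long before any fine detail does.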

I’m curious to see whether it’s possible to break out of this equilibrium and start another trend toward improvement under these conditions, given that even the original EvoLisa project took roughly 900,000 generations to come up with its final product. However, I suspect that, because the natural selection criteria are not terribly ruthless and never vary from generation to generation, any such break in the punctuated equilibrium would take far more generations than I’ve managed to get out of this laptop. The highest number of generations I’ve gotten in one run was 6,000, and that run ended in my console with “Killed” — not from anything I could see, and not from any bug I could track down (Python is usually pretty good about declaring when bugs occur).

I’m planning on moving this onto my desktop for another trial run soon, so expect a blog post with more than just talk: actual pictures with actual results, and maybe even a graph or two. As for the code itself, very soon I will be implementing command-line parameters and a config file (so you don’t have to edit the code to change certain values), and hopefully finding a much faster alternative to Pygame for actually generating the images. I’ll probably keep using Pygame for the main loop and display, at least initially, and replace just the generation part; I’m currently looking at the Python Imaging Library and Cairo. Grafting out huge chunks of core code is bound to have deleterious side effects, so version 1.0 may be a while longer before it makes it up here. Besides, I’m working on this in my free time between neglecting my blog, neglecting my friends, neglecting my job, and neglecting my girlfriend, so if it takes time, just chalk it up to neglect all around!

For now here’s version 0.8. Enjoy.
