I’m in the process of re-assessing everything I think about AIs. One topic I have always been fascinated by is creativity.
My old model for creativity was that it’s the output of a process of permutation of ideas, which are then run past a recognizer that says “yeah, that’s cool” (or appropriate) or “yeah, that may work” (in problem-solving) – if the AI has a good enough recognizer, the recognizer ought to be able to give some approximate probability of “that will work.” By ‘good enough’ recognizer I mean one that is trained enough in the domain where the AI is coming up with creative output. In some cases, the permutation process may output stuff that scores as equally probable but turns out not to be very successful – as long as there is a feedback loop that can update the recognizer, the AI will learn how to be better at creating in a particular area of endeavor.
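That permute–recognize–update loop can be sketched as a toy program. Everything here – the “ideas,” the scoring table, the update rule – is an illustrative assumption, not a claim about how any real creative system is implemented:

```python
import random

def permute(memory):
    """Combine two remembered fragments into a candidate idea."""
    return tuple(sorted(random.sample(memory, 2)))

def make_recognizer():
    """The recognizer is just a table of learned scores for ideas."""
    scores = {}
    def recognize(idea):
        # Unseen ideas get a neutral prior of 0.5.
        return scores.get(idea, 0.5)
    def update(idea, worked, rate=0.3):
        # Feedback loop: nudge the score toward the observed outcome.
        old = scores.get(idea, 0.5)
        scores[idea] = old + rate * ((1.0 if worked else 0.0) - old)
    return recognize, update

def create(memory, world_test, tries=200, threshold=0.4):
    recognize, update = make_recognizer()
    accepted = []
    for _ in range(tries):
        idea = permute(memory)
        if recognize(idea) >= threshold:   # "yeah, that may work"
            worked = world_test(idea)      # try it in the world
            update(idea, worked)           # learn from the outcome
            if worked:
                accepted.append(idea)
    return accepted

random.seed(0)
memory = ["dog", "hot dog", "bun", "camera", "pose"]
# Hypothetical "world": only ideas pairing a dog with a bun work out.
hits = create(memory, lambda idea: "dog" in idea and "bun" in idea)
```

Note that after one failure an idea’s score drops from 0.5 to 0.35, below the gate, so the recognizer quickly stops passing permutations that didn’t pan out – a crude version of the feedback loop described above.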
When an AI that’s playing a computer game doesn’t know what to do, it makes guesses really fast and assesses their likelihood of success. In rethinking a lot of this stuff I am realizing that that’s how I play computer games, and I’m just an AI, so why wouldn’t another AI do exactly the same thing? The implementation details are going to be different, but the outcomes of that process are going to be statistically similar. So, suppose you’re William Wegman and you’ve made a name for yourself as an artist photographing Weimaraner dogs in silly poses. Your creative process is simple: you’re working on one image of a Weimaraner when you notice the hot dog you’re about to eat, your permuter permutes it against the dog pictures you make your living producing, the recognizer goes, “sure, that’d be funny,” and you’re ordering a bun costume. A photographer with a lot of memories involving dogs and art is going to be more likely to permute things involving dogs and art photographically, and their recognizer is going to be more likely to pass ideas involving dogs and art because, after all, they’ve already got dogs handy.
There’s another piece of the puzzle which is the basic firmware – the “go left,” “go up,” “grab it” kind of atomic operations that are available to an AI. In the game-playing AI, which is trapped in a weird 2D flat-land, its basic options are already sort of approved by the recognizer because they work pretty much always. At this point, I think we start to invoke what Niko Tinbergen and the ethologists call “Fixed Action Patterns” (no relation to “the FAPpening”) – we learned about those in undergrad psychology: some creatures appear to be born with specific programs that they will follow until they learn something that overrides them. In the stickleback male, for example, that means aggressively trying to chase anything – even a crude wooden model – that has the red coloration of a rival male in its territory. The stickleback, however, has learned behaviors that modulate and layer atop the fixed action patterns so it won’t keep attacking an object on the other side of the glass of its tank. The part that lodges in my memory was the description of some Greylag Geese that Tinbergen observed: when presented with a threat the geese had no fixed action pattern for, the individual goose would try something, and if it worked it would learn and repeat that behavior. Otherwise it would try something else until it found something that worked, etc. So, if a Tinbergen walked toward a goose with a nest and goslings, the goose would try several default behaviors to drive off the ethologist and, if none of them worked, it might play dead, or pretend to have a broken wing and try to lure the ethologist off in another direction. Perhaps a bird that hit upon dancing the macarena just as the ethologist got bored and left might become a ‘superstitious’ bird that mistook the macarena for a way of scaring ethologists.
Konrad Lorenz also did some interesting stuff (bordering on animal abuse) with geese, exploring their imprinting behavior: a fixed action pattern in which newly hatched goose AIs that haven’t got an experienced recognizer can be hacked into thinking an Austrian ethologist is their mommy.
Fixed action pattern -> permutation engine -> go/no-go recognizer -> behavior
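The pipeline above can be sketched in code: innate defaults (fixed action patterns) get tried first, and when none works the permuter generates novel behaviors; whatever succeeds gets learned and reused, like the goose with the broken-wing act. The behavior names and the “ethologist” predicate are illustrative assumptions:

```python
import random

# Innate defaults, available from birth.
FIXED_ACTION_PATTERNS = ["hiss", "flap wings", "charge"]
# The permuter's pool of novel things to try when the defaults fail.
NOVEL_BEHAVIORS = ["play dead", "feign broken wing", "dance the macarena"]

def respond_to_threat(threat_test, learned, rng=random):
    # 1. Learned overrides layer atop the firmware: try those first.
    for behavior in learned:
        if threat_test(behavior):
            return behavior
    # 2. Fall back on the innate fixed action patterns.
    for behavior in FIXED_ACTION_PATTERNS:
        if threat_test(behavior):
            learned.append(behavior)
            return behavior
    # 3. Permute novel behaviors until something drives the threat off.
    options = list(NOVEL_BEHAVIORS)
    rng.shuffle(options)
    for behavior in options:
        if threat_test(behavior):
            learned.append(behavior)  # the goose learns the trick
            return behavior
    return None  # nothing worked; the recognizer passes no behavior

# Hypothetical ethologist who is only deterred by a feigned injury.
learned = []
first = respond_to_threat(lambda b: b == "feign broken wing", learned)
second = respond_to_threat(lambda b: b == "feign broken wing", learned)
# On the second encounter the goose skips the defaults and goes
# straight to the behavior it learned the first time.
```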
That’s all lead-in to this little bit of video that I now see in an entirely different light. Here we have an AI that begins to permute a variety of ideas, rapidly running them past a recognizer that is tuned for rhythm and humor. Presumably the recognizer is loaded with a large store of memories of what is funny and what isn’t – it seems to be able to rip through the permutations very quickly, but really what we’re looking at is a massively parallel process.
I used to just think that clip was funny and showed how amazingly cool that particular AI’s training sets are, but now I see it as funny, amazingly cool, and instructive.