AIs and Creativity


I’m in the process of re-assessing everything I think about AIs.[stderr] One topic I have always been fascinated by is: creativity.

My old model for creativity was that it’s the output of a process that permutes ideas and then runs them past a recognizer that says “yeah, that’s cool” (or appropriate) or “yeah, that may work” (in problem-solving) – if the AI has a good enough recognizer, the recognizer ought to be able to assign an approximate probability of “that will work.” By ‘good enough’ recognizer I mean one that is trained enough in the domain where the AI is coming up with creative output. Sometimes the permutation process will output stuff that looks just as plausible but doesn’t actually succeed – as long as there is a feedback loop that can update the recognizer, the AI will learn how to be better at creating in a particular area of endeavor.

When an AI that’s playing a computer game doesn’t know what to do, it makes guesses really fast and assesses their likelihood of success. In rethinking a lot of this stuff I am realizing that that’s how I play computer games, and I’m just an AI, so why wouldn’t another AI do exactly the same thing? The implementation details are going to be different but the outcomes of that process are going to be statistically similar. So, if you’re William Wegman and you’ve made a name for yourself as an artist photographing Weimaraner dogs in silly poses, you have a simple creative process: you’re working on one image of a Weimaraner when you notice the hot dog you’re about to eat, your permuter permutes that against the dog pictures you make your living producing, the recognizer goes, “sure, that’d be funny,” and you’re ordering a bun costume. A photographer with a lot of memories involving dogs and art is going to be more likely to permute things involving dogs and art photographically, and their recognizer is going to be more likely to pass ideas involving dogs and art because, after all, you’ve already got dogs handy.

Konrad Lorenz fixed action dances with geese

There’s another piece of the puzzle, which is the basic firmware – the “go left,” “go up,” “grab it” kind of atomic operations that are available to an AI. In the game-playing AI, which is trapped in a weird 2D flat-land, its basic options are already sort of approved by the recognizer because they work pretty much always. At this point, I think we start to invoke what Niko Tinbergen and the ethologists call “Fixed Action Patterns” (no relation to “the FAPpening”) – we learned about those in undergrad psychology: some creatures appear to be born with specific programs that they will follow until they learn something that overrides them. In the male stickleback, for example, that means aggressively trying to chase anything – even a crude wooden model – that has the coloration of a rival stickleback out of its territory. The stickleback, however, has learned behaviors that modulate and layer atop the fixed action patterns, so it won’t keep attacking an object on the other side of the glass of its tank. The part that lodges in my memory is the description of some Greylag Geese that Tinbergen observed: when presented with a threat the geese had no fixed action pattern for, the individual goose would try something, and if it worked it would learn and repeat that behavior. Otherwise it would try something else until it found something that worked, etc. So, if a Tinbergen walked toward a goose with a nest and goslings, the goose would try several default behaviors to drive off the ethologist and, if none of them worked, it might play dead, or pretend to have a broken wing and try to lure the ethologist off in another direction. Perhaps a bird that hit upon dancing the macarena just as the ethologist got bored and left might become a ‘superstitious’ bird that mistook the macarena for a way of scaring off ethologists. Konrad Lorenz also did some interesting stuff (bordering on animal abuse) with geese, exploring their imprinting behavior: a fixed action pattern in which newly hatched goose AIs that haven’t got an experienced recognizer can be hacked into thinking an Austrian ethologist is their mommy.

Fixed action pattern -> permutation engine -> go/no-go recognizer -> behavior
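
If I squint, that pipeline looks like a pretty small program. Here’s a toy sketch of it in Python – every detail (the list of fixed action patterns, the 0.5 go/no-go threshold, the 0.2 learning rate) is invented for illustration, not a claim about how any real brain or ML system is wired:

    import random

    # Toy "firmware": default behaviors that have pretty much always worked.
    FIXED_ACTION_PATTERNS = ["hiss", "charge", "spread wings"]

    def permute(memories):
        """Permutation engine: mash two remembered things into a candidate idea."""
        return random.choice(memories) + " + " + random.choice(memories)

    def recognize(candidate, weights):
        """Go/no-go recognizer: a rough probability of 'yeah, that may work.'"""
        return weights.get(candidate, 0.1)   # unfamiliar ideas start with a low prior

    def behave(memories, weights, threshold=0.5, tries=100):
        """The pipeline: firmware first, then permute until something passes."""
        for fap in FIXED_ACTION_PATTERNS:
            if recognize(fap, weights) >= threshold:
                return fap
        best, best_score = None, -1.0
        for _ in range(tries):
            candidate = permute(memories)
            score = recognize(candidate, weights)
            if score >= threshold:
                return candidate
            if score > best_score:
                best, best_score = candidate, score
        return best   # nothing passed cleanly; go with the least-bad guess

    def learn(behavior, it_worked, weights, rate=0.2):
        """Feedback loop: nudge the recognizer toward whatever seemed to work."""
        old = weights.get(behavior, 0.1)
        weights[behavior] = old + rate * ((1.0 if it_worked else 0.0) - old)

    # A goose meets an ethologist its firmware can't drive off:
    weights = {fap: 0.9 for fap in FIXED_ACTION_PATTERNS}   # firmware is pre-approved
    memories = ["broken wing", "play dead", "macarena", "hiss"]
    for _ in range(20):
        behavior = behave(memories, weights)
        learn(behavior, it_worked=False, weights=weights)   # the ethologist keeps coming
    print(behave(memories, weights))   # by now it's permuting memories, not just hissing

The ‘superstition’ falls out of the same machinery: learn() credits whatever the goose happened to be doing when the threat finally went away, whether or not the macarena had anything to do with it.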

That’s all lead-in to this little bit of video that I now see in an entirely different light. Here we have an AI that begins to permute a variety of ideas, rapidly running them past a recognizer that is tuned for rhythm and humor. Presumably the recognizer is loaded with a great many memories of what is funny and what isn’t – it seems to be able to rip through the permutations very quickly, but really what we’re looking at is a massively parallel process.

I used to just think that clip was funny and showed how amazingly cool that particular AI’s training sets are, but now I see it as funny, amazingly cool, and instructive.

Comments

  1. says

    Anders Kehlet@#1:
    AI generated music:
    http://newatlas.com/ai-pop-music-amper/51018/

    Pretty cool!

    A friend of mine and I were talking about AI after a recent show and were thinking, “if you could get all the lyrics to all the country songs… you could train a neural network…” You’d get some statistical classics like “my dog drove a train over my pickup truck” but Nashville’d love it.

  2. Bill Spight says

    For some reason, when I try to comment on the previous note, I get sent here. This is about “winning” a war.

    If winning a war means defeating the insurgency, please note that the U. S. did not win the Civil War. The insurgency is still alive. It accounts for most terrorist activity in the U. S. today.

  3. Bill Spight says

    The Dylan video reminded me. I was fortunate enough to have breakfast with W. H. Auden. It was in college, and I wasn’t alone – there were about 10 of us students. I got there early and was sitting next to him, though. :)

    One of us asked him what he would advise a young poet. Auden replied, “Play with words.”

  4. says

    For reasons I never have understood, the description from Hitch Hiker’s Guide has always seemed to say a lot about creativity. It feels like what happens in my mind when I am having an idea:

    He picked up from the table a piece of paper and the stub of a pencil. He held one in one hand and the other in the other, and experimented with the different ways of bringing them together. He tried holding the pencil under the paper, then over the paper, then next to the paper. He tried wrapping the paper round the pencil, he tried rubbing the stubby end of the pencil against the paper and he tried rubbing the sharp end of the pencil against the paper. It made a mark, and he was delighted with the discovery, as he was every day. He picked up another piece of paper from the table. This had a crossword on it. He studied it briefly and filled in a couple of clues before losing interest.
    He tried sitting on one of his hands and was intrigued by the feel of the bones of his hip.

  5. Bill Spight says

    “my dog drove a train over my pickup truck”

    “Well I was drunk the day my mama got out of prison,
    And I went to pick her up in the rain.
    But before I could get to the station in my pickup truck,
    She got runned over by a danged ole train.”

    from
    You Never Even Called Me By My Name, sung by David Allan Coe.

  6. brucegee1962 says

    Before you can figure out whether a machine can be considered creative, I think you’d have to put a lot of thought into your definition of creativity. It’s still hard to do better than old Turing, though — we might think of a machine as creative if a trained observer can’t tell the difference between what a computer produces and what, say, a poet or an artist produces. That leads us to other philosophical questions, though, like what the “purpose” of art is. If the purpose of art is to convey emotion, would we say that we have reached the goal of machine creativity if its creation evokes some feeling in the viewer? But someone else might say that isn’t the purpose of art at all.

    Of course there are lots of other problems with the Turing test as well. Observers who understand how AI works can do a better job of recognizing them than someone off the street. Also, forms of art that are more abstract (like modern painting) or phantasmagorical/imagist (some modern poetry) or containing lots of mathematical relationships (like classical music) might be easier for an AI to imitate than something narrative or descriptive, like Tennyson or Andrew Wyeth.

    In training an AI to be creative, the hard part, I would think, would be getting a recognizer that could train the creativity. The recognizer in most games is an easy binary: if you won, keep the strategy you used; if you lost, discard it. That doesn’t work with art.

    I suppose you could try crowd-sourcing the rating of computer artwork over the internet. But individuals’ tastes differ so, would you ever come up with a reasonable algorithm?

  7. grahamjones says

    > In training an AI to be creative, the hard part, I would think, would be getting a recognizer that could train the creativity.

    Agreed. “It’s easier to make a composer than to make an audience.” As someone said a long time ago, probably the 70s or 80s. Wish I could remember who.

    > I suppose you could try crowd-sourcing

    Been done. http://darwintunes.org/ It seems DarwinTunes is to be “retired” soon unless someone falls in love with it.

  8. says

    Bill Spight@#7:
    You Never Even Called Me By My Name, sung by David Allan Coe.

    There’s also Rascal Flatts’ “What happens when you play a country song backwards” song, which is kind of funny.

  9. says

    brucegee1962@#8:
    Before you can figure out whether a machine can be considered creative, I think you’d have to put a lot of thought into your definition of creativity.

    Yeah, I knew one of you’d catch me trying to sneak away without defining any of my terms…
    I can’t, and you know it. Because, if I could, I’d have a go/no-go recognizer that I could use to train against – isn’t that what a definition would be? (In chess or DOTA2 we can define “winning” using the game itself.)

    If we could define “success” as a musician… Oh: we can! It ought to be pretty straightforward to have an AI create music and train based on listener feedback on YouTube or some music-sharing service. Is the “top country songs chart” a good enough definition of ‘success’ for an AI to train against?

    (I have lately been thinking about what would happen if you could spider a web ‘lyrics’ site for all country music lyrics, then train an AI to write country music; there’s a toy sketch of that idea at the end of this comment. Most of the stuff I see about AIs being trained on simple ‘creative’ tasks uses fairly small training sets:
    http://www.slate.com/blogs/future_tense/2017/05/09/an_a_i_created_new_dungeons_and_dragons_spells.html)

    That leads us to other philosophical questions, though, like what the “purpose” of art is. If the purpose of art is to convey emotion, would we say that we have reached the goal of machine creativity if its creation evokes some feeling in the viewer?

    I am very much with you on the question of “what is art?” and the importance of artistry. But here we’re talking about an AI that’s just trying to be popular. That’s easy (or easier) to measure than whether it’s great or not. That gives me an idea for a short story about a programmer who makes an AI artist, and the AI works long and hard and re-paints a Caravaggio and signs it with its own name.

    Observers who understand how AI works can do a better job of recognizing them than someone off the street.

    Yes, that brings up the issue I was hung up on before, which is that for some tasks an AI needs to train itself against an expert that is better at the task than it is.

    I suppose you could try crowd-sourcing the rating of computer artwork over the internet. But individuals’ tastes differ so, would you ever come up with a reasonable algorithm?

    I guess you have to narrow the criteria. Or you accept that people’s tastes differ, and when you produce something aimed at (say) the Caravaggio fans, the Jeff Koons/Marcel Duchamp fans will go, “Eh. It’s great but it’s not my thing.”
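
    Here’s the toy sketch mentioned above: you don’t even need a neural network to get the statistical classics, because a word-level Markov chain over the scraped lyrics will cough them up. Everything here is hypothetical – lyrics.txt stands in for whatever the spider collected, one lyric line per line of the file. It won’t rhyme or scan, but it should land somewhere in “my dog drove a train over my pickup truck” territory:

        import random
        from collections import defaultdict

        def train(lines, order=2):
            """Word-level Markov model: (previous words) -> possible next words."""
            model = defaultdict(list)
            for line in lines:
                words = ["<s>"] * order + line.split() + ["</s>"]
                for i in range(len(words) - order):
                    model[tuple(words[i:i + order])].append(words[i + order])
            return model

        def generate(model, order=2, max_words=20):
            """Walk the chain from the start state until it emits an end-of-line marker."""
            state, out = ("<s>",) * order, []
            while len(out) < max_words:
                nxt = random.choice(model.get(state, ["</s>"]))
                if nxt == "</s>":
                    break
                out.append(nxt)
                state = state[1:] + (nxt,)
            return " ".join(out)

        # lyrics.txt is hypothetical: one scraped lyric line per line of the file.
        with open("lyrics.txt") as f:
            model = train([line.strip() for line in f if line.strip()])
        for _ in range(4):
            print(generate(model))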

  10. says

    grahamjones@#9:
    Agreed. “It’s easier to make a composer than to make an audience.” As someone said a long time ago, probably the 70s or 80s. Wish I could remember who.

    That’s great. It sounds like David Byrne. (Let me put a plug in for some of Byrne’s writings about music; he really, really thought hard about what he was doing with the Talking Heads. That’s why it seemed so effortless.)

    Been done. http://darwintunes.org/ It seems DarwinTunes is to be “retired” soon unless someone falls in love with it.

    There’s a thing that crops up here: in order for an AI to train against itself, it needs clearly defined rules (e.g., Go) and then it can rip away at whatever clock speed and parallelism you can provide. When humans are involved, suddenly you slam down to human speed. In principle an AI could learn how to win on ‘America’s Got Talent’ if it could try 3,000,000 ideas – but the humans involved wouldn’t sit for it and don’t operate that fast.

  11. says

    Permutation can even make you a creative genius, according to Richard Feynman (as reported by Gian-Carlo Rota):

    Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: “How did he do it? He must be a genius!”

    As a researcher myself, I believe there is more than a little truth to this. The opposite works too: there are a few tools I know how to use well, so I try them on any problem I encounter. Sometimes it works, and presto!, I got me a new paper :-)

  12. says

    Before you can figure out whether a machine can be considered creative, I think you’d have to put a lot of thought into your definition of creativity.

    Yeah, I knew one of you’d catch me trying to sneak away without defining any of my terms…
    I can’t, and you know it. Because, if I could, I’d have a go/no-go recognizer that I could use to train against – isn’t that what a definition would be? (In chess or DOTA2 we can define “winning” using the game itself.)

    … Did you just find an equivalence between “what is art” and the halting problem?

  13. chigau (違う) says

    I learned about Konrad Lorenz and the Geese as an undergrad in the 1970s.
    But apparently we didn’t have photography then because They™ never showed us that there photo. It is awesome.
    .
    Bob Dylan … playful??‽
    Weird.
    Is that a Bedford van?