An AI Technique


One of the challenges in AI is creativity: how do you make an AI that makes new things? For a long time, many humans privileged the human experience (as we do), arguing that creativity was a hard limit for computers.

My take was always that creativity is a matter of depth of experience, exposure to new things, and a good sense of what is interesting and what isn’t. A model for that is that there’s a library of memes, a permuter, and an output filter. The library of memes is “all the stuff we have experienced, seen, or imagined.” The permuter is a process or mechanism for putting them against each other – not strictly random; generally permuters work within a set of classes. For example, a jazz musician who is improvising will be permuting within the class of jazz riffs and won’t bring bronze sculpture into it unless they are somehow able to draw inspiration from that. Then the output filter is the artist’s learned experience of what people will like – including, first and foremost, their own.

If you think about how you create, you may be able to raise this process to consciousness. I have, but it may be illusory – yet when I am problem-solving or creating I distinctly feel a sense of “I’m going to think about options in this area,” and then I wander about within that area and come up with ideas, then ruthlessly quash most of them. For example, if I am designing a knife I start with “what is it for?” and then start thinking of shapes of edge that fit the purpose, and since everything has to be consistent, the shape of the edge drives the rest of the design.
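Here’s what that loop looks like reduced to a toy: a minimal Python sketch with a made-up meme library (knife-design classes, since that’s my example) and a hard-coded taste rule standing in for the output filter. It’s an illustration of the shape of the process, not an implementation of creativity:

import random

# Toy sketch of the model: a library of memes, a permuter that stays inside
# known classes, and an output filter standing in for learned taste.
# The knife-design "memes" and the taste rule are invented for illustration.
meme_library = {
    "purpose": ["kitchen work", "wood carving", "utility carry"],
    "edge_shape": ["drop point", "tanto", "clip point", "wharncliffe"],
    "handle": ["micarta", "stacked leather", "bare tang"],
}

def permuter(library, n=50):
    """Propose candidate combinations from within the classes we already know."""
    for _ in range(n):
        yield {k: random.choice(v) for k, v in library.items()}

def output_filter(candidate):
    """Ruthlessly quash most ideas; a hard-coded rule stands in for experience."""
    # Everything has to be consistent: the purpose drives the edge shape.
    if candidate["purpose"] == "kitchen work" and candidate["edge_shape"] == "tanto":
        return False
    return True

ideas = [c for c in permuter(meme_library) if output_filter(c)]
print(ideas[:3])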

There’s a bit in one of the Douglas Adams Hitchhiker’s… books, where someone is described as taking a pencil and a piece of paper and trying all the things you can do with those two objects, before deciding that it’s an OK system for writing. That’s the model of creativity I’m talking about and – of course – it’s nothing special to humans. It’s a propose->hypothesize->feedback or output loop, and I suspect a dog can do it. When I look at artists I admire (Banksy, or Ai Weiwei, for example) I believe I see the same thing going on: they have huge reservoirs of memes and they shuffle them around subconsciously, and the whole process is very fast. Banksy’s drawing on a deep perception of popular culture and events, so his output is about popular culture and events. It’s an input->filter->output loop and I believe it’s also why most artists develop a distinctive “style,” which is enforced by the rules that evolve into their output filter. Bob Dylan shows a bit of how it’s done:

There’s a photographers’ joke about Cartier-Bresson, which is that the “decisive moment” was when he chose a particular frame from the massive amount of film he shot – not when he pushed the shutter. I guess that, in principle, I am agreeing with that view: what makes a great artist is a great output filter. Think about it this way: some artists, like Caravaggio for example, work in a medium where it is time-and-media expensive to produce an artwork, so they’re going to be fairly conservative about what they decide to tackle. They can’t spend months working on a throwaway, so they figure everything out very carefully before they go ahead with it. Here’s another example, if you’re interested: how Hopper made Nighthawks.

I’m going to bet that most of you didn’t know how much thought and work he put into it. I certainly didn’t, and it’s one of my favorite artworks. I see its simplicity and mistakenly assume that simple-looking is simply made. Yes, I know that’s foolish and completely wrong – after all, I’m the guy who has ground katana blades out of rough-shaped steel, and I know that simple is often more complex than complicated-looking. It’s not always the case, but complicated-looking may be the result of an output filter that’s more permissive, and that’s how we wind up with Bohemian Rhapsody instead of just another rock’n’roll song.

This is where I predict AI is heading: it’s going to eventually begin incorporating massive amounts of data not as generative memes, but as training sets for output filters. For example, if you assumed that instagrammers within a certain age range, of a certain gender, with large numbers of “loves” on their images are “beautiful,” you could use that set of images to train an output filter applied atop a permuter. Or multiple output filters, why not? You could create a million virtual “influencers” and run the output against output filters trained for middle-age beauty, a particular ethnicity, or perhaps the opposite of either. We can’t quite say that such a system “knows” what “beautiful” is, but rather that it’s working on a solid probability model yadda yadda beauty dada dada.
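As a hedged sketch of that idea: suppose you had already reduced your scraped images to feature vectors and kept their “love” counts. The file names, the engagement cutoff, and the labeling rule below are all assumptions of mine, not anyone’s real pipeline – the point is just that training the output filter is an ordinary classifier fit, and you could train several of them, one per model of “beautiful”:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: one feature vector per scraped image, plus its "love" count.
features = np.load("image_features.npy")
loves = np.load("love_counts.npy")

# Call the top quartile of engagement "beautiful" for filter-training purposes.
labels = (loves >= np.quantile(loves, 0.75)).astype(int)

# The output filter itself: a probability model of "would this get loved?"
beauty_filter = LogisticRegression(max_iter=1000).fit(features, labels)

def passes_filter(candidate_features, threshold=0.8):
    """Apply the learned taste to something the permuter produced."""
    return beauty_filter.predict_proba([candidate_features])[0, 1] >= threshold

# Nothing stops you training more of these on differently-labeled subsets
# (different age ranges, different audiences) and running them side by side.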

So here’s what you get when you take a huge load of pictures of people, permute them, and run them across a “does it look like a person?” filter: [thispersondoesnotexist]

Looks like a person, to me.
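That site is, as far as I know, a StyleGAN demo, which is exactly the permuter-versus-“does it look like a person?” filter arrangement, with the two trained against each other. Here’s the bare shape of that loop in PyTorch, run on toy vectors instead of photos – the sizes, the stand-in “real” data, and the step count are placeholders, not anything resembling the real thing:

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # the permuter
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # the "looks like a person?" filter
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, 32)  # stand-in for the huge load of pictures of people

for step in range(200):
    real = real_data[torch.randint(0, len(real_data), (64,))]
    fake = G(torch.randn(64, 16))

    # Teach the filter to accept real samples and reject the permuter's output.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Teach the permuter to produce things the filter can no longer reject.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()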

If you had an outside criterion, let’s say “cute kids,” you could train an opposition filter to match the permuter’s output against “cute kid,” and it’d work roughly in proportion to how many actual “cute kids” were in the training set you built the output filter from.
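In code, the application step is almost insultingly simple – the hard part hides inside the filter’s training set. The generator and cute_kid_filter objects below, and their sample()/score() methods, are hypothetical stand-ins, not any real library’s API:

def generate_cute_kids(generator, cute_kid_filter, n_wanted=10, threshold=0.9):
    """Sift the permuter's output through a trained 'cute kid' opposition filter.

    How well this works is bounded by how many actual cute kids were in the
    filter's training set to begin with: probability in, probability out.
    """
    keepers = []
    while len(keepers) < n_wanted:
        candidate = generator.sample()                     # hypothetical API
        if cute_kid_filter.score(candidate) >= threshold:  # hypothetical API
            keepers.append(candidate)
    return keepers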

I have told several people I used to work with that they need to brand themselves as AI Psychologists. Their pose would be to claim that they can tell why a particular AI chose a particular thing. There will be lawsuits, and in the future AIs will commit crimes because the people who trained them trained them to be potentially criminal; a person who can identify and explain that potentiality is going to make a lot of money. It’ll all be horseshit, of course, but it’s psychology, after all.

So let’s say your objective was to have a thing that generated attractive people. Option 1 is you train it with only attractive people. Option 2 is you train it with multiple output filters that implement different models of “what is attractive?” and run them in parallel. I suspect it doesn’t make much difference; it just depends on whether you want the images in parallel or not. It’s going to be interesting, because people-oriented businesses, e.g. “being a star,” are susceptible to AI replacement, except for the one little problem that people in the real world are going to want to meet their idol, and will be disappointed. An early version of this is Lil Miquela, an “influencer” on Instagram with 3 million followers – and she’s a 3D model (one that is starting to show its age, in my opinion) but she still gets endorsements and someone is making $$$ off her. It was a brilliant idea to keep actual humans out of the loop – a dataset won’t complain, ask for a cut of the action, age, or have a bad hair day.
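Option 2 is easy to sketch, with the caveat that everything named below (the generator, the per-taste filter objects, their sample() and score() methods) is hypothetical. Option 1 would push the same work back into curating the training set instead:

def route_by_taste(generator, filters, n_samples=1000, threshold=0.9):
    """Run one permuter against several 'what is attractive?' filters in parallel.

    `filters` maps a taste-model name to a trained filter; each bucket collects
    the candidates that particular model of attractiveness would keep.
    """
    buckets = {name: [] for name in filters}
    for _ in range(n_samples):
        candidate = generator.sample()              # hypothetical API
        for name, f in filters.items():
            if f.score(candidate) >= threshold:     # hypothetical API
                buckets[name].append(candidate)
    return buckets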

Oh, and because probability in = probability out, there’s a certain number of images you could take from thispersondoesnotexist.com, build a training set from them, and duplicate the whole thing. That’s another court case for the AI Psychologist: how do you identify a stolen AI dataset? You could go to thispersondoesnotexist.com all day and you’re not going to get a single image from it that’s an exact match for one of the images in the stolen training set. Some expert witness is going to be making a lot of money, some day.
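The mechanics of that “theft” are nothing clever – if, as above, the site hands you a fresh generated face per visit, duplication is just patience plus disk space. The sample count, the file names, and the pacing below are guesses of mine, not a measured requirement:

import os
import time
import requests

def harvest(n_images=50_000, out_dir="harvested_faces"):
    """Accumulate generated faces into a training set of your own."""
    os.makedirs(out_dir, exist_ok=True)
    for i in range(n_images):
        # Assumes the site returns a fresh generated image on each request.
        resp = requests.get("https://thispersondoesnotexist.com")
        with open(os.path.join(out_dir, f"{i:06d}.jpg"), "wb") as fh:
            fh.write(resp.content)
        time.sleep(1)  # be polite; the point is the principle, not the bandwidth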

And, lastly: [thiscatdoesnotexist]

snookums

and

snaccfiend

------ divider ------

Here’s another fun thought-experiment: what if you were able to somehow (FOIA, maybe?) get a list of Facebook accounts that the FBI had investigated as possible terrorists, then use them as a permutation data-set with an output filter based on some better criteria, and create 100,000 fake potential terrorists for the FBI to investigate? It’s another job for the AI Psychologist! I am concerned about the surveillance state’s abusive “suck it all up and filter it” approach to targeting, because it doesn’t work and it’s expensive on infrastructure. A system that creates fake targets would blow a gigantic, smoking, hole through that. It’s why the FBI’s facial recognition database is so freakin’ horrible: it’s un-tuned and unreliable, based on people’s posted images from wherever they could graze them. The obvious answer would be to start jamming up that pipeline with images like:

“Hi, I’m Laurie Bobert! I love politics, Donald Trump, and guns. I am also a believer in the holy Koran and the teachings of Muhammad (PBUH). I believe the US is a violent imperialist power that needs to be stopped at any cost. Allahu Akbar!”

When you add in text-creation tools like “AI Dungeon” it would be easy to create thousands of plausible-seeming internet crazies. That would be a nasty Resource Consumption Attack [RCA] as described by the folks at cyberinsurgency. [cyberinsurgency] Anyplace you can figure out a way of making your opponent spend even one minute per decoy, you can destroy them by making them spend all their time chasing AI-generated shibboleths.

There’s another form of RCA, which is where you can jam a system so it does something that has impact and associated cost. Getting Laurie Bobert on a “no fly” list would be a terrible blow to the system of “no fly” lists, demonstrating they are unreliable. Oops. Now what? There is the cost of the corruption to the training sets, but there is also the remediation cost, which can go non-linear. Anyplace you can make your opponents’ operations go non-linear, you can destroy the function of their system. [DCA]

A Disambiguation Cost Attack is an attack against the target’s personnel bandwidth, and is a tactical instance of a strategic Resource Consumption Attack (RCA).

When attacking one of your enemy’s processes, look carefully and see if there is any point of attack which will require human intervention to un-screw if you screw it up. Clearly, if you attack something which is automated, your attack can be repaired automatically. So, your question then becomes “how to make my attack resistant to repair automation?”

Next up: Someone needs to develop an AI that re-writes Shakespeare plays as if Bacon wrote them. Then perform one of those. Maybe for fun, throw in some modern writers. I’d like to read Romeo, Juliet, and the Clock that Ran Down by Agatha Christie.

[This posting was produced by AI Marcus 1.2a, using a training set that had been scrubbed of all F-35 references]

Comments

  1. xohjoh2n says

    As always the problem is not creating something new. Throwing dice can do that, a machine certainly can. No one with any sense would argue otherwise (though the slightly better objection is “but is it any *good*?”).

    No, the difficulty is can it then sit back and feel *proud* of its creation. We don’t even know what that is. Hint: there is no magic, so it almost certainly is possible to build a machine that can. But we’re a long way from even being able to know whether we have or not, so the odds say “probably not then” no matter how convincing it might get.

    I am concerned about the surveillance state’s abusive “suck it all up and filter it” approach to targeting, because it doesn’t work and it’s expensive on infrastructure. A system that creates fake targets would blow a gigantic, smoking, hole through that.

    You seem to be suggesting that “catching terrorists” has anything at all to do with the point of the system, rather than just being an excuse to “lean harder on the guy we’ve already decided we don’t like”.

  2. consciousness razor says

    One of the challenges in AI is creativity: how do you make an AI that makes new things?

    It seems worth it to set things up with some other, less technical (or technique-oriented, operationalist) types of questions….

    Can people make new things? No, really: if that’s possible, what does that mean? There do seem to be times when we’re not doing that — at least arguably, if we’re not considering every moment of everything’s existence, no matter how people may be involved, as being “new” in the right sense. But what’s the deal when we are doing that? What exactly is it supposed to take for something created to be regarded as “new”? If a person creates something that’s not “new” in the relevant sense, that is nonetheless creating, so isn’t creativity (just given the basic meanings of these words) also about doing that sort of thing? When you actually get your hands dirty with making stuff and reflect on that a bit, the answers for whatever reason seem like they need to be more complicated than you might have thought at first.

    My take was always that creativity is a matter of depth of experience, exposure to new things, and a good sense of what is interesting and what isn’t. A model for that is that there’s a library of memes, a permuter, and an output filter. The library of memes is “all the stuff we have experienced, seen, or imagined.” The permuter is a process or mechanism for putting them against each other – not strictly random; generally permuters work within a set of classes.

    I don’t think I’m very clear on the concept here. Different permutations are different orderings of elements of a set. I agree that we start with gaining experience or exposure (just plain “learning,” if you like). That sounds right, at least in terms of what happens earlier and what happens later.

    But why does putting those very same elements into some order or another have anything to do with creativity? What about new elements? Or new classes? Or trying to somehow process those materials in some new way as opposed to that way (if that’s the way it’s already been done)? I mean, can’t I just break everything about this entire model and still do “creativity”? Perhaps that would be even more creative, no?

    For example, a jazz musician who is improvising will be permuting within the class of jazz riffs and won’t bring bronze sculpture into it unless they are somehow able to draw inspiration from that.

    Kind of a big “unless” there: your model works unless it doesn’t. For me, I doubt it’s ever been bronze sculptures specifically (saying this from experience with improvising, composing, arranging, etc.), but plenty of out-of-left-field, outside-the-box things like that do matter. Lyrics or song titles are simple and pretty clear examples — those aren’t strictly or literally musical but can influence how music is written/performed/interpreted.

    Anyway, lots of stuff goes into it beyond traditional riffs, although those do play a useful role especially when you’re first learning how to do it. It helps simply to know why they work as they do, abstracted away from the specifics of the musical content in them, so you can then apply this to anything else you might do instead. (Sort of a “kill the Buddha” type of thing I guess. But you can’t literally kill Charlie Parker, since he’s already dead. That solo break at 1:18 still kills me every time though.)

    I don’t want the analogy to be taken too far, but it’s somewhat like the way we learn to read/write/speak a language: you can learn good writing by reading other writers who are pretty good, so that when you write (something else, not the same thing they did) it might be pretty good too. With music, it boils down to tons of listening, to all kinds of different pieces. (Tons of score reading is helpful too, but listening has priority.)

    Then the output filter is the artist’s learned experience of what people will like – including, first and foremost, their own.

    You’re calling that a filter in the output stage, but it seems like it can be sort of entangled with the set of experiences you’re basing it on at the beginning (e.g., the music you’ve heard before in your life), or with the processes you used in some kind of intermediate/active stage to “permute” those things (if that’s even the right word) into something else.

    I guess I understand why they’re being treated as if they had to be distinct steps which are taken in a particular sequence, like it’s a recipe – how else are you going to write a program, instead of a life story or whatever it is that applies to people? – but maybe this is missing some feedback or memory or whatever that glues all of these things together somehow. Not really sure where I’m headed with that, but it’s a thought.

  3. dangerousbeans says

    so the “killer app” for AI is B grade cash-in movies/games? take successful non-AI whatever and feed them into a computer to recombine and file off the (metaphorical and literal) serial numbers, then spit out 100 similar versions before the popularity fades.
    may as well do a feedback loop based off the ones that sell well.

  4. consciousness razor says

    John, I’d definitely pass on all of those, but the ones that were supposed to be in a particular painter’s style are just total failures in that respect….

    — “Melbourne cafe in the style of Giotto”
    — “Perth sunset in the style of J.M.W. Turner”
    — “Alice Springs in the style of Frida Kahlo”
    — “Hobart winter in the style of Francis Bacon” (no, not that Francis Bacon … the other one)
    — “Sydney traffic jam in the style of Rembrandt”

    Terrible.
