Will the “Influencer” Market Collapse?


I don’t pay a lot of attention to Instagram, but occasionally I post a picture of a work in progress there, or I browse “drunkpeopledoingthings” and “thingsblowingup” – OK, it’s embarrassingly prurient.

What I have noticed, though, is a huge influx of AI crap. Well, crap in general, but the AI crap is especially painful to me for some reason. Perhaps it is Establishment Icon Reginald Selkirk’s influence – I am also inclined to search for extra fingers and missing feet, etc.

Uh, that right hand is pretty bad. I know, I’m supposed to be watching her boobs jiggle except they aren’t.

The buildings in the background are really bad. Also, I don’t think that left hand is very convincing. And then there are the really crazy ones:

I don’t know what nation’s unit patch that is, but the nametag is pretty bad, too. On this one, the, uh, well, it’s hardly trying to be real:

On that one, I think they ran the output from an AI through one of the body-morphing “beauty filters” with horrendous results. It’s like what happens if a Kardashian’s Lamborghini crashes into a blimp.

This one is almost good enough to pass until you start trying to figure out the tangle of meat/arms behind their backs, and the toes on the one to the right. Actually all of the toes need auditing:

I am feeling the weight of my years, I suppose, as I watch all this stuff and think “wow, young guys sure are pathetic.” These fake images garner comments like “I love you!” and “I want to meet you”, “dream”, “angel”, etc. Hope springs eternal. But it just fills me with amorphous contempt. When I was a teenager, I suppose, the girls in Playboy were pretty obviously out of reach.

But it’s turning into a sea of scam. And the profiles that are real mostly seem to be teasers for OnlyFans sites. Some of those are AI, too, or heavily filtered. And then there is the army of women who appear to have discovered that if you shake your boobs or butt, you’ll get 50,000 clicks. Does that turn into real money somehow? I hope so, because there’s a lot of expensive plastic surgery on display. Or is it digital? I don’t know and it’s hard to care much, except I’m getting the impression that all young Chinese and Korean women have had their noses whittled down to a little wedge, and their faces reshaped into perfect ovals, or something. Whatever gets you up and going in the morning, I guess, but it seems to me to be degrading everything that is interesting and exciting about the great mating dance. In fact, the problem is that it’s not got anything to do with the great mating dance – what is the point of throwing your affections at someone who is not real?

Back in the day, I remember one of my high school classmates telling me he had a severe crush on one young lady we both knew, who had (in the parlance of the time) “a balcony you could perform Romeo and Juliet from” – well, not that big, but you get the idea. I thought for a bit and told my friend, in all seriousness, “you’ll never lay a hand on those.” Why? He wanted to know. Because she had so much Kleenex in her bra that it would immediately become obvious that he had been duped, if he ever got close to her. Then, what? Personally, I don’t care – a person should like whatever they like, big or small or even pumped with silicone, but if you’re a shallow teenager who falls head over heels for a certain look, and that’s not the reality, then what? I wound up having a surreal high school conversation with him that ended with me explaining that when I was doing stats for the swim team I’d seen her in a Speedo. But what happens to some young guy who mistakes one of these virtual viragos for real? They’re never going to meet, so I guess disappointment is not a sure outcome, but it’s either disappointment of one sort, or disappointment of another.

I actually did a few steps of the dance, out of curiosity, with one Instagram profile that messaged me and said “I am just looking for someone to talk to…” Yeah, OK. So I said, “sure, what would you like to talk about?” and realized that I was engaged in a pretty stupid form of the Turing Test. Was someone feeding things through ChatGPT, or just ad libbing? It didn’t matter to me, or to them; they blocked me when I said I wasn’t going to install WhatsApp. (I don’t know what the deal is with that, except perhaps using Facebook to harvest networks of relationships?) Ultimately, who cares? The market is flooded. I do hope they manage to monetize it a bit, somehow, because it’s got to be hard work. Especially all the jumping up and down. Meanwhile, there is incredible (real) talent and creativity on display. It must suck if all you can bring to the table is some AI jiggles, when there are genuinely impressive people all around you.

Comments

  1. Jazzlet says

    I had a conversation with my actor niece this evening about how voice-over work is no longer something that pays* because you are expected to do it from home, and anyone can set up a good enough system, so people with no training undercut the rates. And that goes for ads, books, the lot.

    *unless you are already known.

  2. Jazzlet says

    Sorry, meant to add that this is also an area where AIs will get the work that real actors would once have done.

  3. Dunc says

    These fake images garner comments like “I love you!” and “I want to meet you”, “dream”, “angel”, etc.

    Yeah, but a lot of those comments are fake too. Bots talking to bots.

    I actually did a few steps of the dance, out of curiosity, with one Instagram profile that messaged me and said “I am just looking for someone to talk to…” Yeah, OK. So I said, “sure, what would you like to talk about?” and realized that I was engaged in a pretty stupid form of the Turing Test. Was someone feeding things through ChatGPT, or just ad libbing? It didn’t matter to me, or to them; they blocked me when I said I wasn’t going to install WhatsApp. (I don’t know what the deal is with that, except perhaps using Facebook to harvest networks of relationships?)

    You’ve heard of “pig butchering”, right?

  4. says

    I am as horny a single pathetic loser guy as they get at my age, but Nr 4 made my eyes hurt and nearly caused me an aneurysm. And none of the other four looks even remotely appealing, because they all still reside firmly in the uncanny valley. There is something wrong about them even before one actively starts to think about them critically.

    I found out recently that it helps a bit to add tags -ai -midjourney -stablediffusion to any image search. It weeds out the most egregious offenses at least.

  5. says

    And when looking at Nr 3, the lapels are all wrong and non-functional. There are buttonholes and a zipper on the left side, but no buttons and no zipper on the right side.

  6. sonofrojblake says

    Knee-(and ONLY knee…)jerk reactions to each:

    1. hideous, and not just that hand to the viewer’s left. You’re supposed to be watching her boobs jiggle? What boobs?
    2. not terrible, and it would do the job if I was 15 years old.
    3. nice enough I guess. Doesn’t anyone check for things like nametags and badges before releasing these things into the wild?
    4. just horrific, but then I’ve seen a fair few photos of what I know to be real women, ostensibly “attractive” by modern standards, that are similarly appalling, so slagging the AI for copying that nonsense seems unfair.
    5. again, not bad, and would definitely do the job for 15-year-old me, apart from the slightly distracting fact that each of the nice looking young ladies appears to have been fitted with non-matching pairs of legs from different women. Good legs, don’t get me wrong, just not matching pairs. Not sure it would distract me enough – it’s not what Sean Lock would have called “challenging” by any means… https://www.tiktok.com/@standupcomedyshort/video/7274586922612215072

    Synthesising some of the above: these things are taking work away from visual “actors” already, have been for decades. The difference being in 1999 they were Gungans, which would once have been men in suits and were in ’99 just pixels. More and more they’re replacing extras, and occasionally top-line stars like Peter Cushing and Carrie Fisher. For now, someone still has to do a voice, if that’s needed, and a performance – someone still had to act Tarkin in Rogue One. I think that bit is going to take a LONG time to go away, if it ever does. I’m told there are already AIs that can read an audiobook well enough that even untrained humans with their own good-enough setups at home are more expensive than just getting a bot to do it. They don’t do it well… but they do it well enough. Certainly my wife doesn’t even bother with audiobooks – she just gets her Kindle to text-to-speech it automatically, and fills in the nuances herself and assures me that’s good enough (I have my own opinions of the quality of “literature” she’s applying this to – sub-Twilight fantasy, but who am I to judge?).

    As far as the harmless cheesecake shots above: there’s a near-infinite market for them, and, as the internet showed us before AI turned up, a near-infinite supply already – it’s not like when I was a lad and you had to go looking in hedges for discarded copies of Razzle (possibly UK-specific joke?); there is a tidal wave of pictures and high-definition videos out there to cater to any conceivable preference. Possibly the generation of these things, and the massive dip in their value that follows, will reduce the prevalence of women being exploited to produce them – nice idea, but I doubt it’s true. The flipside is the pandering to the toxic male gaze and the raising of teenagers’ expectations of real women, which can only result in disappointment, resentment and anger that (almost all) real women are not that thin and compliant – but then again, if you’re the kind of lad who’ll react violently to disappointment and rejection, it’s not the AIs’ fault, it’s yours.

    I was briefly courted by one of these things the other day on Facebook. “She” asked me what I liked doing, and I said playing with my wife and kids, what with being happily married and all. “She” pressed the issue, and I repeated that I really was just, y’know, happily with this one woman who had given me two lovely kids and wasn’t really interested in “naughty” interactions with anyone else, real or otherwise, online or in real life… and she evaporated. I vaguely wonder if it was an actual woman running a scam, a man running a scam with a stolen profile pic, an *AI* profile pic, or whether there was even *any* human in the loop at all at the stage that rejected me as pointless. What a time to be alive.

  7. Reginald Selkirk says

    what is the point of throwing your affections at someone who is not real?

    They won’t let you down like a real person will.

  8. snarkhuntr says

    Before I knew they were almost all just slaves held captive by organized crime, I used to have fun messing with the Pig Butchers, inventing obnoxious personas and letting them play out their game while dealing with ‘my’ difficult personality. Then I realized that if *you* try to sell them crypto, they bugger off sharpish. That’s a missed opportunity in my eyes; any crypto promoter is likely more gullible than the average punter they might encounter.

    Ed Zitron did an excellent piece recently on Generative AI. Long quote incoming:
    https://www.wheresyoured.at/sam-altman-fried/

    Sora’s outputs can mimic real-life objects in a genuinely chilling way, but its outputs — like DALL-E, like ChatGPT — are marred by the fact that these models do not actually know anything. They do not know how many arms a monkey has, as these models do not “know” anything. Sora generates responses based on the data that it has been trained upon, which results in content that is reality-adjacent, but not actually realistic. This is why, despite shoveling billions of dollars and likely petabytes of data into their models, generative AI models still fail to get the basic details of images right, like fingers or eyes, or tools.

    These models are not saying “I shall now draw a monkey,” they are saying “I have been asked for something called a monkey, I will now draw on my dataset to generate what is most likely a monkey.” These things are not “learning,” or “understanding,” or even “intelligent” — they’re giant math machines that, while impressive at first, can never assail the limits of a technology that doesn’t actually know anything.

    There is a very real chance that none of these models are going to ‘get better’, because the underlying technology isn’t really capable of being improved in that fashion. What will happen is that people will graft and append the ‘AI’ onto other technologies to force it to do things that it fundamentally is incapable of – like understanding that, in most circumstances, girls should have legs that match, or that ‘the same person’ means the same face, even if it’s making a different expression. Concepts that, so far as I know, cannot be embedded in the current datasets.

    As for what this technology will do to the economy? The word ‘bubble’ comes inevitably to mind. The proliferation of AI-generated bullshit on the web right now is being heavily subsidized by tech companies. The actual costs of running some of these AI models are hard to estimate, since the companies have a proprietary interest in making sure nobody thinks too hard about it. So long as the automated production of drivel is being paid for by the investors in OpenAI and its competitors, we’re going to see this flood of nonsense.

    On the other hand, the ‘influencer market’ will likely collapse along with most of the rest of the ad-supported economy. Something that is in progress now. Look at the quality of the ads (and advertisers) on YouTube and the other large content providers. While there are still big brands out there, increasingly the ads are for cheaply produced scammy products, fake health cures, or outright grifts looking to take your money.

  9. flex says

    As is my wont, I am always re-reading books which I get pleasure from.

    Yesterday I was re-reading Mencken’s essay, Criticism of Criticism of Criticism, and one of the passages stood out to me. It clearly had stood out to me before, because I had made a marginal notation to remember this idea, and I can recognize that I have incorporated his idea into my general thinking.

    While the entire essay is worth a read, I’ll summarize the idea which resonated. Mencken is reviewing the ideas of Joel Elias Spingarn, who, in short, said that the job of a critic is to evaluate a work of art based on the intentions of the creator. A critic can certainly bring up shortcomings in the execution; e.g. the rhyme scheme in a poem, the piety or patriotism, even the psychology of the inspiration. But all that is subservient to the requirement that a critic act as a judge of what the message of the artwork is, and how well that message was executed (i.e., how well a person can understand it). A good critic may help explain the artist’s intention (as best the critic themselves understands it), but criticism itself is a judgement on how successful a work is.

    Mencken points out that this concept of the role of a critic is not unique to J.E. Spingarn, but has precursors in the work of Benedetto Croce (which I haven’t read), who himself may have filched it from Thomas Carlyle (I’ve read a couple of Carlyle’s essays, but I don’t remember their topics), who could have snagged the idea from Goethe (whom I know only from his poetry; I probably should read more, as influential as his work apparently is. But, as Eco teaches us, books speak of other books, so I may find that I’ve been reading Goethe all along without knowing it).

    Abandoning Mencken (what an exhilarating phrase to write!), I think we can take this idea, the Spingarn-Croce-Carlyle-Goethe idea, and use it to look at AI. How does a critic, using the above idea, criticize an AI work? The AI itself has no intention; there is nothing in the AI which can be called intention. In the best case scenario, a critic could look at the training data and then the input string. More likely, all a critic could look at is the input string.

    Under those conditions, the current crop of AI fails abysmally. Clearly the intent of all the images in the OP is prurient pulchritude. To an average viewer these images may very well fill that need (everyone who reads stderr is clearly above average), but to a critic the abnormalities impair the message sufficiently to render it meaningless (subtle pun noted). The possible exception is image 4: while the other images merely have anatomical abnormalities, that one would require some radical replumbing of the subject’s skeleton and internal organs (I prefer my bladder and kidneys in my buttocks, please).

    Now I’m no stranger to the peculiarities of the human psyche; I’ve been seeing furry artwork on the internet since the Usenet days. But generally you can tell whether the artist’s intention was to give their subject six fingers (or appendage of your choice) or if they were just a really bad artist. In today’s world of AI, and in the above examples of AI art, I’m pretty certain that part of the prompt for image 5 didn’t include “Give the girl second from the right two kneecaps on her left leg.” (I suppose I could be wrong about that, but the odds are….) The exception is, again, image 4, where the prompt clearly must have said something akin to “give the women asses suitable for a horse and waists suitable for a wasp.” (Maybe image 4, as disturbing as it is, really does reflect the intentions of the prompter the best, and so based on the above criteria for criticism should be judged the best? Naw. That’s probably the beer talking.)

    As snarkhuntr writes above, we may be seeing the limit of what can be done with training sets. To advance toward more realistic AI results we may need additional logic bolted onto the image-generating part. Logic like: “This image is identified as a representation of a human being. As long as my prompt doesn’t tell me differently, a human being has 4 fingers and a thumb on each hand. I can identify 7 things which meet my criteria as fingers attached to the thing which meets my criteria for a hand. Image-generating software, redo array space X1000, Y1000 to X2500, Y2000.” I’m certain this type of thing is happening as I type. (Note: it’s probably being done much more efficiently and competently than my example. I couldn’t pass myself off as an expert in AI even in front of a crowd of crypto-currency buyers.)
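
    A very rough sketch of the kind of check-and-redo loop I’m imagining, in Python – every helper here (detect_hands, count_fingers, regenerate_region) is a made-up stand-in, not any real library’s API, just a placeholder for whatever detector and inpainting model one actually has:

    ```python
    # Hypothetical post-hoc anatomy check: every helper is a stub stand-in,
    # not a real library call.
    from dataclasses import dataclass

    @dataclass
    class Region:
        x0: int
        y0: int
        x1: int
        y1: int

    def detect_hands(image) -> list[Region]:
        """Stand-in for a hand detector returning bounding boxes."""
        return []  # placeholder

    def count_fingers(image, region: Region) -> int:
        """Stand-in for a keypoint model that counts digits in a region."""
        return 5  # placeholder

    def regenerate_region(image, region: Region):
        """Stand-in for asking the image model to redo just this box."""
        return image  # placeholder

    def fix_hands(image, max_passes: int = 3):
        """Redo any hand region that doesn't have exactly five digits."""
        for _ in range(max_passes):
            bad = [r for r in detect_hands(image) if count_fingers(image, r) != 5]
            if not bad:
                break
            for region in bad:
                image = regenerate_region(image, region)
        return image
    ```

    The point being that the anatomical “knowledge” lives entirely outside the generator; the generator just keeps getting told to redo the regions that fail a crude external check.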

    (Pops the top of another bottle)

    But what does all this blather come to? Do I have a point or not? My point is that AI, whether it is generating the uncanny stuff we are seeing today or stuff indistinguishable from the work of a great artist/writer/poet/lyricist/composer, can only really be seen as a tool. A tool to help create bad art, or make routine decisions on insurance claims, or even write Capt. Kirk/Harry Potter slashfic. There is a bit in Steve Martin’s excellent play, “Picasso at the Lapin Agile”, where he has Picasso say, “It’s all in the wrist. And the wrist starts here,” as he points to his head.

    It may well be that Marcus’s previous contention – that an AI is performing the same task that a human is doing – is true. But if that is so, then neither the AI nor the human is being “creative” (whatever that word means) in the task of transmitting the ideas from the head of the human to a medium where the rest of humanity can be exposed to them. Art is a form of language, a collection of symbols; otherwise it would be simply pigments on a canvas (choose the medium of your choice). Humans are visual creatures, we tend to value vision more than other senses, so it seems to follow that we create visual artwork. Imagine an alien with no visual senses but an extremely heightened olfactory sense: how would they see La Gioconda? “I perceive beeswax, azurite, vermillion, smalt, among other things; how is this representational of a person? I’ve not found that humans generally smell of lead.”

    The task of taking an idea in someone’s head and expressing it in a medium is an arduous one (just ask any blogger). The tools we use are those we are familiar with. In some cases the tools are conventional and obvious (St. Sebastian always has what in any image of him?), but the tools we use can only be based on our own experiences. If we desire to draw a cat, we take all our previous experiences of cats in order to generate an image which looks like a cat (“Ceci n’est pas une pipe”). We adjust our intentions before we even begin in order to transmit the idea we have in mind: this cat is sleeping in the sun, this cat is playing with a ball of yarn, this cat is cleaning itself (this cat is really a human dressed as a cat and, oh no, no, stop, we don’t want to go there… arrgh!).

    Every one of the different examples of cats I just used probably brought various ideas into your minds. That is what the prompts for an AI are intended to do: to restrict the training set images to those which meet the criteria in the prompts, then use what remains to generate an image with similarities to those remaining. This may well be similar to what people do.

    Which brings us back to Spingarn’s idea, that the creativity is in the intention, and criticism is about evaluating how successfully that intention is communicated. There are two sources of an AI image: the training set and the prompt. Which means that in any AI result there are two sources of creativity: the selection of the training set and the precision of the prompt.

    Some people may not appreciate that creativity is not the purview of a single person. I know a number of people who feel that their creativity must come from within themselves. I agree that it often can, and does, come from a single person’s training set and the precision of their prompts. But creativity can also be a joint effort, or the result of an entire culture. In fact, the medium and forms used by even a single person being creative are always related to the culture they are exposed to, their training set.

    That does not diminish their creativity, and neither will the new tool of what is called AI. At worst, AI will give people who have not spent years learning a skill the ability to transfer their ideas to a medium. But those people who understand and use this new tool to generate new ideas and new viewpoints will still be in demand. Maybe there will be less demand for them than there currently is for artists, actors, sets, etc. AI can, at best, only emulate work which has already been produced; that’s an inherent flaw in its training set. But that may be enough for a lot of productions – Max Headroom may be only a few years away. But the advent of the Kodak Instamatic didn’t kill professional photography. It did put a lot of mediocre photographers out of work. It also generated millions of photos, many worthless and many treasured.

    (Pops the top off another bottle)

    I’m going to stop here. I think I’ve gotten my idea across. If not, then it’s probably a rubbish idea. I haven’t posted anything this long in a while. It’s probably because I’ve been cutting down on my drinking in order to lose weight. It’s not that I haven’t been writing these absurdly long screeds for comment sections on obscure blogs, but I have been reading them after I write them and recognizing what twaddle they are. So I don’t post. But today I think I’m going to hit the “Post Comment” button just below where I’m typing without re-reading or editing what I’ve written. My sister and nephew are in town from ol’ Blighty and I’ve been enjoying a few brewskis for the last eight hours. I’m not drunk, mind you, only a bit more willing to fill Marcus’ comment section with my own thoughts, likely dwarfing the amount of text in the OP.

    The TL;DR version: Nothing really to see here, I’ve just been enjoying myself.

    Cheers!

  10. Pierce R. Butler says

    Yet somehow these AIs get the faces right, or very close to right. I’d expect that to be the toughest challenge.

    Does this come from almost all images including faces, programmers borrowing code from facial-recognition development, viewers subconsciously filling in the gaps, use of cosmetics by models, or ???

  11. says

    @Pierce R. Butler as far as faces go, IMO the best one in this bunch is perhaps paradoxically the first one. Because it is asymmetrical. Human faces are not symmetrical. One eye is always ever so slightly more open, one eyebrow slightly higher, one ear/cheek slightly bigger, one side slightly more wrinkled etc. It is one of the (many) problems with the faces in TES IV: Oblivion. They were significantly more realistic than in previous games but still perfectly symmetrical and that made them too close to the uncanny valley area for comfort.

    In art school, we were half-jokingly warned against using romantic interests/partners as models because it would force us to make a conscious note of the imperfections and asymmetries in their faces.

    All the faces in these pictures are way too symmetrical and plastic-y, although in today’s world of heavy make-up, Instagram filters, and Photoshop touch-ups it has been hard for a long time to see a really natural human face on the internet at all. Young people today grew up with the internet and have been used to this artificially enhanced look for a long time now, so they possibly mostly ignore it.

  12. says

    “On the Internet, nobody knows you’re a dog,” they said. So I went all the way! I had full twolegsification and defurrment surgery; and by means of cast-iron self-discipline, a gruelling programme of exercise and a large supply of chewy treats to reward my own successes, I managed gradually to increase my clothes-wearing endurance, stepping up item by item, second by second, until I could stand being dressed just long enough to pose for a photograph.

    And thus, an old meme died, and a new meme was born: “On the Internet, nobody knows you’re a dog. Until they see you have 42 teeth.”

  13. says

    A tangent on this subject: future training sets. Since AI-generated works are now populating the internet, future models trained on internet data will be trained in part on that AI content, risking some kind of feedback loop of AI nonsense. This leaves a few options:
    1) Use something other than internet data. Not sure what could provide a big enough set, though.
    2) Somehow filter the internet data, to remove AI content from the training set. Expensive and complicated (a rough sketch of what that might look like follows this list).
    3) Go “who cares” and use it anyway, resulting in future AI models being utterly convinced that humans are supposed to have eight fingers on each hand and several rows of teeth. After all, look: It’s right there in the training set.
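
    For option 2, the most minimal conceivable version is just a filter pass before training – purely illustrative, with the detector left as a stub, since building a reliable “is this AI-generated?” classifier is exactly the expensive and complicated part (all names here are made up):

    ```python
    # Purely illustrative pre-training filter; `looks_ai_generated` is a
    # stand-in for a real detector (a trained classifier, a watermark or
    # provenance check, ...).
    from pathlib import Path

    def looks_ai_generated(path: Path) -> bool:
        """Placeholder: return True if the image appears to be AI-generated."""
        return False  # a real implementation would inspect the file

    def build_training_set(corpus_dir: str) -> list[Path]:
        """Keep only the images the detector does not flag."""
        return [
            p
            for p in Path(corpus_dir).rglob("*.jpg")
            if not looks_ai_generated(p)
        ]

    # e.g. clean = build_training_set("/data/scraped_images")
    ```

    The catch, of course, is that the filter is only as good as the detector, and detectors are in an arms race with the generators.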

  14. flex says

    @LykeX, #15,

    I’ve wondered how large a training set really needs to be in order to be useful. We’ve seen Marcus use prompts like “In the style of (artist’s name)”, and the result does have some resemblance to that artist. Considering that most artists may have at best a couple thousand images in a training set, and that number is likely much smaller, that suggests that a small training set may be just as useful as a larger one.

    There is the distinct possibility that option #1 will be used by professionals, but they may well purchase pre-assembled training sets. There is probably a market for someone willing to create such sets. I can see a training set of corporate logos for an AI to create something new. Or a training set of curated photos of beaches in Spain. Or a training set of sunsets. The user of an AI could purchase training sets which include all the types of images for the subject they are working on, then run the training and start playing with prompts.

    For the amateur, who is just playing around to see what they can produce, option #3 might be fine. Although, as snarkhuntr proposed above, there may be other applications which will remove unwanted teeth or fingers.

  15. Ian King says

    To the best of my knowledge, all of these systems are the result of asking ‘what if we took this approach, but threw just a shitload of data into it’. The effectiveness resulting from this was generally a surprise to the people who tried it, and since the first LLM the idea has been applied more broadly.

    I’m always amused by anyone who looks at these systems, the effectiveness of which would have been considered far beyond the limits of current approaches to machine learning even five years ago, and decides ‘yep, this is definitely as far as this can go’.

    As to the consequences of image generation on human relations, I think it’s more valid to reverse cause and effect. These things work precisely because we’re so alienated. For a significant percentage of young people this is literally the best they will ever do. If you know you’re never going to have a relationship with a real person, that AI girlfriend starts to look a lot more enticing.
