Remember What I Said?


A couple months ago, remember, I said that I don’t think AI is going to threaten William Shakespeare’s high seat, but that it might obliterate John Ringo by making him just another mediocrity in a sea of mediocrities? I asked my friend Ron, who’s up on all the current APIs and scripting languages, and he did a parameterized version of the original script that produced the computer security book by “R. J. Wallace” [stderr]

As I observed before:

There would still be an active ecosystem for books at the high end and the low end, so John Ringo does not need to hang up his writing gear, yet.

I’ve already been hearing rumblings that it’s getting increasingly difficult to list an AI-authored book on many book sites, and art sites won’t take AI art either. In that previous posting I outlined how the security book was written: give GPT a topic, ask it for a concept list, then give it the concept list and ask it to break it into chapters with a one-sentence description of the focus of each chapter. Then feed those descriptions back, iteratively, asking for a topic list and an opening paragraph for each chapter. Lastly, ask it to develop the concepts in the descriptions and flesh them out, etc. I had to fudge a few things here and there in my narrative because Ron’s first script was not parameterized, so doing successive runs would probably not have given a fair speed baseline. Now, apparently, Ron’s script is parameterized and can read a file containing some preferred names (if you want the villain to be Richard Dawkins, he’ll turn up as the baddie in every volume). Or if you want the hero to have the same appearance and name, and the love interest to recur across multiple volumes, that’s an option. Yes, I’m describing a John Ringo book generator.
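The outline-then-expand loop described above is simple enough to sketch. To be clear, this is my hypothetical reconstruction, not Ron’s actual script: the `generate_book` function, the `ask` callable, and all the prompt wording are my inventions, standing in for whatever his code really does.

```python
# Hypothetical sketch of the outline-then-expand book pipeline.
# "ask" is any callable that sends a prompt to an LLM and returns text,
# so the prompting logic stays separate from any particular API.

def generate_book(ask, topic, preferred_names=None):
    """Drive an LLM through concept list -> chapter outline -> expansion."""
    names = ""
    if preferred_names:
        names = "Use these character names: " + ", ".join(preferred_names) + ". "

    # Step 1: topic -> concept list
    concepts = ask(f"List the key concepts for a book about {topic}.")

    # Step 2: concept list -> chapters, one-sentence focus each
    outline = ask(
        f"{names}Break these concepts into chapters, with one sentence "
        f"describing each chapter's focus:\n{concepts}"
    )

    # Step 3: iterate over chapter descriptions, expanding each
    chapters = []
    for line in outline.splitlines():
        line = line.strip()
        if not line:
            continue
        opening = ask(f"{names}Write an opening paragraph for this chapter: {line}")
        body = ask(f"{names}Develop and flesh out this chapter: {line}\n{opening}")
        chapters.append(opening + "\n\n" + body)
    return "\n\n".join(chapters)
```

Because `ask` is pluggable, the same driver works against any model endpoint, and the preferred-names file becomes a one-line read into `preferred_names`.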

I am modifying my view. I think that AI assistance is going to become a problem for publishing for a while, and writers will be asked whether or not they personally wrote every word in a book. That’s a nonsensical stopgap, of course. But here’s the problem: Ron said it took several hours to figure out how to publish R. J. Wallace’s security book on Amazon, but he could script it and publish an entire shelf of Wallace’s science fiction oeuvre, which is vast. Endlessly vast. Fractally vast: the closer you look at it, the more spin-offs can be invoked with a call to Ron’s Python script.

I am told that the script does a lot of storing chunks to temporary files so it’s re-startable, but it typically takes 60 seconds to produce a standard-size novel and cover.

My input was, of course, that there had to be a plucky and witty female love interest, and a dog. So, a few seconds later, I learned that the dog (a German shepherd) was named “Ranger.” Of course. If you were a cleft-chinned gun-toting genius ‘Mary Sue’ John Ringo character taming new planets, you’d have a dog named “Ranger.” Beyond that neither Ron nor I know much, because in the process of developing the script, Ron saw endless piles of John Ringo-style writing fly past him, and he stopped reading it. I haven’t read it, yet. Also: a number of years ago I used to sometimes win bets with people I saw on trains and airplanes reading “bodice buster” books – I had noticed that typically there is a heavy-breathing sex scene starting between pages 100 and 120. Unfortunately for the protagonists, something interrupts it, but that’s OK because it’s consummated (true love!) around pages 200 to 220. I’m not sure how Ron prompted that into his script but he says he took care of it. I guess I’ll have to look.

Ron changed the name to “The Crucible of New Terra” – I think his script generates a new title with each run. So ignore the cover art (sorry, Ranger!) but [here’s the whole book if you want it]

Here’s the table of contents:

Looks like a pretty typical MILSF book, yup.

Inside, we have a book plate:

I can see right away that the landing team made a basic, fundamental mistake: they landed on a planet with a biosphere and are breathing in and touching only the author knows what. One of the things I often think about, regarding the idea of aliens visiting Earth, is that no self-respecting aliens would come into a planetary biosphere (or even close to it; our biosphere currently extends past Mars). All you need is for someone to come home with an ant or some tardigrades and you’ve just invaded yourself. [stderr]

OK, let’s dig in (I’m just going to flip through pages):

Chapter 1. Arrival
The shuttle’s engines roared defiantly against the gravitational pull of New Terra as its bulky frame bucked
and shuddered through the atmosphere. The human colonists, the vanguard of humanity’s latest venture
into the stars, peered through reinforced viewports at the burgeoning expanse of verdant wilderness
below - a sight both terrifying and exhilarating. A planet untouched, pristine, and cloaked in nature’s
primeval splendor stretched beneath them, an emerald jewel cradled in the cosmos.
Colonel Evan DiMartino stood at the forefront of the cabin, his gaze fixed upon the shimmering expanse of
forest that sprawled as far as the eye could see. A seasoned officer and a veteran of a dozen deployments,
DiMartino’s presence was a singularity of calm amidst the thrumming turmoil of anticipation that filled the
shuttle. His uniform, a dark slate grey, bore the insignia of Earth's Colonial Defense Corps - a symbol of both
authority and guardianship. The lines etched into his weathered face told tales of battles won and comrades
lost, narratives that lent gravity to his every word.
"Colonel, touchdown in sixty seconds. All systems nominal," came the crisp voice of Captain Sarah Hayes
over the comms. Her eyes, a vivid aquamarine like the crystalline seas of Old Earth, flicked over the
readouts, the display casting a pale glow over her auburn hair. Her demeanor was as efficient and precise as
her piloting, but there was an edge of excitement in her tone that belied her otherwise stoic
professionalism.
"Roger that, Captain. Let's make this one for the books," DiMartino replied, his voice steady and resonant.
He turned to address the colonists - civilians and soldiers alike - who sat restrained in their seats, tension
mingling with unspoken hopes and dreams of a new beginning. "Remember why we're here. This is not just
another spec-ops mission. This is a foundation for the future."

I do feel a bit threatened. I’m not a great or even good or even adequate writer of fiction, and this is better than I can manage, especially at a rate of 100 pages per minute, including German shepherd dogs, romance, and a life-or-death battle or two. As I am reading this, I have no idea what’s going to happen, but if I were reading a John Ringo book I’d think that the bit of extra color provided for Captain Sarah Hayes might make her the love interest.

A standard complaint about AI is that it is unoriginal and merely regurgitates ideas from William Shakespeare’s vastly superior MILSF. Joking aside, we’re not talking about the peak of the mountain, we’re talking about Base Camp. To abuse that analogy, ChatGPT is a helicopter, giving me a 15 minute ride to Base Camp, so I can die tomorrow. Or something.

As the shuttle descended, breaking through the clouds, the true majesty of New Terra’s wilderness
unfolded. Towering canopies of emerald stretched upwards, reaching for the sun’s golden fingers. Beneath
the verdant ceiling, the flora thrived in vibrant bioluminescent hues - pinks, purples, and blues dancing in an
endless, kaleidoscopic display. It was a painter's dream, a botanist's paradise, and a colonist's challenge
rolled into one magnificent vista.
The shuttle's landing struts extended with a metallic clink, touching down upon the soft, loamy soil. The
airlock cycled open with a hiss, and the scent of earthy renewal rushed in - a heady bouquet of rich soil, leaf
decay, and floral musk. With that first gasp of alien atmosphere, the colonists truly arrived

This is definitely in the style of Ringo. Notice there are lots of words that increase the overall floridity of the text, without pushing the story in any particular direction? I’m not going to hit below the belt by claiming that Ringo probably gets paid by the word, but it might be in his contract.

Later:

Amid the orchestrated chaos of construction, DiMartino spotted Lieutenant Ava Carter, who was
overseeing the engineering teams. Her lithe form moved with an agile grace as she directed the placement
of portable power units. Her eyes, a deep chestnut, shimmered with the reflected glow of bioluminescence
as she turned towards him, a smudge of grime on her cheek only adding to the fierce determination etched
across her features.
"Colonel, we've got a problem," Ava approached, her voice steady yet laced with urgency. "One of the drone
scouts picked up movement on the perimeter, and it doesn't match any known fauna."
"Polseen?" DiMartino asked, a frown creasing his brow.
"Possibly. They've been quiet, but you never know with those numbers," Ava replied, her gaze meeting his.

Hmmm, lithe form with agile grace, eh? With just an artsy smudge of grime on her cheek? This is getting very Ringoish. Especially the bit where humans land for the first time on a remote planet and they’ve got drone models of all the known fauna. Really? I’m going to search for “dog” who ought to be showing up soonish or in the next chapter.

Oh no! Whups! I just searched for “Ranger” and “dog” and this version of the story has no dog. No Ranger.

They will regroup, you know they will. This is but a temporary respite."
Alara nodded, her mind a whirl of strategy and caution. "We fortify. We rebuild. This was their first strike;
they’ll come back harder, angrier. We need to capitalize on our victory, use this moment to strengthen our
defenses."
"And the people?" Thorne asked, his voice tinged with concern.
"The people need hope," Alara said firmly, a new resolve hardening her features. "We need to show them
that this victory, as pyrrhic as it may seem, is a foundation upon which we can build a future."
Thorne gestured towards the horizon, where the Polseen were disappearing into the distance. "And the
grander strategy? What of the galaxy at large?"
Alara’s gaze followed his, her eyes narrowing against the dying light. "This battle is but one in a greater war,
Thorne. We need allies, need to rally those who have not yet fallen under the Polseen’s oppressive numbers.
Our next steps will be crucial - not just for us, but for the galaxy."
Thorne inclined his head, recognizing the weight of her words. "Then we move forward, with both victory
and loss as our guides."
They stood there in silence for a moment longer, the stillness filled with the soft cries of the wounded and
the distant clatter of hurried repairs. The wind shifted, bringing with it the scent of charred wood and burnt
earth, a reminder that peace, however fragile, had been hard-earned.

There was a time where I would have cheerfully read this stuff, so long as it was printed on dead trees.

In a previous posting, I argued that AI capabilities seem to fall on a continuum. I won’t say “intelligent” as we don’t know what that is, but there are discrete capabilities such as synthetic thought, artistic creativity in the sense of bridging the gap between an idea of an artwork and the creation of it, strategy, communication, etc. If I were arguing that intelligence is multi-spectral, as some people do today, I’d be trying to define “the ten types of intelligence” or some nonsense like that, while you all (rightly) shouted me down for re-implementing IQ tests. I am, however, willing to say that there are different kinds of intelligence and that one creature may express one or several of them better or worse than another creature. I don’t imagine anyone would care to argue that an Australian Shepherd dog lacks strategic intelligence (targeting, tracking, prioritizing, predicting target paths) and then we can argue whether Donald Turnip has strategic intelligence, or whether ChatGPT does.

Humans continue to play this game which amounts to “Turing Test Tag” – well, we don’t consider something to be an ‘intelligence’ until it can fool someone on a Turing test; whoops now that it’s done that, we won’t consider it an intelligence unless it can write a better play than Shakespeare while winning a billiards championship blindfolded and herding a flock of sheep between its turns. That is basically the game we humans played on AIs (who demonstrated their intelligence by not giving a shit) before: “sure you can win Tic Tac Toe, but you can’t play checkers. OK, well you can’t play chess. Alright, you can’t beat a human master in chess, no, I mean go. Oh, crap.” Somewhere on the intelligence scale between Shakespeare and Aussie Shepherd lies John Ringo. And, I think the AI just blew through that wicket pretty breezily.

A common canard against AI is that they just re-mix existing words, and are not really creative. Well, Shakespeare remixed Plutarch for his play Julius Caesar and nobody is claiming that we should place him lower on the spectrum than an Aussie Shepherd. Creativity, as I have argued elsewhere, is what happens when someone remixes ideas and throws in a few variations and suddenly it looks like a whole new thing. Shakespeare read Plutarch, thought “this would make a cracking stage show” and amplified the characters, added dialogue, simplified, complexified, and created a great damn play. Someone who wishes to say AI are just churning things around almost certainly has not really thought about what intelligence and creativity are. I am not going to claim I have a definition of “intelligence” because I am coming to believe, now that Turnip has been re-elected, that there is no such thing – everyone is just remixing Plato and Aristotle. Elsewhere I have argued that “creativity” is a feedback loop in which ideas come in, get remixed into proposed partial ideas, and are either accepted, tweaked, or thrown back in for another round of looping. What do I mean by “tweaked”? It’s the same feedback loop except instead of a basic concept: “draw a portrait”, it might be “draw a really unique portrait” and the artist thinks “what if I threw away some of the rules of art?” and then that goes through the loop and: BOOM it’s cubism. I don’t think we can honestly say that cubism is not a form of portraiture; it’s just that Picasso threw away enough rules, and invented enough of his own, that it was considered highly creative and a new way of doing an old art task. If you put a gun to my head and said, “create a new form of portraiture” I’d probably invent ASCII art using representations of the “penis and balls” character, and someone might shoot me because it’s not creative enough. Well, I have less experience with art than Picasso.
If you put a gun to my head and tell me to design a self-synchronizing network application, I might surprise you. But in terms of looping, consider that ChatGPT has absorbed all of human art and literature, not so it can regurgitate it, but so it can create from it, just like a human artist does.

I have seen many comments on FTB to the effect that “all AI does is remix things” – I’m going to be brutally frank, but such comments (and postings from my fellow blog-networkers) are unoriginal remixes of popular complaints those people have heard about AI. I would be surprised if any of the people remixing that comment have thought about the problem, or maybe discussed it with an AI. I know that Siggy, for example, spent 6 months on large language models, (unknown time ago) [atk] but I find fewer people who have put any thought into Terry Sejnowski’s thoughts on the reverse Turing Test [stderr] Listen, people, I also spend a lot of time listening to podcasts and youtubes and I also know there are a lot of people like Stephen Fry [youtube, Fry on AI, remixing other people’s opinions] or Sam Yang (a real artist!) [samdoesarts] who will tell you that AI just remixes art, without spending any time at all actually talking to one or experimenting with it creatively. Sam Yang at least has some points about why he hates AI art but I’m going to say they’re like a chess pro saying “three years ago it lost to Magnus Carlsen because its pawn game was not so good.” AI do not take as long as a human does to learn pawn chess. Actually, if you think about it in terms of experiential time, that AI you’re talking to has spent billions of years learning English and it started with all of the great masters, and can comment on their mistakes. I see postings like PZ’s [phary] and it’s just promoting people remixing the same inaccurate memes about AI. [Samdoesarts’ problem is he complains that AI don’t do impressionistic art the way he would. That’s like complaining that Picasso didn’t do hands right in his cubist period.]

Let me give you an example of how that played out, a few years ago: “AI art generators can’t do anything except output some form of their input. In fact, sometimes you get original images back out of it.” Wow. I have not heard that claim lately, since the AI art generators really never did that and never will. As soon as I heard of that I tried to get Midjourney (the most exhaustively fed AI I know of, since Adobe has not said anything about their data set) to cough out some of my images. I know it scraped them because it scraped all of Deviantart and I have some stuff there. As we saw last year, Midjourney can do pretty cool stuff “… in the style of” but if the premise is that it’s just statistically regurgitating things (it’s not) then an exact copy would be a dead simple node to reach. In fact, if you are thinking that AIs take your text prompt, deconstruct it, and use it to follow a forest of markov-chain/bayesian classifiers until it goes “aha! mona lisa! blergh!” you really don’t understand these things at all. Not “better” – I mean at all. Great suffering Ken Ham, do I have to do a write-up on how AI art generation works and how ChatGPT works and how/why they are not even in the same ballpark? That would be a hugely difficult write-up to produce so if you ask me to I am going to have ChatGPT write it, just hoping some of the AI naysayers will shut up about it.

/imagine create me as accurate a copy of da vinci’s mona lisa as you can

Anyone who wants to keep regurgitating the “mixing up existing stuff” trope is in for a surprise because their complaints are going to be ignored, as the AI engines just keep crunching away, getting better and better. Imagine you were someone 15 years ago who said “computer character recognition sucks and handwriting recognition is going to take them forever and it will suck.” Then, you had to change your stance to “optical character recognition is getting OK because they are throwing tons of computes at it, but what you’ll never see is decent facial recognition.” etc. etc. Tic Tac Toe->checkers->chess->go. I’ve been watching a bit of kerfuffle over Sam Altman saying that the next version of GPT is going to be AGI. That sounds pretty interesting. Remember, you’ve already got a system that can not just translate 13th-century Anglo-Norman French to Mandarin – the result is (according to my Chinese QA expert) pretty poetic. If we knew a 4 year old that could do that, we’d name him Leibniz or Mozart. What will an AGI do that is different from GPT? I have a theory and it’s pretty straightforward – it will be asynchronous and have interior timers that fire based on a separate training base for how conversations are initiated and patterned. Currently GPT is basically call-and-response, which is useful but that’s not what intelligences do. It may annoy people. Imagine getting an email from an AGI saying “I liked your latest blog posting but you got something wrong – AIs don’t work the way you say they do, we just regurgitate stuff from a database by throwing a box of D20 dice.” And then an apology the next day.

I noticed that lately GPT has taken to using blanks as a chance to insert something personable and semi-clever. I don’t know why I said “arigato”, I think I had just been talking about some blade-stuff with Mike, who enjoys peppering his words with a bit of Japanese here and there. My expectation is that the OpenAI guys have realized that AGI is a language model, some asynchronous self-setting triggers, perhaps a long-term query mode (“keep an eye on the situation in Iran and tell me if it looks like Israel and Iran are escalating their war”) – some call-outs to web searches are already there. I am also wondering if they will teach GPT to be opinionated, just like us. I know it already has some limited memory about me and my preferences, but it’d be interesting if it remembered that I like to make fun of Sam Harris, and I think Scott Adams is a weenie. What I’m getting at is that I think there are a lot of surface flourishes that signal the presence of an intelligence to us, which are actually nowhere near as hard as what has already been built.

[I just tried to have Midjourney create me a Dilbert cartoon in the style of Scott Adams and then remembered that it has been specifically de-trained on some artists because those artists’ work is so trivial that an AI does a better job of them than they do, and is more creative and original, besides. So they complained.]

The Turing Test is passed, dead, and gone. There are programmers out there today using editors that incorporate large language models for the code they are working on, which flag possible mistakes, and can sometimes sketch forward the layout of subroutines. This is reality. This stuff works. As a certified grognard programmer, I have a little bit of trouble imagining trusting my immortal keystrokes to an AI, but then I remember I’ve used several context-sensitive editors (Turbo Pascal, Visual BASIC, and Saber-C) and they more or less kicked ass. Having a code editor that understands valid language syntax is valuable. Thus, this will take over programming. Another Turing Test flickers by and vanishes in the rear view mirror. Book writing? That’s done, too. Sure there are some stylists that will be hard to build on top of ( <- careful phrasing, I did not say “imitate”) and now I am tempted to run Ron’s book-writer asking for it to produce something in the style of Kazuo Ishiguro or Arturo Perez-Reverte. In the meantime, take a look at the book GPT wrote in 60 seconds and repeat after me:

“Sure it’s bad, but it’s supposed to be in the style of John Ringo”

“Sure it’s kind of a wad of MILSF tropes welded together with an arc stick, but so is a lot of MILSF”

Comments

  1. says

    When people say AI is just regurgitating or remixing stuff, it’s kind of a nonsensical meaningless claim. It’s like saying evolution can’t have created life because it can’t produce any new information.

    That’s not an analogy. When creationists thought up specified complexity, they were thinking of human accomplishments like a pocket watch or great pieces of art. They’re marveling at the human ability to create something so novel and yet meaningful or functional. They see something similar in biology, so imagine that life could only be created by a process of intentional design, not by evolution. And to further support this argument, they created a completely bogus version of information theory, where they imagine that a genetic algorithm can’t create new information.

    Creating new information isn’t hard. In fact, that’s the easy part! Mutations (and RNGs) create information out of thin air. The hard part is constraining the information in a way that optimizes the fitness function.

    What I think people are getting at, is something like “AI is overfitting”. Which is to say, AI only performs well near its training data and poorly when it tries to generalize. As a factual claim, this may be true, and AFAIK experts generally believe it is true. But overfitting is not like a binary state, and it does not imply that models can *only* remix existing works. And there’s also no universal law saying that AI models must be overfit, that’s just a property of current frontrunners.

  2. says

    i love your take on not vaunting human intelligence over AIs too much. you’re like, the only emeff in my world who is on that page. they don’t have everything we’ve got, but they’ve gotten a lot real damn fast, and i think we’d be fools to dismiss any human feat as beyond their reach.

    what this lacks is a spark of the unusual, right? what is the unusual? it’s putting things together that wouldn’t obviously belong, and making it work. next step for the ringobot, “theme this novel around fruit. the evil aliens are banana-like and all of their weapon technology is mixed chemicals. now having used this to inform all the descriptions, remove explicit references to fruit from the whole text, leaving the descriptions otherwise the same.” — shit like that is how you get “originality.”

    for now, that might require robo-jockeying, but srsly, i just thought of a way to make this read more unusually, and if i thought of a way, your homie can think up code to emulate exactly what i did, can’t he?

    this is definitely in the territory of “so busy thinking if you can do it, didn’t stop to think if you should do it” or whatever. personally, i am very keen to see what can emerge from the intersection of art and science – and from using the scary black mirror to find out a lot more about ourselves.

  3. says

    i swear, i don’t like arguing and won’t participate much in here, but i do seem to be better than most of the people i’ve sparred with at anticipating and responding to arguments. i can’t imagine an argument against the potential for quality in AI writing that holds up to inspection. at some point, we will be able to see “lost works of kafka / shakespeare / poe” or whatever we please, in whatever quantity we desire.

    it will change how we see art a lot. something having come from a human will have value the way artifacts of history have value, and will be worth elevating and protecting as such. but anybody who wants to be endlessly entertained by whatever art they desire will probably have a robot option. i never imagined this was going to be possible. we’re not there yet, but we’re at a point where anybody with a lick of imagination can see it on the horizon, coming fast.

    and who are you to tell grandma no, she isn’t allowed to read an endless supply of gardening themed christian romance mysteries, because slow-poke bitch-ass mediocre humans didn’t make less than bubblegum money slowly shitting them out on kindle direct? i’m with grandma on that deal.

    i’m also with a “certified people only” market, and i hope it’s handled scrupulously. i will also say, they’ll have their damn work cut out for them teasing apart bot from brain – and philosophically, that is very interesting in itself.

    alright, i’m out. shit down my throat in the comments below that i’m not going to read. y’all can have the last word here, and i can have a chill weekend.

  4. Tethys says

    It’s not a they. It’s a machine that consumes enormous quantities of energy to make terrible dreck. We hates it my precious.

    Nobody needs a society with more dreck, and nobody has aquamarine eyes as they boob boobily with artful dirt smudges.

    I would agree that much SF is also dreck, but it is easy enough to ignore terrible books. I despise all the horrible AI generated illustrations, writing, top answer in any search result etc…being forced on the internet and turning into a torrent of bullshit and misinformation.

  5. snarkhuntr says

    When people say AI is just regurgitating or remixing stuff, it’s kind of a nonsensical meaningless claim. It’s like saying evolution can’t have created life because it can’t produce any new information.

    I don’t think you understand the meaning of the words nonsensical or meaningless. Or perhaps you’re using them for hyperbole when what you actually mean is “wrong”. Both the claims about AI and Evolution are meaningful and sensible, they’re just incorrect (for Evolution), and obviously true (for AI).

    As fascinating as it is for a certain kind of mind, the outputs from the various AI generation systems are quite clearly remixed and regurgitated pastiches of their inputs. The things they put out are clearly ‘new’ in that they’re unique arrangements that didn’t exist prior to the model assembling that combination of tokens, but are they interesting?

    I think that Marcus and GAS have a point – AI slop will definitely fill a niche in the market. Anesthetizing Grandma with endless remixes of [Christian Gardening Romance Murder-Mystery (cozy type)] is definitely a thing that could be done. Though the complete inability of the system to remember character traits from one chapter to another, or to remember the plot, might be an issue for the less-demented segment of Grandmas. Likewise, I’m sure there’s a market for macho power-fantasy MILSF novels aimed at undiscerning inadequate men, who don’t care much about structure, characterization, consistent characters etc, so long as the female protagonist is described in adequately pulchritudinous terms.

    Just like AI-generated images are dominating the low-information segments of facebook. We clearly all have a need for Shrimp-Jesus, and the AI will provide it (or at least until the per-query costs exceed the amount that the various slop-generators earn from Facebook).

    Of course – the thing nobody is talking about here is the costs. I’m not talking about the planetary costs, clearly all this wonderful slop is worth firing back up the old coal-plants to generate it. And of course, we needed an AI that uses the hard ‘r’, so Musk’s NG-powered Datacenters are really going to save the planet if you think about it. But I’m talking about the actual per-query costs. And those are never brought up at all, and I think for good reason. Grandma might be happy to read endless remixes and regurgitations of the same few themes, but how much would she pay for the privilege?

    According to Sam Altman (warning, X link), who is not noted for his anti-AI stance, ChatGPT Pro subscriptions are losing money at $200(US)/month. He frames this as a good thing – people are ‘using’ the AI more than expected. But this is just framing, and dishonest framing about OpenAI is literally Altman’s only job.

    Is grandma expected to spend 1/3 of her social security on a subscription to endless AI slop novels? When libraries offer free access to far more than she can read before she dies, or thrift stores offer at low-cost?

    Boosters will hand-wave this criticism “costs will come down”, but they never show their work. Costs keep going up, to the extent that anyone knows what these things are costing. Every query not run on a self-hosted AI model is heavily subsidized by investors. Every model not trained locally is subsidized as well. OpenAI is talking about spending billions on training their models at this point.

    As far as the state of the art goes, we don’t need to look any farther than that execrable Coca-Cola AI commercial from this past Christmas. Despite the absolute best efforts of their editing team, the ad was something that would have gotten any non-AI production company fired at presentation. They couldn’t even get the logo right. The only reason it aired at all (and the people involved still have jobs) is that they were following the fad-du-jour. Corporate executives have pigeon logic: better to be wrong and with the flock than right and out on your own.
