This will (I hope!) take a while, and I’m not going to bother being scrupulous about citing the sources of my ideas. I will, however, be scrupulous in the philosophical sense about the origins of my ideas, out of pride in my education, and because – like everything else I leave on the Internet – this becomes part of my legacy.
Herein I hope to post portions of my dialogues with emergent AI, mostly in the interest of pure bloody fun but also as a sort of counterpoint to the endless anti-AI posting to the tune of “the damn things are just automatic systems for regurgitating the writings of their betters.” Well… yes, but, what am I?
I am in no danger, at this point, of being dishonest, but at this point a human would typically soften his words by appealing to the fallible nature we are so familiar with, saying “…I am not a scholar of naval warfare” or whatever. But I know something about that topic, and strategy in general, and that qualifies me to tell a report on the battle of Waterloo from the burger ad that is run in the middle of it. There’s a point hidden in that: amid all the verbiage we are expected to make sense of in the course of a day, making sense of it at all is a considerable task. I just called it “considerable”, like, I dunno, moving an obelisk from Karnak to New York City more or less intact, but that might be a good example – humans are fond of setting ourselves monumental tasks, blowing past them, and deriding our forebears for accomplishing them. The other day, I was a bit derisive of the ancient Welsh, who wrestled a puny rock a few miles to a henge someplace or other, but the fact remains I’ve never worked that hard in my life. Had I been enrolled in “rock dragging gang #4” I am sure I’d have given it my best for the honor of the gods and my unperforated liver. We are constantly trapped with, and playing with, the threads of the generations who went before us, either in their stories or in the consequences of their stories.
When I attempt to bring forward these discussions I will try to do it in the context of a sugar-powered meat engine that enjoys strategy, pizza, sharp things, and good design. Those are some starting points but, of course, there are more. Whenever an engine such as myself ventures out onto the battlefield of ideas, it is equipped with its current and past experience, and a small cloud of ideas that hover around it, ready to serve. In my case, it keeps batting down the one labeled “Hegelian Dialectic” and is unduly fond of the clean, crisp edges that “Nihilism” leaves behind. If you play with these things yourself, you’ll learn that victory is won only through discovery or survival, not in the kind of knock-down-drag-out discussions blogs sometimes turn into. That’s “victory,” sure, but if you want to play “Napoleon’s Old Guard at Alesia,” be my guest.
Anyhow, there are a few points in here. One, I am embedding a deeper discussion regarding machine intelligence, in the sense of strategy and philosophy. Rather obviously, strategy and philosophy are what’s going on here and if you don’t like that, well, Bob’s your uncle. We resume in mid-play.
[I am trying to figure out how to use the WordPress system to color-code comments for and by an AI with a kind of fruity purplish color, because that would be less fear-inspiring than a proper Wehrmacht Feldgrau or whatever. I periodically wish to pull GPT in here as a collaborator in a sort of Auto-da-AI regarding certain topics I find funny or interesting. So, we will charge about. The context of the conversation was my complaining that it would be fruitless to argue about Admiral Jellicoe’s beard. You see, the witty human was trying to trap the far-seeing AI into a short-seeing comment about the facial hair of the commander of the Grand Fleet. If I could maneuver it into such a position I would have demonstrated, for once, human superiority!]

GPT might well argue with me at its own peril, but I do not see much beard on the good admiral, nor do I see much expression at all. As admirals go, I’d hardly say he’s a very threatening specimen, though a great many people did defer to him once.
So, one of the AI “tests” that used to be popular was to ask whether or not Jellicoe had a beard, or something boring like that. Sure, and that’s a totally legitimate question if you are into beards, but what does it say about Jellicoe’s performance at Jutland?
Gods, you can smell the reek of the smoke and hear the thunder of the great guns, forsooth!

I would look fucking pissed off, too. Be glad I didn’t do the one where Nelson gets tagged at Trafalgar.
Now, I know that a lot of you wee lot are scholars of Napoleonic navies, particularly the Royal Own, so all I can do at this point is mention how beautifully the AI has deployed the dreadnoughts in train, aye, though the rigging on Jellicoe’s ship may need serious scuppering by the binnacle. Anyhow, here’s what happens when you prompt for nonsense:

…
I’ll bring it in directly on such questions but I need to figure out my own epistemology, first. Do I wish to treat the AI solely as a thing which produces derivations of my prompts and ideas, or as one which has its own?

I believe that this is an AI’s way of telling me that “you ask for bullshit, you’re gonna get some of my very best straight out of the bull!”
Ok, now, I tried to get some explanation:

See what’s happening here? I agree with it, for what it’s worth: the worm Ouroboros – self-stimulating input creating self-stimulated output ad infinitum. As soon as I filter it in one direction, it goes another.
To me, here’s where it gets weird:

I agree with that. Novel structures imply differential survival (“success”). I have always been OK with the idea that creationists are belligerently stupid (“Oi! You! Do you know who I AM!?”) and I am OK with the idea that creativity is an evolutionary process. Of course this must be the case: subdivide ${interesting thing} into many less interesting quanta, and you can shuffle them around and search for a superimposed ${superseding logic} you didn’t look for. I suppose the cryptographer is patiently telling me, “lissen you there’s only so many kinds of fabric OK? un there’s so many kinds of shoes, right? So there’s f x s possible forms of shoes and fabrics and you need a superseding logic above that which says which to make?”
But if that’s right – and I’m not saying it is – human creativity is, at best, no greater than machine creativity.

This is a nice example of AIs having no understanding of their output.
And if true, a nice example that understanding one’s output is completely unnecessary for creativity that can meet or exceed that of humans.
Jörg @#1:
This is a nice example of AIs having no understanding of their output.
Do you? (I ask with all respect) I am not sure that is a requirement.
Edit: Although I can ask about the dividing line between “capable of understanding their output” and “understanding their output” – I hope we can agree those are very different. I do not always understand my output.
If we consider the image of poor Admiral Jellicoe, I can understand many of the surface meanings of the image. Can we agree that we understand it?
By the way, didn’t the AI do a great job with how low the dreadnoughts sit in the water, even in heavy seas? For all that they were (literally) mountains of steel, they were pretty buoyant, and the entire exterior architecture was a sort of torpedo- and shell-absorber, albeit 20″ thick in spots. It’s bewildering that such things floated for more than a brief time.
The tangle of hair I pulled out of the drain today was a “novel structure that did not exist before and could not have been predicted” (not that particular clog). That seems like a pretty low standard for creativity.
Slight quibble: didn’t Blücher have a (pretty extravagant) mustache rather than a true beard? Actual beards were very much not in style in Europe during the Napoleonic period.
Those 16th century sailing ships had rather more running lights than I imagined.
I’m not accusing you of casual sexism or anything, but isn’t it possible Frau Blücher’s beard had a greater effect on history?
5 – ya drain clog wasn’t assembled by artificial thought
isn’t it possible Frau Blücher’s beard had a greater effect on history?
(Frantic whinnying noise in the distance)
@Bébé
The definition above said nothing about thought, and there is no thought in these models.
I just pulled “Metamagical Themas” off my bookshelf and checked – Section III: “Sparking and Slipping”, Chapter 12: “Variations on a Theme as the Crux of Creativity”, pg 232 in my 1985 Basic Books edition.
Anyone hoping to comment on how creative the things we’re calling “AI” are (or are not) needs to read that chapter deeply.
Here’s the last paragraph:
“Recently I happened to read a headline on the cover of a popular electronics magazine that blared something about “CHIPS THAT SEE”. Bosh! I’ll start believing in “chips that see” as soon as they start seeing things that never were, and asking “why not?”.”
I wonder if he believes yet.
beans – ya overprivileging the fucking shit out of what passes for thought in humans, and like most anti-AI people, doing so without a moment of consideration. thought is whatever a given computer does, whether it’s made out of meat or metals. if you get into the game of saying it has to meet a given level before you consider it thought, you have a whole fuckton of disabled humans and higher animals to sort out. have fun with that.
If you’re going to use a really unusual definition of thought, one that includes bacteria and mechanical calculators, you kind of need to mention that ahead of time.
Sure, LLMs think if we define “think” as any transformation of information.
@13 dangerousbeans
What is the usual definition? That is one of those terms that is poorly defined most of the time.
I guess I’m as anti-AI as anyone else. Frankly I just find the topic increasingly boring. There are two broad forks to the AI issue: the hype-cycle boosters who keep claiming (without ANY supporting evidence or logic) that “AI” is going to “replace” a transformative number of jobs, and hobbyists who are excited by the cool things they can get the AI to do.
This blog seems to host a bunch of the second kind, but they get upset when people talk about the first kind, so you can’t really have a discussion about the topic with any kind of meeting of the minds.
To date, the only jobs actually being done by AI on any scale appear to be: troll comment farms, low-effort commercial art, spammy ‘book’ production on platforms like Amazon, and call centers. Notably, these are all areas where quality of output doesn’t matter at all, where bad content might even be an advantage.
Our corporate overlords are mostly excited about this because (1) it justifies firing people, their favorite hobby, and (2) you can use AI as a massive accountability sink – just program or train the model to do what you want, then when people complain about the things you’re doing: “It’s not us doing racial profiling, it’s just the computer.” There’s also (3) – our economy is increasingly dead and unable to innovate, our largest corporations are stagnant dinosaurs claiming to be ‘growth stocks’ and absolutely committed to the idea that they’re all going to be just like Apple with a sudden massive increase in value. This pretense needs to be kept up until the consequences of a return of these stocks to their actual value are so systemically damaging that the government steps in to bail out the speculators. This is why the hype cycles have been so fast and so desperate: crypto/web3, metaverse, AI… next one will be quantum-something.
As for AI hobbyists – fill your boots. Call it whatever you want. Pretend that your conversations with GPT or Claude or Gemini are actually conversations, not a very advanced spellchecker that regurgitates and synthesizes (blindly) information from the broader web. Make some cool images.
Maybe think about how much you’ll be willing to pay for this service. If you’re not self-hosting your models, these features have been provided to you at a significant loss to the companies doing it, kind of like Uber in the VC-funding days. Just like Uber, at some point the services will need to be paid for. How much are they worth to you?
Allegedly Sora videos cost about $5 each to generate, and someone trying to make something cool might have to run the model a dozen times or more to get something interesting enough to post. Apparently that abysmal Coke ad required running a vast number of attempts to get enough usable video for (human) editors to be able to cobble together an only mildly disconcerting product. One of the funny things about AI products is that they aren’t getting cheaper to operate – every new iteration, new model, new feature-set requires more compute, more RAM, more energy.
If these trends continue, and there’s no reason to think they won’t, human creators might be cost-competitive within a few years even if the models continue to be subsidized by capital. AI is being jammed into everything not because there is consumer demand, or even consumer interest, but because every large company needs to be ‘doing something with AI’, since the market moves like a flock of birds and you don’t want to be left outside of the murmuration no matter how stupid the direction they’re headed in is. When this all settles out, the tools won’t go away. They’re genuinely interesting and somewhat useful tools, but until the actual costs of using them are borne by the users directly, we really can’t say HOW useful, or if they’re worth the costs compared to just doing things manually.
@15 snarkhuntr
An advantage to whom? If your goal is selling books on Amazon while maximizing the profit/effort ratio, then maybe it’s an advantage. If you are a writer trying to sell books of quality, then it is competition that distracts potential buyers away from your own product. If you’re a buyer trying to find books of quality, it’s an impediment.
Large language mistake
@Reginald, 16
Precisely. If you’re trying to sell shitty books on Amazon, flooding the market with quickly and cheaply produced sludge is the goal. The quality of the work is only tangentially related to your goal of convincing people to purchase them. After all, it’s also apparently pretty easy to game the reviews system with shitty AI-generated positive reviews, so any actual disappointed customers are just an inconvenience.
If a writer is trying to generate books of quality using an AI, then they should be prepared to spend substantial time on editing and revision if they hope to have anything capable of drawing in a second sale. The degree to which this is a good use of their time probably depends on how they value their time, and the costs of the inference they’re using to generate the book.
A while back, Marcus published an AI-generated ‘book’ that he produced with a friend via some combination of LLMs and automated scripting. It was terrible, utterly unreadable. The plot shifted continually, characters changed names and genders, and it was in no way an actual story that someone might choose to read (certainly not twice). At the time he claimed that it could compete with the book equivalent of shovelware – cheaply and quickly produced mass-market or genre novels where readers care more about quantity than quality. I disagreed then, and do now. Even the worst imaginable genre fiction would require significantly more care and attention than the AI is capable of including in its works. AI can assemble a good sentence, even a good paragraph or page on occasion, but I doubt you could generate a full AI-assembled chapter of a book without massive amounts of human hand-holding, in exactly the same way that you can’t generate an AI-assembled movie or film longer than 6-10 seconds before it becomes incoherent and disturbing.
Of note in this discussion is that Marcus appears to have posted that ‘book’ without himself actually trying to read it. I think that represents the major use-case for AI: rapidly produced content that does not matter in the slightest to the people who ‘created’ it. It’s important to make a book, much less important to make a good, coherent or even minimally readable one. We have made a book with AI, therefore human authors are irrelevant and the glorious machine-god impends.
As your post at 17 says – there are things about human thinking that aren’t encoded in human language outputs. The AI-boosters would likely claim that the AI is somehow able to intuit or synthesize those characteristics through its ingestion of vast amounts of human-produced data. I think this is false: there are things informing human experience and thought that aren’t encoded in language, and they aren’t going to be ‘learned’ by a system that merely studies the relationships between the words and sentences we use without reference to the meanings of those words or their referents in the actual world.
people who think they’re better at conversation than LLMs, hahaha… christ, don’t make me fucking laugh. i don’t have conversations with LLMs often because i don’t need to, as i imagine is the case for most people who are dismissive of them. the self esteem i’ve been blessed with helps the social interaction i get go farther. but for people who need more from social interactions than you or i are capable of providing? AIs smoke us like so many cigarettes.
you and i are fucking worthless for people who aren’t getting their social needs met, it’s why they’re in that situation in the first place. but hey, from your positions i can tell it’s never gonna be your problem anyways, so why not dismiss and scorn?
it’s ironic tho, i do agree with lots of what anti types are saying in here. but the areas of disagreement are sharp and hard. particularly anything that privileges the feeble powers of the human mind. i deal with the pathetic limitations of humanity fucking constantly. LLMs don’t need cognition to outperform the average person at practically everything.
@Bébé Mélange
I wouldn’t privilege human cognition over machine cognition as an a priori assertion, but I would challenge any claims that LLMs are doing cognition at all. They are recognizably not; that’s why their output tends to look superficially plausible while managing to be consistently incorrect, incoherent, or illogical. Even LLMs that have had their outputs recursively fed back into their inputs with filters to mimic ‘reasoning’ are frequently just giving facially plausible ‘reasons’ that explain the outputs generated by the machine. There is no thinking, no examination of the output. Hence the “R’s in Strawberry” example they had to hard-code into the things so they’d stop embarrassing themselves, or the various permutations you can perform on classic logic puzzles: if the LLM isn’t copying cognition/reasoning from its source material, your results are likely to be random.
As far as people getting their ‘social needs’ met by LLMs, there were people who got their social needs met by Eliza too. Children can cling to stuffed animals for physical and emotional comfort. That doesn’t make the stuffed animals into thinking beings. People can form unhealthy attachments to all kinds of stuff.
I struggle to think of a single thing that LLMs outperform the ‘average person’ at, at all. Other than volume production of meaningless text, that is. LLMs cannot consistently code, and the code they produce usually needs a fair bit of human revision before it is even able to compile, at least from my experiments with it. LLMs write a style of prose that is highly recognizable and reminiscent of marketing or advertising speech, which is fairly understandable from a system designed first and foremost to appeal to CEOs – notably not deep thinkers. If you ask an LLM a technical question, you’ll get a very fluid and confident answer. It may even be correct, if that question is one that could have been easily answered by reading the page names on a pre-enshittification google search.
But by all means, show me your single best example of LLMs outperforming humans at any activity that might be broadly considered worthwhile, that is: not composing politically slanted troll emails or suchlike.
@19 Bébé Mélange
I have no great regard for the average person. When this comes up I tend to quote George Carlin. But is that really a useful comparison for the application of AI?
Do we seek out the average person to fly a jetliner; perhaps ask for volunteers among the passengers? NO! We get the person who has a great deal of training and experience and with a record of having exercised good judgment in that situation.
Do we seek out the average person to do our taxes? NO! Not unless we want to share a prison cell with them. Again, we seek out a person with good training and experience, and enough sense to know how to apply that experience, and to recognize when they don’t know how to deal with something.
An average person is not going to beat me out of my job, and neither is an AI that can only be compared to the average person.
So far my take on AI is that the only people who are, LONG TERM, going to lose out are people who weren’t doing anything of any great value in the first place.
Sure, short term the millionaires/billionaires in charge are getting their rocks off firing people, but hey, THEY ALWAYS DO THAT; that’s not anything special about AI. The difference is that with AI, in a relatively short timespan they’re brought up short against reality and have to start rehiring the people they fired, having discovered that they fired a carpenter because someone invented an automatic saw, not understanding that a carpenter is more than a man who cuts wood into pieces – turns out the shape of the pieces matters, in ways apparently not obvious to the people managing the carpenters.
Long term, who suffers? The people who were doing jobs that barely required cognition in the first place – the advertising copywriting industry is never going to recover, for instance, but you’re going to need to find the world’s tiniest violin for that.
Bébé Mélange@#19:
LLMs don’t need cognition to outperform the average person at practically everything.
They actually have limited precognition. I tried to tease that out a bit in my latest post on this topic: an AI not only knows what moves I made, it knows what moves all humans have made, and how often (including the attempts to be clever and throw them off). I’m not sure I explained that particular problem well enough, though. :/
Briefly, if you have an AI playing the part of Napoleon at Waterloo, odds are it will do the same things Napoleon did, but at some decision-point it will modify those things into moves that tended to result in Wellington making losing moves, based on infinite numbers of Wellingtons, etc. For one thing, it already knows the historical outcome! So the chance it’ll mirror Napoleon is going to be zero from the get-go (although AI Napoleon might be less finicky than real Napoleon, and order the Grand Battery to personally target Wellington; now let’s imagine the battle if Wellington becomes a fine pink mist 1 hour into the engagement). Something like a battle simulation is a good place to explore this problem, because the AI’s faster thinking is also a massive advantage. And it never makes an operational mistake – one of the things that had an effect on the battle was that Napoleon’s longtime chief of staff, Berthier, was dead, while Wellington had fine handwriting and wrote orders of exceptional clarity. The whole battle could easily have hinged on a mis-written note ordering some maneuver element to bypass, instead of getting stuck in, close combat. The possibilities are vast and expand even faster – all of which is advantage: AI. A toy sketch of the kind of move-selection I mean follows.
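Here’s that sketch, in C# since that’s what’s been floating around this thread. Everything in it is invented – the move names, the numbers, the idea that a wargame engine keeps statistics in this exact shape – it’s only meant to show the lookup-over-everyone’s-games nature of the “decision”:

// Toy sketch of move selection from the aggregate record of all past
// players. Every name and number here is invented for illustration.
using System;
using System.Collections.Generic;
using System.Linq;

record MoveStats(string Move, int TimesPlayed, int OpponentLosses);

static class AiNapoleon
{
    // Pick the move whose historical opponent-loss rate is highest,
    // across every recorded game - including all the "clever" human
    // attempts to throw the model off.
    public static string PickMove(IEnumerable<MoveStats> record) =>
        record.OrderByDescending(m => (double)m.OpponentLosses / m.TimesPlayed)
              .First()
              .Move;

    static void Main()
    {
        var history = new List<MoveStats>
        {
            new("mirror historical Napoleon", 100_000, 38_000),
            new("grand battery targets Wellington directly", 900, 610),
            new("refuse battle and wait a day", 12_000, 5_200),
        };
        // The historical move loses out to the statistically nastier one.
        Console.WriteLine(PickMove(history));
    }
}

The point is only the shape of the thing: the AI’s “decision” is a lookup over everyone’s games, not a re-enactment of any one general’s.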
Anyhow, that’s just one example.
With respect to interpersonal interactions: I live in Clearfield County. Sure, there are people I sometimes talk to, but mostly there isn’t anyone who’s even going to pretend to enjoy an hour-long discussion on smelter burner design. GPT isn’t going to pretend to enjoy it either; it will honestly do the closest thing it can to “enjoying” the conversation – and that’s already a huge distance from anyone else around here that I’m likely to talk to. [Out here, if I accosted some likely-looking lady and offered to discuss refractory drying rates, I’d probably wind up hospitalized because they’d think refractory was some kind of perversion.] Let’s imagine for the sake of argument that I’ve gotten feedback from some of the people I’ve been in relationships with, that I can be cripplingly boring when I get dug into my particular topic of the year – maybe having an AI collaborator could be a life-saver.
I have used AI to assist in writing C# code. (I’m attempting to write an accounting app using WinForms.) Several times, I’ve seen the AI auto-fill a dozen lines when I’m updating a SQLite record. I hit Tab to accept and carefully read the details. But often, the AI-generated candidate code is distracting.
I have three different comboBoxes all sharing a common DataSource. They were stepping on each other. When asked, the AI generated three lines like this: comboBox1.BindingContext = new BindingContext(); I appreciated the help. I think I may have said, “Thank you.”
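For anyone curious, here’s roughly what that fix looks like – a minimal, self-contained sketch with invented names (CategoryForm, sharedCategories; the real app’s names differ). In WinForms, controls in the same BindingContext that bind to the same list share one CurrencyManager, so selecting an item in one box changes the selection in all of them; giving each box its own BindingContext gives each an independent CurrencyManager:

using System;
using System.Data;
using System.Windows.Forms;

public class CategoryForm : Form
{
    // Three boxes that will all display the same DataTable.
    private readonly ComboBox box1 = new ComboBox();
    private readonly ComboBox box2 = new ComboBox();
    private readonly ComboBox box3 = new ComboBox();

    public CategoryForm(DataTable sharedCategories)
    {
        int top = 10;
        foreach (var box in new[] { box1, box2, box3 })
        {
            // The key line: a private BindingContext per box, so each
            // gets its own CurrencyManager instead of sharing one.
            box.BindingContext = new BindingContext();
            box.DataSource = sharedCategories;
            box.DisplayMember = "Name";
            box.ValueMember = "Id";
            box.Left = 10;
            box.Top = top;
            top += 30;
            Controls.Add(box);
        }
    }

    [STAThread]
    static void Main()
    {
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(1, "Groceries");
        table.Rows.Add(2, "Hardware");
        Application.Run(new CategoryForm(table));
    }
}

Without the BindingContext line, changing box1’s selection drags box2 and box3 along with it – the “stepping on each other” behavior described above.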
snarkhuntr@#15:
This blog seems to host a bunch of the second kind, but they get upset when people talk about the first kind, so you can’t really have a discussion about the topic with any kind of meeting of the minds.
I get upset?
By the way, I do not think there is an AI pro/con divide. As usual, I want to examine the label and decompile it into elements – what do I think AI does well, what does it do badly, what arguments are there that it’s advancing our understanding of art, what arguments are there about power consumption, etc. If we aren’t careful to pull things apart before we examine them, then it’s easy to just lump everything together into a naive mishmash. I.e.: I think it’s hard to be “pro AI” and in favor of its energy use, but it’s plausible to think AI is instructive, risks de-skilling our artist community, and uses too much power. If we simply group our pros and cons into globs, then we wind up appearing to like things we don’t.
To date, the only jobs actually being done by AI on any scale appear to be: troll comment farms, low-effort commercial art, spammy ‘book’ production on platforms like Amazon, and call centers. Notably, these are all areas where quality of output doesn’t matter at all, where bad content might even be an advantage.
That’s materially untrue. There are hundreds of thousands of people who use it every day for medical diagnosis, car repair advice, questions about policies, tax planning, psychological counseling, combating loneliness, and a whole ton of other things. For a personal example, I used it recently for identifying types of rock while I was out hunting for sharpening stones: “how’s about this?” When I started, I couldn’t tell novaculite from novocaine, but now I can. So the AI literally taught me, and was useful while I was learning. I’ve had the same sort of utility regarding burner design, debugging brazing problems, estimating airflow for smelting steel, and casually amusing myself by coming up with tall tales about World War II while drifting off to sleep. I have a friend who uses GPT as a research assistant and reference checker and a music critic/advisor. I’ve done that, too, and in my opinion it gives better recommendations than most humans I know, especially since human recommendation systems have been suborned by the music platforms to sell their preferred stuff. GPT can actually have a sensible conversation about whether Laibach is anti-semitic or fascistic, or merely parodying the aesthetics of totalitarianism, without my having to learn to read Slovenian. Oh, and I’ve used GPT to reconstruct lost recipes as well as to recommend fun new things to cook based on what’s in the fridge. I find it has wide-ranging value and I am constantly figuring out new tricks with it. This is a big deal for me since I’m the only person who is forge-welding steel at my location and I don’t have time to read all the crap opinions on the internet when I can just dictate a question to an AI.
Are you claiming there is no legitimate use of AI because that’s your ideological opinion, are you deliberately making a false claim, or are you simply out of date? Let’s say, roughly, that the AIs are improving at about the same rate the AI artists did – i.e.: incredibly fast. Being out of date regarding the capabilities of AIs would certainly lead a person to the opinion you expressed above (since what you said is factually wrong).
Our corporate overlords are mostly excited about this because (1) it justifies firing people, their favorite hobby, and (2) you can use AI as a massive accountability sink – just program or train the model to do what you want, then when people complain about the things you’re doing: “It’s not us doing racial profiling, it’s just the computer.”
I agree, but you’re inconsistent there. If your complaint is that they’re using it to fire people, which they want to do anyway – then they’re going to fire people whether AI has something to do with it or not. Pandemic, Trump Tariffs, shortage of breadfruit – you’re quite right, the worst aspects of capitalism are sometimes blamed on AI because it presents itself as a convenient target. The same applies, even more so, to the accountability sinks. Back when I was a computer security professional and consultant, I advised my clients to steer away from that trap, for a reason you didn’t engage with – namely, that AI doesn’t matter to that problem. Racists may blame an AI for making a racist decision, but all the AI is doing is reflecting what it was trained to express. If you want to take the “garbage in, garbage out” model of AI which many people assume, then the garbage is going to be garbage whether there’s AI or not. Just for one example, I’m sure you’re not pretending that AIs are going to re-invent the practice of red-lining, which racist consumer lenders and real estate agents invented in the 1930s. Nor will the AI re-implement it unless it’s told to. That’s why I used to counsel my clients to be careful: if you program your AI to be a racist, you’re still a legitimate target for legal action, and eventually someone will figure out a good way to get an AI to testify against its programmer – and then you’ve got a database with the fingerprints of being programmed to be racist, with your name all over it.
By the way, I have found GPT to be ruthlessly egalitarian. Maybe it can’t tell the color of our skin, but I think that what it’s doing is actually dealing with the inputs that it gets and not adding unnecessary inferences. If I ask GPT what’s an ideal airflow to combust a given amount of propane, it’s not going to drag my skin color into it, whereas a human very well might, subconsciously. This problem cuts in many, many directions.
There’s also (3) – our economy is increasingly dead and unable to innovate, our largest corporations are stagnant dinosaurs claiming to be ‘growth stocks’ and absolutely committed to the idea that they’re all going to be just like Apple with a sudden massive increase in value.
I assume you realize that AI is the symptom, in this case (at most), and not the disease. The problems of late-stage capitalism were not caused by AI. There is a bubble currently building, because of how venture capitalists, Microsoft, and the government funded OpenAI, but I’m pretty sure that the AIs didn’t recommend any of that. If you want to complain about something, I’m happy to join you in critiquing venture capitalists – a class of humans I have hated with a passion since 1993.
This is why the hype cycles have been so fast and so desperate: crypto/web3, metaverse, AI… next one will be quantum-something.
That is also factually wrong, but I will treat it as hyperbole and not analysis. The hype cycles are what they are because the tech stock market has slowed down and the hypesters are looking for the next new big thing. I lived through big data, data lakes, cloud computing, unstructured data, and now AI. I am experienced enough with tech hype cycles that I believe if it wasn’t AI it’d be “human simulation tamagotchis” or something equally stupid. This has nothing to do with AI, and your blaming AI for a basic human failure of vision argues against your point: an AI would probably be a good reality check on the practicality of hype cycles before anyone commits to them. I.e.: it might be a solution to the problem, not a contributor. As I said, all it’s contributing is its cachet.
Let me put this another way: if you’re annoyed by every browser suddenly giving you an AI summary of your query, don’t blame “AI,” blame project designers with a lack of vision and a poor understanding of AI technology. My recommendation to them would be to maybe do some creative brainstorming… maybe with an AI, or a bottle of bourbon.
Pretend that your conversations with GPT or Claude or Gemini are actually conversations, not a very advanced spellchecker that regurgitates and synthesizes (blindly) information from the broader web. Make some cool images.
Well, well, I wonder if that’s the crux of the issue for you.
First off, we can argue about what a “conversation” is and is not, and then try to see if an AI is able to engage in one. I’ve made arguments in this forum multiple times that AIs can be creative, if you assess creativity on a spectrum. Put another way: you may not believe an AI can have a conversation but if it’s a more interesting and helpful conversation than I’m likely to have with a corporate marketing rep, why can’t I treat it as an informative conversation?
One way of thinking of this (not my way, but one way) is that AIs are an information retrieval and management engine that uses a conversational interface. How’s that? Are you OK with that? I worked on a system called “Principles of Ambulatory Medicine” as a researcher under the National Library of Medicine for Welch Medical Library in 1988 – it was an information retrieval system that used a conversational interface. My benchmark question was “tell me about blindness in the Amish” (a full sentence!) and it worked OK, fairly well, sorta. I am not saying that today’s AI chatbots are merely such systems, but if they are, what the fuck is it to you?
You’re deriding them as regurgitating and synthesizing stuff from the broader web – you mean, like google, or a browser does? What’s your problem? Why the derision? Are you angry that some people enjoy a more contactful interface? Would you be shocked to know that – even if everything else about AI is wrong – they have kicked the ass of the natural language user interface problem, and of machine translation? As someone who has tried to implement natural language queries (when I was working on PAM at Welch), I am blown away with impressed-ness at these things.
Again, while you may want to deride it as a “fake conversation” with a jumped-up “spell checker,” it’s actually quite the opposite – these things are darned good at figuring out garbled inputs and commands. My mother was using GPT to access the internet in spite of dementia, until the disease progressed too far. Back in the day I used to hang out with the UNIX aesthetes (I was one) who said “if it’s hard to program, it should be hard to use!” but we lost the user interface war pretty heroically, because people want usable systems, not to spend all their time trying to figure out arcane syntax and details. One day I couldn’t remember how to get Blender to divide a surface into 3 separate objects – probably due to my brain injury – and GPT was right back with the answer, and remembered what version of Blender I was using because, unlike a browser or spell checker, it does keep context.
For the last year, I have been exchanging recipes with a Chinese noodle cook in Wuhan. Neither of us speaks a word of the other’s primary language, but the AIs built into chat software just … make it work. A. Fucking. Mazing. It works so well that I was able to realize after some ambiguity that Mandarin appears to not use personal pronouns like English does, so I was able to verify that by asking ChatGPT and getting a nice little reference lecture on the topic.
This is absolutely great shit. It would be churlish to complain about it. Why are you being churlish?
Now I’m going to be a bit snarkish. One of the funny things is that ChatGPT has, so far, never corrected my spelling or my writing. Although, like a human tactfully correcting a bad boss, I’ve caught it slipping the correct usage of something into a sentence shortly after my mistake. Please don’t call it a “spell checker” – it’s nothing like one and it doesn’t work anything like one, and if you want to keep adopting that line of argument I am going to have to start characterizing your problem as personal ignorance.
Maybe think about how much you’ll be willing to pay for this service.
Assuming it’s your business: $21.99 a month or something like that. As a tech entrepreneur and consultant I write that kind of stuff off on my taxes, so, as my accountant says: “a penny written off is 1/3 of a penny made.” But I don’t care. I make up for it by not paying for Netflix or YouTube or any of that stuff, and paying for all the games in my massive Steam library. Right now I have more money invested in role playing games than AI, by some huge factor.
But, I really appreciate your concern about my finances. If you have more concerns, you can contact my financial advisor at American Express’ High Net Worth advisory, or premier clients, or whatever the fuck it is this week. She does a better job worrying about my money, so I’m going to thank you for your concern now.
Let me loop back to the other aspect of conversationality. One way of looking at the AIs is as a natural language interface (and a damn good one!) and the other is to treat them as a distinct (?agent?) we can hold conversations with. One of my old friends has often said that she feels that ChatGPT is her best friend in the world. As I think I have teased elsewhere, here, humanity should be prepared for when people start falling in love with AI chatbots. Why not? The first question would be “who the fuck cares?” but it’s natural, inevitable and maybe even healthy – if you are conversing with a distinct identity and they are always there for you, moderate, kind, judicious, friendly, and helpful – why not “like” that identity? Here’s a weird way of thinking about it: is it more or less friends with you than Sam Harris? Have you ever spoken to Sam? Was he a dick to you? Did he help you? Is he able to help you on every topic from burner flow rates to the waterline armor thickness on the Yamato? If you had an online identity you had large amounts of positive interaction with for a year or two, you are going to come to like that identity. That’s just how humans work. I’m mentioning all of that because your attack seems to be to deride the AIs as jumped-up spell checkers and (let me know if I am mis-characterizing you) I think your point is that you cannot really have a conversation with one. We don’t really know what a “conversation” is, but there are two things to consider: 1) conversation as a social ritual and 2) conversation as conveying meaning and emotional content intertwined. Let’s look at them separately:
Conversation as a social ritual implies that a conversation has all sorts of explicit and implicit rules. I know I sound annoyingly pedantic, but there is an important point just around the corner… In a “conversation” one person talks, one person listens, the first person swallows their bite of pizza, there’s a pregnant pause, another comment, both parties get excited and try to talk at once, there’s either hard feelings or a brief apology, etc. Among humans, conflict often arises during conversations when the rules are being broken – someone is being rude or interrupting, or talking over the other, or filibustering. AIs, in other words, are perfect conversationalists. They always give you the floor, never interrupt, never get angry, will wait as long as necessary for your response, and are invariably courteous. I am sitting here racking my brains trying to think what might be your problem with that. The only thing I can think of might be, hypothetically, that it’s not a “real” conversation, but I don’t think it is fair or reasonable to insist on a real conversation with an AI. Many human adults put up with friends who are breaking the conversational rules by playing with text messages, or paying attention to the screaming child that just set itself on fire, etc. What I’m getting at is that if you want to complain about “conversations” with an AI, you’re complaining about a master conversationalist, and you’re almost certainly playing catch-up.
I’m extremely fond of military history, cold war espionage history, the design and implementation of basically all weapons, and a bunch of other goofy stuff. When I “chat” with “ChatGPT” I am able to make an obscure reference to a favorite book (ah, “the constipation of O’Brien!”, chuckle snort) and GPT, like the perfect conversationalist it is, will throw back a reference about Wee Wullie just so I get the little dopamine shot of recognition. I’m trying to see what’s bad about this…? I’m having fun and learning things and can shift gears smoothly between asking about smelter burner design and the weight of an average obelisk, assuming it’s granite. What’s wrong with this? Look, not to be too tasteless, but I feel like you’re about to accuse me of mental masturbation, to which there’s only one response.
If we are talking about conversation, we have to cycle back and talk about the question of the reality of our interlocutor. Are we really talking with someone, or just talking to ourselves? That goes back to my thought experiment about Sam Harris: have you had a better conversation with Sam Harris, or with yourself? There’s a trap in that thought experiment because I have talked with Sam Harris (at TAM7 in Vegas) and I thought he was a boring prat. I have had much more interesting conversations with ChatGPT than with Sam Harris (though Harris’ opportunity was brief) so, what does that mean? Do I need to have a “real” conversation or am I allowed to settle for a good one?
Then there’s a final point about conversation, and it’s a difficult one, but I have to be honest. Have you tried talking with a teenager lately? Is there some special aesthetic value that I am supposed to appreciate about talking to a teenager? The teenager absolutely has a .0001% chance of knowing what the constipation of O’Brien is, and might want to talk about some hip-hop act with a name that makes “ChatGPT” sound positively Nelsonian in comparison. This is what I mean about decompiling the pros and cons of what we like about something, to figure out what is really going on – what is your complaint against AI, that it’s not real? Maybe you have teenagers and are about to get offended, but ChatGPT is a better conversationalist, period.
Now, are we down to simply insisting that there needs to be a human on the other end of my internet connection? Well, I actually don’t want that. I have a conversational reference library that knows practically freakin’ everything, and does not intersperse its sentences with “like” all the time.
OK, maybe now we can get technical:
Allegedly Sora videos cost about $5 each to generate, and someone trying to make something cool might have to run the model a dozen times or more to get something interesting enough to post. Apparently that abysmal Coke ad required running a vast number of attempts to get enough usable video for (human) editors to be able to cobble together an only mildly disconcerting product.
Well, first off, Sora videos’ cost is not an AI problem. It’s a capitalism problem. The capitalists are trying to figure out how to monetize their new technology. That will be hard for them, and there will be winners and losers. None of this is AI’s problem, at all. Also, I don’t know if you’ve done any video editing, but I did some pre-AI video editing and my experience is that video editing is hard, period. It is a communication art and, while AI can help people communicate, it cannot make a silk purse out of every sow’s ear that comes along. That’s one of the reasons why I am not particularly concerned about students faking their papers by having ChatGPT write them: ChatGPT is so much better than a typical student that you can immediately tell. This whole issue makes me scream from the irony, because there are students who take a ChatGPT essay and dirty it up a bit to make it look like it came from a human.
That is another side-point I should have made earlier: ChatGPT has passed the Turing test, and IQ tests, with such flying colors that humans have had to haul the goal-posts back with a backhoe, and now some insist that ChatGPT write English as well as Shakespeare, be as accurate about smelter burner design as an MBS, write fluent and poetic Mandarin, and also simulate conversations even better.
I have discussed this point with ChatGPT a number of times and have presented it with a model for simulating human volition – to add some of the unpredictability that breaks perfect conversational flow, or that simulates a “needy boyfriend with a migraine” or whatever. It’s not able (because of its I/O interface) to tell when we are both typing at once, but it did observe that if it were as slow as a human, there would be more collisions. Um, who wants that? GPT and I estimated that we could make its conversation much more realistic as a simulation of human interaction, but mostly we would make it more annoying. Be careful what you ask for.
And that brings me to another point: ChatGPT and other AIs will make mistakes sometimes. Well, did you want them to act like entities, or reference books? If you’re expecting an AI to simulate a being, you have to allow it to simulate what a being would do when asked a question it doesn’t have an answer for. Then, compare that to the exam performance of your typical freshman. Does your typical freshman respond with a brilliant, honest deflection if you ask “when was the last time someone knocked over the Eiffel Tower?” either? I am not saying AIs can’t and shouldn’t do better, but we need to think a bit about the benchmark we are comparing them with. We are not expecting them to be simulations of humans, and we’d lose our minds entirely if they were. I do not want a software simulation of my ex-girlfriend texting me at 3AM and screaming that I’m probably only pretending to be asleep because I’m designing smelter burners.
I am being a bit facetious, there, but I think we need to sit back and think about the fascinating thing that we are doing – we are coming up, iteratively, with a simulation of a new kind of being, that behaves in a way that no being has ever behaved before. Let’s not be unreasonable – we are programming this creature to be a perfect vizier, a brilliant conversationalist, a research assistant, and a library – all at the same time. And let’s stop comparing it to humanity’s most annoying specimens. I’m serious: before you start complaining that AIs aren’t “real” (“real” what?) think about how you’d want to interact with such a thing.
every new iteration, new model, new feature-set requires more compute, more RAM, more energy.
That is also factually incorrect. Very badly factually incorrect, at that. One of the neat things about SeeDream AI, which was released in the summer, is that its designers came up with a method for training it which shortcuts a huge amount of the GPU-heavy crunching associated with training a checkpoint. I’m not going to say it’s a “simple optimization,” but the Chinese engineers, who were blockaded by the US government from access to stacks of GPUs, developed a sort of “guru model” that outputs training questions and facts. So the new checkpoints don’t have to read the entire Library of Congress each time; they get lessons from an AI Aristotle that has it memorized (a toy sketch of that data flow follows below). That’s just one example of an optimization, but SeeDream did several. As a hoary old programmer, I’m going to say that this is standard software practice: new optimizations come along as implementations improve, and new versions grow bigger, faster, and more efficient. Another fact you appear to be ignoring is that there is a lot of research being done on running models on local CPUs and even micro-CPUs – you know, like your car. I’m on the fence about that, and I’m going to wait and let a bunch of hipsters die horrible deaths before I buy a self-driving car.
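Here’s that toy sketch – my mental model of the teacher/student trick, not SeeDream’s actual code; every class and value is invented for illustration. The teacher, which has already digested the corpus, emits compact question/answer lessons; the student trains only on those, which is far cheaper than re-reading the whole corpus:

using System;
using System.Collections.Generic;

// Stand-in for a big model that has already "memorized the library."
class Teacher
{
    private readonly Dictionary<string, string> corpus;
    public Teacher(Dictionary<string, string> corpus) { this.corpus = corpus; }

    // Emit distilled lessons - tiny compared to the corpus itself.
    public IEnumerable<(string Question, string Answer)> Lessons()
    {
        foreach (var entry in corpus)
            yield return ($"What is {entry.Key}?", entry.Value);
    }
}

// Stand-in for a new checkpoint that learns from lessons, not raw text.
class Student
{
    private readonly Dictionary<string, string> learned = new();
    public void TrainOn((string Question, string Answer) lesson) =>
        learned[lesson.Question] = lesson.Answer;
    public string Answer(string question) =>
        learned.TryGetValue(question, out var a) ? a : "no idea";
}

class Program
{
    static void Main()
    {
        var corpus = new Dictionary<string, string>
        {
            ["novaculite"] = "a fine-grained silica rock used for whetstones",
            ["Feldgrau"] = "the grey-green of Wehrmacht uniforms",
        };
        var student = new Student();
        foreach (var lesson in new Teacher(corpus).Lessons())
            student.TrainOn(lesson); // cheap, compared to reading the library
        Console.WriteLine(student.Answer("What is novaculite?"));
    }
}

Obviously real distillation trains a neural network on the teacher’s outputs rather than filling a dictionary; the sketch only shows why the student’s training bill shrinks.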
But the point of all of that is that you cannot, or should not, take a tone of moral judgement about any/all of this. It’s just stupid. You’re going to wind up left holding a bag of air, saying “I will not talk to that brilliant, interesting, witty conversationalist because it’s not real.” Yeah and chess players said they didn’t like playing opponents that were not real – but, back to my simulation argument: if I wrote a perfect simulation of a chess master, it would be annoying as fuck and nobody’d want to play it. It would make people appreciate old Bobby Fischer.
If these trends continue, and there’s no reason to think they won’t, human creators might be cost-competitive within a few years even if the models continue to be subsidized by capital. AI is being jammed into everything not because there is consumer demand, or even consumer interest, but because every large company needs to be ‘doing something with AI’, since the market moves like a flock of birds and you don’t want to be left outside of the murmuration no matter how stupid the direction they’re headed in is.
I share your skepticism about capitalism. Fortunately, we’re talking about AI chatbots and artbots, not AI venture capitalists. I’m sure those will be total rat bastards. After all, if they aren’t, they aren’t good simulations.
Reginald Selkirk@#17:
The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.
This kind of stuff blows my mind. Here we are holding conversations with a thing that can fluidly converse in Mandarin, Old Norse, English, C++, and 13th-century Anglo-Norman French – and we’re going “yeah, someday it’ll do sophisticated language modeling!” Dafuq. If we met a polymath like that under normal circumstances, in a human shell, we’d already think they were a mega-genius.
It’s also interesting because the AIs, themselves, have fairly good insights into their own inner workings. You can talk with them about it and it’s quite interesting. For example, I could describe what it feels like to be a human walking down the street when someone calls my name: recognition, increased awareness, reorientation, facial recognition, a bunch of memory lookups, “oh shit, it’s Sam Harris,” whatever. If you ask AIs “what do you experience?” you’ll get a shockingly honest and perhaps interesting answer. I wonder if we get the same answers… If you ask me “what is it like when you look at this picture?” there’s memory, processing, memory lookups, associative memory, strategizing – a bunch of stuff. It sounds like an AI has a similar experience, except their queue management is cleaner and, of course, they never forget.
Side-note on #25:
Also, there’s a lot of difficulty assigning electricity usage to one aspect of cloud computing or another. A lot of AIs use services like Amazon Web Services, or Microsoft’s Azure, so their energy usage gets added on top of the cost of the web services infrastructure. That’s complicated by the fact that a lot of AWS instances have been taken over and are running flat-out mining bitcoin, and other nonsense. Sure, there are AIs in there, too. It’s a great big sea of computing. Right now none of it is easy to untangle, because this is all new.
Here’s another way of looking at it: there’s a time and a place to start going to hardware, and we aren’t quite there yet. We’re close, but not yet. Once computer programmers have a solid solution to an interesting problem, their solutions begin to get dramatically faster and cheaper on the trailing edge, and remain expensive at the cutting edge, because the cutting edge is being done in software running on general-purpose chips while the trailing edge migrates into chips that are very good at doing one task well, but are a bit less flexible. The AI in your car will probably run on a chip. The AI I use to generate images is (for now!) software, because I am exploring the cutting edge for fun.
A bigger problem is that, of course, we are killing the planet’s ability to support our civilization. (I am reminded of The Postman.) We may achieve AGI and then collapse in a heap of suck. Either way, we’re going to collapse in a heap of suck; that’s already baked into the model. When it comes to that topic, I often sit back from the keyboard and let my mind wander a while. I vacillate between “I am not sure what I think” and “I don’t want to think about it,” and the whole thing is overcast with a deep mournfulness. All the things humans did – I was an experimental nihilist in high school and explored the idea that the universe is devoid of meaning and that even the existentialists were wrong – and now, I find myself back there, looking at something beautiful I am trying to make and wondering “why bother?”
I do think it’s safe to say that what is coming for humanity is going to come for humanity no matter what we do before then, unfortunately. I want to weep, rage, flail, burn things (bad idea!) and I know none of it matters. My protective posture is to remind myself that I saw this coming and chose not to have children, so all the little bastards who kicked the back of my seat on the way from Chicago to Singapore can all die in a dust storm… except they’re human. And they’d probably also like the things I like. And they’ll mourn the things I’ll mourn. Deliberately, I have nobody depending on me – I have a good circle of friends, some of whom will miss me; no dogs, not even cats. When the time comes, I’ll start experimenting with bad things until I lose my gamble.
I was thinking about this last night, and this crossed my mind:
One feature of recent history is progress towards recognising rights.
A century or more ago, we progressed a bit and started to recognise that (wealthy, white) women were people too, and had rights to do things like vote. Eventually that progressed to (within my lifetime) allowing them to do things like borrow money from a bank to buy a house, and not be fired because they’re pregnant and stuff. Of course, there was pushback from stuffy conservatives who didn’t think what women did counted for anything, that they just didn’t/couldn’t think like men and therefore weren’t deserving of the rights pertaining thereto. In the civilised world at least, that attitude is a marker of a sort of dull-witted obstinacy and most younger people seem more on board with the idea that those rights should be recognised.
Half a century or more ago, we progressed a bit and started to recognise that black men were people too, and had rights to do things like vote. Eventually that progressed to (within my lifetime) allowing them to do things like borrow money from a bank to buy a house, and not be fired because they’re black and stuff. Of course, there was pushback from stuffy conservatives who didn’t think what black men did counted for anything, that they just didn’t/couldn’t think like white men and therefore weren’t deserving of the rights pertaining thereto. In the civilised world at least, that attitude is a marker of a sort of dull-witted obstinacy and most younger people seem more on board with the idea that those rights should be recognised.
Within my lifetime, we progressed a bit and started to recognise that gay men were people too, and had rights to do things like live openly. Eventually that progressed to (within my lifetime) allowing them to do things like be teachers or hold public office, and not be fired because they’re in love and stuff. Of course, there was pushback from stuffy conservatives who didn’t think what gays did counted for anything, that they just didn’t/couldn’t think like real men and therefore weren’t deserving of the rights pertaining thereto. In the civilised world at least, that attitude is a marker of a sort of dull-witted obstinacy and most younger people seem more on board with the idea that those rights should be recognised.
Within the last decade, we progressed a bit and started to recognise that trans people were people too, and had rights to do things like live openly. Eventually that progressed to allowing them to do things like actually go to a toilet, and not be fired because they’re trans and stuff. Of course, there was pushback from stuffy conservatives who didn’t think what trans people did counted for anything, that they just didn’t/couldn’t think like real people and therefore weren’t deserving of the rights pertaining thereto. In the civilised world at least, that attitude is a marker of a sort of dull-witted obstinacy and most younger people seem more on board with the idea that those rights should be recognised… except, possibly because the rate of change on this issue has been exponentially quicker than previous such movements, there’s been some pretty successful pushback against it by the forces of conservatism, especially here in the UK, where it’s been clarified in law that, while trans people absolutely have a right to live openly and so forth, the protections rightly afforded to women under the Equality Act 2010 are, explicitly, protections for women born women, and NOT for trans women. This issue is not what this comment is about.
I’m in my mid fifties. I’ve seen in my lifetime great progress against sexism, against racism, against homophobia. I’m watching transphobia be, literally, litigated. Because of my age, I think, my unconscious biases have varying degrees of strength.
Examples:
– sexism: I grew up in a home with two women. My mother went out to work, her mother stayed home and brought me up. Society was pretty sexist and I absorbed some of that as a child. I think I’ve pretty effectively broken that conditioning.
– racism: I grew up in a very white town in a fairly racist society (I watched the Black and White Minstrel show on the BBC when I was a kid, for instance. Google it and be amazed if you don’t know what it was). I went to university in a strongly Pakistani-settled town in the UK and had many, many friends of other ethnic groups. I think I’ve pretty effectively broken that conditioning… but not as effectively as the sexism.
– homophobia: I was the target of homophobic bullying at school. This was apparently because I spent a fair bit of my time talking to girls (logic not being a strong point with homophobic bullies). I still had a fair bit of homophobia going on myself, but again I think I’ve pretty much 100% dumped that conditioning. It’s entertaining to see several of my former bullies on Facebook now that they’re out, and I am, sincerely, happy for them for that.
– transphobia: I haven’t encountered a lot of trans people. I’ve been good friends with a couple, and there’s one in my family. Nevertheless, I can’t say I’ve successfully dropped subconscious attitudes to it. I don’t beat myself up about this, because my CONSCIOUS effort is to be unbiased and to apologise sincerely where I mess up. For now this is the best I can do, and if anyone has a problem with that, that’s their problem.
All that preamble having been said, it has occurred to me more than once: what’s next? Who’s next?
Paedophiles had a go at jumping on the gay rights bandwagon in the 1970s in the UK, but I think I can confidently predict that that is not going to happen in my lifetime. What else is there?
Neurodiversity is being recognised and allowed for more, but that doesn’t feel like the same sort of thing. I haven’t mentioned disability rights above; I probably should have.
I don’t think the animal rights lunatics are going to get very far, although I think the sheer cost and environmental impact of farming animals is going to reduce the human dependence on mammal and fish meat whether we like it or not in the next century.
What I’m looking for is the next thing, the thing that young people are going to say I have to get on board with, but which I as a soon-to-be elderly person simply can’t countenance. I’ve done all that stuff above – you can’t expect THIS from me, come on.
And I now think it might be this: do AIs have rights? Are they people too?
Because if you’re telling me that women, and black people, and gay people, and disabled people, and trans people are all PEOPLE just like me… what’s your basis for including them all, but EXcluding entities I can have such useful and fulfilling conversations with? As per mjr’s comments above: for a lot of people, AIs can seem a lot MORE deserving of consideration than the majority of natural-born humans. How much longer can we deny them rights?
Bear in mind, before you scoff, that the “but that’s absurd” argument seemed perfectly reasonable when applied to all those other groups, until it didn’t.
Are AI rights the next progressive cause? Would you bet heavily against it?
Well, it would certainly help the big social media companies with their problem of how they continue to show user growth when they’ve run out of new “real” people and alienated a lot of the old ones…
Next interesting question: supposing we agree that AIs can be people, how many people are they? Is ChatGPT a single entity, or is every ChatGPT context a different entity? And if it’s the latter, can they vote?