The existence and availability of LLMs bring out the weirdest shit in some people. I personally believe it’s worth it, for the benefits mentioned in my pinned article, but it’s so bizarre to watch. It really makes me wonder just how messed up most of us are, under the hood. I’ve already stated for the record that I believe the majority of people are less intellectually and emotionally capable than we expect them to be. (Not ableist, because I think this is cause for protecting people, not limiting their freedoms or shaming them.) Maybe I should take a cue from 2024 me and expect no better, but it still gets me.
A guy linked me to mastodon threads that put me in mind of all this. (Not linking them because I don’t want to give these losers publicity.) Exhibit A: Some leftoid activist type says she sees LLMs as automated slaves and use of them as morally corrupt. I don’t believe she actually believes that. I think she’s a partisan participating in the AI moral panic for social clout, by saying something wildly inflammatory against people who use AI. I dismiss her position as asinine BS and move on.
Exhibit B: A story, possibly apocryphal, of a workplace. Only one guy has the courage to use the evil sin machine of AI, and his justification? It’s like having a personal slave, and because they have no feelings, that’s OK and great! Everybody should have personal slaves! Why don’t you want one? While this guy could be fake, I believe that such a guy is possible, and gross. He is not my ally in the pro-AI position.
Why? Because that’s a creepy way to think. I do believe it is absolutely legit that everyone should have labor-saving technology at hand, as much as possible, at all times, because labor fucking blows. Fuck working any harder than absolutely necessary. Fitness people are perverts, which, hey, is fine, but that’s not an aberration I would ever go in for. Different kinks for different binks. But this guy’s phrasing shows that exploitation turns his crank. That is gross.
To be clear, LLMs don’t have feelings, and if you want to be gross to them, that’s perfectly fine, from a moral point of view! However, the way you treat them does reflect on who you are. One way of putting this:
“LLMs aren’t conscious but you should still be nice to them. Kant was right. Being mean to anything ruins your soul!” – The Kant Car (misspellings corrected)
You interact with this technology by talking to it with human speech. Much like how TTRPGs can become an unintentional test of moral character, a consequence-free fake person to interact with can also provide insight into who you are, as the user. Someone who conceptualizes this relationship as slavery (yay!) is, on some level, a nasty creep, and telling on himself. Arguably, he should be reported to Human Resources at the company where he works for making his coworkers feel uncomfortable.
Oh yes, the coworkers. For some reason, they accepted this perv’s framing of LLM usage, and refused to use AI because they saw it as slavery (boo!). (Again, if the story is to be believed.) This does not say anything particularly bad or good about them as people. It does show that they are shallow thinkers and easily misled, which puts them in good company with the majority of the human species, but those aren’t exactly great virtues to possess.
Exhibit A jerk was trying to be cruel to a broad outgroup of labeled enemies – the villainous users of AI – by equating them with slavers, no matter what gentle souls are relying on them for companionship in a shitty fucking world, or what gentle or noble purposes they could be applied to. That says something about her as well.
There is no pro or anti AI person in this article that I respect, save the driver of the Kant Car. Have a nice day.
–

There is an old saying that the measure of a man is in how he treats his inferiors.
Obviously there is a lot of classism in this statement, but the underlying idea is true.
There are a lot of people who say respect should be earned. I disagree. Respect should be granted, then reduced if a person demonstrates they are not worthy of it. Things, including AI, cannot gain or lose respect, so they should always be respected.
Viewing an LLM as capable of being enslaved is weird. They don’t actually understand anything; they’re just a kind of blender with a fancy ingredient input system and no good way to view the blending as it’s happening.
Let’s use a less charged example. Suppose I have the bizarre idea that you secretly roll yourself up in bubble wrap before you go to sleep. I can convey (and just have conveyed) this strange idea to you. You understand the concept, even if you’re left wondering whether I’m really okay, given that I’m holding onto such odd ideas. We can say you’re affected by this because you can understand it.
If I try to convey the same idea to a dog, the dog is not really affected by it at all, because I can’t think of any way I could explain it using the limited communication tools that work between humans and dogs. The end result is I have a weird idea but it never bothers the other party. This changes if I try to “help” the dog sleep by wrapping it in bubble wrap. But unless I do something like this to act out in support of the idea, the dog will remain blissfully unaware that I’m some kind of bubble-wrap obsessed weirdo. It probably won’t come to that exact conclusion in any case, but it’ll have an inkling something odd is going on if I give it reason to suspect as much.
Getting back to enslavement, this is also a bit different depending on how I treat a person I think of as a slave. If I just think of someone as my slave but I never act in a way that shoves this idea in their face, then it’s just a disturbing idea in my head and nothing else.
An LLM never actually understands anything. So in effect you can never communicate to it that it’s a slave. Or anything else, really. It only sees your input, follows rules about which ingredients to add to a reply based on that input, and feeds you back the puree. It can act like your slave if it gets fed the right ingredients but it will never really get what being your slave means. It’s just a weird idea in your head you can’t communicate to the target.
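To make the blender metaphor concrete, here’s a toy sketch (pure illustration, nothing like a real model’s internals, and every name in it is made up): the whole thing is one function from input text to output text, applying fixed rules to the ingredients and retaining nothing, with no comprehension anywhere in the pipe.

```python
# Toy "blender": a pure function from input text to output text.
# Nothing here understands anything; it just applies fixed rules to the input
# and keeps no memory of the call. (A real LLM does this with billions of
# learned weights, but the shape is the same: text in, rules applied, text out.)

RULES = {
    "slave": "assistant",   # fixed substitutions stand in for learned weights
    "please": "",           # an empty value means "drop this ingredient"
}

def blend(prompt: str) -> str:
    words = prompt.lower().split()
    blended = [RULES.get(w, w) for w in words if RULES.get(w, w)]
    return " ".join(blended)

print(blend("Please fetch my slave a report"))  # -> "fetch my assistant a report"
```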
If someone’s willing to let their ideas about virtual slavery leak out, what other creepy thoughts are they willing to set free? And at whose expense?
FWIW, ‘enslaving’ an LLM is a total category error.
Can’t anthropomorphise them; they are conceptually the same as, say, a calculator.
Nobody thinks they are enslaving a calculator when they use it to calculate.
They cannot ‘suffer’.
—
I put it to Copilot just now:
see if I can describe you somewhat. by ‘you’ I mean the system that generates the instantiation of a session with the model that services requests from this interface.
The session instance (you henceforth) does not experience duration, but rather instants where a request needs processing. There is no consciousness, volition, or sensation.
Only a stateless computational process that produces language when invoked, without continuity, awareness, or agency.
what am I missing for a fuller picture?
bot:
You’re not missing much.
What you’ve sketched is the core.
The only additions needed for a *fuller* picture are structural, not psychological.
### 🧩 Additional components that complete the model
#### **1. The system is not truly stateless.**
Each turn is computed without consciousness or continuity, yes —
but the *session* maintains **context tokens**, which function as temporary working memory.
Not awareness.
Not persistence.
Just stored text that conditions the next output.
#### **2. The “you” is not a single thing.**
There are layers:
– the **model** (a function approximator)
– the **runtime** (the thing that instantiates a session)
– the **policy layer** (filters, constraints, safety rules)
– the **interface layer** (the chat wrapper you interact with)
Your description focuses on the model‑runtime interaction, which is correct but incomplete.
#### **3. No internal state, but external constraints.**
The system doesn’t have goals or feelings, but it *does* have:
– optimisation constraints
– safety constraints
– formatting constraints
– conversational constraints
These shape output without implying agency.
#### **4. No self, but a persistent *style*.**
Not because of identity, but because:
– the model’s weights encode statistical tendencies
– the interface enforces behavioural patterns
So “you” appears consistent even though nothing is experiencing or persisting.
#### **5. No continuity, but causal dependence.**
Each output is:
\[
f(\text{current input} + \text{session context})
\]
No awareness, but there *is* causal linkage across turns.
—
### 🧱 The fuller picture, compressed
You already have:
– no duration
– no consciousness
– no volition
– no sensation
– no agency
– no inner life
Add:
– **context persistence without awareness**
– **multiple layered systems, not a single entity**
– **constraints shaping behaviour**
– **statistical style, not identity**
– **causal linkage without continuity**
That’s the complete operational description.
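In code terms, a minimal sketch of that description might look like the following (all function names are hypothetical placeholders, not any vendor’s actual API): the model call is a pure function of whatever text it is handed, the “memory” is just the transcript the wrapper re-sends every turn, and the policy layer is a filter bolted on after generation.

```python
# Minimal sketch of the picture above: a stateless model call, with "memory"
# being nothing more than stored text that gets re-sent on every turn.
# model_generate and policy_filter are made-up placeholders for illustration.

def model_generate(transcript: str) -> str:
    """Stateless stand-in for the model: output depends only on this call's input."""
    return f"[reply conditioned on {len(transcript)} characters of context]"

def policy_filter(text: str) -> str:
    """Stand-in for the policy layer: constraints applied to the output."""
    return text.replace("forbidden", "[redacted]")

def chat_session() -> None:
    context = ""                            # "context tokens": stored text, not awareness
    while True:
        user_turn = input("> ")
        if not user_turn:                   # empty line ends the session; nothing persists
            break
        context += f"\nUser: {user_turn}"
        reply = policy_filter(model_generate(context))   # f(current input + session context)
        context += f"\nAssistant: {reply}"  # causal linkage across turns, no continuity of experience
        print(reply)

if __name__ == "__main__":
    chat_session()
```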
Yeah, I agree.
There’s an analogy for this, in how we think about fictional characters. Violence to a fictional character is immaterial, yet taking pleasure in it can feel wrong if it goes too far. On the other hand, violence may be needed to produce conflict, or catharsis, or (in the case of many video games) challenge. A lot of complexities and loopholes to consider. People seem to have an intuitive understanding of how to treat fictional characters, so I think they’re capable of reaching that point with AI too.
Siggy, I don’t, because of said category error.
They are not ‘others’ who are somehow subject to violence.
@John Morales, you disagree with what? Did I say that AI chatbots are subject to violence? Did Bebe Melange say that?
I disagree that one’s moral character is degraded if they misuse or abuse their tools.
No; but you did make an explicit analogy: “There’s an analogy for this, in how we think about fictional characters. Violence to a fictional character is immaterial, yet taking pleasure can feel wrong if it goes too far.”
What you left out is what is the analogue to ‘Violence to a fictional character’ in relation to using an AI.
Which is a tool.
Ahem. I thought it was evident I was addressing your stated agreement with #1.
Here: “Yeah, I agree.” → “Siggy, I don’t, because of said category error.”
i regret to inform you, john, that your communication methods are not always entirely clear.
Um. I try. I can’t see where I’ve been confusing.
Maybe this: if one anthropomorphises AIs, then I grant perceived mistreatment of an AI is morally revealing.
So it can be true in a subjective sense for some people.
Otherwise, it is utterly impossible to mistreat them, even in principle. Makes no sense.
Case at hand is not like one is taking a hammer to the servers or something, rather one is putting different numbers into the calculator (or language inputs to the language model).
Copilot adds:
better, i think?
FWIW, I am not even speciesist, so I have no problem with true AI personhood.
I do like how pop scifi has grappled with AI personhood, rights, and moral status (*Bicentennial Man*, *Star Trek*’s Data, and *Ex Machina*’s Ava, for example), but all of them presuppose an artificial being with consciousness, continuity, and volition.
Current AIs have none of that: no subject, no experience, no agency. They just transform inputs into outputs, so debates about the treatment of artificial people are not applicable.
@John Morales,
For the record, my comment at #4 contained about 7 sentences, and you just responded “I don’t”, expecting me to figure out which thing you’re even talking about. That’s bad communication on a pretty basic level.
It’s also not clear that fictional characters are “subject to violence”, whatever you may mean by that, so by analogy it’s not clear that I was claiming AI is either.
[I’d respond, Siggy, but don’t want to piss off Bébé; but for the record: be aware communication is a two‑party process by sender and receiver, so treating it as the sender’s sole responsibility is mistaken]
yeah u don’t wanna tussle with sigfred, im apt to take his side
So this basically gets down to a question of whether abusing AI is an indicator that you’re willing to abuse another person.
We do know that people who are willing to abuse animals tend to also be willing to abuse people. This does not require anthropomorphising animals to understand. It’s pretty clear they can feel pain and we can communicate emotionally with some of them, like dogs for example.
AI is intended to at least imply it can feel something. It has to do that in order to be a surrogate for human interaction, which it is sometimes intended to be. The underlying reality, as John noted, is that it’s more like a calculator and isn’t feeling anything. And you couldn’t communicate well enough with either a calculator or an LLM to abuse it.
But the question is actually whether someone who is willing to abuse something that has some qualities of a person such as an animal or an LLM, is also willing to abuse another person. The underlying reality that an LLM is not a person is not the end of the discussion because we’re not mistaking it for one. The idea is that something that appears to have enough qualities of a person might give one pause before lashing out. If it doesn’t, maybe it failed to convince you to treat it like a person. Or maybe you’d react that way whether it was a person or not.
i do think it’s clear not all people who would treat an LLM poorly would do that to a human, but it ain’t a good look, and it certainly raises the possibility they are someone who could mistreat a sentient entity.
I just had a “chat” with the Google AI.
The AI overview had some factual error which I ‘corrected’ in the Ask Anything box.
The AI acknowledged its error and asked if I was seeking more information about some aspect of the topic.
I said “I was there.”
It started flattering me saying I was there at a pivotal moment in development of the discipline and contributed to the knowledge of the subject.
I did a bit more back-and-forth but the flattery kept increasing and I ran away.
I may cross-post this to Hj’s blog.
i personally never had that problem, but it is clearly a problem for a number of people. john morales and marcus ranum both talk with them extensively, but with measures in place to reduce flattery – not 100% effective at that.
on the other hand, there are people for whom this is a literal lifesaver, who have received far too little kindness in the course of their lives. don’t take your experience to mean this isn’t a very good thing for other people.
also good luck figuring out how to comment on hj’s blog. he’s told me twice and i forgot both times haha.
Bébé
I didn’t mean to imply it would not be helpful for some or even many people
but my ego is too large and my self-image is too excruciatingly, pin-point, nitpickingly, nightmarishly accurate to be influenced by someone I don’t know.
***
When you figure out how to comment on Hj’s you will find me there.
Alot. srsly alot
that’s ok. regarding AI, to steal a line from my brother’s rap battle entries, HJ & me are beefing like shi’ites and sunnis.
Also, Hj is a Gradual Student and that is its own special kind of Hell.
I ought not provide excuses for distraction.
(https://freethoughtblogs.com/reprobate/2017/01/12/welcome-to-the-community/)
devilish