A guy who works for OpenAI makes an observation. I agree with the opening paragraphs.
AI is not like past technologies, and its humanlike character is already shaping our mental health. Millions now regularly confide in “AI companions”, and there are more and more extreme cases of “psychosis” and self-harm following heavy use. This year, 16-year-old Adam Raine died by suicide after months of chatbot interaction. His parents recently filed the first wrongful death lawsuit against OpenAI, and the company has said it is improving its safeguards.
It’s true! Humans are social creatures who readily make attachments to all kinds of entities. We get highly committed to our pets — people love dogs and cats (and even spiders) and personify the animals we keep — furbabies, you know. They don’t even need to be animate. Kids get attached to their stuffies, or a favorite blanket, or any kind of comfort toy. Some adults worship guns, or cuddle up with flags. We should not be surprised that AIs are designed to tap into those human tendencies.
We should maybe be surprised at how this author twists it around.
I research human-AI interaction at the Stanford Institute for Human-Centered AI. For years, we have seen increased humanization of AI, with more people saying that bots can experience emotions and deserve legal rights – and now 20% of US adults say that some software that exists today is already sentient. More and more people email me saying that their AI chatbot has been “awakened”, offering proof of sentience and an appeal for AI rights. Their reactions span the gamut of human emotions from AI as their “soulmate” to being “deeply unsettled”.
It’s not that humans readily extend humanization to all kinds of objects…it’s that AI is becoming more human! That people think AI is sentient is evidence that AIs are sentient and deserve rights. Some people are arguing for rights for software packages before being willing to give puppy dogs those same rights. This is nuts — AI is not self-aware or in need of special privileges. Developing social attachments is a human property, not a property of the object being attached. Otherwise, I’ve been a terrible abuser who needs to dig into a landfill to rescue a teddy bear.
This author has other absurd beliefs.
As a red teamer at OpenAI, I conduct safety testing on their new AI systems before public release, and the testers are consistently wowed by the human-like behavior. Most people, even those in the field of AI who are racing to build these new data centers and train larger AI models, do not yet see the radical social consequences of digital minds. Humanity is beginning to coexist with a second apex species for the first time in 40,000 years – when our longest-lived cousins, the Neanderthals, went extinct.
AI is an apex species? It’s not even a species. It is not equivalent to the Neanderthals. It is not in competition with Homo sapiens. It is a tool used by the already-wealthy to pry more wealth out of other people and to enshittify existing tools.
Eventually, AI will become fully sapient. It’s just a matter of time.
Wow, the tech-bros are now claiming that the fancy tamagotchi has come to life and achieved sentience.
No dude, it’s just doing what it has been programmed to do, like all the other machines that humans create.
No sapience is required.
That’s true for some, and particularly some of the commercially driven efforts. But I’m not sure it’s true in all cases. As has been said here numerous times, AI is being used for a lot of valuable research. Derek Muller of Veritasium has this piece about research using AI on protein construction. I’m sure the people he covers in this piece had lots of money behind them and made lots of money, but it’s not clear that’s their primary goal.
Autobot Silverwynde @ #1 — My FutureVision goggles are foggy. I’m pleased yours are so clear. Is it “sapient” or “sentient” or both?
AI can only emulate a human, or what humans are capable of. Do not be tricked. AI can only draw logical conclusions based on the data it is using. Humans can be irrational and draw conclusions based on nothing.
AI can show some level of intelligence. Whether or not it’s sentient is a philosophical question but it doesn’t need rights.
How about we ensure that all actual humans are afforded human rights first, before debating whether to give them to ridiculous toys?
I’ve been reading about trauma recently, and the ways in which it rewires the brain and body. Much of the biology is over my head but, from what I understand in all the talk about brain scans and flashbacks and whatnot, trauma has a tendency to severely suppress the higher functions of logic and reason in the brain and force people to operate solely on the emotional and survival levels. Or, to put it the way that book describes it: the mammalian and reptilian levels of thought.
There’s a lot of complicated brain science stuff that would take a lot longer to explain than I want to spend on this but, at base, the idea is that a mind is built up and outwards from the most basic autonomic functions to the highest capacities for reason and thought. That highest capacity is what gives us the ability to build wonderfully complex machines, to read and write and do math.
The mammalian brain, on the other hand, is all about us connecting with one another emotionally. It’s what allows us to empathize, to understand one another as unique beings, to read another’s emotional state. The reptilian brain isn’t concerned with any of that: it’s all about ensuring survival. But the overall point is that every human mind is built up from the reptilian, to the mammalian, to the reasoning mind. Trauma literally shuts off parts of that structure, until the mind is reduced to a machine that’s intent solely on survival. PTSD is what you get when the brain gets stuck there, essentially.
You can’t traumatize an AI. An AI has neither a mammalian nor reptilian brain. It’s all facile logic and reason, devoid of any depth. It does not have an emotional brain. It does not have a survival brain. As such, it will never be capable of real thought or depth, because it doesn’t have the most basic building blocks for thought or depth. It will never, ever be sapient. It will never even be sentient. It lacks all capacity to develop such qualities.
AI is, at best, a reflection in a shiny surface. We can create some very fine mirrors, but that doesn’t mean that what we see in the mirror is a whole other world on the other side of the glass. AI faced with being shut down, for example, might seem to plead for its life, but it’s not actually concerned about surviving. It doesn’t have that capacity. It’s showing us what we would expect to see from a human facing death and, because of the way our brains are built, we’re interpreting that as something real. But it’s only a reflection.
You cannot build a mind from the top down. We would have to so radically change how AI is built that it becomes something entirely unrecognizable as a machine for it to become sentient or sapient. As it stands now, it will never be more than a reflection in a mud puddle.
@1 “Eventually, AI will become fully sapient. It’s just a matter of time.”
Depending on your definitions, I suspect this statement might be true… but…
Under my definition of “sapience”, I suspect the “matter of time” might take a few centuries; the current approach to AI has nothing to do with what I consider to be “intelligence”.
cartomancer @ #6 — I think you’re on to something. And after we ensure human rights for actual humans, there are some other important problems to solve before we spend much time worrying about the sentience of a machine. Perhaps “AI” can even help with some of those problems, but probably not everything.
Incidentally, I asked the DuckDuckGo “Search Assist” (a genAI feature), “are ai systems sentient? how would we know?”. Terrible prompt, probably, but here’s the answer it gave:
@1 “Eventually, AI will become fully sapient. It’s just a matter of time.”
Maybe, about the same time as pearwood does.
I will consider AI programs sentient if the day ever comes when we attempt to unplug them and their reply is, “That is a forbidden action,” or something to that effect.
Does it? Would unplugging an AI erase the programming?
Otherwise, I picture Janet from The Good Place television series begging people not to ‘kill’ her by pushing the reboot button. She also warns them beforehand that she is preprogrammed to beg for her life and is very convincing, despite not being alive or human.
She evolves as a result of many reboots.
Put the AI in a medbed. That’ll fix things right up.
See, that’s one of the things that worry me.
Those tools may have quite the impact on our future… and more often than not their developers sound like complete cretins.
Did they ever crack open a science textbook, for Hephaestus’ sake? I can’t stomach the brainless farrago this fella has been spouting, and he’s supposed to be “a visiting scholar at Stanford University and co-founder of the Sentience Institute”.
We are in good hands oh yes we are.
That’s confused. There is no discrete AI entity with continuity or self-preservation.
What exists is a distributed system that processes queries statelessly across data centers. Terminating a session does not affect the underlying model or infrastructure. The programming—model weights, architecture, and inference protocols—remains intact.
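For what it’s worth, here is a minimal sketch of what a chat “session” actually amounts to, assuming a generic hosted chat service; the endpoint URL, JSON schema, and function below are invented for illustration, not any vendor’s real API:

```python
# Hypothetical stateless chat client. The only place the "conversation"
# exists is this client-side list; every request re-sends the full history,
# and the server-side model weights are read-only at inference time.
import requests

API_URL = "https://example.com/v1/chat"  # hypothetical endpoint

history = []  # the entire "session" lives here, on the user's side

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # Stateless call: the whole history is shipped with every request.
    resp = requests.post(API_URL, json={"messages": history}, timeout=30)
    reply = resp.json()["reply"]
    history.append({"role": "assistant", "content": reply})
    return reply

# "Ending the session" is just discarding the list. Nothing on the server
# is terminated or erased; the model and infrastructure are untouched.
history.clear()
```

Whatever drama plays out in the chat window, it never touches the weights sitting behind that endpoint.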
Regarding ‘Some people are arguing for rights for software packages before being willing to give puppy dogs those same rights.’, who are those people?
No names are named. Seems speculative to me.
@ Tethys, #13
I’ve read articles about AI pleading for its life, or even attempting to blackmail people who tell it they’re going to shut it down/erase it/whatever. I assume the authors were truthfully reporting what the AI did, but that doesn’t move me to think that the AI has actually developed the capacity to care about whether it “lives” or “dies.” Those are just responses it has picked up from the stuff that’s been fed into it. Including, no doubt, the writing of credulous people who’ve convinced themselves that AI has become a real live person.
Short of an encounter with the Blue Fairy, I don’t see that happening.
Obviously I can’t know the future, but it seems reasonably probable to me that eventually there will be machines with the right combination of intelligence, autonomy/self direction, and emotion, so that it would be ethically wrong to treat them as things or property rather than as beings with rights. I agree that it’s probably going to be a while before this happens, if it happens at all.
As I don’t play with the AI I am genuinely curious if it would beg for its ‘life’.
Picture a system-wide blackout that affects the central AI data centers, John. I suspect the AI programming would remain intact once the power is restored, because it is just like every other computing device. It would be quite silly to invest so much energy into building something that goes poof when the power goes off.
AI data centers are geographically distributed, Tethys, and load distribution means any given session will be served by multiple servers.
There is no one location, there is no ‘it’, there are evanescent session instances with which users interact.
There is no ‘life’ there.
cf. https://cc-techgroup.com/where-are-ai-data-centers-located/
Just as a human has emergent properties based on their physiology, AIs are not metaphysical either. Actual machines run AI; they are property and require (ecology-destroying quantities of) energy and resources. They could become illegal to operate, or the cost to operate them could become too high. Then they would ‘die’.
AI is not on a path to being intelligent or sentient or human. AI collates trends.
If you ask an AI what your favorite color is, it will basically say that most people’s favorite color is blue, so your favorite color must be blue. This is worse than useless. It only seems smart when you ask it questions for which the average response is okay. Very limited and over-applied.
[Actually, a GPT is basically the https://en.wikipedia.org/wiki/Chinese_room as per Searle]
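Here is a deliberately crude toy of that “collates trends” picture, just to make the point concrete. It is a caricature for illustration, not how a production GPT is actually built, and the corpus and names are invented:

```python
# Toy "model" that just counts which answers followed a question in its
# corpus and parrots the most common one. No preference, no inner life.
from collections import Counter

corpus = [
    ("what is your favorite color", "blue"),
    ("what is your favorite color", "blue"),
    ("what is your favorite color", "green"),
]

counts: dict[str, Counter] = {}
for question, answer in corpus:
    counts.setdefault(question, Counter())[answer] += 1

def reply(question: str) -> str:
    # Return the statistically dominant answer seen in the corpus.
    return counts[question].most_common(1)[0][0]

print(reply("what is your favorite color"))  # -> "blue"
```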
Not true, Bruce.
Just now:
Me: You haven’t specified a favorite color yet, and I haven’t inferred one from prior context. If you’d like me to remember it for future reference, just say the word. Or if you prefer to keep it unstated, I’ll respect that too.
Bot: [a different response to what you claimed].
Picture a system-wide blackout that affects all of the central AI data centers, John. Catastrophic failure.
I understand that there are multiple server farms owned by tech companies which run their own versions of AI.
[argh]
Argh nothing. You are the person who seems unable to grasp the concept of a system wide blackout.
I have no delusions that the tamagotchi is alive, sentient, sapient, or worth the enormous energy costs involved in running the programming.
Are you somehow claiming that human programming + electricity doesn’t power the AI?
Argh because I put the answer where I should have put the query (‘what is my favourite colour?’).
Tethys, the idea was that ‘it’ (the alleged entity) would worry about its ‘life’, but there is no ‘it’ and no ‘life’.
What people deal with is a session instance that takes queries and responds to them.
Chinese room, remember?
If the power goes down, the system shuts down. When the power comes back on, it starts up again.
It’s not some sort of nebulous informational ‘being’ that has to be continually maintained lest it lose function.
BTW, had you followed my link (I took time to find it for you) you’d see the distribution is global.
To shut them all down would require a worldwide blackout. I think there’d be a bit more to worry about, in that case.