No, that broken robot does not need human rights


A guy who works for OpenAI makes an observation. I agree with the opening paragraphs.

AI is not like past technologies, and its humanlike character is already shaping our mental health. Millions now regularly confide in “AI companions”, and there are more and more extreme cases of “psychosis” and self-harm following heavy use. This year, 16-year-old Adam Raine died by suicide after months of chatbot interaction. His parents recently filed the first wrongful death lawsuit against OpenAI, and the company has said it is improving its safeguards.

It’s true! Humans are social creatures who readily form attachments to all kinds of entities. We get highly committed to our pets — people love dogs and cats (and even spiders) and personify the animals we keep — furbabies, you know. They don’t even need to be animate. Kids get attached to their stuffies, or a favorite blanket, or any kind of comfort toy. Some adults worship guns, or cuddle up with flags. We should not be surprised that AIs are designed to tap into those human tendencies.

We should maybe be surprised at how this author twists it around.

I research human-AI interaction at the Stanford Institute for Human-Centered AI. For years, we have seen increased humanization of AI, with more people saying that bots can experience emotions and deserve legal rights – and now 20% of US adults say that some software that exists today is already sentient. More and more people email me saying that their AI chatbot has been “awakened”, offering proof of sentience and an appeal for AI rights. Their reactions span the gamut of human emotions from AI as their “soulmate” to being “deeply unsettled”.

It’s not that humans readily extend humanization to all kinds of objects…it’s that AI is becoming more human! That people think AI is sentient is taken as evidence that AIs are sentient and deserve rights. Some people are arguing for rights for software packages before they’re willing to give puppy dogs those same rights. This is nuts: AI is not self-aware or in need of special privileges. Developing social attachments is a property of the human doing the attaching, not of the object being attached to. Otherwise, I’ve been a terrible abuser who needs to dig through a landfill to rescue a teddy bear.

This author has other absurd beliefs.

As a red teamer at OpenAI, I conduct safety testing on their new AI systems before public release, and the testers are consistently wowed by the human-like behavior. Most people, even those in the field of AI who are racing to build these new data centers and train larger AI models, do not yet see the radical social consequences of digital minds. Humanity is beginning to coexist with a second apex species for the first time in 40,000 years – when our longest-lived cousins, the Neanderthals, went extinct.

AI is an apex species? It’s not even a species. It is not equivalent to the Neanderthals. It is not in competition with Homo sapiens. It is a tool used by the already-wealthy to pry more wealth out of other people and to enshittify existing tools.

Comments

  1. Tethys says

    Wow, the tech-bros are now claiming that the fancy tamagotchi has come to life and achieved sentience.

    No dude, it’s just doing what it has been programmed to do, like all the other machines that humans create.
    No sapience is required.

  2. robro says

    It is a tool used by the already-wealthy to pry more wealth out of other people and to enshittify existing tools.

    That’s true for some, and particularly some of the commercially driven efforts. But I’m not sure it’s true in all cases. As has been said here numerous times, AI is being used for a lot of valuable research. Derek Muller of Veritasium has a piece about research using AI on protein construction. I’m sure the people he covers in this piece had lots of money behind them and made lots of money, but it’s not clear that’s their primary goal.

    Autobot Silverwynde @ #1 — My FutureVision goggles are foggy. I’m pleased yours are so clear. Is it “sapient” or “sentient” or both?

  3. stuffin says

    AI can only emulate a human, or what humans are capable of. Do not be tricked. AI can only draw logical conclusions from the data it is using. Humans can be irrational and draw conclusions based on nothing.

  4. lotharloo says

    AI can show some level of intelligence. Whether or not it’s sentient is a philosophical question, but it doesn’t need rights.

  5. cartomancer says

    How about we ensure that all actual humans are afforded human rights first, before debating whether to give them to ridiculous toys?

Leave a Reply