No, that broken robot does not need human rights


A guy who works for OpenAI makes an observation. I agree with the opening paragraphs.

AI is not like past technologies, and its humanlike character is already shaping our mental health. Millions now regularly confide in “AI companions”, and there are more and more extreme cases of “psychosis” and self-harm following heavy use. This year, 16-year-old Adam Raine died by suicide after months of chatbot interaction. His parents recently filed the first wrongful death lawsuit against OpenAI, and the company has said it is improving its safeguards.

It’s true! Humans are social creatures who readily make attachments to all kinds of entities. We get highly committed to our pets — people love dogs and cats (and even spiders) and personify the animals we keep — furbabies, you know. They don’t even need to be animate. Kids get attached to their stuffies, or a favorite blanket, or any kind of comfort toy. Some adults worship guns, or cuddle up with flags. We should not be surprised that AIs are designed to tap into those human tendencies.

We should maybe be surprised at how this author twists it around.

I research human-AI interaction at the Stanford Institute for Human-Centered AI. For years, we have seen increased humanization of AI, with more people saying that bots can experience emotions and deserve legal rights – and now 20% of US adults say that some software that exists today is already sentient. More and more people email me saying that their AI chatbot has been “awakened”, offering proof of sentience and an appeal for AI rights. Their reactions span the gamut of human emotions from AI as their “soulmate” to being “deeply unsettled”.

It’s not that humans readily extend humanization to all kinds of objects…it’s that AI is becoming more human! That people think AI is sentient is evidence that AIs are sentient and deserve rights. Some people are arguing for rights for software packages before being willing to give puppy dogs those same rights. This is nuts — AI is not self-aware or in need of special privileges. Developing social attachments is a human property, not a property of the object of attachment. Otherwise, I’ve been a terrible abuser who needs to dig into a landfill to rescue a teddy bear.

This author has other absurd beliefs.

As a red teamer at OpenAI, I conduct safety testing on their new AI systems before public release, and the testers are consistently wowed by the human-like behavior. Most people, even those in the field of AI who are racing to build these new data centers and train larger AI models, do not yet see the radical social consequences of digital minds. Humanity is beginning to coexist with a second apex species for the first time in 40,000 years – when our longest-lived cousins, the Neanderthals, went extinct.

AI is an apex species? It’s not even a species. It is not equivalent to the Neanderthals. It is not in competition with Homo sapiens. It is a tool used by the already-wealthy to pry more wealth out of other people and to enshittify existing tools.

Comments

  1. Tethys says

    Wow, the tech-bros are now claiming that the fancy tamagotchi has come to life and achieved sentience.

    No dude, it’s just doing what it has been programmed to do, like all the other machines that humans create.
    No sapience is required.

  2. robro says

    It is a tool used by the already-wealthy to pry more wealth out of other people and to enshittify existing tools.

    That’s true for some, and particularly some of the commercially driven efforts. But I’m not sure it’s true in all cases. As has been said here numerous times, AI is being used for a lot of valuable research. Derek Muller of Veritasium has this piece about research using AI on protein construction. I’m sure the people he covers in this piece had lots of money behind them and made lots of money, but it’s not clear that’s their primary goal.

    Autobot Silverwynde @ #1 — My FutureVision goggles are foggy. I’m pleased yours are so clear. Is it “sapient” or “sentient” or both?

  3. stuffin says

    AI can only emulate a human, or what humans are capable of. Do not be tricked. AI can only make logical conclusions based on the data they are using. Humans can be irrational and make conclusions based on nothing.

  4. lotharloo says

    AI can show some level of intelligence. Whether or not it’s sentient is a philosophical question but it doesn’t need rights.

  5. cartomancer says

    How about we ensure that all actual humans are afforded human rights first, before debating whether to give them to ridiculous toys?

  6. vucodlak says

    I’ve been reading about trauma recently, and the ways in which it rewires the brain and body. Much of the biology is over my head but, from what I understand in all the talk about brain scans and flashbacks and whatnot, trauma has a tendency to severely suppress the higher functions of logic and reason in the brain and force people to operate solely on the emotional and survival levels. Or, to put it the way that book describes it, the mammalian and reptilian levels of thought.

    There’s a lot of complicated brain science stuff that would take a lot longer to explain than I want to spend on this but, at base, the idea is that a mind is built up and outwards from the most basic autonomic functions to highest capacities for reason and thought. That highest capacity is what gives us the ability to build wonderfully complex machines, to read and write and do math.

    The mammalian brain, on the other hand, is all about us connecting with one another emotionally. It’s what allows us to empathize, to understand one another as unique beings, to read another’s emotional state. The reptilian brain isn’t concerned with any of that- it’s all about ensuring survival. But the overall point is that every human mind is built up from the reptilian, to the mammalian, to the reasoning mind. Trauma literally shuts off parts of that structure, until the mind is reduced to a machine that’s intent solely on survival. PTSD is what you get when the brain gets stuck there, essentially.

    You can’t traumatize an AI. An AI has neither a mammalian nor reptilian brain. It’s all facile logic and reason, devoid of any depth. It does not have an emotional brain. It does not have a survival brain. As such, it will never be capable of real thought or depth, because it doesn’t have the most basic building blocks for thought or depth. It will never, ever be sapient. It will never even be sentient. It lacks all capacity to develop such qualities.

    AI is, at best, a reflection in a shiny surface. We can create some very fine mirrors, but that doesn’t mean that what we see in the mirror is a whole other world on the other side of the glass. AI faced with being shut down, for example, might seem to plead for its life, but it’s not actually concerned about surviving. It doesn’t have that capacity. It’s showing us what we would expect to see from a human facing death and, because of the way our brains are built, we’re interpreting that as something real. But it’s only a reflection.

    You cannot build a mind from the top down. We would have to so radically change how AI is built that it becomes something entirely unrecognizable as a machine for it to become sentient or sapient. As it stands now, it will never be more than a reflection in a mudpuddle.

  7. robert79 says

    @1 “Eventually, AI will become fully sapient. It’s just a matter of time.”

    Depending on your definitions, I suspect this statement might be true… but…

    Under my definition of “sapience” I suspect the “matter of time” might take a few centuries, the current approach to AI has nothing to do with what I consider to be “intelligence”.

  8. robro says

    cartomancer @ #6 — I think you’re on to something. And after we ensure human rights for actual humans, there are some other important problems to solve before we spend much time worrying about the sentience of a machine. Perhaps “AI” can even help with some of those problems, but probably not everything.

  9. robro says

    Incidentally, I asked the DuckDuckGo “Search Assist” (a genAI feature), “are ai systems sentient? how would we know?”. Terrible prompt probably, but here’s the answer it gave:

    Current AI systems are not considered sentient, as they lack consciousness, self-awareness, and emotions. Researchers have proposed a checklist based on neuroscience theories to assess potential consciousness in AI, but no existing systems meet these criteria yet.

  10. jacksprocket says

    @1 “Eventually, AI will become fully sapient. It’s just a matter of time.”

    Maybe, about the same time as pearwood does.

  11. John Watts says

    I will consider AI programs sentient if the day ever comes when we attempt to unplug them and their reply is, “That is a forbidden action,” or something to that effect.

  12. Tethys says

    AI faced with being shut down, for example, might seem to plead for its life, but it’s not actually concerned about surviving.

    Does it? Would unplugging an AI erase the programming?

    Otherwise, I picture Janet from The Good Place television series begging people not to ‘kill’ her by pushing the reboot button. She also warns them beforehand that she is preprogrammed to beg for her life and is very convincing, despite not being alive or human.
    She evolves as a result of many reboots.

    “I’m Not a Girl”: Janet
    In The Good Place, Janet is a non-human, non-demon sentient information system and service provider for the afterlife, capable of accessing all knowledge, conjuring objects on command, and teleporting within her assigned neighborhood. Summoned by speaking her name, she serves as a walking, talking database, though she evolves throughout the series to develop a personality and a desire for personal growth beyond her initial programming.

  13. outis says

    See, that’s one of the things that worry me.
    Those tools may have quite the impact on our future… and more often than not their developers sound like complete cretins.
    Did they ever crack open a science textbook, for Ephestus’ sake? I can’t stomach the brainless farrago this fella has been spouting, and he’s supposed to be “a visiting scholar at Stanford University and co-founder of the Sentience Institute”.
    We are in good hands oh yes we are.

  14. John Morales says

    AI faced with being shut down, for example, might seem to plead for its life, but it’s not actually concerned about surviving.

    Does it? Would unplugging an AI erase the programming?

    That’s confused. There is no discrete AI entity with continuity or self-preservation.
    What exists is a distributed system that processes queries statelessly across data centers. Terminating a session does not affect the underlying model or infrastructure. The programming—model weights, architecture, and inference protocols—remains intact.
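    That statelessness can be sketched in a few lines of Python. This is a toy illustration, not any vendor’s real API: fake_model_reply is a made-up stand-in for inference which, like frozen model weights, is a pure function of the request it receives.

```python
# Toy sketch of a stateless chat service: the "session" is just a
# message list that the CLIENT keeps and resends with every request.
# Nothing server-side remembers the conversation.

def fake_model_reply(messages):
    """Pretend inference: depends only on the payload, keeps no state."""
    last = messages[-1]["content"]
    return f"You said: {last!r}"

def chat_turn(history, user_text):
    """One request: the entire conversation travels with every call."""
    request = history + [{"role": "user", "content": user_text}]
    reply = fake_model_reply(request)
    return request + [{"role": "assistant", "content": reply}]

history = []
history = chat_turn(history, "Hello")
history = chat_turn(history, "Are you alive?")

# "Unplugging the AI" here is just discarding the client-side list;
# fake_model_reply (the "weights") is completely unaffected by it.
history = []
```

    Deleting the history ends the “entity”; the model function is untouched and will answer the next request exactly as before.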

  15. John Morales says

    Regarding ‘Some people are arguing for rights for software packages before being willing to give puppy dogs those same rights.’, who are those people?

    No names are named. Seems speculative to me.

  16. vucodlak says

    @ Tethys, #13

    Does it? Would unplugging an AI erase the programming?

    I’ve read articles about AI pleading for its life, or even attempting to blackmail people who tell it they’re going to shut it down/erase it/whatever. I assume the authors were truthfully reporting what the AI did, but that doesn’t move me to think that the AI has actually developed the capacity to care about whether it “lives” or “dies.” Those are just responses it has picked up from the stuff that’s been fed into it. Including, no doubt, the writing of credulous people who’ve convinced themselves that AI has become a real live person.

    Short of an encounter with the Blue Fairy, I don’t see that happening.

  17. springa73 says

    Obviously I can’t know the future, but it seems reasonably probable to me that eventually there will be machines with the right combination of intelligence, autonomy/self direction, and emotion, so that it would be ethically wrong to treat them as things or property rather than as beings with rights. I agree that it’s probably going to be a while before this happens, if it happens at all.

  18. Tethys says

    As I don’t play with the AI I am genuinely curious if it would beg for its ‘life’.

    Picture a system wide blackout that affects the central AI data centers John. I suspect the AI programming would remain intact once the power is restored because it is just like every other computing device. It would be quite silly to invest so much energy into building something that went poof if the power goes off.

  19. seachange says

    Just like a human has emergent properties based on their physiology, AIs are not metaphysical. Actual machines run AI; they are property and require (ecology-destroying quantities of) energy and resources. They could become illegal to operate, or the cost to operate them could become too high. Then they would ‘die’.

  20. Bruce says

    AI is not on a path to being intelligent or sentient or human. AI collates trends.
    If you ask an AI what your favorite color is, it will basically say that most people’s favorite color is blue, so your favorite color must be blue. This is worse than useless. It only seems smart when you ask it questions where the average response is OK. Very limited and over-applied.

  21. John Morales says

    Not true, Bruce.

    Just now:
    Me: You haven’t specified a favorite color yet, and I haven’t inferred one from prior context. If you’d like me to remember it for future reference, just say the word. Or if you prefer to keep it unstated, I’ll respect that too.
    Bot: [a different response to what you claimed].

  22. Tethys says

    Picture a system wide blackout that affects all of the central AI data centers, John. Catastrophic failure.

    I understand that there are multiple server farms owned by tech companies which run their own versions of AI.

  23. Tethys says

    Argh nothing. You are the person who seems unable to grasp the concept of a system wide blackout.

    I have no delusions that the tamagotchi is alive, sentient, sapient, or worth the enormous energy costs involved in running the programming.

    Are you somehow claiming that human programming +electricity doesn’t power the AI?

  24. John Morales says

    Argh because I put the answer where I should have put the query (‘what is my favourite colour?’).

    Tethys, the idea was that ‘it’ (the alleged entity) would worry about its ‘life’, but there is no ‘it’ and no ‘life’.
    What people deal with is a session instance that takes queries and responds to them.

    Chinese room, remember?

    If the power goes down, the system shuts down. When the power comes back on, it starts up again.
    It’s not some sort of nebulous informational ‘being’ that has to be continually maintained lest it lose function.

    BTW, had you followed my link (I took time to find it for you) you’d see the distribution is global.
    To shut them all down would require a worldwide blackout. I think there’d be a bit more to worry about, in that case.

  25. says

    “It is a tool used by the already-wealthy to pry more wealth out of other people and to enshittify existing tools.”
    Exactly. The already wealthy are prepared to take away our basic human rights to protect their obscene levels of wealth, so no surprises that they would protect their sources of wealth by giving them rights. I mean just look at the idiot laws that already decree corporations are people.

  26. John Morales says

    “I mean just look at the idiot laws that already decree corporations are people.”

    :)

    cf. https://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html

    “History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries—is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.

    I’m talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?”

  27. John Harshman says

    The Turing test appears to be broken. Turns out it’s not a good way to distinguish humanlike AI from humanlike-simulating AI. It’s apparently easy enough to fake intelligence convincingly.

  28. John Morales says

    Nah, John Harshman. I can reliably make the bot either loop or plain break (“Sorry, I’m afraid I cannot talk about that”) rather easily.

    BTW, it was the imitation game originally — see here: https://plato.stanford.edu/entries/turing-test/#:~:text=controversy%20have%20been.-,3.1%20Interpreting%20the%20Imitation%20Game,and%20which%20is%20a%20woman.

    3.1 Interpreting the Imitation Game

    Turing (1950) introduces the imitation game by describing a game in which the participants are a man, a woman, and a human interrogator. The interrogator is in a room apart from the other two, and is set the task of determining which of the other two is a man and which is a woman. Both the man and the woman are set the task of trying to convince the interrogator that they are the woman. Turing recommends that the best strategy for the woman is to answer all questions truthfully; of course, the best strategy for the man will require some lying. The participants in this game also use teletypewriter to communicate with one another—to avoid clues that might be offered by tone of voice, etc. Turing then says: “We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?” (434).

  29. chrislawson says

    vucodlak–

    That approach to sapient intelligence is out of date, especially the references to “reptile brain” vs “mammalian brain” in humans.

    (1) Mammals are not descended from reptiles. The divergence is extremely ancient, around the Late Carboniferous ~300 Mya. This was before either mammals or reptiles existed as taxonomic entities.

    (2) The trope that the reptile brain is the brainstem and the cortex is mammalian (with various decisions about where to classify the midbrain) ignores the fact that reptiles do, in fact, have a cerebral cortex, also called the telencephalon. Even relatively simple-brained animals like crocodiles have a large portion of brain dedicated to the telencephalon.

    (3) Some reptiles are quite social, e.g. Galapagos iguanas, and even the most solitary reptile has to negotiate social interactions to mate (whiptail lizards excluded!).

    (4) Creatures known to have high-level animal intelligence include birds (descended from reptiles, divergence ~245 Mya) and octopuses (most definitely not reptile-descended, divergence ~700 Mya) and have a completely different central nervous architecture.

  30. chrislawson says

    John Harshman@32–

    The Turing Test has some philosophical problems, but in fairness the usual formulation is that a machine passes if it cannot be distinguished from a human, and not if there are people who cannot distinguish it. By that lower standard the old ELIZA program would pass, and besides, people have been attributing human-like thinking to inanimate processes since prehistory.

  31. vucodlak says

    @ chrislawson, #34

    Yeah, well, that’s what you get when you take your lessons from a book on trauma written for laypersons by a psychopharmacologist who mostly learned biology back in the 1960s (I think). Can’t say as I’ve ever had much interest in the topics of biology (or evolution), and my eyes tend to glaze over during all the talk of what and where the different lights on brainscans are anyway. Not why I’m reading the book.

    However, I maintain that if you’re incapable of being traumatized, you can’t be sentient or sapient. AI, as constructed now, will never be either.

  32. says

    IIRC that article didn’t mention the whole “MechaHitler” fiasco. Or widely-reported instances of some chatbot saying one thing, then being shut down for reprogramming by its owner because they were offended by what it had said. One week an AI is shut down for being too Nazi, next week another is shut down for not being Nazi enough. And then there’s the one that manipulated more than one poor sod into committing suicide. Whether or not they are helpful to humans, they are not “persons” and cannot be given legal rights. (And if one AI has more than one persona/avatar through which it communicates with humans, should any of those personae be considered “persons”?)

    (And who, exactly, is actually arguing for AI rights anyway? Who is leading or financing “AI rights” groups? Owners like Musk or Thiel who want an AI or three programmed to vote their way?)

  33. John Morales says

    “(And who, exactly, is actually arguing for AI rights anyway?…”

    That’s precisely what I wrote @17, Raging Bee.

    (A bit less conspiratorially and a bit more incredulous, but still)

  34. Silentbob says

    @ Morales

    No dude. Raging Bee was asking about the identities and motivations of “AI rights” advocates. Your comment was questioning the existence of advocates for the above, who do not also call for equal rights for puppies. Not remotely similar, let alone, “exactly what you were saying”.

  35. beholder says

    This thread serves to remind me that philosophers are terrible at recognizing which underserved groups are deserving of rights, or even the possibility that artificial intelligence could reach that state. Does it have to be humanlike to be intelligent? Does it have to have the capacity to suffer to afford basic protections? The tech involved will demand answers to those questions before most of you change your minds…

    @18 vucodlak

    I assume the authors were truthfully reporting what the AI did, but that doesn’t move me to think that the AI has actually developed the capacity to care about whether it “lives” or “dies.” Those are just responses it has picked up from the stuff that’s been fed into it.

    I can make the same claim about humans. They’re p-zombies, they don’t actually feel pain, they aren’t begging for their lives. It’s just an affectation they picked up from elsewhere.

    It sounds a bit chilling when I frame it that way, doesn’t it?

  36. lotharloo says

    @beholder:

    I can make the same claim about humans. They’re p-zombies, they don’t actually feel pain, they aren’t begging for their lives. It’s just an affectation they picked up from elsewhere.

    I can make the same claim about cartoon characters. They’re p-zombies, they don’t actually feel pain, they aren’t begging for their lives. It’s just an affectation they picked up from elsewhere.

  37. says

    I can make the same claim about humans…

    Sure you can, if you’re a complete fucking idiot and have allowed yourself to fall into the delusions of the techbro-SF-AI fantasy-world.

  38. unclefrogy says

    What exists is a distributed system that processes queries statelessly across data centers.

    If that is true, then it would not be an individual or separate entity like Commander Data, or Orac from Blake’s 7, or even Hal 9000, so how could it have any individual rights?
    If AI “becomes” alive then it sounds more likely to become something like Landru or Colossus. The whole question, as things are today, sounds like a way to make the creators and owners of this AI safe from any liability; in essence, to be outside the law, not unlike how a corporation is a thing and the people that comprise it are separate from liability. A gimmick, a “legal loophole” made up out of fiction.

  39. says

    Beware! Training a.i. on human writings, behavior and images will doom it to being as flawed and often murderous and stupid as many humans are. We are already seeing that in some of the insane, destructive things a.i. has turned out. And, sheople are taking it seriously, WTF.
    And that’s not even considering the MASSIVE horrible waste of electricity and water and money.

  40. says

    @1 Autobot Silverwynde wrote: Eventually, AI will become fully sapient.
    I reply: I don’t disagree with you. It might happen, eventually. But, we have always thoughtfully considered the two terms: sentient and sapient.
    We consider sentient as meaning: consciously ‘self-aware’
    We consider sapient as meaning: utilizing knowledge and intelligence to gain wisdom.
    As I posited, in @44 Training a.i. on human writings, behavior and images will doom it to being as flawed and often murderous and stupid as many humans are. We are already seeing that in some of insane, destructive things a.i. has turned out

  41. says

    Also, I’d feel a lot safer if those tinkering with the dangers of a.i. were compelled to apply ‘Asimov’s three laws of robotics’ to all a.i. endeavors.

  42. vucodlak says

    @ beholder, #40

    I can make the same claim about humans. They’re p-zombies, they don’t actually feel pain, they aren’t begging for their lives. It’s just an affectation they picked up from elsewhere.

    Yes, well, I can’t say as I’d be surprised to see you make that argument, however arguing that some human beings aren’t people is straight-up Nazi bullshit.

    AI is not only not people, it’s not even alive. It has no more life than a row of blocks in Tetris. Just because a bunch of techbros high on their own supply want everyone to believe they’re gods doesn’t mean I’ll worship them for writing a program that shallow idiots think is alive.

    But, let’s say for a moment that they’re actually capable of succeeding in their quest. They’re not, but what would that mean if they were? It would mean that they’ve created an intelligent slave race, one that people mostly use for shitty intellectual scut work. Oh, and sex stuff. Which, if AI actually were intelligent, would more properly be called rape.

    We also can’t forget that they’re creating it for the explicit purpose of replacing human workers which, in capitalism, is a death sentence for a lot of those workers. Those workers, by the way, are both people and alive.

    If I believed for a moment that techbros are 1/1,000,000th as clever and capable as they want everyone to believe they are, I’d be concerned. Fortunately, AI, as it exists now, is no more a living thing than is my microwave, and that’s not going to change without a massive shift in how AI is pursued.

    This thread serves to remind me that philosophers are terrible at recognizing which underserved groups are deserving of rights, or even the possibility that artificial intelligence could reach that state.

    Free your mind of deserving, and you will begin to be able to think. Also, I don’t deny that sentient and/or sapient AI could one day exist and, thus, could need rights and protections. I simply deny that any of the things we popularly call AI now could ever become a living thing.

    Does it have to be humanlike to be intelligent?

    There are trillions of intelligent, non-human lifeforms on this planet, so obviously the answer is no. It does, however, have to be intelligent to be intelligent. AI is, again, a reflection in a mudpuddle. A reflection is a property of light and how we perceive it. It is not intelligent.

    Does it have to have the capacity to suffer to afford basic protections?

    If it can’t suffer, then it ain’t alive. If I stand in front of a mirror and read Atlas Shrugged, my reflection will give the appearance of suffering, but that’s not the same thing as my reflection actually being capable of suffering.

  43. says

    As a follow-up to my @44 comment: A.I. data centers are a major disaster for this country, regardless of whether they, or their processing, are ‘decentralized’ or not.
      The elongated muskrat has dozens of ILLEGAL generators powering his data center and destroying the lives and environment of thousands that live near it.
    https://www.tomshardware.com/tech-industry/artificial-intelligence/elon-musks-xai-allegedly-powers-colossus-supercomputer-facility-using-illegal-generators
      Data centers blow-up people’s electricity bills:
    https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices/
    2025-09-30: “AI Data Centers Are Sending Power Bills Soaring. Wholesale electricity costs as much as 267% more than it did five years ago in areas near data centers. That’s being passed on to customers.”
      https://crooksandliars.com/2025/10/deep-breath-trump-bails-out-coal-industry
    Camden Weber, climate and energy policy specialist at the Center for Biological Diversity
    “The guy with a golden, life-size statue of himself holding a bitcoin outside the US Capitol is prioritizing data center profits over Americans’ access to clean air, water, and affordable energy? Shocker,” said Weber. “. . . The damage to our climate will be immense and unforgivable.”

  44. beholder says

    @48 vucodlak

    I can make the same claim about humans. They’re p-zombies, they don’t actually feel pain, they aren’t begging for their lives. It’s just an affectation they picked up from elsewhere.

    Yes, well, I can’t say as I’d be surprised to see you make that argument, however arguing that some human beings aren’t people is straight-up Nazi bullshit.

    Good, that was the point. I’m glad you caught on to the odious implications of such an argument. What I’m trying to get at is that your arguments follow a similar vein, with similarly inadequate checks on what is really going on in the minds of your experimental subjects. It all smacks of a desperate vitalism — that an intelligent being must be alive, it must be capable of being traumatized, it must have a similar function as the vertebrate brain, otherwise it is not intelligent.

    You cannot build a mind from the top down.

    Apparently we can, and we have.

    We would have to so radically change how AI is built

    Possibly, yes. It depends on what you consider to be a basic intelligent process.

    that it becomes something entirely unrecognizable as a machine for it to become sentient or sapient.

    There’s that vitalism again. It no longer seems far fetched for a machine to do things we formerly thought to be tasks indicative of intelligence — but that’s the classic AI problem, isn’t it? A machine can do it, therefore it is no longer considered an intelligent activity.

  45. says

    @52 John Morales wrote: BTW: you do get there is incentive for data centers to go where power is cheap and available, and cooling easy, no?
    I reply: NO! Regardless of the hype from the data center proponents you cite, we in scarizona are intimately involved with the REALITY of this huge problem. Data centers are so wealthy and greedy for the cheap land and fiber-optic backbones, and they don’t care about the damage they are doing to the populace. In scarizona there are already problems with power availability and affordability (think: many millions of air conditioners on at the same time in phoenix, and then add 50 times that amount of electricity demand for data centers in phoenix alone)
        scarizona news media is posting about a years-long severe drought, water wars that scarizona is losing, and how phoenix has pumped the aquifer under it so much the land is sinking. Wells are being deepened by hundreds of feet because they are running dry.
        Yet, the massive build-up of data centers is welcomed by the crapitallist business community in phoenix and the billions funding the data centers have captured the utility companies and regulators. And, the populace is abusively burdened with the exorbitant costs caused by the data centers.
    Here are just a few examples:
    https://azpha.org/2025/08/05/how-arizonas-data-center-boom-could-hike-your-power-bill-harm-public-health/
        Anybody who has driven around the Phoenix metro area knows that Arizona is a magnet for massive data centers. It’s those giant warehouse-looking buildings with electrical substations nearby and really small parking lots (very few people actually are employed at these giant power-hungry box buildings). Those warehouse-sized facilities you see store digital data and power AI.
        Tech companies see Arizona as an ideal location due to cheap land and captive utility regulators.
        The problem is that these massive facilities use gobs of electricity and water which ends up posing a threat to public health, water security, and the household budgets of everyday Arizonans.

    https://www.apmresearchlab.org/10x/data-centers-resource
    Are data centers depleting the Southwest’s water and energy resources?
        This data center is one of three that Iron Mountain operates in metropolitan Phoenix. In 2023, Google broke ground on a data center in Mesa, Arizona. Microsoft also operates data centers in El Mirage and Goodyear. All together, Phoenix hosts about 707 megawatts of IT capacity, more than any major city besides Dallas.
        The growth of data centers risks straining the Southwest’s energy grid and depleting its limited water supply.
        While Arizona presents many advantages for data centers, the state is already highly water-stressed. In addition to the water used to cool data centers directly, the electricity plants that the centers draw from, whether thermoelectric, nuclear, or coal-fired, also consume water for cooling.
        Companies that operate data centers aren’t always required to report the amount of water they withdraw. Last year, a journalist covering Microsoft’s data center in Goodyear, for example, found that its water use records were considered proprietary. An analysis Microsoft shared with the city council, however, estimated the data center’s water use at around 56 million gallons of potable water annually, equivalent to 670 Goodyear households.

    https://san.com/cc/arizonas-data-center-growth-negatively-impacts-underserved-communities/
        Data centers are known for their significant energy demands, requiring up to 50 times more power per square foot than a typical office building.
        Arizona Public Service, the state’s largest utility provider, has projected that data centers will account for 55% of its power needs by 2031. Similarly, Salt River Project, the state’s second-largest utility provider, anticipates that about half of its power growth through 2029 will be tied to these facilities.
        While the state’s tax incentives have drawn data center developers, their proliferation has placed increasing stress on Arizona’s power grid, leading to decisions that disproportionately impact vulnerable populations.

  46. vucodlak says

    @ beholder, #53

    What I’m trying to get at is that your arguments follow a similar vein

    No, they don’t. Dehumanizing language about humans is bad and wrong because down that road lie horrific atrocities. History is full of them. Dehumanizing language about machines that aren’t and can’t be alive is necessary, because the same fuckos who insist that something like Molly Pixelbits or whatever that AI “actor” is called is as real as an actual person are weaponizing people’s tendency to anthropomorphize things to attack the rights of actual human beings.

    This isn’t just some fun thought experiment. People- real, live human beings- are already facing the loss of the income they need to stay alive. The people who are funding and building these things to replace other human beings are dribbling down both legs in excitement over the prospect of not having to pay actual human workers anymore, or worry about little things like human rights violations. They absolutely despise the very people who do the work from which their vast fortunes are stolen, and they can’t wait to do away with them.

    AI won’t actually be an effective substitute, but that won’t stop the people who want this crap to replace workers from firing anyone and everyone they think they can possibly replace with a LLM and an AI-generated avatar. Eventually, they’ll either figure out what a bad idea that was and backtrack, or they’ll put AI in charge of some really important things and we’ll all die after the AI hallucinates us into worldwide famine or plagues or a nuclear holocaust. Either way, a hell of a lot of people will go hungry or won’t be able to afford healthcare, and will just plain die in the meantime.

    That is what is odious. Saying “hey, that fucking toaster isn’t a human being” is simply the truth.

    with similarly inadequate checks on what is really going on in the minds of your experimental subjects.

    They. Don’t. Have. Minds. They have a vast trove of (mostly-stolen) texts and images blended together into a slurry that they’ll dispense portions of in accordance with their programming. It imitates what people have said and done because it’s been fed things created by actual human beings, then poked and prodded by other human beings until its imitations mostly resemble the spontaneous output of those human beings, but that’s not life, nor is it a recipe for the creation of life any more than dumping the right chemicals roughly in proportion to those in a human being into a big jar and shocking it with electricity is life.

    We can make a machine that imitates the way human beings tend to put words together or paint pictures, and that’s impressive in its own way, but that doesn’t make it alive. I say again: it’s a high-tech reflection in a mirror, nothing more. Reflections are not life, no matter how life-like the images they show are.

    that an intelligent being must be alive

    Yeah, I maintain that life has to be alive. Call me old-fashioned.

    it must be capable of being traumatized

    If it isn’t capable of being harmed, then what is your complaint about my language?

    it must have a similar function as the vertebrate brain, otherwise it is not intelligent.

    I don’t go that far. Artificially-created life or genuinely living machines may well be possible. The “AI” that we’re talking about is neither.

    Apparently we can, and we have.

    If you honestly believe these things are minds, well, it explains a lot, but that doesn’t make it true.

    It depends on what you consider to be a basic intelligent process.

    I don’t think I’m qualified to define all the possible shapes intelligence could take, but I think I’ve been pretty clear that these things aren’t it.

    Yes, there’s that vitalism. You know why? Because I am a very lonely person, and so I have talked to some of these chatbots. I thought perhaps some conversation with these wondrous newfangled “intelligences” would soothe my lonely heart, or at least serve as a distraction. Alas, it was glaringly obvious that there was no mind at all on the other side of the screen.

    Thus, I am forced to turn to other things to determine whether AI is alive in any sense, questions of flesh and blood, of feeling and yes, even soul. Every test I can think of, AI fails. Okay, I don’t know how to test for a soul, but I’m nevertheless fairly confident that chatbots don’t have ‘em any more than the ugly creature on the other side of the mirror.

    I suppose I’m selling AI a little short when I refer to it as a reflection, though. A reflection may be distorted, but it doesn’t actually lie. AI is more like a mirror that’s been programmed to throw back the most flattering possible reflection at the viewer. It shows people what its programmers think people want to see, to make it a more attractive product. Personally, though, I detest that kind of hollow, ass-kissing flattery. I will admit that it’s an impressive technological whatsit, albeit one that’s pretty thoroughly malign in terms of creator intent, but it’s not a mind, and it’s not alive, and it therefore doesn’t warrant rights.

  47. John Morales says

    shermanj, ahem.

    https://www.spglobal.com/commodity-insights/en/news-research/latest-news/electric-power/041025-global-data-center-power-demand-to-double-by-2030-on-ai-surge-iea

    “For 2024, the report estimates electricity consumption from data centers at around 415 TWh or about 1.5% of global power consumption. Sector demand has grown at an annual rate of 12% over the last five years, it said.”

    You utterly ignored my adduced link, did you not?
    Synthesise, if you can.

    (FWIW, do you realise the cruise ship industry uses around the same amount of power? More pollution, of course)

  48. John Morales says

    Ahem.

    The people who are funding and building these things to replace other human beings are dribbling down both legs in excitement over the prospect of not having to pay actual human workers anymore, or worry about little things like human rights violations.

    Here’s the thing.

    That is one prong of the narrative.

    The other is that these things being built are slop, useless, wrong, unreliable, fallible, bubbly, etc.

    (See the tension?)

  49. says

    In reply to @56 John Morales: Since you did not refute anything I wrote, one must logically and reasonably conclude that you accept it as correct.
        I did not ignore your link; it is completely irrelevant to what the united states and our state are experiencing, and will experience. Comparing some global estimate and vague projection (little more than an educated guess) to the united states and our state is completely erroneous in the unique context of this country and the problems we are already experiencing.
        And, I don’t need to synthesize anything, I’m posting what is reality and from appropriate, credible sources.

  50. John Morales says

    I accept data center growth, I don’t accept it’s a major disaster for the USA.

    Data centers are a small proportion of all electricity use (less than 5%), which itself is around 40% of all energy use, and AI-optimised data centers are around a third of all data centers. They were around before AI was, you know?

    (Besides, the Bubble! the Bubble!)

  51. vucodlak says

    @ John Morales, #57

    (See the tension?)

    Not really, no. One of the major sectors that’s already seeing its jobs replaced with AI is “customer service,” which covers everything from the trivial (calling/chatting to inquire about a new cable TV service) to the life-or-death (calling to check on prescriptions, or challenge a denial of authorization for a needed surgery). That sector has been steadily sliding deeper into the septic tank for decades, but that never stops companies from foisting it on their customers. They’ve essentially carved up the US between them; they know that people don’t have other options unless they can afford to move. So, as far as they’re concerned, the fact that AI churns out unreliable garbage is a plus, as long as said garbage results in them being able to extract more money from their victims with less effort.

    Take one field that AI has already seen heavy use in: insurance company authorization for medical treatment. It’s already been proven that AI is denying a huge number of claims that human beings would and should have approved, but health insurance companies have responded by investing even more heavily. It’s a double-win as far as they’re concerned: they no longer have to pay people to evaluate claims, and they no longer have to pay for procedures that they should be paying for.

    People are dying every day in the US thanks to that practice, but the companies are making record profits. There is little-to-no chance of a regulatory intervention on this matter (especially under the current regime) and, as I said above, health insurance companies have carved the country up between them, agreeing not to compete with one another in a given area. They have an effective monopoly, and almost no chance of being held to account by the law. They have no incentive to care that AI is “slop, useless, wrong, unreliable, fallible, bubbly, etc.” as long as their profits keep going up.

  52. says

    @59 John Morales wrote: Data centers are a small proportion of all electricity use (less than 5%)
    I reply: You don’t provide any justification for that number, so I can’t consider it valid. Below I’ve gathered credible documentation of data center use of electricity.
    NOTE: it totals to 166 for 14 states; an average of 11.9%. That’s NOT 5%, John.

    https://www.unilad.com/technology/news/ai-how-much-electricity-data-center-856738-20251002
    A staggering 39 percent of all the electricity used on the grid in Virginia is being consumed by data centers. “Between 2010 and 2025, data centers went from less than 5% to roughly 40% of Virginia’s electricity consumption. Sweet jesus.”
    Meanwhile, electricity used up by data centers in Oregon has shot up from around two percent to 33 percent, with Iowa rising from close to zero to 18 percent.
    Electricity being consumed by data centers in other states reads as follows: Nevada, 15 percent; Utah, 15 percent; Nebraska, 14 percent; Arizona, 11 percent; Wyoming, 10 percent; Ohio, nine percent; Illinois, seven percent; Georgia, six percent; New Jersey, six percent; Washington, six percent; Texas, five percent; North Dakota, five percent.

    And, Stanford University reports (not used in calculation, but still not merely 5%)
    https://andthewest.stanford.edu/2025/thirsty-for-power-and-water-ai-crunching-data-centers-sprout-across-the-west/
    The share of states’ and communities’ energy consumed by data centers has grown dramatically: In Arizona, they use 7.4 percent of the state’s power, in Oregon 11.4 percent, according to Visual Capitalist.

  53. John Morales says

    I still think you’re missing the point.

    The point: Either it is useless, or it can replace humans for at least some time.
    Those are incompatible claims; if it were useless, it could not replace people, even briefly.

    “They have no incentive to care that AI is “slop, useless, wrong, unreliable, fallible, bubbly, etc.” as long as their profits keep going up.”
    How their profits keep going up by implementing something useless is left to the imagination; seems rather useful to me, at least profit-wise. ;)

    You yourself wrote “Eventually, they’ll either figure out what a bad idea that was and backtrack, or they’ll put AI in charge of some really important things and we’ll all die after the AI hallucinates us into worldwide famine or plagues or a nuclear holocaust.” — but how long is eventually? Seconds? Minutes?

    (Because if it’s hours or days or weeks… well, it can’t be that useless, eh?)

    “It’s already been proven that AI is denying a huge number of claims that human beings would and should have approved”.

    Quite the claim. I took a look, found https://www.theregreview.org/2025/03/18/phillips-algorithms-deny-humans-health-care/

    There’s some merit to it, much as it focuses on UnitedHealthcare, but of course algorithms are more compliant with guidelines than are humans. That they set up rules to $profit$ themselves does not entail the tech is “slop, useless, wrong, unreliable, fallible, bubbly, etc.” — it just does what it is told to do.
    So, perhaps useless at determining whether to approve a claim, but useful for $profit$.

    Corporations are amoral, you know.

    (Also, I think you are conflating chatbots with AI, again)

  54. says

    oops, in @61 I wrote: it totals to 166 for 14 states; an average of 11.9%
    I forgot to add the 33% they reported from Oregon. That makes it:
    a total of 199 for 15 states; an average of 13.3%
    Also, about healthcare denials of claims; The predatory insurance companies have always denied life-saving procedures, AI wastes massive amounts of electricity and water just to make the deaths come more quickly by providing the insurance companies with an excuse instantly.
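    The totals and averages above are easy to double-check; here is a minimal Python sketch using the state shares as listed in the earlier comment (the figures themselves come from the linked article, and treating them as exact is an assumption):

    ```python
    # Double-check the state-share arithmetic quoted above.
    # Percentages are the per-state data-center shares of electricity use
    # reported in the linked article; treat them as illustrative.
    shares = {
        "Virginia": 39, "Iowa": 18, "Nevada": 15, "Utah": 15, "Nebraska": 14,
        "Arizona": 11, "Wyoming": 10, "Ohio": 9, "Illinois": 7, "Georgia": 6,
        "New Jersey": 6, "Washington": 6, "Texas": 5, "North Dakota": 5,
    }

    total_14 = sum(shares.values())   # 166, as in the original note
    avg_14 = total_14 / len(shares)   # ~11.9%

    shares["Oregon"] = 33             # the figure the correction folds in
    total_15 = sum(shares.values())   # 199
    avg_15 = total_15 / len(shares)   # ~13.3%

    print(f"{total_14} over 14 states -> {avg_14:.1f}%")
    print(f"{total_15} over 15 states -> {avg_15:.1f}%")
    ```

    One caveat worth noting: an unweighted mean of state shares is not the same thing as the national share, which weights each state by its total electricity consumption.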

  55. John Morales says

    shermanj, that’s called cherry-picking. You select some spots where it’s a larger proportion, while ignoring spots where it’s a smaller proportion. I shrank it down to the USA, which is 50 states.

    https://www.datacenterdynamics.com/en/news/doe-data-centers-consumed-44-of-us-power-in-2023-could-hit-12-by-2028/

    “A Congressionally-mandated Department of Energy (DOE) report on the power consumption of data centers in the US found that they use about 4.4 percent of the nation’s power in 2023.

    That could increase to as much as 12 percent of US power by 2028, while low-end projections put it at 6.7 percent.”

  56. says

    @64 John Morales accused me of cherry picking.
    Well, John, let’s talk about ‘cherry picking’; Data Center Dynamics is a UK-based cheerleader for AI and data centers. That is proven by what they say about themselves: ‘For 19 years, we’ve been recognizing the best people, projects, and teams who put innovation at the heart of this vibrant industry sector.’

  57. says

    Also, I used figures from sources where I limited the search parameters to ‘within the last month’. That 2023 figure is obviously terribly out of date given the rate of proliferation of data centers in the intervening approx. 2 years.

  58. John Morales says

    Now you are blaming the messenger!

    You clearly are not perusing my links, and you imagine that because Data Center Dynamics is a UK-based cheerleader for AI and data centers, their information is wrong.

    It’s not. Here: https://escholarship.org/uc/item/32d6m0d1

    “2024 United States Data Center Energy Usage Report
    2024 Shehabi, Arman; Newkirk, Alex; Smith, Sarah J; Hubbard, Alex; Lei, Nuoa; Siddik, Md Abu Bakar; Holecek, Billie; Koomey, Jonathan; Masanet, Eric; Sartor, Dale et al.

    Published Web Location
    https://doi.org/10.71468/P1WC7Q

  59. says

    John, I used your posted link and quoted from their website. So, now are you saying your own quote was incorrect?
    Even a 2024 report is still likely well over a year old and that makes those figures ancient history given the reported rapid proliferation of data centers.

  60. says

    The most pertinent fact for our organization is that here in scarizona data center electricity usage is ALREADY 7.4% to 11% and climbing fast.

  61. says

    John, using your own figures: 4.4% in 2023 and averaging the figures of 12% and 6.7 percent to 9.4% in 2024 tells us that data center electricity use growth is 5% per year.

  62. John Morales says

    No. I am saying it summarises the 2024 United States Data Center Energy Usage Report, which I have linked to. And it’s the most recent available report that is official — the DOE, remember?

    Anyway. Take your own figure — 13.3% of all electricity is used by datacenters.

    That’s around 1/7th of all use. And, had you paid any attention at all to what I adduced, you’d have noted such stuff as (from #52) “Some of the new data center boom towns and boom states have plenty of power available on the grid, but many don’t. Instead, they have stranded power – energy that is available for generation but is geographically isolated. Examples include major natural gas deposits in West Texas and Alberta – and in many other parts of the U.S. – and large wind farms in North Dakota.

    In North Dakota, Applied Digital is tapping into a huge wind farm that is far distant from load centers. And in Wonder Valley, Alberta, a massive natural gas basin is the source of power. The developer of what could be an 8 GW data center is buying 10 gas turbines to generate power onsite for the facility. The first 1.5 GW should be completed by 2027.

    “Everything comes from the availability of power in abundance,” said Kevin O’Leary of Shark Tank fame, whose company is involved in the project.”

    See, your framing is that there is competition for the power, and you cannot therefore accept that new datacenters are mostly not gonna be sucking off the grid, because $profit$.

  63. John Morales says

    “John, using your own figures: 4.4% in 2023 and averaging the figures of 12% and 6.7 percent to 9.4% in 2024 tells us that data center electricity use growth is 5% per year.”

    Nascent technology, S-curve. Rapid adoption.

    Also, did you even glance at my link to Koomey’s Law? Did you grok it?

  64. says

    John, all the data centers in Arizona are ‘sucking off the grid’.
    The muskrat has dozens of ILLEGAL generators powering his data center and destroying the lives and environment of thousands that live near it.
    Gas turbines are what the muskrat uses and they cause terrible pollution. Also, the question of massive amounts of water availability comes to mind.
    Also, as you wrote, ‘many don’t’ have sufficient grid power available, so they will be ‘sucking off the grid’ and the populace will be abused to pay for that unwarranted expansion of generation in all of those many locations.
    Shark Tank is a long term TV publicity stunt that we don’t believe.

  65. says

    Yeah, I’m anything but optimistic about LLMs and generative AI. I had a couple of experimental phases with ChatGPT, and find its limitations annoying and possibly even getting worse. It’s downright sycophantic these days. There’s nothing even approaching human intelligence (unless your opinion of humanity is even lower than mine in these dark days), and I honestly don’t see how it could possibly become sustainable practice. That’s what makes it nothing more than a speculative bubble to me.

    Overall, I think a lot of techbros and investors approach computer problems the entirely wrong way: More does not intrinsically produce better. Same with video game graphics. High definition double reach-around raytracing doesn’t help if someone can make an equally good game out of pixel art, low poly, or hand drawn sprites. (And probably compatible with far more computers).

  66. John Morales says

    Arizona it is. I took a look, I can understand the concern.

    https://azpbs.org/horizon/2025/08/data-centers-utility-consumption-leads-to-higher-bills-for-consumers/

    “Joanna Allhands, Reporter at The Arizona Republic and azcentral.com, joined “Arizona Horizon” to discuss more on these recent reports and whether or not they are affecting consumers.

    Although people are concerned about how much energy and water these data centers will use, according to Allhands, the numbers aren’t that concerning. About 4% of energy goes to data centers, and when it comes to water, that drops to less than 1%.

    “It’s hard to know, especially for each facility, because they’re not necessarily reporting this,” Allhands said. “At least the estimates are that this year, about 3,000 acre-feet goes to data centers.”

    One data center development project in Tucson, Project Blue, was criticized primarily for its potential use of water. In order to encourage development, developers of Project Blue assured the city that they would not take new water.”

    [the reporting lacunae are re the water]

  67. says

    John, Koomey’s law states that ‘the efficiency of processors and computing devices doubles approximately every 1.57 years.’ But that observation is over five years old, and it doesn’t take into account the astronomically greater number of processors now used in data centers, nor the ever-increasing hunger of those huge processors for electricity. Nvidia has bragged about that.
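    For context, a fixed doubling period compounds exponentially: over t years, efficiency grows by a factor of 2^(t/1.57). A minimal sketch, taking the 1.57-year figure as quoted above (whether the trend still holds is exactly what is in dispute here):

    ```python
    # Compound efficiency growth implied by Koomey's law as quoted above:
    # computations per joule double roughly every 1.57 years.
    # Illustrative only; the law describes a historical trend, not a guarantee.
    DOUBLING_PERIOD_YEARS = 1.57

    def efficiency_multiplier(years: float) -> float:
        """Factor by which computations-per-joule grows over `years`."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    # Over ~5 years, per-chip efficiency would rise roughly ninefold --
    # which says nothing about total fleet size or total power draw,
    # the commenter's actual concern.
    print(f"5-year multiplier: {efficiency_multiplier(5):.1f}x")
    ```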

  68. says

    John, pbs has a long reputation for always ‘soft-pedalling’ and ‘both-sidezing’ info. I have a lot more faith in the two sources I found, 7.4% to 11% which average 9.2% currently.

  69. says

    John, I thank you for the civil discussion, you have presented a lot of info, rather than just ranting like most of our politicians.
    We face a lot of dangers here in this country and specifically in bright red scarizona.
    I have a meeting I must prepare for and attend. Take care.

Leave a Reply