Normally, I’d say that Sam Altman deserves any pushback he gets. But the AI hatred seems to be getting a little too intense.
On the morning of Friday, April 10th, a 20-year-old Texas man named Daniel Alejandro Moreno-Gama was arrested for allegedly throwing a Molotov cocktail at Sam Altman’s mansion on Russian Hill in San Francisco. Less than two days later, police arrested 25-year-old Amanda Tom and 23-year-old Muhamad Tarik Hussein for allegedly firing a gun at the same house from their car before speeding away.
Earlier the same week, and thousands of miles away, an unknown assailant fired 13 shots into the front door of the home of city councilman Ron Gibson, who had just voted to approve a new data center in Indianapolis over a groundswell of public outcry. A sign reading “NO DATA CENTERS” was left tucked under the doormat.
I can understand all the AI hate: data centers are environmental catastrophes, they represent a gross invasion of our privacy, and they don’t seem to contribute much of value to society, but wow, they sure help improve billionaires’ profits. Unfortunately, in addition to the rational opposition, there are also crackpots with bizarre paranoid fantasies.
Little is known about the motives of Tom or Hussein, or the politics of the Indianapolis shooter, but reporters and the online commentariat quickly dredged up Moreno-Gama’s Discord chats and Substack posts. He was a reader of rationalist and AI doomer Eliezer Yudkowsky, who argues, as the title of his latest book puts it, that if Silicon Valley builds a “superintelligent” AI, “everyone dies.”
Yeah, if you’re citing Yudkowsky, you’re a victim of extreme derangement. I guess it’s predictable that if your reaction is to throw Molotov cocktails at people’s houses, you’re probably not building your case on a sound foundation. AI is not superintelligent, or even intelligent at all; it’s a tool that can be used by bad people to do bad things. Unfortunately, it’s also the case that AI proponents have built up a gigantic edifice of hype, pumping up the imagined power of AI to the point that they are actively asserting it might lead to the end of humanity.
If you take at face value what the AI executives themselves have been saying for the last decade, that an AI powerful enough to make humans go extinct is nascent, then acting with force to stop it would be a rational action. The AI industry and its executives—including Sam Altman—need to own this outcome, not blame it on Yudkowsky, safety researchers, or worried activists who take what they say literally.
That’s fair. The people who have pumped up the hype are reaping what they have sown.
The nonsense promoted by the Less Wrong crowd isn’t the real danger, though. This is the real danger:
Inequality is through the roof. A bona fide tech oligarchy is ascendant, buoyed by leverage provided by AI. Its data centers, which bring few jobs and hike electricity bills, are enraging communities on the right and the left. Slop is everywhere. AI-generated art and text are undercutting creatives, powered by pirated, non-consensually ingested work. Employers from Amazon to Block to Duolingo to Meta are firing tens of thousands of workers and citing AI as the reason. AI may one day cure cancer, we’re told; great, even if we believe that, who will be able to afford the treatment?
That’s the anger fueling the anti-AI violence. To the handwringing AI industry insiders blaming doomers and poor messaging, ordinary people are saying: Wake up. We have good reason to hate AI and the people who profit from it. And yes, as people get desperate, as young people increasingly feel like AI elites have mortgaged their future, as residents vote to regulate AI or ban local data center projects only to see their will overridden in favor of industry interests, how do you expect them to feel? There is a distinct risk of further escalation.
If I had the opportunity to vote to stop the construction of a local data center, I’d take it, no question. I’m not at the point of throwing molotov cocktails, though. At the rate this country is falling apart at the hands of the oligarchs, give me a year to come around.



Sadly, Sam Altman is kind of passé these days. The new techno-boogeyman is Anthropic, which was founded by former OpenAI developers. Their Claude AI is now threatening the jobs of developers in the tech industry. The promise is that fewer high-priced geeks and nerds will be needed in the not-too-distant future, when anybody can have Claude Code produce the tool they need.
If you’re interested in getting some insight into AI in general, and ChatGPT specifically, check out Professor Casey Fiesler on YouTube (here). She’s a PhD teaching “AI Ethics” at the University of Colorado Boulder. I can’t verify everything she says, but she’s apparently knowledgeable about the field.
Her main mantra is “AI isn’t magic.”
She’s also a lawyer…which is interesting because I personally think there are a lot of legal minefields waiting for AI. Some of them are already emerging, such as genAI producing text that is essentially copied from copyright-protected publications. Tons of lawsuits won’t kill what’s being called “AI” these days, but I suspect they will curtail some of the enthusiasm. Lawyers have a lot of power in tech companies.
Anyone who firebombs Altman can’t be all bad.
Wait until electricity and water rates spike even higher because data centers are consuming far more than their share. We may see a real revolution then.
The Onion’s crack researchers found the motive.
Title: Man Who Threw Molotov Cocktail At Sam Altman’s Home Claims He Was Following ChatGPT Recipe For Risotto
https://theonion.com/man-who-threw-molotov-cocktail-at-sam-altmans-home-claims-he-was-following-chatgpt-recipe-for-risotto/
It’s hard to say what’s likely to kill us, but it seems that capitalism, nationalism, and nuclear weapons are a bigger threat than AI. Is your worry that the AIs will launch the nukes? Is that a greater or lesser danger than Donald Trump? Is the current release of CO2 in the Persian Gulf worse than a few data centers? Which is more likely to escalate, my ChatGPT game of “thermonuclear war” or Israel’s nuclear game of liar’s poker?
We are already into the zone where the boffins are expecting the collapse of agriculture soon and Ukrainian engineers are putting M2 heavy machine guns on tracked drones for clearing Russians out of foxholes. The machine-gun drones are getting more autonomous with every upgrade. SKYNET is already killing Russians. I think the scenario is so fucked it’s hard to say which fucking fuckage is the fuckedest. I’ll keep using recycled plastic bags, and driving a hybrid. I dunno why.
Inequality is through the roof. A bona fide tech oligarchy is ascendant, buoyed by leverage provided by AI. Its data centers, which bring few jobs and hike electricity bills, are enraging communities on the right and the left. Slop is everywhere. AI-generated art and text are undercutting creatives, powered by pirated, non-consensually ingested work. Employers from Amazon to Block to Duolingo to Meta are firing tens of thousands of workers and citing AI as the reason. AI may one day cure cancer, we’re told; great, even if we believe that, who will be able to afford the treatment?
I am reminded of how Andrew Carnegie fired loads of iron-workers when the Bessemer process roughly halved iron-work labor requirements and everything switched to steel. Meanwhile, Rockefeller’s oil pumps and lines put zillions of colliers out of work, creating new problems and solving old ones. Cars jammed the streets of New York and the new trains allowed the 12-storey city block sized mountains of horse poo to be shipped and dumped in New Jersey. This was capitalism, up to its usual shit, which was heedlessly creating and destroying jobs.
All those centrist programmers who didn’t unionize when they had power may be regretting it now. Turns out Elon didn’t actually care any more than the admiralty cared about the colliers.
Ironically, an AI might recommend a more humanistic course of action and almost certainly would advise against a nuclear war. I really think the problem is laissez-faire capitalism and the moral nihilism business schools teach, which treats people as interchangeable resources.
In slightly better news on the subject – Missouri Town Council Approves Data Center. A Week Later, Voters Fire Half of Council. People really hate this stuff.
The plutocrat tech bros are all sociopaths. They don’t care how many people’s lives they destroy. Their obscene data centers cause massive damage for miles around: air pollution, noise pollution, massive heating, depletion and contamination* of dwindling water resources, wasting huge amounts of electricity that the captive utility consumers must pay for, etc. All while their AI entities spew bullshit at ignorant people. It is going to be difficult to survive in this highly likely, truly apocalyptic world. And, I’m being optimistic.
*many tech sites found that they have to put anti-corrosive additives in the water they use that can’t be removed, so that water becomes toxic waste.
p.s. We at our organization are dedicated to pacific behavior (physically peaceful, non-violent). However, these plutocrats are committing violent acts against the populace. We will not support this massive abuse of resources and people. We have taken a pledge to NOT support or use AI.
Tell me about it! The governor of Wyoming had a closed-door meeting with California tech bros in Jackson a few weeks ago to decide where best to put data centers in the state. Not surprisingly, very few people around here think more data centers are a good idea given the lack of snowpack this year. Someone mentioned she was tired of subsidizing billionaires. I wrote to the mayor a few months ago about a data center that is already going in, and he assured me that the center will be self-contained and will recycle water, and that it will “only” use 0.6 percent more water than the city of Cheyenne. I’m suspicious that he used a percentage because the actual volume of water would be too scary. If everyone does that, the amount of water going to data centers is going to grow pretty fast. It’s going to be pretty hard for the state to convince anyone to pull back on water use when there seems to be plenty for people who don’t live here or care about the state, but see an opportunity to make some money.
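A quick back-of-the-envelope sketch of why a percentage can hide a big number. The figures below are entirely hypothetical (I don’t know Cheyenne’s actual municipal water usage); the point is just that “only 0.6 percent” of a city-sized number is still a lot of water:

```python
# Hypothetical figures only -- not Cheyenne's actual usage.
city_gallons_per_day = 14_000_000   # assumed city-wide municipal water use
data_center_share = 0.006           # the quoted "0.6 percent"

extra_gallons_per_day = city_gallons_per_day * data_center_share
extra_gallons_per_year = extra_gallons_per_day * 365

print(f"{extra_gallons_per_day:,.0f} gallons/day")    # 84,000 gallons/day
print(f"{extra_gallons_per_year:,.0f} gallons/year")  # 30,660,000 gallons/year
```

On those made-up numbers, the “small” percentage works out to tens of millions of gallons a year, which is presumably why the volume gets quoted as a percentage in the first place.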
One more post. We hold free community computer clinics and provide refurbished computers to low-income people and shelters for abused people, so we are far from being Luddites or anti-tech. However, responsibility to the populace is more important than a policy of ‘tech advancements at all costs’.
@6 Marcus Ranum is correct: Inequality is through the roof.
Thus we must point out how all the huge abusive corporations are destructive to society and not to be trusted, encouraged or supported.
“Its data centers, which bring few jobs and hike electricity bills, are enraging communities on the right and the left.”
Data centers were around long before AI, because they handle all sorts of data and computation, not just AI. The world financial system and the internet have run on them for decades. Cloud services run on them.
Gather a lot of computational processing power and data storage in one location, and that is a data center.
AI is merely a new workload class, though the fastest-growing one of course.
—
Point being, before AI there was no such backlash, but there were plenty of datacenters.
In the news: https://www.theguardian.com/technology/2026/apr/15/amazon-enters-agreements-nine-australian-renewable-projects-power-datacentres
↓
Amazon has entered power agreements with nine new renewable projects in New South Wales and Victoria, as the technology company seeks to source renewable power for its datacentre operations in Australia.
The nine deals, including one windfarm and 10 solar and battery projects, will take the amount of renewable energy Amazon is sourcing in Australia from 430MW to nearly 1GW.
The power purchase agreements are contracts between energy providers and datacentre operators to meet the expected demands of their centres. Amazon has entered into agreements for more than 20 projects in Australia as it aims to reach net zero carbon emissions by 2040.
These include power from Victoria’s Golden Plains 2, the largest windfarm in Australia, which began operating in 2024. It also includes the solar and battery storage farm in Muswellbrook in New South Wales, which is being built on a former coalmine site.
John Morales @ #12 — And I think it’s fair to say that not all “AI data centers” are being used for Large Language Models like ChatGPT. Equating “AI” with ChatGPT and other LLM technologies is an error. AI covers a lot of ground, and some of it probably doesn’t require massive data centers.
Indeed, robro. e.g. https://localai.computer/learn/run-ai-locally
You do realize that literally every piece of data on this planet is hosted in… data centers. They are a necessity for modern information infrastructure. I agree the AI craze is out of control, but eliminating data centers would be the equivalent of banning the Internet.
I work in a small data center by the way.
More good news – Maine bans new data centres until November 2027
unclestinky, that is not the claim you adduced:
So it’s fine to build two 19.99MW datacenters instead of a 40MW one.
Or one hundred, for that matter. So long as each one is not over 20MW.
NIMBY and BANANA, Mainely.
I still think Yudkowsky is a reckless catastrophist, but he’s definitely low on the blame chain compared to AI-hyping venture parasites, environmentally blind profiteers, and CEOs looking for cover to lay off workers.
I don’t know whether the coming bursting of the AI bubble (there seems to be almost universal agreement that it’s a bubble) will make things better or worse. Some economic analysts have concluded that the AI data centers will be a complete loss, but that more capacity will have been added to the electrical grid, due to this bubble. And, that that will be the only permanent benefit.
Then, there is the problem of the data centers requiring new AI chips every two to three years, and that those cannot be made without helium. So, the recent loss of 1/3 of the global He supply in Qatar, for perhaps up to five years, may throw that whole business out of whack. Fewer chips made? More expensive chips? Who knows.
It’s chaotic all around, but at least a huge crash of the AI market would be a nice slap in Big Tech’s face, and, though it might bring great suffering to all the rest of us too, we seem to be in for that in any case.
I think there’s a bit of difference between AI-focused datacenters and the ones that existed prior to the AI craze. The datacenters I worked at before were in industrial or business areas. They weren’t setting up shop after buying a farm. That would make no sense because they required significant network access. With AI datacenters you still need network access, but I’m not sure they need the scale of network access that a traditional datacenter does. They definitely need more cooling and processing power than a normal datacenter, though, which means they’re going to either spread out more or have more demanding cooling requirements. Couple that with a willingness to move in next to where people live and you have a very different problem.
I also don’t see the point in making up absurd claims about AI. There are plenty of real, pressing issues with AI, so one does not need to make up reasons to look upon it unfavorably. For one thing, after the Google Books fiasco, AI techbros have managed to cement the idea of Mass Theft as a Service being an allowable business model. This is the new “corporations are people” BS. And the next time someone invents a business model around stealing huge swathes of the internet, it’s going to be harder to argue against legally. But Napster, now that was obviously an issue (sarcasm).
Actually the simple fact that the software being marketed as “AI” can’t possibly take over the world is another problem. It can’t accomplish that because it lacks any sense of understanding on any topic. And that means it’s not artificial intelligence at all, that’s just a label, a marketing gimmick. A lie.
So an accurate if unflattering summary of this stuff could go something like “This crop of so-called AI software is created by stealing from us, we’re teaching it for free, and the end goal is for some tech oligarch to steal our jobs so he can get rich.” Sure there are other details to it but that’s the nature of a summary. It doesn’t include everything. And nothing about that summary is wrong.
Ultimately if AI shows more bust than boom I wouldn’t be surprised if these new datacenters don’t go away. I’d imagine at least some of them will just quietly convert over to something like cryptobro bitcoin mining. That might not be a bad end either. If it happens at a large enough scale it might devalue everyone else’s crypto and put an end to that creative Ponzi scheme as soon as they start cashing out to make sure someone else is holding the bag. So if you have crypto, sell it ASAP if AI companies start dropping off.
garnetstar @ #20 — The LLM and so-called genAI area definitely feels like a bubble. There’s a tremendous amount of hype, and there are some issues with it: copyright infringement, infringement on private information, that feature called “hallucination”, and so on.
But AI has been around for quite a while in various forms, and some aspects are deeply ingrained in the way we do things. So I would guess the “AI bubble” will be like the “dot-com bubble” of the 90s. The bubble bursts, some of the players disappear, others adapt, but AI keeps going. Even LLM and genAI technology will survive, though altered in ways, because these technologies have some utility.
lanir:
To say the jobs were stolen implies they still remain, but no longer belong to us.
Almost as if AI had utility, no?
I see that all the time; ‘they are no good yet they steal our jobs’ type of thing.
(Me, I’m not sure what has supposedly been stolen from me by this crop)
—
But yes, it’s software. It is not data centers. So you got that bit right.
One of the main dangers is the “AI slop everywhere” problem. People, and even more so bots, will flood the world with content faster than you can sort truth from lies. The other danger is people learning to rely on quick, easy, sort-of-correct answers from AI rather than learning anything themselves.
Daleks are cyborgs instead of AIs, so they will not like data centers. Just saying…
I see that all the time; ‘they are no good yet they steal our jobs’ type of thing.
John, do you really not understand how both of those things can be true at once? It’s really not that complicated a concept: an employer would have no problem replacing a competent human or three with a chatbot that spouts mediocre output, because a) the chatbot is cheaper and won’t get uppity, and b) the employer doesn’t care, or doesn’t have incentive to care, about the quality of the cheaper chatbot’s/scabbot’s output.
Are you really this obtuse, or are you just being a techbro-libertarian and smugly insisting the owners of capital are always rational and right?
@12 John Morales wrote: AI is merely a new workload class
I reply: of course you are technically correct. It’s just that this ‘new class’ of data centers is obscenely bulked up on steroids. The tech articles I’ve read (along with the photos of the muskrat’s many dozens of illegal, noisy, polluting gas turbines at his data centers) show these are magnitudes more destructive than previous data centers. They are destroying the life of many communities. That’s the real point.
robro @22, yes, the economists’ analyses weren’t saying that what I might call “normal” AI, the kind that’s been around for a while, actually does useful things, and is advancing at the normal rate, wouldn’t remain. As did, as you say, some of the dot-com companies.
They were mostly writing about the massively expensive infrastructure, forecasting what value might remain from that.
Maybe they could turn the former AI data center buildings into low-cost housing units!
RB:
That’s what xenophobes say about immigrants, I understand that much.
It’s not warranted.
Obviously, if they do a shitty job of doing the jobs they have, um, stolen (notice where the agency is attributed!) that ain’t gonna be very good for the former employer, because now the enterprise is less competent.
Enterprises still need to serve their customers, and shitty service is not conducive to that.
You don’t think profit and competitiveness are incentives?
(Scab-bot! Heh. The AI is stealing the jobs!)
Point being, that dynamic has been around as long as profit-driven businesses have been around; it is not new due to AI.
Obviously, if they do a shitty job of doing the jobs they have, um, stolen (notice where the agency is attributed!) that ain’t gonna be very good for the former employer, because now the enterprise is less competent.
Yes, because as we all know, businessmen and owners of capital are always 100% rational enlightened beings whose profit-driven self-correcting decisions are, by definition, always the best and most enlightened, and everyone who disagrees with a businessman’s decision is either a shortsighted fool or a malicious malcontent…right?
Oh, and comparing AIs to immigrant workers is yet another layer of silliness.
I noted there actually are incentives, contrary to your assertion that there are not.
Here: “Enterprises still need to serve their customers, and shitty service is not conducive to that.
You don’t think profit and competitiveness are incentives?”
How you imagined that meant I claimed businessmen and owners of capital are always 100% rational enlightened beings is pretty obvious, though. Straw dummies are much easier to engage.
“Oh, and comparing AIs to immigrant workers is yet another layer of silliness.”
But I did not compare AIs to immigrant workers, did I?
That is another false claim by you, another dummy.
I noted that what you said is the same thing immigrants’ detractors say about them.
cf. https://www.fosterglobal.com/blog/debunking-the-myth-of-the-job-stealing-immigrant/
@John, employers reduce quality to extend profits often, and well documented. Hence the term shitification.
In theory competition is meant to provide pressure in the direction of better quality at lower cost. However we are in an environment of monopolies and cartels, with the regulators against such behaviour neutralised, so not so much.
“@John, employers reduce quality to extend profits often, and well documented. Hence the term shitification.”
Ahem. Cory Doctorow, and it’s ‘enshittification’.
That’s profit-driven degradation of a captive audience.
I am perfectly familiar with the concept.
A concept misapplied by you, because AI does not yet have a captive audience to enshittify.
As opposed to when? Before regulatory capture was a thing?
Or is it days of yore? Before LLMs? 2022, you mean? :)
But fine. I get your objection.
All those jobs are being stolen by AI, which is useless and cannot do the jobs, and this is well documented.
It follows that eventually, there will be no jobs.
Doom!
Heh heh heh.