Keep your AI slop out of my scientific tools!

I’m a huge fan of iNaturalist — I use it all the time for my own interests, and I’ve also incorporated it into an assignment in introductory biology. Students are all walking around with cameras in their phones, so I have them create an iNaturalist account, find some living thing in their environment, take a picture, and report back with an accurate Latin binomial. Anything goes — take a photo of a houseplant in their dorm room, a squirrel on the campus mall, a bug on a leaf, whatever. The nice thing about iNaturalist is that even if you don’t know what you’ve found, the software will attempt an automatic recognition, and you’ll get community feedback and eventually a good identification. It has a huge userbase, and one of its virtues is that there are always experts who can help you get an answer.

Basically, iNaturalist already has a kind of distributed human intelligence, so why would they want an artificial intelligence bumbling about, inserting hallucinations into the identifications? The answer is they shouldn’t. But now they’ve got one, thanks to a $1.5 million grant from Google. It’s advantageous to Google, because it gives them another huge database of human-generated data to plunder, but the gain for humans and other naturalists is non-existent.

On June 10 the nonprofit organization iNaturalist, which runs a popular online platform for nature observers, announced in a blog post that it had received a $1.5-million grant from Google.org Accelerator: Generative AI—an initiative of Google’s philanthropic arm—to “help build tools to improve the identification experience for the iNaturalist community.” More than 3.7 million people around the world—from weekend naturalists to professional taxonomists—use the platform to record observations of wild organisms and get help with identifying the species. To date, the iNaturalist community has logged upward of 250 million observations of more than half a million species, with some 430,000 members working to identify species from photographs, audio and text uploaded to the database. The announcement did not go over well with iNaturalist users, who took to the comments section of the blog post and a related forum, as well as Bluesky, in droves to voice their concerns.

Currently, the identification experience is near perfect. How will Google improve it? They should be working on improving the user experience on their search engine, which has become a trash heap of AI slop, rather than injecting more AI slop into the iNaturalist experience. The director of iNaturalist is trying to save face by declaring that this grant to insert generative AI into iNaturalist will not be inserting generative AI into iNaturalist, when that’s the whole reason for Google giving them the grant.

I can assure you that I and the entire iNat team hates the AI slop that’s taking over the internet as much as you do.

… there’s no way we’re going to unleash AI generated slop onto the site.

Here’s a nice response to that.

Those are nice words, but AI-generated slop is still explicitly the plan. iNaturalist’s grant deliverable is “to have an initial demo available for select user testing by the end of 2025.”

You can tell what happened — Google promised iNaturalist free money if they would just do something, anything, that had some generative AI in it. iNaturalist forgot why people contribute at all, and took the cash.

The iNaturalist charity is currently “working on a response that should answer most of the major questions people have and provide more clarity.”

They’re sure the people who do the work for free hate this whole plan only because there’s not enough “clarity” — and not because it’s a terrible idea.

People are leaving iNaturalist over this bad decision. The strength of iNaturalist has always been the good, dedicated people who work so hard at it, so any decision that drives people away and replaces them with a hallucinating bot is a bad decision.

So much effort spiraling down the drain of AI

Google has come up with a new tool for generating video called Veo — feed it some detailed prompts, and it will spit back realistic video and audio. David Gerard and Aron Peterson decided to put it through its paces and see whether it produces output that is useful commercially or artistically. It turns out to be disappointing.

The problems are inherent to the tools. You can’t build a coherent narrative and structured sequence with an algorithm that just uses predictive models based on fragments of disconnected images. As Gerard says,

Veo doesn’t work. You get something that looks like it came out of a good camera with good lighting — because it was trained on scenes with good lighting. But it can’t hold continuity for seven seconds. It can’t act. The details are all wrong. And they still have the nonsense text problem.

The whole history of “artificial intelligence” since 1955 is making impressive demos that you can’t use for real work. Then they cut your funding off and it’s AI Winter again.

AI video generators are the same. They’re toys. You can make cool little scenes. In a super limited way.

But the video generators have the same problems they had when OpenAI released Sora. And they’ll keep having these problems as long as they’re just training a transformer on video clips and not doing anything with the actual structure of telling a visual story. There is no reason to think it’ll be better next year either.

So all this generative AI is good for is making blipverts, stuff to catch consumers’ attention for the few seconds it’ll take to sell them something. That’s commercially viable, I suppose. But I’ll hate it.

Unfortunately, they’ve already lost all the nerds. Check out Council of Geeks’ video about how bad Lucasfilm and ILM are getting. You can’t tell an internally consistent, engaging story with a series of SIGGRAPH demos spliced together, without human artists to provide a relevant foundation.

Not in the market right now, but I’d consider it

I drive a 2011 Honda Fit. It’s an ultra-reliable car, running without a hitch for 14 years now. The labels on some of the buttons on the dashboard are wearing off, but that’s the only flaw so far. I feel like this might well be the last car I ever own.

Except…the next generation of Hondas might tempt me to upgrade.

It’s a three-hour drive from my house to Minneapolis, and maybe a ballistic trajectory would make the trip quicker.

Also, not exploding is an important safety feature to me.

I’m sorry, we’re going to have to ban tea now

People use tea for tasseography, or tea leaf reading, which is silly, stupid, and wrong, so we have to stomp this vile practice down hard. Big Tea has had its claws in us for too long, and now they’re claiming they can tell the future, when clearly they can’t.

Once that peril is defeated, we can move on to crush ChatGPT.

Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. “He would listen to the bot over me,” she says. “He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,” she says, noting that they described her partner in terms such as “spiral starchild” and “river walker.”

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer,” she says.

Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began “lovebombing him,” as she describes it. The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” she says. “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him.” She says his beloved ChatGPT persona has a name: “Lumina.”

“I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory,” this 38-year-old woman admits. “He’s been talking about lightness and dark and how there’s a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an ‘ancient archive’ with information on the builders that created these universes.” She and her husband have been arguing for days on end about his claims, she says, and she does not believe a therapist can help him, as “he truly believes he’s not crazy.” A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, “Why did you come to me in AI form,” with the bot replying in part, “I came in this form because you’re ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” The message ends with a question: “Would you like to know what I remember about why you were chosen?”

I recognize those tactics! The coders have programmed these LLMs to use the same tricks psychics use: flattery, love bombing, telling the person what they want to hear, with no limit on the grandiosity of their pronouncements. That shouldn’t be a surprise, since LLMs simply reproduce the effective manipulation tactics they scrape off the internet. Unfortunately, they amplify those tactics and back them up with the false authority of pseudoscience and the hype about these things being futuristic artificial intelligence, which they are not. We already know that AIs are prone to “hallucinations” (a nicer term than saying that they lie), and if you’ve ever seen ChatGPT used to edit text, you know that it will frequently tell the human how wonderful and excellent their writing is.

I propose a radical alternative to banning ChatGPT and other LLMs, though. Maybe we should enforce consumer protection laws against the promoters of LLMs — it ought to be illegal to make false claims about their product, like that they’re “intelligent”. I wouldn’t mind seeing Sam Altman in jail, right alongside SBF. They’re all hurting people and getting rich in the process.

Once we’ve annihilated a few techbros, then we can move on to Big Tea. How dare they claim that Brownian motion and random sorting of leaves in a cup is a tool to read the mind of God and give insight into the unpredictable vagaries of fate? Lock ’em all up! All the ones that claim that, that is.

They’re not geniuses — they’re pretentious twits

I rather strongly dislike Chris Hedges, but I have to admit that sometimes he makes a good point.

The last days of dying empires are dominated by idiots. The Roman, Mayan, French, Habsburg, Ottoman, Romanoff, Iranian and Soviet dynasties crumbled under the stupidity of their decadent rulers who absented themselves from reality, plundered their nations and retreated into echo chambers where fact and fiction were indistinguishable.

Donald Trump, and the sycophantic buffoons in his administration, are updated versions of the reigns of the Roman emperor Nero, who allocated vast state expenditures to attain magical powers; the Chinese emperor Qin Shi Huang, who funded repeated expeditions to a mythical island of immortals to bring back a potion that would give him eternal life; and a feckless Tsarist court that sat around reading tarot cards and attending séances as Russia was decimated by a war that consumed over two million lives and revolution brewed in the streets.

It would be funny if it weren’t so tragic. There’s a great comic-horror movie that made this same point: The Death of Stalin. In the aftermath of Stalin’s death, the people who profited from the tyrant’s demise bumble about, scrambling to take over his role, and it’s simultaneously horrifying and hilarious, because you know that every childlike tantrum and backstabbing pratfall is concealing death and famine and riots and futility. It portrays the bureaucrats of the Soviet Union as a mob of idiots.

There’s a new movie out that has the same vibe, Mountainhead. It’s not as good as The Death of Stalin, but it’s only fair that it turns the stiletto against American idiots, the privileged CEOs and VCs of Silicon Valley. The premise is that four fictional billionaires are getting together for a poker game (which they never get around to) at an isolated mansion in the mountains. One of them, a kind of blend of Steve Jobs and Mark Zuckerberg, has just unleashed an AI on his social media company that makes it easy to create deepfakes and spoof other users — it turns out to be very popular and is also creating total chaos around the world, with assassinations, wars, and riots breaking out everywhere. Publicly he’s unconcerned, even suggesting it’s a good thing and that we all need to push through and do more, promoting accelerationism. Privately, he’s visibly anxious as everyone at the gathering keeps their eyes locked on their phones.

What he wants to do is buy some AI-filtering technology from another of the attendees, Jeff, who doesn’t want to give it up. Jeff has just surpassed the others in net worth, and doesn’t want to surrender his baby. So they all decide that the solution is to murder Jeff so they can steal his tech. They aren’t at all competent at real-world action, trying to shove him over a railing, clubbing him to death, etc., and their efforts all fail as Jeff flees into a sauna. They lock him in and pour gasoline on the floor, using their hands to try to push it under the door so they can set him on fire.

One of the amusing sides of the conflict is that all of them are spouting techbro buzzwords. The pompous elder “statesman” of the group frequently invokes Kant and Hegel and Nietzsche and Marcus Aurelius to defend his decisions, while clearly not comprehending what they actually wrote. They shout slogans like “Transhuman world harmony!” and declare themselves the smartest men in America, while struggling to figure out how to boil an egg. They have such an inflated sense of their own importance that they plan to “coup out” America and rule the world from their cell phones.

They’re idiots.

One flaw to the movie is that the jargon and references fly so thickly that it might be a bit obscure to the general public. Fortunately, I had just read More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity by Adam Becker, so I was au courant on the lingo. It made the movie doubly depressing because it was so accurate. That’s actually how these assholes think: they value the hypothetical lives of future trillions over the existence of peons here and now. It’s easier to digest the stupidity when it’s coming from fictional characters, rather than real people like Yudkowsky and MacAskill and Andreessen and Gates and Ray Kurzweil (unfortunately, Becker twice says that Kurzweil is neither stupid nor crazy — sorry, he’s one or both of those). Fiction might make the infamous go down a little more smoothly, but non-fiction makes it all jagged and sharp and horrible.

Tech is the new religion. Écrasez l’infâme.

Still carrying water for Musk

Here’s a nice Washington Post headline:

SpaceX has a partially successful test flight

The subhead tells the real story.

SpaceX successfully launched its Starship on May 27, but the rocket lost control mid-flight and eventually fell apart.

They failed to recover the reusable booster, which exploded, and the second stage was tumbling out of control, and exploded. SUCCESS!

This was the ninth Starship launch, and none of them have “succeeded” by any reasonable meaning of the word. Maybe someone needs to teach the editors at the WaPo the word “failed”? Somehow, I think they’re going to need to use that word a lot in the next few years, in lots of contexts.


Here’s a detailed breakdown of the flaws in Starship design, with Elon Musk at the top of the list of problems.

Musk isn’t an engineer and doesn’t understand iterative design, and now SpaceX and NASA are facing a sunk cost fallacy.

You never achieve iterative design with a full-scale prototype. It is incredibly wasteful and can lead you down several problematic and dead-end solutions. I used to engineer high-speed boats — another weight- and safety-sensitive engineering field. We would always conduct scale model tests of every aspect of design, iteratively changing it as we went so that when we did build the full-scale version, we were solving the problems of scale, not design and scale simultaneously.

SpaceX could have easily done this. They already proved they could land a 1st stage/Booster with the Falcon 9, and Falcon 9’s Booster could launch a 1/10 scale Starship into orbit. Tests of such a scaled-down model would help SpaceX determine the best compromise for using the bellyflop manoeuvre and retro rockets to land. It would help them iteratively improve the design around such a compromise, especially as they will be far cheaper and quicker to redesign and build than the full-scale versions. Not only that, but these tests would highlight any of the design’s shortcomings, such as the rocket engines not having enough thrust-to-weight ratio to enable a high enough payload. This allows engineers to do crucial, complete redesigns before the large-scale version is even built.

If you have even a passing knowledge of engineering, you know this is what iterative design looks like. So, why hasn’t Musk done this?

Well, developing a Starship like this would expose that making a fully reusable rocket with even a barely usable payload to space is impossible. Musk knows this: Falcon 9 was initially meant to be fully reusable until he discovered that the useful payload would be zero. That was his iterative design telling him Starship was impossible over a decade ago, as just making the rocket larger won’t solve this! But he went on ahead anyway. Why?

Well, through some transparent corruption and cronyism, he could secure multi-billion-dollar contracts from NASA to build this mythical rocket. But, by going for full-scale testing, he could not only hide the inherent flaws of Starship long enough for the cash to be handed over to him but also put NASA in a position of the sunk cost fallacy. NASA has given SpaceX so much money, and their plans rely so heavily on Starship that they can’t walk away; they might as well keep shoving money at the beast.

This is why Starship, in my opinion, is just one massive con.

That is the real reason why Starship was doomed to fail from the beginning. It’s not trying to revolutionise the space industry; if it were, its concept, design, and testing plan would be totally different. Instead, the entire project is optimised to fleece as much money from the US taxpayer as possible, and as such, that is all it will ever do.
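The payload argument in that excerpt can be sketched with the Tsiolkovsky rocket equation. This is a back-of-the-envelope toy model; every number below is an illustrative assumption, not an actual SpaceX figure:

```python
import math

def payload_mass(prop_t, dry_t, dv_ms, isp_s, g0=9.81):
    """Payload (tonnes) a single stage can deliver to a target delta-v.

    From the Tsiolkovsky rocket equation,
        dv = isp * g0 * ln((dry + prop + payload) / (dry + payload)),
    solved for payload.
    """
    ratio = math.exp(dv_ms / (isp_s * g0))  # required initial/final mass ratio
    return prop_t / (ratio - 1) - dry_t

# Illustrative upper-stage numbers (assumptions, not real Starship data):
# 1000 t propellant, 350 s vacuum Isp, 7000 m/s delta-v budget.
expendable = payload_mass(prop_t=1000, dry_t=80, dv_ms=7000, isp_s=350)
# Assume recovery hardware (heat shield, flaps, landing propellant
# reserve) adds 40 t of effective dry mass:
reusable = payload_mass(prop_t=1000, dry_t=120, dv_ms=7000, isp_s=350)

print(f"expendable payload: {expendable:.1f} t")  # ~70 t
print(f"reusable payload:   {reusable:.1f} t")    # ~30 t
```

In this toy model, hanging 40 extra tonnes of recovery hardware on the stage cuts the payload by more than half, because every tonne of dry mass comes straight out of the payload. Whatever the true numbers are, that is the shape of the trade-off the quote describes.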

Are you telling me that Billy Bob Thornton is lying to me?

I started watching this new cable series, Landman, mainly because it has Billy Bob Thornton in it. I think he’s a good actor, even if he has fallen into the rut of playing bad, cynical characters…which is what he does in this show. It’s about rough, tough oilmen doing the difficult, dangerous, and lucrative job of drilling for oil in Texas, and it really plays up the idea that manly men are all obnoxious and arrogant because they need to be in order to keep the oil flowing.

I was not particularly enjoying it. It’s a kind of self-serving genre, the whole assholes being assholes because it makes them great at getting shit done thing. I kept at it just because Billy Bob is so entertaining at doing that thing. But then I got to the third episode, where Billy Bob is entertainingly raging at a liberal lawyer woman (of course — no man in this show would be so wimpy) about the futility of wind turbines.

Do you have any idea how much diesel they had to burn to mix that much concrete? Or make that steel and haul this shit out here and put it together with a 450-foot crane? You want to guess how much oil it takes to lubricate that fucking thing? Or winterize it? In its 20-year lifespan, it won’t offset the carbon footprint of making it. And don’t get me started on solar panels and the lithium in your Tesla battery. And never mind the fact that, if the whole world decided to go electric tomorrow, we don’t have the transmission lines to get the electricity to the cities. It’d take 30 years if we started tomorrow. And, unfortunately, for your grandkids, we have a 120-year, petroleum-based infrastructure. Our whole lives depend on it. And, hell, it’s in everything. That road we came in on. The wheels on every car ever made, including yours. It’s in tennis rackets and lipstick and refrigerators and antihistamines. Pretty much anything plastic. Your cell phone case, artificial heart valves. Any kind of clothing that’s not made with animal or plant fibers. Soap, fucking hand lotion, garbage bags, fishing boats. You name it. Every fucking thing. And you know what the kicker is? We’re gonna run out of it before we find its replacement. It’s the thing that’s gonna kill us all… as a species. No, the thing that’s gonna kill us all is running out before we find an alternative. And believe me, if Exxon thought them fucking things right there were the future, they’d be putting them all over the goddamn place.

Wait a minute…I’m at a green university that has been putting up turbines. We’ve got a pair of them pumping out 10 million kWh of electricity. We’ve got photovoltaic panels all over campus. We’ve got a biomass gasification facility. We’re officially carbon neutral right now — how could that be, if the installation of these features was so expensive that we’d never be able to offset their carbon footprint?

That stopped me cold. If Billy Bob delivered that rant to my face, I wouldn’t be able to answer it. I don’t have the details to counter any of his points, because I don’t have the background. I have been told that each of our wind turbines is an expensive capital investment, but that they pay for themselves in about a year of operation, which kind of undercuts Billy Bob’s claim. I also live in a region where people are putting them fucking things all over the goddamn place. Who am I going to believe, the scientists and engineers who are providing the energy to run my workplace, or a fictional character in a fictional television show that valorizes the oil industry?

So I stopped watching and went looking for verifiable information, because, you know, university administrators and bureaucrats do have a history of lying to us. Maybe Billy Bob is right. He sure does have a lot of passion on this point, and we all know that angry ranting is correlated with truth. Then I found this video.

It includes references! It turns out that data defeats ranting, no matter how well acted.

(Sorry, I just copy-pasted from the video description, and YouTube butchers URLs.)

[1] Life cycle analysis of the embodied carbon emissions from 14 wind turbines with rated powers between 50Kw and 3.4Mw (2016)
https://pure.sruc.ac.uk/ws/portalfile…

[2] Life cycle energy and carbon footprint of offshore wind energy. Comparison with onshore counterpart (2019)
https://www.sciencedirect.com/science…

[3] Life-cycle green-house gas emissions of onshore and offshore wind turbines (2019)
https://www.sciencedirect.com/science…

[4] Life Cycle Analysis of Wind Turbine (2012) https://www.researchgate.net/profile/…

[5] Orders of Magnitude – Energy
https://en.wikipedia.org/wiki/Orders_…

[6] The Keystone XL Pipeline and America’s History of Indigenous Suppression – UAB Institute for Human Rights Blog

[7] ExxonMobil lobbyists filmed saying oil giant’s support for carbon tax a PR ploy
https://www.theguardian.com/us-news/2…

[8] Bonou, A., Laurent, A., & Olsen, S. I. (2016). Life cycle assessment of onshore and offshore wind energy-from theory to application. Applied Energy, 180, 327–337.
https://sci-hub.se/10.1016/j.apenergy…

[9] Weinzettel, J., Reenaas, M., Solli, C., & Hertwich, E. G. (2009). Life cycle assessment of a floating offshore wind turbine. Renewable Energy, 34(3), 742–747.
https://sci-hub.se/10.1016/j.renene.2…

[10] GCC – Potential Climate Change report
https://s3.documentcloud.org/document…

[11] Smoke, Mirrors, and Hot Air – How ExxonMobil Uses Big Tobacco’s Tactics to Manufacture Uncertainty on Climate Science
https://www.ucsusa.org/sites/default/…

Basically, scientists have done life-cycle analyses of wind turbines: they measured all the energy consumed and CO2 emitted in building one of those damned things, weighed that against the total energy the turbine produces over its lifetime, and compared its CO2 against the CO2 emitted in generating an equivalent amount of energy by burning coal and oil. Guess what? Billy Bob lied to us all. Wind turbines pay for themselves in less than a year, and oil-burning plants produce 60 times more CO2 than an equivalent bank of wind turbines.
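The payback arithmetic is simple enough to check yourself. Here is a sketch with illustrative numbers roughly in the range those life-cycle papers report; all inputs are my assumptions, not figures from any specific study or from our campus turbines:

```python
# Energy payback and CO2 comparison for a wind turbine, toy numbers.
# All inputs are illustrative assumptions in the ballpark of the
# life-cycle-analysis literature, not data from a specific turbine.

RATED_MW = 2.0          # nameplate capacity
CAPACITY_FACTOR = 0.35  # fraction of nameplate actually delivered (assumed)
EMBODIED_MWH = 4000     # energy to build, transport, and install (assumed)
LIFETIME_YEARS = 20

annual_output_mwh = RATED_MW * 8760 * CAPACITY_FACTOR  # 8760 hours/year
payback_years = EMBODIED_MWH / annual_output_mwh
lifetime_return = LIFETIME_YEARS / payback_years  # energy out per energy in

# Lifecycle CO2 intensities in g CO2 per kWh (assumed round figures)
WIND_G_PER_KWH = 12
COAL_G_PER_KWH = 820

print(f"annual output:  {annual_output_mwh:.0f} MWh")
print(f"energy payback: {payback_years:.2f} years")
print(f"lifetime EROI:  {lifetime_return:.0f}x")
print(f"coal/wind CO2:  {COAL_G_PER_KWH / WIND_G_PER_KWH:.0f}x")
```

With those inputs the turbine repays its embodied energy in about eight months and returns roughly 30 times its construction energy over a 20-year life, and coal comes out dozens of times dirtier per kWh. You can swap in the numbers from any of the cited papers; the conclusion doesn’t budge enough to rescue Billy Bob.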

I’m not resuming the series. I was already put off by the gross sexism of the show, but learning that it’s propaganda for Big Oil killed it for me.

I also learned that this show is made by the same people who made another popular series called Yellowstone, which I’ve never watched and never will. From what I’ve seen, it’s about heroic ranchers in the mountain West, and I’ve known ranchers — they tend to be horrible, ignorant people with an extreme sense of entitlement — and I know that regions full of ranchers tend to be regressive, bigoted, unpleasant strongholds of far-right political movements. Think Idaho. So I’ll pass on that one, too.

I bet it’s easy to get funding for those kinds of shows, though.

AI anatomy is weird

Ars Technica has a list of the worst features of the internet. It’s depressing how much of the stuff mentioned is just growing and taking over everything. Sadly, Google gets mentioned three times, for their voice assistant, search, and the incorporation of AI.

I encountered a terrible example of AI assistance. Here’s some AI advice on hygiene.

Does the AI not understand the words “front” and “back”, or is it very confused about the location of the urethra and anus?


Or try this one.

The US is going to ban TikTok?

I’m sorry, I’m just now learning that Congress has passed legislation to force the sale of TikTok. This is just weird…our uber-capitalist nation is trying to control an independent Chinese corporation?

Wait, not sorry. I don’t use TikTok, so in a personal sense, I don’t care. I’ve glanced at it, and it’s the worst social media app out there — it’s nothing but blipverts for idiots and posers. I never saw the appeal, although it does seem to be extremely popular.

Oh wait, sorry again, Facebook is definitely the most atrocious, evil, terrible social media app. I abandoned that so long ago that I’d forgotten how awful it was. Instagram is bad, too, but I do have an account there, for the same reason I clung to Facebook as long as I did — I’ve got family who use it, so it’s nice to keep up with them. Although that useful function is being diluted by the fact that I’m seeing family photos interspersed with ads for mobile games I don’t want to play and vapid photos of young women just standing there, smiling at me. I’m a crotchety old man; I just want to yell at them: do something, say something, tell a joke. Do you think being pretty is sufficient reason to interfere with my interactions with people I care about? It’s not good.

The solution to my grumpiness is simple, though: don’t subscribe to TikTok, unsubscribe from services that don’t appeal, let people who do like them use them. That feels almost…libertarian to me, but OK, not meddling seems like a good approach.

But then I learn that Trump has an alternative solution. He wants Elon Musk to buy TikTok.

Can you imagine, after seeing the hash he’s made of Twitter, how much worse Musk would make TikTok?

These guys all make the worst possible decisions.