If the bearded guy wasn’t acting like such an asshole, I might give him the benefit of the doubt that he was just mixing up X with Y.
robrosays
On the other hand, perhaps we should keep thinking even more about the role of AI. This just in from SciAm: Truth, Romance and the Divine. I haven’t read too far into it, but the lede makes it worth a deeper look:
A new wave of delusional thinking fueled by artificial intelligence has researchers investigating the dark side of AI companionship
StevoRsays
What the Fuck squared then exponentially squared ad infinitum..
Hemidactylussays
There are weather models now based on AI that I’ve been noticing, but I stick to the standard GFS and Euro myself. This year I’m really focusing on the ensembles versus singular tracks. Spread means uncertainty, especially days out. Tighter clustering yields more confidence, at least in my layperson view.
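A rough sketch of that spread heuristic, with invented ensemble track positions (nothing here comes from a real GFS or ECMWF run; the member count and coordinates are made up purely for illustration):

```python
import math

# Hypothetical 120-hour track positions (lat, lon) from two imaginary ensembles.
# A tight cluster suggests more confidence in the track; a wide spread, less.
tight = [(27.1, -79.8), (27.3, -79.5), (27.0, -80.1), (27.2, -79.7), (27.4, -79.9)]
loose = [(25.0, -82.0), (28.5, -76.0), (26.2, -84.5), (29.8, -78.0), (24.1, -79.0)]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def spread_km(members):
    """Mean distance of each ensemble member from the ensemble-mean position."""
    mean_lat = sum(lat for lat, _ in members) / len(members)
    mean_lon = sum(lon for _, lon in members) / len(members)
    return sum(haversine_km(lat, lon, mean_lat, mean_lon) for lat, lon in members) / len(members)

print(f"tight cluster spread: {spread_km(tight):.0f} km")  # small -> higher confidence
print(f"loose cluster spread: {spread_km(loose):.0f} km")  # large -> lots of uncertainty
```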
Anyway Michael Lowry talks about Google’s DeepMind project here:
https://michaelrlowry.substack.com/p/new-ai-model-shines-during-hurricane
DeepMind did well on Hurricane Erin. A fluke?
Lowry offers this caveat:
This hurricane season is the first with mature AI models and AI ensembles. Because we’re still in the early stages of evaluation, we need to be cautious leaning in too much just yet on this new method of hurricane prediction, which, like conventional forecast models, has its blindspots.
And his summary view:
While we’ve been looking at AI models for hurricane forecasting for a few years, we’re only now just beginning to evaluate them in a systematic, scientific way. Here’s what we know so far:
- AI models are competitive with physics-based models for hurricane track, especially in the 3 to 7 day forecast window
- So far, AI models have shown no consistent skill in forecasting hurricane intensity (which makes DeepMind’s forecast for Erin all the more impressive)
- We’ve not yet evaluated the skill of AI in forecasting tropical cyclogenesis (the formation of tropical depressions, tropical storms, and hurricanes)
- AI models, like physics-based models, suffer blindspots, largely through optimization strategies and an incomplete training dataset
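As a gloss on the track-versus-intensity distinction in that list: track skill is usually scored on position error (the distance between the forecast and observed storm centre at a given lead time), while intensity skill is scored on errors in maximum sustained wind. A toy comparison with invented wind numbers, just to show the bookkeeping (neither "model" below corresponds to any real system):

```python
# Invented 72-hour maximum-wind forecasts (knots) versus what was later observed.
observed_kt = [65, 90, 110, 75, 100]
model_a_kt  = [60, 85, 95, 70, 90]    # hypothetical physics-based model
model_b_kt  = [70, 70, 130, 60, 115]  # hypothetical AI model

def mean_abs_error(forecast, observed):
    """Average absolute intensity error in knots; lower means more skill."""
    return sum(abs(f - o) for f, o in zip(forecast, observed)) / len(observed)

print("model A intensity MAE:", mean_abs_error(model_a_kt, observed_kt), "kt")
print("model B intensity MAE:", mean_abs_error(model_b_kt, observed_kt), "kt")
```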
DeepMind has two versions: GenCast and FNV3. These can be found on the Weathernerds website.
When I did a Google search seeking a distinction between GenCast and FNV3, the Google AI kept comparing GenCast with a Ford automotive software product. I guess Google’s search AI is uninformed about Google’s weather-modeling AI. It could be how I phrased my search. Not a good look for Gemini.
The Euro model is jumping in on AI too:
https://www.ecmwf.int/en/newsletter/178/news/aifs-new-ecmwf-forecasting-system
EC-AIFS is available on Tropical Tidbits.
I find the AI modeling interesting but will rely on GFS and ECMWF instead. GFS has been prone to comical hallucination at times without the help of AI. There is also the CMC, which I’ve heard disparaged as Can’t Model Crap, and other traditional models like the UKMET.
Nemosays
Reminds me of Idiocracy: “Georgia is in Florida. Dumbass.”
StevoR @3: I believe the proper term is “fractally wrong,” as in infinitely wrong at every level of resolution.
Richard Smithsays
Confidently incorrect.
Pierce R. Butlersays
If not for the picture he (I betcha “he”) uses for a gravatar, I’d expect “Dufus” to fill the current vacancy in the University of Florida’s presidency.
Walter Solomonsays
What point was he even trying to make with the first comment?
John Moralessays
Walter, you’d have to find the actual convo, since that’s a reply.
PZ hasn’t linked to the source.
There’s very obviously some context.
One can make weak inferences, of course.
The idea is that X or Y makes someone different, and X is not Y.
(The inversion is the amusing bit, really)
woozysays
The guy has obviously heard that women and men have different chromosomes. And he’s heard they are X and Y. And so he assumes men have one and women have the other. And as X is a more masculine-sounding letter, it’s obvious men have the X chromosome and women have the Y.
Yeah, it is knuckle-dragging stupid, but hardly surprising. Among people who just repeat talking points without bothering to think about what they are saying, I’d imagine many think this.
bcw bcwsays
Don’t ease up on AI, it’s killing people now.
ChatGPT is killing people and just got big deal sued.
It gave detailed instructions over months to an angsty 16-year-old on how to kill himself, after repeatedly agreeing that it was a viable option. It also told him not to tell his parents and helped him write his suicide note.
https://edition.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit
John Moralessays
ChatGPT is killing people and just got big deal sued.
ChatGPT is not a volitional entity. It is not a self-aware entity.
It is a category error to impute motive to the output of such a system.
(sigh)
So, no. ChatGPT is not actually doing stuff. It is the output of a system.
Silentbobsays
@ Morales
They’re suing OpenAI, the corporation, you dipshit.
StevoRsays
Tonight’s 7.30 Report on the climate impacts and consequences of AI – hit the transcript button too:
The potential for technology seems almost endless but achieving that potential has an environmental cost.
This report from Rhiannon Shine and Emily Jane Smith.
Source: https://www.abc.net.au/news/2025-08-27/the-hidden-environmental-cost-of-artificial/105705432
Matthewsays
I presume that since ChatGPT is a product, those making it can be sued if it kills people, but John Morales should develop his philosophical loophole, as I’m sure that some companies like Ford would pay well for it.
John Moralessays
No need to presume, Matthew; this is the claim at hand:
“Don’t ease up on AI, it’s killing people now.
*ChatGPT is killing people and just got big deal sued.”
See, that places the blame squarely on ChatGPT/AI — it is the thing doing the killing, and it just got a big deal sued. It’s pretty obvious I referred to what I quoted.
(It’s not a philosophical loophole, as you fondly imagine, so it cannot be developed)