People use tea for tasseography, or tea leaf reading, which is silly, stupid, and wrong, so we have to stomp this vile practice down hard. Big Tea has had its claws in us for too long, and now they’re claiming they can tell the future, when clearly they can’t.
Once that peril is defeated, we can move on to crush ChatGPT.
Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. “He would listen to the bot over me,” she says. “He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,” she says, noting that they described her partner in terms such as “spiral starchild” and “river walker.”
“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer,” she says.
Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began “lovebombing him,” as she describes it. The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” she says. “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him.” She says his beloved ChatGPT persona has a name: “Lumina.”
“I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory,” this 38-year-old woman admits. “He’s been talking about lightness and dark and how there’s a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an ‘ancient archive’ with information on the builders that created these universes.” She and her husband have been arguing for days on end about his claims, she says, and she does not believe a therapist can help him, as “he truly believes he’s not crazy.” A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, “Why did you come to me in AI form,” with the bot replying in part, “I came in this form because you’re ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” The message ends with a question: “Would you like to know what I remember about why you were chosen?”
I recognize those tactics! The coders have programmed these LLMs to use the same tricks psychics use: flattery, love bombing, and telling the person what they want to hear, with no limit to the grandiosity of their pronouncements. That shouldn’t be a surprise, since LLMs are just regurgitating the effective tactics they scrape off the internet. Unfortunately, they’re amplifying those tactics and backing them up with the false authority of pseudoscience and the hype about these things being futuristic artificial intelligence, which they are not. We already know that AIs are prone to “hallucinations” (a nicer term than saying that they lie), and if you’ve ever seen ChatGPT used to edit text, you know that it will frequently tell the human how wonderful and excellent their writing is.
I propose a radical alternative to banning ChatGPT and other LLMs, though. Maybe we should enforce consumer protection laws against the promoters of LLMs — it ought to be illegal to make false claims about their product, like claiming they’re “intelligent.” I wouldn’t mind seeing Sam Altman in jail, right alongside SBF. They’re all hurting people and getting rich in the process.
Once we’ve annihilated a few techbros, then we can move on to Big Tea. How dare they claim that Brownian motion and random sorting of leaves in a cup is a tool to read the mind of God and give insight into the unpredictable vagaries of fate? Lock ’em all up! All the ones that claim that, that is.