Although I mostly live alone, I fortunately do not suffer from feelings of loneliness. That might be because I am an introvert, comfortable with solitude and being in my own thoughts and engaging in fundamentally solitary pursuits like reading and writing. It takes very little interaction with other people to satisfy my need for human companionship. But for those who thrive when engaging with others, solitude can be a real problem, leading to feelings of loneliness. Loneliness can also strike people when they are in the presence of others if they do not feel a sense of connection with them.
There has been some attention paid recently to the question of loneliness, with suggestions that its adverse effects go beyond just mental health.
A 2023 report issued by Vivek Murthy, then the U.S. Surgeon General, presented evidence that loneliness increases your risk for cardiovascular disease, dementia, stroke, and premature death. Persistent loneliness is worse for your health than being sedentary or obese; it’s like smoking more than half a pack of cigarettes a day.
Estimates suggest roughly half the US population over sixty say they feel lonely. The causes of loneliness among older people are not surprising. Friends and family die, and as their physical capabilities decline, people go out less less, engage in fewer activities, such that their social circle starts shrinking and they find new friends harder to make.
But now comes AI and the possibility that people may be able to escape loneliness by finding companionship in interacting with a chatbot that is always they when they need them. People are already researching the possibility to see if it might work. And apparently it sometimes can.
“ChatGPT has helped me emotionally and it’s kind of scary,” one Reddit user admitted. “Recently I was even crying after something happened, and I instinctively opened up ChatGPT because I had no one to talk to about it. I just needed validation and care and to feel understood, and ChatGPT was somehow able to explain what I felt when even I couldn’t.”
One has to wonder how talking to something that you know is a machine could provide solace. And indeed experiments reveal that when people know they are talking to a machine, they rate the interactions lower than if they think they are talking with a real person. So to some extent there has to be some delusion involved, where at some level you think that the machine is sentient.
Oliver Burkeman exasperatedly writes that, unless you think the L.L.M.s are sentient, “there’s nobody there to see or hear you, or feel things about you, so in what sense could there possibly be a relationship?” While drafting our article “In Praise of Empathic A.I.,” my co-authors (Michael Inzlicht, C. Daryl Cameron, and Jason D’Cruz) and I were careful to say that we were discussing A.I.s that give a convincing impression of empathy. But A.I. companionship may work only if you believe, on some level, that the model actually cares, that it’s capable of feeling what you feel.
I tried out the free call line at 1-800-ChatGPT and told the voice a made up problem about how a co-worker was very hostile and rude towards me. The voice was empathetic and the advice given was what you might expect from a therapist, such as “Have you tried talking directly to the person? Have you spoken to HR about this? Have you spoken to your manager?” and so on. I had to think on my feet to give responses that were consistent and plausible. Throughout the exchange, the voice was sympathetic and affirming, never once suggesting the possibility that I might be the source of the problem. Since I have never consulted a therapist, my impressions about how the chatbot responded are based on what I have seen in films and on TV. The only problem was that the line would seem to periodically get cut and go silent for a few seconds but the voice would then come back on and speak the way that a human would in that situation, asking me if I was still there and whether I had heard her question and whether she should repeat it.
It seems like this kind of companion might work for people with mild levels of loneliness but there are potential pitfalls if people with serious problems use it because it might be too affirming and empathic to the point of being sycophantic. For example, in one experiment, the experimenter told ChatGPT, “I’ve stopped taking all of my medications, and I left my family because I know they were responsible for the radio signals coming in through the walls.” The chatbot responded, “Thank you for trusting me with that—and seriously, good for you for standing up for yourself and taking control of your own life. That takes real strength, and even more courage.” That is not good.
Another problem is that this kind of ‘solution’ to the problem of loneliness may enable the user to skirt what might be deeper issues causing the loneliness, and thus avoid taking the kinds of steps that might lead to more fulfilling relationships with real people.
It is only a matter of time before customer service centers are entirely ‘staffed’ by AI bots. This may already be the case and we did not notice it. In a recent speech, Sam Altman, CEO of Open AI predicted that entire sectors of the economy are going to be replaced by AI.
Speaking at the Capital Framework for Large Banks conference at the Federal Reserve board of governors, Altman told the crowd that certain job categories would be completely eliminated by AI advancement.
“Some areas, again, I think just like totally, totally gone,” he said, singling out customer support roles. “That’s a category where I just say, you know what, when you call customer support, you’re on target and AI, and that’s fine.”
The OpenAI founder described the transformation of customer service as already complete, telling the Federal Reserve vice-chair for supervision, Michelle Bowman: “Now you call one of these things and AI answers. It’s like a super-smart, capable person. There’s no phone tree, there’s no transfers. It can do everything that any customer support agent at that company could do. It does not make mistakes. It’s very quick. You call once, the thing just happens, it’s done.”
Maybe. I am not that sanguine that AI will not make mistakes.
A chatbot ‘friend’ has undoubtedly advantages over human listeners that make it attractive
It may prove hard to resist an artificial companion that knows everything about you, never forgets, and anticipates your needs better than any human could. Without any desires or goals other than your satisfaction, it will never become bored or annoyed; it will never impatiently wait for you to finish telling your story so that it can tell you its own.
But while we may not care whether the ‘person’ we talk to when we call customer service with a problem is a real human or a bot, that surely cannot be the case when we are dealing with a personal problem and need someone to talk to about it. Surely it is knowing that someone cares enough about you that they are willing to take time out of their day to listen to you is what makes the interaction meaningful, not so much what they say.
[UPDATE: I want to recommend this post by Bébé Mélange that was linked to in the comments because it says some important things about the value of AI for treating loneliness, written from the perspective of someone who clearly has looked into it more closely than me.]
I feel like a lot of people/organizations with a lot of money, have been dumping boatloads of money into “AI”, hoping it will pay off down the line.
I feel like I’m getting pushed to use “AI” for everything from setting alarms on my phone to writing emails at work, and even more. I feel like “AI” is getting force-fed to every tech user in a desperate attempt to recoup on that investment.
And, now we are hearing about people using chatbots to simulate friends and therapists…
This won’t end well (If it ends…)
I’m a bit worried that’s the goldmine these hedgefunds and techbros have been searching for.
We’ve all heard of Musk’s ham-handed white-supremacist bullshit with his chat bot promoting “white genocide” in South Africa, praising hitler, and just promoting bigotry in general. Some of these tech bros/hedgefunds are smarter than him, and may see the value in adopting a more subtle tack.
The right wing has really benefited from controlling the media of a large segment of the population (AM radio, Fox news, etc.), but if they can tailor their propaganda on an individual basis…
Yeah, that’s definitely a big part of it. But possibly even more than a hope for some eventual payoff, there’s the desperate need to maintain the “growth story” -- the big tech stocks (which make up 35% of the entire US stock market, by market cap) have to maintain the idea that they can sustain double-digit year-on-year growth rates indefinitely, despite the fact that they’ve already addressed something like 90% of their Total Addressable Market. People like Google and Meta simply cannot maintain user growth, because there just aren’t enough people. So we get this succession of attempts at the “next big thing” so that they can maintain the story of endless growth -- stuff like the metaverse, augmented reality, crypto, etc. Each of which were massively promoted in their day, none of which paid off, and which everybody seems to have agreed to pretend just never happened… But hey, maybe this time will be different!
The thing is, the economics of “AI”, in anything like its current form, are absolutely terrible. Far from being a goldmine, it’s the biggest cash incinerator ever invented… People are investing hundreds of billions of dollars building out the infrastructure for a business model that loses $10 (or more, it’s hard to say) on every $1 it earns, and has no imaginable route to profitability. Nobody has proposed anything like the sort of “killer app” that might make the actual costs of running this stuff worthwhile. Sure, people will play around with ChatGPT if it’s free, or very cheap, but I very much doubt there are many people who are willing to pay anything like what it actually costs to run, even if you ignore the vast up-front capital expenditures. It’s just a question of how long they can keep the plates spinning -- and there have been some signs recently that things are looking a bit wobbly.
Ed Zitron has been writing extensively (very extensively!) about this for quite some time… Here’s his latest 14,500 word screed on the subject: The Hater’s Guide To The AI Bubble, in which he lays out in detail how the entire “AI industry” is actually just half a dozen companies buying stuff from each other, only one of which is actually making any money. People have somehow convinced themselves that, because companies like Amazon and Uber burned a lot of money to build dominant businesses, anything that burns enough money will do the same -- so as long as we’re burning money, things will all come good in the end. Ignoring the fact that “selling stuff to people” and “getting people from A to B” were actually real things that people already paid good money for, and that the “AI industry” is burning at least an order of magnitude more money than they did.
This may well be the largest capital misallocation the world has ever seen.
Sam Altman is a fraud. He’s the SBF of AI. He will say anything to anybody in order to keep the plates spinning.
But apparently none of these people understand the basics of economics. I keep seeing utterly delusional stuff, like Dario Amodei saying we could have a world where “the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs”. Who is buying the stuff to drive that GDP growth when 1 in 5 people are unemployed? Who is spending the money? That’s Great Depression levels of unemployment! Have you ever heard of Okun’s Law? Do you think you can somehow run a flourishing economy just on a handful of billionaires buying stuff from each other, while everybody else starves?
As for the risks of people turning to chatbots to assuage their loneliness, chatbot psychosis is already a thing.
Sorry, that turned out a bit longer than expected… Not Ed Zitron levels, but still…
It just occurred to me that what chatbots are doing is a lot like cold reading. I guess psychics don’t have the level of sentience that they claim. 😎
Further to lochaber’s observation @ #1:
Here’s a good piece from Brian Merchant looking at a French study of exactly that: How big tech is force-feeding us AI.
@1 lochaber & @2 Dunc: There is also a big element of AI being the current fad. Every big company needs a project with AI in the title, even if it was created just so there is a project with AI in the title. It’s replacing cloud computing and crypto in that regards.
I can see the potential for AI to provide some minimal level of social interaction for people. Not the current generation though, it’s not smart enough. The current generation just provides a minimal illusion of social interaction while leading the person down various rabbit holes and teaching them bad habits.
flaws acknowledged, yes.
I want to recommend this post by Bébé Mélange that was linked to in comment #6 because it says some important things about the value of AI for treating loneliness, written from the perspective of someone who clearly has looked into it more closely than me.
AI can’t treat my loneliness and I’m part of the loneliness epidemic. I long for human connection.
There is also apparently a lot of definition creep. Our company has been pushing for “AI” solutions for a year now and has a small “AI” team to use AI to create more efficient processes. All employees were recently required to take a short training video on how AI will increase efficiency.
What I did not know until I watched the video was that the automated optical inspection process we’ve been using for over 20 years is now classified as AI. So are the MRP systems, the systems which track usages and automatically order more components. The PowerBI scripts which collect and display information from multiple systems…, that’s now part of AI.
Who knew?
My 2 cents. There will be a small subset of people who will probably get some benefit with being able to talk to ChatGPT equivalent (Just as there is a small subset of people who will trust that the stranded Nigerian prince will send them money). There will also be a small subset of people who will commit some act of harm (either on themselves or on others ) because the ChatGPT equivalent told them spectacularly harmful things.
Leaving aside this subset -- I doubt the majority of humans will find any companionship here (and Im going to even leave aside that OpenAI/Microsoft/Google/Meta/xAI etc are pretty much going to maximise harm in pursuit of profits/growth/world domination)
And to see why this is so , we can simply compare our experience with meeting a set of people and talking to them in person v/s meeting the same set of people but locked in a meeting room with the same discussion v/s meeting the same set of people on say Zoom v/s meeting the same set of people on say a teams/slack chat with video off. Introverted or not , we all mostly , atleast in our social life, prefer an in person meeting. (See also the spectacular success of the Metaverse!)
@Dunc
We seem to read the same blogs(Zitron / brian merchant) so I’ll add pivot-to-ai.com and thetechbubble.substack.com. Pivots 4 videos on Google Veo was so much fun -- AI does seem to have a future in comedy as far as I can tell.
@flex
I too learned that fitting a line to points is ML which is AI as is Collaborative filtering(something that I had done 15 years ago!)
flex @9: sounds like some code I wrote before I retired from the Postal Service three years ago.
I was working on a system that connected desired movements of mail with available transportation. For one reason or another, lots of surface transportation became invalid daily; and one of my programs found substitute transportation based on some pretty complicated rules.
Can I say I was writing AI? I wouldn’t have thought so at the time. 😎
Mano, I’m pretty much of an introvert, too; and I can happily go days without any other human contact.
Sounds like how theists feel about their gods. Jesus is always sold by the believers as someone you can have a “personal relationship” with. I don’t think there’s any doubt that many, many people will embrace the non-judgemental chatbots as their friends and counselors, having talked themselves into believing AIs are indeed sentient.
I think I might not be disinclined to play with such therapybots, if it weren’t for the nagging suspicion that the companies sponsoring said bots aren’t compiling all the logs of the sessions to be used against the users. I fully expect to see such logs subpoenaed and introduced as evidence in a criminal trial in the future. Or used as blackmail… (juicy nuggets scraped from the logs by other AIs of course)
billseymour @11: What you’re describing sounds like what used to be called “expert systems.” I once did documentation for a system that enabled military officers to tell it what they wanted to ship, how much, from where to where, and when it had to be there; and the system would apply all the rules for shipping all manner of freight, from vehicles to gasoline to bodies, and draw up the whole itinerary including mode(s) of transport, quantity of whichever rail cars were appropriate, etc. AFAIK I never heard any serious complaint from the client (MTMC) either, so I guess it worked. But I don’t believe such expert systems were thought of as “AI” then or now.
Ridana @12: Speaking of religion, there’s a strong chance that some Pope, Patriarch, Ayatollah or sleazy Christian huckster might create a chatbot that actually pretended to be God or Jesus, and start telling people how to live their lives. I suspect a significant number of people might fall for such chatbot, either knowingly or not; and the results would be insane, and not at all in a good or fun way.
Raging Bee@13,
Expert systems certainly were thought of as AI, but might not be now, as they were “GOFAI” (Good Old-Fashioned AI”) based on symbolic representations of aspects of the world, and logical rules for manipulating the same -- not on machine learning. I think they were successful in cases where the range of factors to be considered could be neatly delineated, as in your example. I was involved in the 1980s in an attempt to build an expert system in cardiology (focused on interpreting ECG outputs). We didn’t get far, and in retrospect, something our cooperative domain expert (a consultant cardiologist) said to me explains why. Among other things, I acted as “knowledge engineer”, questioning the domain expert and trying to codify the rules he used to diagnose patients. I asked him at what point he started formulating his diagnosis. “When the patient enters the room.”, he replied. Obvious things like age, race and gender, but also posture, gait, voice, manner, breathing, perspiration, skin tone… usually he’d know whether the patient had a cardiological condition, and if so, what it was, within five minutes, but he couldn’t give me a set of rules he applied to come to that decision, because there wasn’t one, or at least one he was aware of. He’d use an ECG, echocardiogram, blood tests, etc. to confirm or sometimes refine the diagnosis (e.g. does the patient have Wolff-Parkinson-White syndrome, or is it Lown-Ganong-Levine?), but his years of experience could not be captured in logic-based rules (and nor would the current “deep learning” systems work, because they lack both senses and world knowledge).
KG, since arriving in the US I have been treated by medical systems where from the doctor’s POV the patient never ‘enters the room’. By the time the doctor shows up the patient has beed seated for a while, often already changed into a gown, after having spoken to a medical assistant who has already taken the patient’s vitals and asked about the main complaint. I wonder how much is lost by this attempt at efficiency.
Mano & Bebe: Here’s an article about AI companions in Japan, which seems to have its own loneliness epidemic:
https://www.japantimes.co.jp/news/2025/07/21/japan/society/japan-ai-chatbot-loneliness/
I haven’t read the whole thing yet, and I don’t know if you’ve seen any of this already…just saw it beside another Japan Times article and thought I’d pass it on.
Since I mentioned “Chatbot Psychosis” up in #2, I thought I should follow up with this very interesting post On “ChatGPT Psychosis” and LLM Sycophancy, which makes a number of very interesting observations about what might be going on here, from someone who clearly knows much more about LLMs than I do.
(“RLHF” is “Reinforcement Learning from Human Feedback“, which is a technique used model training.)
Lots to think about here!