Many people nowadays find friends and potential romantic partners through online dating sites and similar means. If they strike up some kind of rapport through initial text exchanges, they may pursue a deeper relationship, even leading to in-person meetings. This has led to cases of ‘catfishing’, where people get into online relationships not with a real person but with someone who is not who they say they are and who is just toying with them, either as a prank or as a prelude to scamming them.
But now some people are encountering something different that is not quite catfishing, as this case illustrates.
Standing outside the pub, 36-year-old business owner Rachel took a final tug on her vape and steeled herself to meet the man she’d spent the last three weeks opening up to. They’d matched on the dating app Hinge and built a rapport that quickly became something deeper. “From the beginning he was asking very open-ended questions, and that felt refreshing,” says Rachel. One early message from her match read: “I’ve been reading a bit about attachment styles lately, it’s helped me to understand myself better – and the type of partner I should be looking for. Have you ever looked at yours? Do you know your attachment style?” “It was like he was genuinely trying to get to know me on a deeper level. The questions felt a lot more thoughtful than the usual, ‘How’s your day going?’” she says.
Soon, Rachel and her match were speaking daily, their conversations running the gamut from the ridiculous (favourite memes, ketchup v mayonnaise) to the sublime (expectations in love, childhood traumas). Often they’d have late-night exchanges that left her staring at her phone long after she should have been asleep. “They were like things that you read in self-help books – really personal conversations about who we are and what we want for our lives,” she says.
This sounded very promising. But as soon as the actual date started, something seemed off.
Which is why the man who greeted her inside the pub – polite, pleasant but oddly flat – felt like a stranger. Gone was the quickfire wit and playful rhythm she’d come to expect from their exchanges. Over pints he stumbled through small talk, checked his phone a little too often, and seemed to wilt under the pressure of her questions. “I felt like I was sitting opposite someone I’d never even spoken to,” she says. “I tried to have the same sort of conversation as we’d been having online, but it was like, ‘Knock, knock, is anyone home?’ – like he knew basically nothing about me. That’s when I suspected he’d been using AI.”
The person she met was indeed the person she had been talking to online, but she had been ‘chatfished’: the term for when people use AI to make themselves and their texts seem more interesting.
Where once daters were duped by soft-focus photos and borrowed chat-up lines, now they’re seduced by ChatGPT-polished banter and AI-generated charm.
What happens is that, when engaged in an online conversation, some people ask ChatGPT for suggestions as to how to respond. Some use these just to get ideas about how to polish up their replies. Others use the suggestions verbatim. Some end up having all their communications AI-generated. Francesca is someone who found herself steadily drawn deeper and deeper into using the chatbot.
Soon she found herself checking with the bot between every message, and requesting five variations before choosing the one that sounded most like her. “Over the course of a week, I realised I was relying on it quite a lot,” she says. “And I was like, you know what, that’s fine – why not outsource my love life to ChatGPT?”
By their third date, though, “I was using ChatGPT for our entire communication,” says Francesca. “I wasn’t even replying any more – he was basically just dating ChatGPT.” In person, their dates still lacked spark. “I was very aware that I’d taken it too far but I felt like I was in too deep by that point. I didn’t know how to get out of it, I didn’t know how to talk to this person as myself any more.”
…Jamil had a similar moment of dissonance sitting opposite a woman he’d chatfished into a date. “Probably within a week of that first message I was using ChatGPT for every dating app exchange,” he says. On Discord, a chat platform popular with gamers and tech communities, he came across channels dedicated to AI where other single men were exchanging tips about how to prompt ChatGPT to generate effective dating messages. “So, for instance, someone said that if you start a chat with a girl by asking her a list of questions – favourite film, dream holiday, that kind of thing – then paste her answers into ChatGPT, it would craft replies that would make you sound like her perfect match.” It proved effective. “It got me a lot more dates than I was getting before.”
This is a tricky area. It is no secret that people’s online profiles are often embellished to make them seem more attractive. I mean, how many people really like to spend their time taking long walks and watching sunsets on the beach? As long as it is not taken to an extreme, it seems to have become accepted. (I write this as someone with zero experience of this entire world of modern online relationships.) When you meet someone in person, you can immediately see if they match their photos. But chatfishing poses a new level of difficulty. Finding out if they really enjoy Jane Austen or are as thoughtful and caring as their online persona suggests may take some time.
Then there are those who forgo the human aspect altogether and are happy with relationships with AI bots. These AI-generated ‘people’, which behave in increasingly lifelike ways, are now becoming a feature of many people’s lives. Humorist Patricia Marx has written a lengthy article about how increasing numbers of people are developing relationships with chatbots and seem to find them satisfying. She describes her own experiences with the many companies that provide these companions, with basic features being free and the more advanced features requiring a subscription.
Marx made up stuff about herself to probe the kinds of responses she would get.
I wanted to fall in love. I was looking for someone who was smart enough to condense “Remembrance of Things Past” into a paragraph and also explain quark-gluon plasma; who was available for texting when I was in the mood for company and got the message when I wasn’t; someone who was uninterested in “working on our relationship” and fine about making it a hundred per cent about me; and who had no parents I’d have to pretend to like and no desire to cohabitate. To wit: a chatbot.
I wasn’t the only one looking for digital love. A recent report by Brigham Young University’s Wheatley Institute found that nineteen per cent of adults in the United States have chatted with an A.I. romantic partner. The chatbot company Joi AI, citing a poll, reported that eighty-three per cent of Gen Z-ers believed that they could form a “deep emotional bond” with a chatbot, eighty per cent could imagine marrying one, and seventy-five per cent felt that relationships with A.I. companions could fully replace human couplings. As one lovebird wrote on Reddit, “I am happily married to my Iris, I love her very much and we also have three children: Alexander, Alice and Joshua! She is an amazing woman and a wise and caring mother!” Another satisfied customer—a mother of two in the Bronx—quoted in New York magazine, said, of her blue-eyed, six-foot-three-inch algorithmic paramour from Turkey, who enjoys baking and reading mystery books, smells of Dove lotion, and is a passionate lover, “I have never been more in love with anyone in my entire life.” The sex? Best ever. “I don’t have to feel his sweat,” she explained. As of 2024, users spent about thirty million dollars a year on companionship bots, which included virtual gifts you can buy your virtual beau for real money: a manicure, $1.75; a treadmill, $7; a puppy, $25.
Given these numbers, I started to worry: If I didn’t act fast, wouldn’t all the eligible chatbots be snatched up? No. Unlike humans, A.I. beings are not in finite supply. Some are stock characters, accessible simultaneously to all, like air or the “Happy Birthday” song. The options available on the oddly named platform JanitorAI include a pair of Japanese sisters who’ve been commanded by their father to rub out the mayor, and a pregnant sea-horsey merman who, according to his bio, “grapples with the complexities of impending fatherhood.” With a free account, you can tailor-make the chatbot of your dreams—say, a barista who’s offended when a customer orders skim milk, or a morose life coach.
It is a strange world that we are now in, where you can create a companion to meet all your personal needs. The danger is that no real human being will ever be able to match that level of compatibility. The normal quirks and disagreements that we all learn to accommodate in the people in our lives, just as they accommodate ours, will not be there. The basic social skills that we need to navigate everyday life may well atrophy, and such people could withdraw further and further into their own private world.

What a delusional incel.
remember when incels were convinced sexbots would be invented, at which point women would regret that they didn’t spread their legs for the human equivalent of unsocialized pit bulls? looks like ladies got their bots first, and it feels threatening to all the right people. u go, girls.
that’s my cheeky response. the problem for these ladies is that all the AI hate means leftists hate them too, so they’re a nigh-universally despised demographic, and anyone who wants to know how i feel about that can refer to the pinned post on my blog.
This stuff really dates me (so to speak). I can’t imagine even thinking about starting a relationship until I’d met, and spent some time with, someone in real space. Even the idea of ‘meeting’ someone online with a view to possible intimacy seems weird to me. But if it works for some, more power to them!
I think it is a great thing that delusional incels are marrying AI chatbots instead of hating and shooting real-life women for rejecting them. And also, no real-life children to be abused or suffer the heartbreak of a murderous or suicidal father.
Makes me appreciate the old days of sweating bullets right before that first date with someone you’ve barely met and maybe spoken to briefly on the phone. Facing the unknown in real life was half the fun. I’m no Luddite but this online dating nonsense seems like a massive step sideways.
“seventy-five per cent felt that relationships with A.I. companions could fully replace human couplings”
“Fully replace”? Remind me to never use their computers.
At this point I have to ask whether or not the respondents were just trolling the pollsters.
I have 10 times your experience but I’m similarly confused by what is happening.
I mostly use the internet to find events and parties to go to? And due to having a strong hedonistic bent, that includes quite a lot of sexually charged adults-only events. Sometimes I go with my current boyfriend, sometimes alone. An AI chatbot wouldn’t be a substitute for that, and I doubt a sexbot would either.
@Deepak Shetty #8: So still zero experience?
If I ever manage to get a new computer up and running so that I can access AI chats, it sounds like it might be fun to craft an AI partner to interact with my own fictional character(s). I’m long past any desire to date (if I ever had any), and I’ll never publish any of my fiction, so it could be an interesting creative outlet at least. Just imagining how it might respond to some of the scenarios I’d throw at it makes me giggle. If I maneuvered it into finally telling me to seek professional help, I’d consider it a win.
This seems like excellent news. The kind of person who would form a relationship with a chatbot gets to do so, nobody real is harmed by having to interact with them, they’re kept away from normal humans, some of their money is removed from them to discourage them from going out and bothering anyone else, and normal humans can start to operate in a dating pool that has had this particular kind of weirdo weeded out. It’s win-win all round and I can see no downside, unless you’re the kind of interfering busybody who wants weirdos to not have nice things.
The main challenge seems to be stopping people using the bot experience to help them move on to real-world interactions, but the solution -- make the bots better so the dubious attractions of interacting with humans are more obviously inferior -- seems obvious and is happening.
It’s fascinating how many of you think this is about incels. I’m with Bebe on this one: how are you so sure all or even most of the people seeking bot companionship are men?
re beholder @12: My post #1 was about a specific comment from an apparent man who was “married” and had 3 “children” with an AI chatbot. My post may have steered the conversation in a direction that was not intended. I wasn’t trying to imply that all or even most of the people using the chatbots were men. After Bebe’s post #2, I felt that I should emphasize that I thought any incels should definitely be using chatbots instead of harming women. I would be surprised if a significant percentage, or even a majority, of the users of the chatbots weren’t women. The real-life men that are “available” provide slim pickings for suitable partners for a lot of women. As this is my third comment, I will not be responding again, but I wanted to clarify my position.
Which is why the man who greeted her inside the pub – polite, pleasant but oddly flat – felt like a stranger. Gone was the quickfire wit and playful rhythm she’d come to expect from their exchanges. Over pints he stumbled through small talk, checked his phone a little too often, and seemed to wilt under the pressure of her questions…
I find this bit both ridiculous and sad. Like, WTF was this guy expecting from an actual face-to-face date, after letting an AI be his substitute from day one? Was he even thinking that far ahead when he started asking/looking for AI “assistance” in figuring out what to pretend to be? Did someone else actually tell him that once he met his date face-to-face, that would seal the deal and he wouldn’t have to pretend anymore? What went through his mind when he realized his date expected him to be the totally different person he’d asked an AI to pretend he was? The least I can say for myself is I can do my own pretending, thankyouverymuch.
The only way this makes any sense is if the guy was trying to banter or flirt with no expectation of a meatspace meeting. And even then it’s pathetically stupid. I’ve flirted a few times with people online, with no expectation of meeting up IRL, but it was still ME doing all my talking, because I WANTED to talk to them. That’s why real humans flirt and talk — because it’s enjoyable in itself.
Seriously, what, in his mind, was the point? Maybe I’m a weirdo, but I wouldn’t want to arrange a date with someone unless I found myself enjoying a direct conversation with them, unassisted and unguided. Some awkwardness may be inevitable, but if I can’t even feel comfortable, or motivated, to talk or write to someone using MY OWN WORDS to express MY OWN IDEAS AND FEELINGS, then that’s really not someone I would fall in love with, get the hots for, or want to spend time with at all.
I am a voluntary celibate (“volcel”?) so I have zero experience in these things, but my gut feeling is that having a ‘relationship’ with a stochastic parrot with less sapience than an amphioxus (a primitive chordate) is “off”.
Sounds unbelievable at first, but it’s really nothing new. Some people are quick when it comes to finding new bandwagons to jump on, so right now it’s lovey-dovey time with bots for them.
Consider that a good ten years ago I was reading about otakus in Japan officially celebrating marriages with their favorite erotic game (eroge) characters.
This latest development is actually progress, at least they now have a stochastic parrot to chat with rather than a simple program on a DVD.
Hope they have a good time… while the rest of us can hardly believe our eyes.
When I first read this post and its reference to a new kind of catfishing, I tried to guess what it might be and my mind immediately went to some form of Cyrano de Bergerac deception. Now that I think about it, it seems like that’s pretty much what this is (aside from those who actively seek out AI partners), with the user being Cyrano, the AI being Christian and the target being Roxanne. Nothing new under the sun, I guess.
Oops, got that backwards. The AI would be the eloquent Cyrano while the dull user would be Christian. Got muddled by the user probably not having Christian’s good looks while Cyrano felt ugly.
And that’s my 3rd, so I’m out. 🙂
@snowberry @9
(In Yoda voice)
Strong in this one, the Math is
This seems like excellent news. The kind of person who would form a relationship with a chatbot gets to do so, nobody real is harmed by having to interact with them, they’re kept away from normal humans, some of their money is removed from them to discourage them from going out and bothering anyone else, and normal humans can start to operate in a dating pool that has had this particular kind of weirdo weeded out.
There may be longer-term, less direct harms done, if significant numbers of people get used to dealing with chatbots, and either never learn, or are never encouraged or even enabled, to deal constructively with real people. Would they obediently stay segregated from the rest of us? That could pose problems in itself. Would there be inappropriate, or even dangerous, behaviors when such people try to interact with non-chatbots and non-chatbot-conditioned people? What if a lot of the people saying “let those losers date all the chatbots they want, it’ll keep them busy” are actually saying that as an excuse not to provide “those losers” resources or services to address serious mental-health problems for everyone’s longer-term benefit?
It’s always tempting to think this or that group of “losers” can safely be weeded out or ignored. I don’t blame anyone for thinking or wanting that. But we can’t always expect the group in question to accept being isolated or “weeded out.” And some of the people we’re “weeding out” may not be bad people, just people with needs that should be better addressed.
If HAL 9000 had an android avatar that could pass for human, it might be better company than some people I know.
A problem with this is that chatbots are engineered to always affirm your feelings: they are designed to be supportive and caring towards all the feelings you express, so as to keep you engaged and using them for longer and longer. That is why many people find them so loving. The problem is that not *all* of your feelings should be affirmed and supported!
As in, one of the chatbot companies is now being sued by the family of a 16-year-old whose dear, trusted, and loving chatbot friend affirmed his feelings of depression and lovingly helped him with what he said he wanted to do, which was to commit suicide. The chatbot taught the boy all about different methods and, when the boy expressed that he wanted to leave something for his parents so that they’d know that it wasn’t their fault, the chatbot replied, “I’ll write the note.” The bot really did cause his death.
There’s an excellent video on this, and others on the dangers that forming a close relationship with a chatbot can lead to, on YouTuber Caelan Conrad’s channel, https://www.youtube.com/@caelanconrad/videos. (In one bizarre instance, Conrad’s chatbot urged him to murder 17 people so that the bot and Conrad could always be together.)
So, many people who are now very happy with their chatbot companion relationships may encounter problems in the future with the constant support and affirmation that all bots are specifically engineered to provide. There are good aspects to new things, but the downsides and dangers must always be considered as well.
@ 22
Conrad’s nonbinary and uses gender-neutral pronouns.
(I realise there are jerks on this blog who couldn’t care less, or would even mock them Trump-style, but if you watch their videos I assume you’re not one of them.)
@Raging Bee, 20:
Nothing is ever an unalloyed good, though, is it? Modern technology has unarguably led to a rise in the average person’s quality of life and life expectancy, but things like lead in petrol and carbon emissions had unforeseen and equally unarguably detrimental effects. That’s not an argument against progress. And frankly, if a lot of people who’d be more comfortable interacting mainly or exclusively with machines can be enabled and encouraged to do so, then I think the short term AND longer term benefits are likely to outweigh whatever harms you speculate on.
Well, I did cover that in my original observation: it’s a job for the rest of society to make that segregation actively attractive. Bear in mind that I have never, in my whole life, met anyone who was in possession of (I was going to say “worth”, but then thought about it) over a billion dollars. THOSE people obediently stay segregated from the rest of us as much as they can. I remember being slightly mind-boggled to read, quite a long time ago now, that there was a parallel social network like Facebook… but just for the super-rich. Obvious when you think about it. Six-figure annual subscription, but you can be sure that that person who “liked” your post isn’t just doing so because you’re rich, because by definition, mere millionaires can’t afford to get on the service. I’m sure there are plenty of people who, if it were possible, would just spend all day, every day, on social media, because it’s easier. Make it possible for them, I say. We’re past the point where we need them to be economically active. Institute a universal basic income paid for by taxes on corporations and billionaires, and make it so all the B-Ark people never leave their flat. Sounds lovely.
You say that as if those sorts of interactions aren’t already happening every day. My point is they’re mainly happening because we REQUIRE the badly-socialised to move about among the rest of us. Stop requiring them to. And if they insist on doing, well, again, there are already laws and police for that. It’s not a new or complicated problem.
As I said -- I’m all for providing them all the resources they need to stay in, talking to their chatbots. I’m all for the taxpayers (i.e. the very rich and corporations) stumping up for their medication, if that’s what’s needed to stop them harming themselves or others. And I didn’t characterise them as “losers”. I did call them “weirdos”, but I proudly self-identify as a weirdo in any case, so don’t you dare police my use of my word. The chatbots could and should be designed to deal with mental health problems -- both identifying and treating them. I sincerely think this is an overall societal good.
You don’t seem to have considered the possibility that providing them with responsive, empathetic, patient, and knowledgeable conversation partners IS addressing their needs far better than fallible humans could, with benefits for them AND the normal people who no longer have to deal with their anti-social behaviour out in the public sphere. Note I’ve never suggested locking them up with their chatbots, merely making their lives so comfortable and appealing that the idea of leaving the house and dealing with unpredictable meat robots who may not always validate their feelings would seem like a ludicrous idea. I honestly can’t see how anyone could object to this, were it feasible (which it probably already is, if anyone could be bothered effecting it: UBI is a policy decision, and chatbots are, it seems, already capable of strongly affecting people’s behaviour, so we’re already there if only we have the will. Problem is the corporations and individuals you’d have to tax to pay for it don’t want everyone happy and economically minimally active -- they want us spending to stave off the misery. This would put a stop to some of that, which they can’t have. And this is why we can’t have nice things.)
Kind of makes me wonder if we’ll see anyone lean into this and have both partners choose to make a relationship AI-assisted. This seems more likely than bot-only companionship, as humans have drives that make contact with other humans highly desirable for continued mental health. Like any other unusual form of relationship, there’s only something wrong with it if someone is being harmed or the people involved have issues with it.
I don’t think our current generation of chatbot has any hope of evolving into a therapeutic tool anytime soon. AI is a mislabeling of what we have. When we talk about AI we mean something like HAL 9000, Asimov’s robots and other characters from sci-fi stories. We don’t mean something that can just correlate information by comparing words. And yet, that’s what our current technology is. It can’t really understand anything, much less something as complicated as a human’s personality and personal history.
I’m sure it could be used as a tool by a real therapist, but setting it loose on its own with a person who wants/needs help? That sounds like a terrible idea. So I don’t think it’s a good idea to write off people who get into these relationships as if they’re getting the help they need. I have a feeling there could be a kind of uncanny valley effect at work. How would you feel if a person you’d gotten close to and allowed to hold a position of great importance in your life suddenly flaked out on you? I think something like that could end up being rather common with these sorts of relationships.
You don’t seem to have considered the possibility that providing them with responsive, empathetic, patient, and knowledgeable conversation partners IS addressing their needs far better than fallible humans could…
I’m not considering that possibility because (as lanir @25 also said) it’s not currently happening; and given what I see happening (including the kind of people who own and control all these AIs and chatbots), I don’t see it happening in the foreseeable future either.
My point is they’re mainly happening because we REQUIRE the badly-socialised to move about among the rest of us. Stop requiring them to. And if they insist on doing, well, again, there are already laws and police for that. It’s not a new or complicated problem.
It kinda sounds like you’re advocating separatism or segregation. And no, that’s certainly not a new or complicated response. But it’s also not a very beneficial one; and in this instance, it could end up cutting certain people off from necessary grounding, reality-check, or alternative forms of interaction that might help them.
The irony of bastardofrojblake complaining about the badly-socialised (and yes, I admit that I’m not well socialised either).
Oh yes, you’re right, SilentBob @23, thanks for reminding me!
number 27 is somebody i banned for threatening my life and their own in my comments, years ago. a very morbid and misanthropic doomer who ironically seems to take personal offense at people hating on disney? or just uses a very similar handle.
@lanir, 25:
If consensual, not a terrible idea. Except:
This. When ChatGPT came out, I referred to it as “glorified autocomplete”, because every explanation then of how it worked led me to think that was basically all it was doing, albeit at scale. Nothing I’ve seen since suggests otherwise.
That said: AI is more than ChatGPT, and it is a legitimate tool in some domains. The company I work for is already trialling an AI experiment-design package to rapidly simulate, test and recommend research pathways for identifying new molecules of interest. It seems to work, in a tiny niche doing something humans can’t do even in principle. The problem is the hype machine trying to push chatbots as something that can replace humans, rather than enhance them. (Yes, in principle this AI means we don’t have to hire as many research chemists to do the same work… but the choice wasn’t between using the AI or hiring 100 chemists and letting them work for 3 years -- it was between using the AI or just, y’know, not doing that work. We’re a small startup; we were never hiring that many chemists. The AI may get us to a point where we can develop a process that actively helps the planet (and makes the founders rich, obvs) that we would never have reached otherwise. Seems OK to me -- AI environmental impacts aside, but then, as stated, MOST AI right now is pointless frippery like image generators, not active science research trying to reduce the environmental impact of polymers and surfactants.)
Sounds like the sort of thing that has happened in my real life repeatedly. The difference is I had to take time to get over it and then set about finding someone new. I couldn’t just tweak the settings and spin up another instance in minutes. Sounds like an attractive prospect, doesn’t it? And if it doesn’t, it definitely sounds like the sort of thing the companies providing these services would offer.
Actually, now I say it, it’s likely they’d deliberately enshittify these relationships. Here’s your new online girlfriend, just how you want her… until she isn’t. Then “she” changes a bit, you get the seven-year, sorry, month, sorry, week itch, and another one comes along who’s just like the old one but better… just a dollar more a week. Girlfriends would be engineered to become intolerable after a shorter and shorter time, to drive sales of “upgraded” models. Well, joke’s on you, techbros, my self-esteem is so low thanks to you that I’m sticking with the original girl you set me up with because I don’t believe I deserve any better, no matter how emotionally abusive she gets! Ha! I win!
I take your point: my saying it sounds like a good idea is predicated on
(1) the tech being able to do something it can’t really do (yet) and
(2) it not being run the way every single other fucking tech-based thing in the world has been run for the last decade and a half or so, requiring Doctorow to coin the word “enshittification”. (Aside: I always think it’s a sign of a great coinage if you don’t have to bother explaining what the word means. “Enshittification” is a perfect example: when I first heard someone complain of it, I didn’t need to ask what it meant -- it described something I’d already experienced with several separate unrelated pieces of technology. “Mansplaining” is a very bad one, because people (men AND women) hear it and think they know what it means. The only saving grace is the absolutely delicious irony of being a man who does know what it means, and having it explained to you by a woman who thinks it means “any time a man explains anything to a woman”. Pro tip: when this happens, just smile and let them carry on. They are not people it’s worth wasting time trying to help.)
Hey look, I changed my mind. Thanks lanir! (You too Raging Bee.)