The most ghoulish use of AI so far


“A lot of people think I’m batshit crazy,” says Justin Harrison of Grieftech.

I don’t. I think he’s a delusional ghoul.

Harrison has cobbled together a chatbot that uses an imitation of his late mother’s voice and predictive text built from her online communications, and he thinks that it is a cure for grief, because it enables him to talk to his “mother”. It doesn’t. There is no person there. It’s a kind of selfish version of grief, where he can deny her death and pretend it’s OK because his superficial, fake emulation of his mother can pay attention to him. It’s gross and creepy.

In the last few years, I’ve lost my mother and a brother; in years gone by, I’ve lost my father and a sister. They’re dead. The grief comes from the loss of living, thinking, behaving human beings who can’t be resurrected by some fraud with a collection of words they may have uttered. But this shallow idiot thinks a chatbot is a substitute.

Harrison is being interviewed, and he thinks he’s being clever by feeding some publicly recorded videos of his interviewer into the chatbot’s database and then conversing with the computer. The interviewer is not impressed. So Harrison and another team member argue with him, insisting that the computer used a spot-on turn of phrase. I guess if all we are is a series of turns of phrase, then the simulacrum is perfect. Except we aren’t. There’s no person, no thinking mind, behind the chatbot.

Then the interviewer goes off to talk to a series of people: one who imagined seeing a dead person after taking drugs, another who dreamed that they were visited by the ghost of their father, a medium who claims, with many weird jerky expressions, that they can communicate psychically with a friend. They’re all the same thing: frauds, liars, or deluded people who have convinced themselves that their loved ones are nothing but superficial reflections of their own minds. Justin Harrison is just more of the same, a phony like all the other phonies who have leeched off other people’s honest grief for profit.

After I’m dead, at least I’m reassured that no ghoul is going to be tormenting me with banalities; I’ll be gone. Don’t be fooled that my chatbot copy’s banalities are coming from me, though.

Comments

  1. kome says

    Psychics and mediums have been pulling this grift for centuries, and now techbros have found a way to automate it. Say what you will about that ghoulish piece of garbage Sylvia Browne, but at least she had to interact directly with the people she was exploiting.

  2. StevoR says

    There was an episode of Star Trek TNG that already discussed pretty much this exact scenario. It didn’t end well.

    A disgusted Picard has Data escort Marr back to her quarters. In her quarters, Marr asks Data how long he will function and he replies that he was programmed to function for an eternity. Relieved, Marr tells Data that as long as he functions, her son is alive. Speaking to him as if he were her son, Dr. Marr pleads to Data to let “Renny” know that she destroyed the (Crystalline – ed) Entity for him, in the hopes that her deed will give her son’s spirit a sense of peace. Data informs her that her son would not have approved of her destroying the Entity, stating that he loved her work as a scientist but that in her grief over his death, she destroyed the very reason her work is so important and that he cannot help her. Reality sets in for a horrified Dr. Marr, as she silently reflects on what she has done.

    Source : https://en.wikipedia.org/wiki/Silicon_Avatar

  3. profpedant says

    After my father died there were several times that I ‘saw my Dad’ in my peripheral vision. I knew perfectly well that his ‘being there’ was my brain misinterpreting what was actually in my peripheral vision, but it was nice to be able to (sort of) see my father again. No magic involved, just a pleasant misinterpretation of reality.

  4. HidariMak says

    I imagine that the porn industry will be early adopters of this. Pay by the minute to talk with a “Debbie Does A.I.” chatbot, for starters.

  5. acroyear says

    Once again, “Max Headroom” (the series itself, not the character) was hugely ahead of its time.

    An episode in season 2 was called “Deities” and it involved an ex- of Edison Carter who ran a ‘religion’ based on a simpler idea of the tech that created Max Headroom from Carter’s memories and personality. Of course, they couldn’t scale up, so the whole thing (like all religions) was something of a money scam preying on the vulnerability of the elderly.

    Well, here we are – using tech to give a very rough (and poor) simulation of a loved one to scam money from the elderly.

    “Oh, that’s WONDERFUL, isn’t it?”

  6. says

    Billions of people believe that they are talking to god/allah/jesus. What a business opportunity.

    Coming up next: Churches offering Zoom or MS Teams meetings with Jesus Christ, who will of course assure everyone listening that [church organizing the meeting] is the true representative of his Word and Will.

  7. Larry says

    cag @ #3

    Have you been trying to talk with Jesus but you get nothing back. God’s not returning your calls? Well, suffer no more! ChristOnLine®, a new app from Ronco, maker of Mr. Microphone, uses AI on your phone or device to allow you to actually speak to and interact with your deity of choice. Available in the app store now for only $29.99/month. Install it today!

  8. AstrySol says

    I think the need for psychological support is real for a non-trivial number of people, so if there is a way to honestly provide some kind of relief there, IMO it is not necessarily bad on its own.

    However, tech bros tend to be cheapskates, make half-assed products, skirt regulations, lie about them, and charge exorbitant prices. And I think that is the problem.

    It’s like quality magic shows vs whatever is done by those con artists.

  9. Tethys says

    It’s slightly less creepy than the Norman Bates method of keeping his mom around. It’s hardly surprising that a chatbot parrots her in a “spot-on turn of phrase”, given that the bot was given his mother’s texts as a training set.

    Pretending your Mother hasn’t died is not a healthy method of dealing with grief.

  10. Jean says

    That’s similar to the other deluded people who want to “upload” themselves into a computer. Just because you may at some point be able to create a simulacrum of some person that could fool others does not mean you have a real self-conscious individual.

  11. tacitus says

    Everyone processes their grief in their own way. My mom lost her husband of 66 years last year — they were very close — and continued to “feel” he was alive for several months afterward even though she was well aware that he was dead.

    As long as you’re not duping anyone into believing their loved one is still alive, I suspect an AI simulacrum of their lost loved one will be quite popular, and significantly beneficial to some. I doubt my mom would have wanted one, but her sister-in-law lost her husband a few years ago and her life essentially ended the day he died since she never got over his loss. All she ever wanted to do was be with him, and she became a recluse after he died.

    Ideally, robust mental health care is the answer of course, but we’ve never had that level of care in the US or UK in all the decades I’ve been around, so if affordable AI companions (simulating loved ones or not) become a reality, there are millions of elderly, lonely, and housebound individuals who could stand to benefit from the company.

    No doubt there will be plenty of research on their potential uses in healthcare in the years ahead, but not everyone is as fortunate as my parents have been to have access to friends and family to help them and encourage them through an increasingly complex world as their faculties start to decline.

  12. Jazzlet says

    I have had many dreams about deceased relatives over the years, but that’s all they were: dreams. Which is fine by me, as a lot of them were not pleasant dreams. The one where my mother was eating and talking, missed her mouth with the fork which then hit her cheek, was particularly disturbing, as her cheek fell apart like a slow-braised cut of beef while she continued to eat and talk. That dream happened something like forty years ago, but it still disturbs me when I remember it.

  13. seachange says

    Universities are going from tenured professors to adjunct professors to TAs all while charging more to fill their proliferating administration’s already fat pockets.

    You already have some recorded genetics lessons. We here have been privileged to access some of them.

  14. says

    The grief comes from the loss of living, human, thinking, behaving human beings who can’t be resurrected by some fraud with a collection of words they may have uttered. But this shallow idiot thinks a chatbot is a substitute.

    Does he? Or is he just so emotionally stunted that he can’t tell the difference? Is he so lacking in humanity that, to him, it IS a substitute?

  15. John Morales says

    [OT]

    In the news: https://www.theguardian.com/world/2024/sep/26/nazca-lines-peru-new-geoglyphs

    Archaeologists using artificial intelligence (AI) have discovered hundreds of new geoglyphs depicting parrots, cats, monkeys, killer whales and even decapitated heads near the Nazca Lines in Peru, in a find that nearly doubles the number of known figures at the enigmatic 2,000-year-old archaeological site.

    A team from the Japanese University of Yamagata’s Nazca Institute, in collaboration with IBM Research, discovered 303 previously unknown geoglyphs of humans and animals – all smaller in size than the vast geometric patterns that date from AD200-700 and stretch across more than 400 sq km of the Nazca plateau.

    […]

    The use of AI combined with low-flying drones revolutionised the speed and rate at which the geoglyphs were discovered, according to a research paper published this week in the Proceedings of the National Academy of Sciences (PNAS).

    The paper said while it “took nearly a century to discover a total of 430 figurative Nazca geoglyphs”, using an AI system covering the entire Nazca region it “took just six months to discover 303 new figurative geoglyphs”.

    The AI model efficiently spotted many of the smaller relief-type geoglyphs which were harder to see with the naked eye.

  16. gijoel says

    Blasphemy!!??!?! Repent now or Roko’s basilisk will torture your chatbot at some indeterminate point in the far future. :)

    On a personal note when I was a teenager my mother floated the idea of freeze drying after her death. The plan was that my brother and I would share her corpse and keep her in our homes for six months a year. She got really angry when we both said no.

Leave a Reply