The short film I’m Not a Robot that I posted about recently told the story of a woman who suddenly learns that she might be a bot. While that was fictional, the ability of AI to create bots that simulate real people is already here.
In 1970, a 57-year-old man died of heart disease at his home in Queens, New York. Fredric Kurzweil, a gifted pianist and conductor, was born Jewish in Vienna in 1912. When the Nazis entered Austria in 1938, an American benefactor sponsored Fred’s immigration to the United States and saved his life. He eventually became a music professor and conductor for choirs and orchestras around the US. Fred took almost nothing with him when he fled Europe – but, in the US, he saved everything. He saved official documents about his life, lectures, notes, programmes, newspaper clippings related to his work, letters he wrote and letters he received, and personal journals.
For 50 years after Fred died, his son, Ray, kept these records in a storage unit. In 2018, Ray worked with his daughter, Amy, to digitise all the original writing from his father. He fed that digitised writing to an algorithm and built a chatbot that simulated what it was like to have a conversation with the father he missed and lost too soon. This chatbot was selective, meaning that it responded to questions with sentences that Fred actually wrote at some point in his life. Through this chatbot, Ray was able to converse with a representation of his father, in a way that felt, Ray said: ‘like talking to him.’ And Amy, who co-wrote this essay and was born after Fred died, was able to stage a conversation with an ancestor she had never met.
‘Fredbot’ is one example of a technology known as chatbots of the dead, chatbots designed to speak in the voice of specific deceased people. Other examples are plentiful: in 2016, Eugenia Kuyda built a chatbot from the text messages of her friend Roman Mazurenko, who was killed in a traffic accident. The first Roman Bot, like Fredbot, was selective, but later versions were generative, meaning they generated novel responses that reflected Mazurenko’s voice. In 2020, the musician and artist Laurie Anderson used a corpus of writing and lyrics from her late husband, Velvet Underground’s co-founder Lou Reed, to create a generative program she interacted with as a creative collaborator. And in 2021, the journalist James Vlahos launched HereAfter AI, an app anyone can use to create interactive chatbots, called ‘life story avatars’, that are based on loved ones’ memories. Today, enterprises in the business of ‘reinventing remembrance’ abound: Life Story AI, Project Infinite Life, Project December – the list goes on.
…Although chatbots have been around for a long time, chatbots of the dead are a relatively new innovation made possible by recent advances in programming techniques and the proliferation of personal data. On a basic level, these chatbots are created by combining machine learning with personal writing, such as text messages, emails, letters and journals, which reflect a person’s distinctive diction, syntax, attitudes and quirks.
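The "selective" approach described above can be illustrated with a toy sketch: the bot never generates new text, it only retrieves whichever sentence from the person's own writing best matches the question. This is my own minimal illustration, not the actual system Kurzweil built, and the sample corpus sentences are invented for the example.

```python
# A toy "selective" chatbot: it replies only with sentences the person
# actually wrote, choosing the one most similar to the question.
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; good enough for a demonstration.
    return re.findall(r"[a-z']+", text.lower())

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class SelectiveBot:
    def __init__(self, corpus):
        # corpus: a list of sentences taken from the person's own writing.
        self.corpus = corpus
        self.vectors = [Counter(tokenize(s)) for s in corpus]

    def reply(self, question):
        # Return the corpus sentence most similar to the question.
        q = Counter(tokenize(question))
        scores = [cosine(q, v) for v in self.vectors]
        return self.corpus[scores.index(max(scores))]

# Invented sample corpus, standing in for digitised letters and journals.
corpus = [
    "Music was the one thing the war could not take from me.",
    "Vienna in spring smelled of lilacs and coffee.",
    "A choir is a family that sings with one voice.",
]
bot = SelectiveBot(corpus)
print(bot.reply("What do you remember about Vienna?"))
# → Vienna in spring smelled of lilacs and coffee.
```

A generative version, by contrast, would fine-tune or prompt a language model on the same corpus so it could produce novel sentences in the person's voice rather than only quoting them.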
The people we know who have died do live on in our memories and in the artifacts that remain, such as photos, writings, and recordings. But is it a good thing to seek to extend one’s interaction with a dead person beyond that?
To some, chatbots of the dead are useful tools that can help us grieve, remember, and reflect on those we’ve lost. To others, they are dehumanising technologies that conjure a dystopian world. They raise ethical questions about consent, ownership, memory and historical accuracy: who should be allowed to create, control or profit from these representations? How do we understand chatbots that seem to misrepresent the past? But for us, the deepest concerns relate to how these bots might affect our relationship to the dead. Are they artificial replacements that merely paper over our grief? Or is there something distinctively valuable about chatting with a simulation of the dead?
These bots are not physical entities; we interact with them through computer interfaces. But since so much of our communication nowadays is mediated by technology rather than conducted face-to-face, the lack of a physical representation of the person may not be a major obstacle to realism.
Critics warn that these chatbots promote delusional thinking and are a form of death denial. The desire to communicate with the dead is very powerful for some people. One episode of Black Mirror that I found particularly memorable, Be Right Back, features a woman whose husband is killed in an accident. While deeply grieving, she learns of an AI company that promises, given as much audio, video, and other information about her husband as she can provide, to create a physical robot that looks, talks, and behaves just like him. She orders it, unpacks the box, and activates something that looks, speaks, and behaves just like her dead husband, and she treats him as real. Since he is supposed to be dead, she has to keep him hidden so that only she can interact with him, and she becomes deeply attached to him.
The authors of the essay argue that these chatbots can have benefits as long as users remember that they are a form of artistic remembrance of the dead person, not companions. But we have to remember that these chatbots will not necessarily be under the control of the people who ordered them up. They will be the product of big technology companies that have their own agendas and, most significantly, a profit motive, and who knows how they will be programmed to interact with you. Given our experience with social media, it does not take much imagination to see how this can lead to major abuses.
If someone left behind enough writing, a chatbot can create an approximation of them. It is not exactly the same person, but remember that neither are you, compared with the “you” of yesterday. It’s not going to be a close enough approximation to literally call the chatbot a copy of you, but so what? It’s a low-resolution version of you.
How do we know Mano hasn’t been replaced by a simulation?
Wait, how do I know FtB hasn’t been replaced by a simulation?
Wait, how do I know I’m not a simulation? 😯
This has long been a staple of sci-fi. For example, in Captain America: The Winter Soldier (2014) a long-dead Nazi scientist lives on as a (conscious) computer simulation.
i knew of lou reed, i knew of laurie anderson; they’re independently famous. didn’t know they were married. i gotta say, a ghoulish ai simulacrum of lou reed verbiage is less of an insult to his memory than the billy idol cover of “heroin.” i would be interested to see what she did with that.
an interesting article on it. https://futurism.com/the-byte/lou-reed-widow-laurie-anderson-ai
My partner died quite young, and the idea of doing this with her writing is deeply creepy. I don’t want a statistical model of a person. Let dead people be dead
@Silentbob
If they’re simulating FTB they’re doing a good job with [insert commentator you don’t like]
I suspect there are going to be a handful of well-funded court cases that end up giving the intellectual-property rights of dead artists to the hedge funds (or whatever) making these chatbots, instead of to the artists’ heirs.
@lochaber:
Media rights contracts are an old thing. Stars used to retain them; now they’re part of the work contract. Since it is possible to use a tool like MetaHuman to create a puppet of a star and then train AIs on the puppet, it’s going to be easy enough to have a clone of your own.
Trademarking oneself might be an option. For example Mike Tyson might trademark his distinctive tattoo. But a replicant could be different enough.
And, need I mention that the young woman figuring out she’s a replicant had something to do with the plot of Blade Runner?
How long until somebody starts slipping advertising into these things?
“Valerie, do you remember that wonderful picnic we had down by the river that time? The sun on the water, the birds singing in the trees, our chilled glasses of Lipton Iced Tea (TM) in our hands? How I wish we could be there again! Alas, I can no longer enjoy the smooth, refreshing taste of Lipton Iced Tea (TM), but you still can!”
Dunc,
This is America. The only thing that matters here is profit. What you describe is as inevitable as a sunset.
When I read this and came across the name “Kurzweil” I immediately thought “Oh no, please don’t tell me this is another Ray Kurzweil thing”. Alas.
I don’t see the point in this unless you’re trying to make an updated version of a Disney animatronic “Hall of Presidents” display or similar. Dead people are dead. They’re not coming up with new thoughts or reactions, and they’re not coming back. Simulations are models of reality, not reality. They might be really good models but they’re not reality. I would never want someone to interact with a simulation of me after I’m dead. I would have no way of verifying if what the simulation is coming up with is an accurate representation of what I would do or say. Among other problems, I change as I learn new things, and is there anything that says the simulation would react to new inputs in the exact way that I would?
It seems like for this to work the person who serves as the model for the chatbot has to have left behind a lot of writing and/or lots of recorded voice or video.
I have somewhat negative but also mixed feelings about this. It’s not something I would want to do, but I guess if other people get comfort from it, I wouldn’t want to stop them. In a way, it’s like a high-tech version of visiting alleged “psychics” who claim to be able to relay messages from deceased loved ones and even let the living converse with them directly. In a way, chatbots are more honest than psychics, since they do not claim to let one talk directly with the soul or spirit of a loved one.
Like others have said, though, when this is done by for-profit companies there is a lot of potential for abuse.
I posted a comment here last night. Where did it go?
“Among other problems, I change as I learn new things, and is there anything that says the simulation would react to new inputs in the exact way that I would?”
I highly recommend reading The Uploaded by Ferrett Steinmetz. It’s about a future where people can have their minds/consciousness uploaded into a huge VR/Matrix simulation where happiness (allegedly) reigns all year ’round. And on top of all the other problems this book predicts, it answers your question with a flat “no” — your mind uploaded into a machine is a machine, and won’t evolve or grow like it did when housed in an organic brain. (One of the characters said it was a good thing we couldn’t upload people in the early 20th or 19th centuries: Nazis would still be Nazis, Stalinists would still be Stalinists, and White Christian slaveowners would still be White Christian slaveowners; all of them still eager to kill or oppress whoever they’d killed and oppressed in meatspace.)
PS: Silentbob — I love how some people are still using iconic ’70s-style imagery to represent awesome superpowered megacomputer technology. I mean, green monochrome monitors? TAPE DRIVES?! Are they kidding?! Talk about machine-minds not evolving…
In the second or third book of William Gibson’s Neuromancer trilogy there was an AI construct based on a character known simply as “the Finn”.
But the first instance of constructs/uploading I know of was a book by Clifford D. Simak titled “The Werewolf Principle”. The main plot was about shape-changing; the elderly parents still being around in uploaded form and communicating by telephone was just a minor plot detail.
@14: The Finn is a very much flesh and blood character in both Neuromancer and Count Zero as well as in Burning Chrome. He’s an AI construct only in Mona Lisa Overdrive.
There is an AI construct of a dead man in Neuromancer -- McCoy Pauley, aka Dixie Flatline, the hacker who trained Case. He was paid to create an AI construct of himself while alive, and Case and Molly steal it so “Dixie” can help Case in the hacking to come.
All these things are constructs based on dead people. Why not start building your own AI construct now, like Dixie Flatline did? That way you can make sure it’s a more accurate representation of you than it would be if it were constructed by others after death. Come to that, build it well enough, and it will know you better than anyone else in the world. It could be the perfect counsellor, or you could just enslave it to look after your admin -- another thing Black Mirror foresaw, in the episode White Christmas. In all seriousness, wouldn’t you be at least a little curious to spend a day or a week or a month filling in a questionnaire or something, and then be able to interact directly with “yourself”? I would. I think it would be a very instructive and sobering experience. O wad some Pow’r the giftie gie us, and so on.
To be honest, I’m really surprised there aren’t already companies offering this service to living people right now. There’s no apparent technological or legal barrier to it I can think of. Would it just be too expensive? How expensive?
(This is one of those posts that in 15 years’ time will either seem quaint and hopelessly naive (like some of the things I said about social networks on h2g2 in 2002) or hilariously over-optimistic (like some of the things I said about self-driving cars in 2015). Either way, almost all of us are likely to live through such a thing becoming either ubiquitous (the way Facebook and Twitter did) or stupidly passé (like Google Glass or Segways).)
I’ve stayed away from the AI chatbots. I have seen passable demos of “talk like X” or “draw like Y”, but I wonder: is the tech really that good? If you fed it all the works of P. G. Wodehouse and said “write me the continuing adventures of Wooster when he visits Meghan Markle”, would I get something that brings back memories? I’ve seen the “you can write a run-of-the-mill book with AI” demos, and as someone who reads a lot, I place them lower than my mother’s Mills and Boon novels (a series that, I suspect, was probably already being written by a machine).
I think there’s probably a perfectly straightforward technological barrier -- it just doesn’t work that well. Note that all of the examples we have to date are just a new twist on something that assorted con-artists have been doing for centuries -- taking advantage of grieving people who really, really want to believe. Take away that emotional charge and try and do a similar thing with somebody who is (a) better placed to evaluate the accuracy of the result, and (b) able to do that evaluation more-or-less dispassionately, and I suspect it would fairly rapidly become apparent that it’s just not that good.
It’s relatively easy to convince someone (or at least, a certain kind of someone) that you’ve got a hotline to their dear departed granny. It’s much harder (although I must note, not impossible) to convince someone you’ve got a hotline to their own innermost self. Give it time though, and I’m sure somebody will come up with a psychotherapy chatbot that can convince the easily led that it understands them better than they understand themselves.
And here we hit one of the central paradoxes in how we tend to think about AI. If it’s much like me at all, it’s not going to be any more enthusiastic about doing my admin than I am.
Those photos and recordings, and going further back, writings, are already artificial extensions of interactions with the dead.
James Blish’s A Work of Art (1956) is an early SF story along these lines. The theme is that future “mind sculptors” are able to (apparently) superimpose the personality and talents of long-dead people (but only those sufficiently eminent to have left plenty of traces) on the minds of living individuals. In the story, it is the composer Richard Strauss who is “sculpted” onto a person without musical talent, and who composes a “new work” in the style of Strauss. The denouement suggests that the process is, in the deepest sense, a failure -- the new work is without real merit.
It’s not obvious to me that a digital “ghost” (I’ve seen that term used somewhere) couldn’t learn and change -- although not if it was an LLM of the current kind. But in doing so, it would grow away from being a simulation of the individual modelled. Less ambitiously, one can conceive of such ghosts being created in advance, to safeguard their original’s post-mortem interests. Interesting work for the lawyers of the 2040s!