My impression of the ex-CNN news announcer, Jim Acosta, was that he at least had some principles. He quit cable news, after all, and that’s a positive mark in my estimation. Unfortunately, he has now indulged in the cheapest, sleaziest, most ghoulish stunt of his career.
If you are sufficiently prolific on the internet, people can take your stored writings and videos and build a model of “you”. For instance, I would be a good candidate for this kind of program: with over 30 years of nearly daily commentary, all stored in online databases, you could probably make a decent predictive model of my internet behavior. Would it be “me”? No. It would be a crude simulacrum of just my public persona. You could also take the voluminous writings of St Augustine or Albert Einstein and make a similar model, but it would all just be window dressing and wouldn’t actually “be” the person.
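As a rough illustration, “building a model of you” can amount to nothing fancier than fine-tuning an off-the-shelf language model on an archive of someone’s public posts. Here is a minimal sketch of what that could look like; the archive file name and the base model are stand-ins for illustration, not anything the people behind these avatars are known to have used:

```python
# A rough sketch of "building a model of you": fine-tune a small, off-the-shelf
# language model on an archive of someone's public posts. The file name
# ("archived_posts.txt") and the base model ("gpt2") are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical archive: one post or comment per line of plain text.
posts = load_dataset("text", data_files={"train": "archived_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = posts["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="persona-model", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False means plain next-token (causal) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# The result parrots surface style and recurring talking points: a crude
# simulacrum of a public persona, not the person.
```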
Some grieving parents compiled the internet output of one of the students killed in the Parkland shooting into a video talking head. I can sort of understand the desire — they want to hear their child’s voice again — and it’s the same sort of impulse that would make someone preserve an answering machine voice message so they can hear a loved one again after their demise. It’s not the person, though; it’s an echo, a memory of someone.
So Acosta “interviewed” the model of a dead student.
Jim Acosta, former chief White House correspondent for CNN, stirred controversy on Monday when he sat for a conversation with a reanimated version of a person who died more than seven years ago. His guest was an avatar of Joaquin Oliver, one of the 17 people killed in the Marjory Stoneman Douglas high school mass shooting in Parkland, Florida, in 2018.
The video shows Oliver, captured via a real photograph and animated with generative artificial intelligence, wearing a beanie with a solemn expression. Acosta asks the avatar: “What happened to you?”
I feel like asking Acosta “What happened to you?”
“I appreciate your curiosity,” Oliver answers in hurried monotone without inflection or pauses for punctuation. “I was taken from this world too soon due to gun violence while at school. It’s important to talk about these issues so we can create a safer future for everyone.” The avatar’s narration is stilted and computerized. The movements of its face and mouth are jerky and unnatural, looking more like a dub-over than an actual person talking.
Ick. Why not dig up his corpse, attach marionette strings, and have a conversation with it? That wasn’t Joaquin Oliver. The only insight you are going to get from it is, possibly, the interpretations of the person who compiled the code.
Here’s another example:
Others have likewise used AI avatars to simulate the speech of victims of crimes. In May, an AI version of a man who was killed in a road rage incident in Arizona appeared in a court hearing. Lawyers played an AI video of the victim addressing his alleged killer in an impact statement. “I believe in forgiveness, and a God who forgives. I always have and I still do,” the victim’s avatar said.
The presiding judge responded favorably. “I loved that AI, thank you for that. As angry as you are, as justifiably angry as the family is, I heard the forgiveness,” he said. “I feel that that was genuine.”
Jesus. That was not evidence before the law — that was an appeal to the judge’s sentimentality, and it worked.