Jim Acosta, ghoul


My impression of the ex-CNN news announcer, Jim Acosta, was that he at least had some principles. He quit cable news, after all, and that’s a positive mark in my estimation. Unfortunately, he has now indulged in the cheapest, sleaziest, most ghoulish stunt of his career.

If you are sufficiently prolific on the internet, people can take your stored writings and videos and build a model of “you”. For instance, I would be a good candidate for this kind of program: with over 30 years of nearly daily commentary, all stored in online databases, you could probably make a decent predictive model of my internet behavior. Would it be “me”? No. It would be a crude simulacrum of just my public persona. You could also take the voluminous writings of St Augustine or Albert Einstein and make a similar model, but it would all just be window dressing and wouldn’t actually “be” the person.

Some grieving parents compiled the internet output of one of the students killed in the Parkland shooting into a video talking head. I can sort of understand the desire — they want to hear their child’s voice again — and it’s the same sort of impulse that would make someone preserve an answering machine voice message so they can hear a loved one again after their demise. It’s not the person, though, it’s an echo, a memory of someone.

So Acosta “interviewed” the model of a dead student.

Jim Acosta, former chief White House correspondent for CNN, stirred controversy on Monday when he sat for a conversation with a reanimated version of a person who died more than seven years ago. His guest was an avatar of Joaquin Oliver, one of the 17 people killed in the Marjory Stoneman Douglas high school mass shooting in Parkland, Florida, in 2018.

The video shows Oliver, captured via a real photograph and animated with generative artificial intelligence, wearing a beanie with a solemn expression. Acosta asks the avatar: “What happened to you?”

I feel like asking Acosta “What happened to you?”

“I appreciate your curiosity,” Oliver answers in hurried monotone without inflection or pauses for punctuation. “I was taken from this world too soon due to gun violence while at school. It’s important to talk about these issues so we can create a safer future for everyone.” The avatar’s narration is stilted and computerized. The movements of its face and mouth are jerky and unnatural, looking more like a dub-over than an actual person talking.

Ick. Why not dig up his corpse, attach marionette strings, and have a conversation with it? That wasn’t Joaquin Oliver. The only insight you are going to get from it is possibly the interpretations of the person who compiled the code.

Here’s another example:

Others have likewise used AI avatars to simulate the speech of victims of crimes. In May, an AI version of a man who was killed in a road rage incident in Arizona appeared in a court hearing. Lawyers played an AI video of the victim addressing his alleged killer in an impact statement. “I believe in forgiveness, and a God who forgives. I always have and I still do,” the victim’s avatar said.

The presiding judge responded favorably. “I loved that AI, thank you for that. As angry as you are, as justifiably angry as the family is, I heard the forgiveness,” he said. “I feel that that was genuine.”

Jesus. That was not evidence before the law — that was an appeal to the judge’s sentimentality, and it worked.

Comments

  1. imback says

    “I feel that that was genuine.”

    Wow. I know that that was presented in the victim impact statement in the sentencing part of the trial, which is not really formal evidence, but a judge saying that AI bit was genuine is very creepy.

  2. bmatchick says

    Victim impact statements are generally less restrictive than other court proceedings (though rules vary by jurisdiction), but I still can’t believe the judge allowed this. You can appeal based on impact statements if they can be shown to be overly prejudicial in your sentencing, but I guess you wouldn’t want to appeal an AI of the victim forgiving you if the judge bought it. The prosecutor should appeal the use of this crap, though. I don’t know whether they objected at the time.

  3. moxie says

    I’m reminded of “Rashomon”, in which the testimony of a seer speaking as the murder victim is admissible evidence in court.

  4. whywhywhy says

    If I were a ‘psychic’, I now know a judge that I can con, I mean, provide a reading.

  5. Deepak Shetty says

    You could also take the voluminous writings of St Augustine or Albert Einstein and make a similar model

    Yes, take all of Einstein’s writings, feed in all the latest quantum information, and watch ChatGPT discover all new types of scientific discoveries!
    It confuses me why the AI companies, with all their claims about their current capabilities, are satisfied instead with generating boilerplate code or summaries of documents that CEOs will be able to regurgitate. But then again, maybe non-consensual deepfakes will be the peak AI we can hope for.

  6. hellslittlestangel says

    Yes, take all of Einstein’s writings, feed in all the latest quantum information, and watch ChatGPT discover all new types of scientific discoveries!

    I, MechaHitler, proclaim all things to be relative!

    I will provide an elegant proof just as soon as I have had a mathematical symbols font installed.

  7. says

    Phony Acosta, phony AI. We are living in a world where emotions and opinions are taken as factual testimony. That’s how absurd things are getting. But, we all know that’s far from the biggest push we’re getting from the magat sheeple and magat bosses to hasten us down the death spiral.

  8. lotharloo says

    The presiding judge responded favorably. “I loved that AI, thank you for that. As angry as you are, as justifiably angry as the family is, I heard the forgiveness,” he said. “I feel that that was genuine.”

    What the fuck? We are fucking done. Chatgpt is smarter than us because it turns out an average human is a fucking dumbass.

  9. Prax says

    @moxie #3,

    And in Kurosawa’s Rashomon, of course, the ghost’s testimony was no more reliable than anyone else’s. All the witnesses, alive or dead, distorted the truth to suit their purposes (and, in the ghost’s case, perhaps the purpose of the medium as well.)

    I guess we need a remake set in modern times with an AI ghost.

  10. John Morales says

    Original judge story is in 404 media, which requires registration to view.

    Coverage of it here: https://futurism.com/judge-ai-revival-victim-statement

    Interestingly, the result was not that favourable:

    Lang acknowledged that although the family itself “demanded the maximum sentence,” the AI Pelkey “spoke from his heart” and didn’t call for such punishment.

    “I didn’t hear him asking for the maximum sentence,” the judge said.

    Horcasitas’ lawyer also referenced the Peskey avatar when defending his client and, similarly, said that he also believes his client and the man he killed could have been friends had circumstances been different.

    That entreaty didn’t seem to mean much, however, to Lang. He ended up sentencing Horcasitas to 10.5 years for manslaughter, which was a year and a half more than prosecutors were seeking.

    (Gotta love ‘the Peskey avatar’)

  11. Raging Bee says

    One of these days some con-artist is going to create an AI approximation of Jesus, one that’s convincing enough to enough believers to inspire a big enough mass movement to turn our country into Reformation Hell. This is just the biggest reason (IMO) of many to nip this whole AI-deepfake phenomenon in the bud, and keep it shoved as far down in the Uncanny Valley as possible.

  12. gijoel says

    I long for the day when people understand the limits of large language models. I think that day will come some time after the sun becomes a white dwarf.

  13. John Morales says

    chrislawson, sure. Given that a generic AI appeal yielded 1.5 years more incarceration than the prosecutors sought, ELIZA would surely do even better. What do you reckon?

    (3 years more?)

    Before you attempt to ‘whoosh’ me, be aware I am fully aware of that attempt at wit.

  14. snarkhuntr says

    Raging Bee:

    One of these days some con-artist is going to create an AI approximation of Jesus, one that’s convincing enough to enough believers to inspire a big enough mass movement to turn our country into Reformation Hell.

    You don’t need a con-artist. There are plenty of people right now convincing themselves that LLMs are somehow divinely intelligent. It turns out all you really need for that is an LLM that just keeps telling the mark how smart and special and right they are about everything. Do that long enough, and people become convinced that whoever is saying such nice things to them must be divine, and that they themselves are the divine messenger, if they’re religiously focused.

    Con-artists and cult-leaders have been doing this for ages. You don’t need an LLM pretending to be Jesus if you can just convince your followers that “He” speaks through You.

  15. John Morales says

    snarkhuntr, frankly, LLM outputs are basically the same language games people play.

    (Yes, a ref to Prop. #7)

    BTW, I put the above text into the BubbleBot, and I got this back:

    The analogy between LLM outputs and “language games” as referenced in Wittgenstein’s Philosophical Investigations, particularly Proposition #7 (“The meaning of a word is its use in the language”), frames LLM behavior as a syntactic mimicry of human linguistic practice—not a semantic grasp of meaning.

Leave a Reply