How dangerous are deepfakes?

We have got used to the existence of ‘deepfakes’: computer-generated images and videos that are almost indistinguishable from the real thing. This has raised serious concerns that deepfakes could become a powerful tool for disinformation and mischief, especially in the political arena, since they can make people appear to say and do things that are damaging to themselves, with the viewer none the wiser that they have been conned.

But how dangerous is this?

In the November 20, 2023 issue of The New Yorker, Daniel Immerwahr reviews some recent books that look at the dangers posed by deepfakes and concludes that the fears may be overblown, and that even when deepfakes are explicitly political, most of them are used for parody and other humorous purposes, not to convince us that we are watching the real thing.

Fakery in the visual realm goes back to the earliest days of photography, when a great deal of editing was done in the darkroom to achieve the desired effect.

In “Faking It” (2012), Mia Fineman, a photography curator at the Metropolitan Museum of Art, explains that early cameras had a hard time capturing landscapes—either the sky was washed out or the ground was hard to see. To compensate, photographers added clouds by hand, or they combined the sky from one negative with the land from another (which might be of a different location).

From our vantage point, such manipulation seems audacious. Mathew Brady, the renowned Civil War photographer, inserted an extra officer into a portrait of William Tecumseh Sherman and his generals. Two haunting Civil War photos of men killed in action were, in fact, the same soldier—the photographer, Alexander Gardner, had lugged the decomposing corpse from one spot to another. Such expedients do not appear to have burdened many consciences. In 1904, the critic Sadakichi Hartmann noted that nearly every professional photographer employed the “trickeries of elimination, generalization, accentuation, or augmentation.” It wasn’t until the twentieth century that what Hartmann called “straight photography” became an ideal to strive for.

Immerwahr says that people do not accept what they see at face value but look for some kind of corroboration.

One of the most thoughtful reflections on manipulated media is “Deepfakes and the Epistemic Apocalypse,” a recent article by the philosopher Joshua Habgood-Coote that appeared in the journal Synthese. Deepfake catastrophizing depends on supposing that people—always other people—are dangerously credulous, prone to falling for any evidence that looks sufficiently real. But is that how we process information? Habgood-Coote argues that, when assessing evidence, we rarely rely on our eyes alone. We ask where it came from, check with others, and say things like, “If Gal Gadot had actually made pornography, I would have heard about it.” This process of social verification is what has allowed us to fend off centuries of media manipulation without collapsing into a twitching heap.

And it is why doctored evidence rarely sways elections. We are, collectively, good at sussing out fakes, and politicians who deal in them often face repercussions.

One researcher who was tasked with finding techniques to identify doctored images asked his students to go out and find examples of manipulated photos on the internet. They found plenty, but instead of sophisticated attempts at deception, almost all of them were memes.

This is an awkward fact about new media technologies. We imagine that they will remake the world, yet they’re often just used to make crude jokes. The closest era to our own, in terms of the rapid decentralization of information technology, is the eighteenth century, when printing became cheaper and harder to control. The French philosophe the Marquis de Condorcet prophesied that, with the press finally free, the world would be bathed in the light of reason. Perhaps, but France was also drowned in a flood of pornography, much of it starring Marie Antoinette. The trampling of the Queen’s reputation was both a democratic strike against the monarchy and a form of vicious misogyny. According to the historian Lynn Hunt, such trolling “helped to bring about the Revolution.”

A review of 15,000 deepfake videos found that 96% of them were pornographic, many of them just putting the faces of famous female actors onto the bodies of performers in sex videos.

Immerwahr says that if your goal is to make some kind of political point, producing sophisticated deepfakes may simply not be worth the effort.

The state of the media today is clearly unhealthy. Distressing numbers of people profess a belief that covid is a hoax, that the 2020 election was rigged, or that Satan-worshipping pedophiles control politics. Still, none of these falsehoods rely on deepfakes. There are a few potentially misleading videos that have circulated recently, such as one of Representative Nancy Pelosi slurring her speech and one of Biden enjoying a song about killing cops. These, however, have been “cheapfakes,” made not by neural networks but by simple tricks like slowing down footage or dubbing in music.

Simple tricks suffice. People seeking to reinforce their outlook get a jolt of pleasure from a Photoshopped image of Pelosi in a hijab, or perhaps Biden as “Dark Brandon,” with glowing red eyes. There is something gratifying about having your world view reflected visually; it’s why we make art. But there is no need for this art to be realistic. In fact, cartoonish memes, symbolically rich and vividly expressive, seem better suited for the task than reality-conforming deepfakes.

So the danger posed by deepfakes may be overblown, at least for now.

If by “deepfakes” we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren’t deep, and the deeps aren’t fake. In worrying about deepfakes’ potential to supercharge political lies and to unleash the infocalypse, moreover, we appear to be miscategorizing them. A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones.

Manipulated media is far from harmless, but its harms have not been epistemic. Rather, they’ve been demagogic, giving voice to what the historian Sam Lebovic calls “the politics of outrageous expression.” At their best, fakes—gifs, memes, and the like—condense complex thoughts into clarifying, rousing images. But, at their worst, they amplify our views rather than complicate them, facilitate the harassment of women, and help turn politics into a blood sport won by insulting one’s opponent in an entertaining way.

It appears that people are better at not being influenced by deepfakes than we thought.

Comments

  1. Pierce R. Butler says

    Then howcum people fall so often for the cheapest (and oldest) of cheapfakes -- the spoken word?

  2. sonofrojblake says

    This article seems bizarrely complacent and self-contradictory.

    We have got used to the existence of ‘deepfakes’: computer-generated images and videos that are almost indistinguishable from the real thing

    We’ve got used to a thing that exists.

    If by “deepfakes” we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist

    The thing we’ve got used to the existence of… doesn’t exist. Well… “barely”. So… yeah.

    We imagine that they will remake the world, yet they’re often just used to make crude jokes

    So yeah, nothing to worry about then.

    France was also drowned in a flood of pornography, much of it starring Marie Antoinette. […] According to the historian Lynn Hunt, such trolling “helped to bring about the Revolution.”

    So yeah, no need to worry about democratising access to the tools to do this sort of thing, the last time this happened all that happened was a bloody revolution in which the ruling class were publicly beheaded. If that’s not “remaking the world”, what the fuck is?

    We ask where it came from, check with others, and say things like, “If Gal Gadot had actually made pornography, I would have heard about it.”

    This is HILARIOUS. Joshua Habgood-Coote sounds like the sort of ivory-tower philosopher people make jokes about. He seems to think anyone gives two shits whether Gal Gadot made porn or not -- I certainly don’t. On the other hand, if I want a wank to Wonder Woman, previously I’d have had to hope that Porn-Actresses-R-Us had a lookalike on their books. Now, if I want I can see the ACTUAL Gal Gadot doing whatever it is I like. And yes, the rational part of my brain knows it’s not actually her… and we all know how involved THAT bit of the brain is in the search for porn. The deepfake WORKS -- it’s convincing enough. The supposed effect on Gal Gadot’s mental health isn’t of any interest to the searcher’s libido or the producer’s bottom line.

    A review of 15,000 deepfake videos found that 96% of them were pornographic

    So… the tiny sample used for research turned up SIX HUNDRED non-porn deepfakes. That seems like a large number.

    Immerwahr says that if your goal is to make some kind of political point, producing sophisticated deepfakes may simply not be worth the effort.

    This seems an incredible claim for two reasons:
    1. “the effort” is a moving target. The cost, in money and time and skill, to do ANYTHING in this field is PLUMMETING all the time. It’s not worth the effort today to spend a month and $50,000 on hardware and employing two hundred special effects artists. But what about next summer, when two teenagers with laptops they bought from Costco can follow a Youtube tutorial and achieve a comparable result?
    2. it’s not “worth the effort” to make a political point? Does nobody remember the damaging “47% of the US population are freeloaders” talk that Mitt Romney was caught on camera saying to a private audience of millionaires -- the release of which could be argued to have contributed to him losing an election? Does nobody remember that catastrophic “basket of deplorables” speech the criminally complacent Hillary Clinton gave IN PUBLIC, an error of judgement that arguably contributed to her losing the election? This stuff matters. If Clinton could be kept out of the White House by things she was actually fucking stupid enough to say out loud in front of cameras, imagine how much worse it would have gone for her if Trump’s team had been able to produce Romney-style footage of her confiding in her supporters that… well, fill in an opinion she could have credibly been expected to hold but conceivably would have been uncharacteristically able to stop herself from blurting out in public because even she knew it wasn’t something the voters would want to hear her say.

    Sure, right now the baby steps of harming your opponent look crude and amateurish, but right now we’re at Willis O’Brien moving Kong around level -- MAYBE Ray Harryhausen messing about with clay models in his basement. Before very long we’re going to be at Stan Winston building a 25ft animatronic and ILM backing that up with a sophisticated computer model, just to pick a THIRTY YEAR OLD example of a special effect that, to me, still holds up well even now.

    It appears that people are better at not being influenced by deepfakes than we thought.

    For now.

  3. ardipithecus says

    How many people does a deepfake have to fool to be effective? How many swing voters in a swing state would need to have their voting preference altered to affect the election there?

    “Doesn’t fool most people” is not a good argument. A deepfake only needs to influence a very small number, and each repetition of the messaging can influence a few more. Repeated often enough…

  4. seachange says

    I’m with #4 sonofrojblake here. The article is deeply self-contradictory.
    I think the author’s purpose is to demonstrate how snooty and tragically over-educated they are?

  5. John Morales says

    Since deepfakes exist, it has become possible to claim actual real video is a deepfake. So, another source of plausible deniability.

  6. Holms says

    In the November 20, 2023 issue of The New Yorker, Daniel Immerwahr reviews some recent books that look at the dangers posed by deepfakes and concludes that the fears may be overblown, and that even when deepfakes are explicitly political, most of them are used for parody and other humorous purposes, not to convince us that we are watching the real thing

    Is that supposed to be reassuring? How?? It says deepfakes aren’t a problem because… they haven’t been a problem yet. Are we to simply trust that because ‘most’ of the fakery so far has not been an attempt to deceive, future attempts will also be equally rosy? Silly.

  7. says

    One researcher who was tasked with finding techniques to identify doctored images asked his students to go out and find examples of manipulated photos on the internet. They found plenty, but instead of sophisticated attempts at deception, almost all of them were memes.

    So, the ones they found were obvious. How about the ones they never spotted because they were too sophisticated to notice? How many of those exist? Where would you even go to look for them?

    I have to agree with #8 above: You can’t judge the danger when nobody has tried yet. Once we get our first genuine attempt to manipulate an election through deepfakes, then we’ll see where we’re at.

  8. sonofrojblake says

    This line is still baking my noodle:

    We ask where it came from, check with others, and say things like, “If Gal Gadot had actually made pornography, I would have heard about it.”

    We ask where it came from -- answer: the internet. That’s where EVERYTHING comes from, right? Do you need a more comprehensive answer, really? Specifically -- do you care? How much does Gal Gadot’s reputation matter to you anyway? (I appreciate this answer would be different if you were, e.g. her husband, her agent, or a casting director.)

    We check with others -- hey, did Gal Gadot do a porn? Possible answers:
    -- “Who?”
    -- “I don’t know.” (as in -- literally all I know about her is that she played Wonder Woman -- maybe she did porn before that?)
    -- “I don’t care.”
    -- “I guess?”
    -- “Yeah, I’ve seen it, she’s hot” etc. (never underestimate the credulity of the public)
    -- “Probably not? I’ve seen a couple deepfakes though. One looks like that scene in “Game of Death” where they literally taped a cardboard cutout of Bruce Lee’s face to the front of a body-double’s head, but there’s one that really legit looks like it’s her. Y’know, apart from all the tattoos.”
    -- “Probably. The woman clearly has absolutely no shame and no self-awareness. Did you SEE that “Imagine” video thing she did during Covid? Fuuuuuuck.”

    We say things like “I would have heard about it.” Hey dumbass -- this video here, that shows her doing porn -- THAT IS YOU HEARING ABOUT IT. You’re kidding yourself if you think people are going to shift themselves to do research.

    I do wonder if deepfakes will affect how slut-shaming works… or if it can even be said to work any more. Kim Kardashian is, as far as I know, famous more or less solely because of a sex tape -- “shame” didn’t seem to affect her. I mean, she’s done stuff since, but that was the reason people talked about her at all in the first place. Pamela Anderson suffered from the release of a tape she did -- shame really did seem to affect her. Both of these were media events because they were real. But consider a world where there are actual or potential sex tapes of everyone and anyone -- just shrug. It’s a thing that happens. It can have no effect on your reputation, surely, any more than there being a cartoon of you on a website? I honestly don’t know, and as a man I can’t really comment, because the effect of having a sex tape of me out there would be societally different than for a woman. I’d be interested to hear a female perspective on it.

    Crudely, though, which would you prefer, if you had a career as a female actor whose appeal rests in part on your physical attractiveness:
    1. deepfakes are common, and there are hundreds of you doing the rounds
    2. deepfakes are common, and while there are hundreds doing the rounds of your similarly attractive co-star in your last movie, a search for your name turns up nothing.

    We live in a strange, strange world.

    And all of this is a sideshow, ultimately, because deepfakes of actors doing things they never did is TRIVIA (unless you’re them). Deepfakes of politicians or similar public figures is what actually matters, and IMO absolutely has the potential to sway elections. It seems astonishingly complacent to posit otherwise.

    Consider the consequences if, the day before a US Presidential election (so too late to do any fact checking), a video went viral on TikTok showing the incumbent in a conference room with some other identifiable bigwigs, discussing how they’re going to enact what will effectively be a federal ban on guns -- a script where they talk about how they’ll get round the 2nd amendment, entreaties to keep it secret until after the election etc. Or if you like, a video of the challenger talking about an explicit federal ban on abortion under all circumstances, no exceptions. Policies, in other words, that while on their face extreme, could be credibly something they’d talk about. Timing would be everything to make it blow up before the corrections could get out… but tell me you don’t think that could happen. Tell me you don’t think something similar is being plotted right now.

  9. Jörg says

    Meanwhile, standard deception is in full swing.
    Judd Legum: “Kushner’s Mexican connection”:

    “… Univision will play an important role in how Latino voters inform themselves about the 2024 presidential election. About half of the country’s 60 million Latinos get news from Univision. And Latino voters have emerged as a key swing constituency. Trump won about 28% of the Latino vote in 2016 and 38% in 2020. Some polls show him on pace to exceed that percentage in 2024. …

    In 2021, Univision merged with Televisa, a Mexican media company. …

    The co-CEO of TelevisaUnivision Mexico is Bernardo Gómez, a close associate of Trump’s son-in-law, Jared Kushner. …”

  10. says

    I admit I am concerned that some of my early renders of uparmored pope, and Margaret Thatcher fighting zombies, might be mistaken for reality.

    Joking aside, all AI fakes will do is eventually increase skepticism, which is a good thing. People need to learn not to just believe anything they hear or see.

  11. lanir says

    I think some of these points sound rather naive. All you have to do to realize that deepfakes could have a significant effect is look at other things that have had an effect and how they work. Swiftboating for example, or Comey’s late contribution to Trump getting elected president. Getting a narrative out there can be quite effective.

    Another way to get a handle on the likely effectiveness of deepfakes is to consider them in terms of marketing. Now you have to take this stuff with a grain of salt. Marketing truisms are at least partly tuned to sell more marketing obviously. But if you think about it, people have known that advertisers are lying to them for generations. It hasn’t stopped buyers from purchasing advertising and it hasn’t stopped consumers from falling for the tricks in advertising, even if they know about those tricks. This is probably a good starting place for visualizing how consumers will regard deepfakes in the future.

    Creating deepfakes is only going to get easier as time goes on. The argument at the end that the cost vs reward comparison isn’t in their favor is a time-sensitive take on the matter. It will expire at some point. And it will expire at different times for different organizations and different audiences. Governments of wealthy nations that can afford to invest in early quantum computing efforts might find it expires a lot sooner for them than anyone else. The CIA faking a “let them eat cake” moment to hurry along regime change? That might happen any day now, with or without quantum computing. A fake tv spot where a celebrity endorses some random product? Not happening anytime soon because there’s not enough money in it. Real celebrities are and will remain cheaper.

    I won’t dig into it, but the backlash when people realize your thing is a fake is also part of the cost vs reward analysis. The first consequential deepfakes that are exposed will generate the largest backlash, but I think that will quickly diminish. When companies advertise how “green” they are and are later exposed as big polluters, it comes at a cost. But overall I think it’s sadly likely to be a net positive for them. The later exposure simply doesn’t hurt them enough, because we’re all used to advertisements lying to us.

  12. JM says

    @16 lanir: There is another point on the cost that will affect things also. If you look at the fake right-wing stuff that gets passed around, a good portion started as jokes or memes. Somebody took them seriously or maliciously copied them and started passing them around right-wing circles.
    The thing about this is that there are more people spending more effort making jokes than what the right wing can usually muster for fakes.

  13. says

    …all AI fakes will do is eventually increase skepticism, which is a good thing. People need to learn not to just believe anything they hear or see.

    Skepticism doesn’t do anyone any good if no one has, or believes they have, any reliable sources of information to turn to for research, fact-checking or verification. If everyone’s learned not to believe what they hear or see, they will simply give up finding the truth about anything, stop asking questions they don’t think will ever be answered, and believe, by default, pretty much what they’re used to hearing, from whoever they’re used to hearing it from. And for a lot of Americans, that will mean just passively believing whatever they hear on Fox, without really thinking, and possibly without even caring whether it’s true, since they have no way to verify it anyway. “People need to learn not to just believe anything they hear or see” is something I’ve heard said by con-artists and robber-barons when they’re blaming their marks for falling for their BS. It’s just another way of “teaching” people to be helpless, cynical and paralyzed.

  14. Jörg says

    Re #18, sorry; to state more precisely, fake Chancellor Scholz in the video declares the German government’s intention to apply for the outlawing of the AfD at the Federal Constitutional Court on June 2, 2024.

  15. John Morales says

    RB, fair point.

    Skepticism doesn’t do anyone any good if no one has, or believes they have, any reliable sources of information to turn to for research, fact-checking or verification.

    Not in its crudest form, no. Obviously, it’s necessary, but not sufficient.

    Add in Ockham’s Razor, add in other heuristics, add in multiple independent sources, add in consilience, and modelling, and so forth.

    (PRATTs are right out!)

  16. lanir says

    @JM: Good point. Right wing jokes are kind of a large target so I’m not certain I’m reading it quite the way you mean it. But I do think that portraying it as a joke is an effort to lessen backlash. It lets people who support your views stand with you without seeming to take on the burden of whatever awful things you did. They can just act as if they were in on the joke and you’re being a humorless grouch.

  17. Dunc says

    Raging Bee, @ #19:

    If everyone’s learned not to believe what they hear or see, they will simply give up finding the truth about anything, stop asking questions they don’t think will ever be answered, and believe, by default, pretty much what they’re used to hearing, from whoever they’re used to hearing it from.

    So, pretty much exactly as things are now then?

    And for a lot of Americans, that will mean just passively believing whatever they hear on Fox, without really thinking, and possibly without even caring whether it’s true, since they have no way to verify it anyway.

    Actually, you can strike the “pretty much” bit…

  18. Holms says

    With perfect timing, a story from the (Australian) ABC program Media Watch:
    https://www.youtube.com/watch?v=ziVHNDFxtno&t=607s
    Three very wealthy Australians were impersonated by AI, with old interview footage repurposed with new lip movements and new words put in their voices. The purpose? A scam investment.

    Have a look at the footage before reading my spoiler (rot13) below and see if you can tell where the fakery begins in the interview.

    Spoiler: gur jubyr vagreivrj vf snxr.
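
    (If you’d rather not work the rot13 out by hand: it just shifts each letter thirteen places, so applying it twice returns the original text, and Python’s standard codecs module ships a rot13 codec. A minimal sketch, using the spoiler string above:)

        import codecs

        # rot13 maps each letter 13 places along the 26-letter alphabet,
        # so the same transformation both encodes and decodes
        spoiler = "gur jubyr vagreivrj vf snxr."
        print(codecs.decode(spoiler, "rot13"))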

  19. Pierce R. Butler says

    Associated Press weighs in: Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead

    Among images of the bombed out homes and ravaged streets of Gaza, some stood out for the utter horror: Bloodied, abandoned infants.

    Viewed millions of times online since the war began, these images are deepfakes created using artificial intelligence. If you look closely you can see clues: fingers that curl oddly, or eyes that shimmer with an unnatural light — all telltale signs of digital deception.

    The outrage the images were created to provoke, however, is all too real.

  20. KG says

    A review of 15,000 deepfake videos found that 96% of them were pornographic, many of them just putting the faces of famous female actors onto the bodies of performers in sex videos.

    Oh, well, that’s OK then. I mean, violating the rights of “famous female actors” doesn’t matter, does it? If they didn’t want to be deepfaked into porn, they shouldn’t have gone into acting and been successful, should they? Deepfakes are also being used by criminals who sample individuals’ voices from social media and then create fakes of them begging their families or friends to pay a ransom, and to fool people into online “relationships” that end up costing them both large amounts of money and psychological damage in so-called “romance fraud”.
