It looks like Reels, Meta’s attempt to match the popularity of TikTok, had a malfunction in its algorithm that resulted in some people having their feeds flooded with ultra-violent imagery.
Mark Zuckerberg’s Meta has apologised after Instagram users were subjected to a flood of violence, gore, animal abuse and dead bodies on their Reels feeds.
…One user on the subreddit wrote: “I just saw at least 10 people die on my reels.”
There were also references by users to a video of a man being crushed by an elephant. Others flagged footage of a man being dismembered by a helicopter and a video where “a guy put his face into boiling oil”. Several users posted videos of their Reels feeds dominated by “sensitive content” screens that are designed to shield users from graphic material.
A list of violent content on one user’s feed, published by the tech news site 404, included: a man being set on fire; a man shooting a cashier at point-blank range; videos from an account called “PeopleDeadDaily”; and a pig being beaten with a wrench. The user in question had a biking-related Instagram account, 404 Media reported.
The people reporting this did not seek out these images, but had them foisted on them. But the existence of such videos means that there is a market for them. What kind of person watches, let alone produces, these kinds of sickening images? I try not to be judgmental about what other people like and dislike, and I think of myself as fairly tolerant of people’s proclivities even if they tend towards the outré. But that does not mean that I necessarily approve of them. If I learned that someone I knew enjoyed watching these kinds of things and actively sought them out, I would give that person a wide berth.
More than that: let your other friends know so they too can decide.
It’s complicated. I know people who love horror as a genre, including gory horror, and I promise they’re perfectly normal people who wouldn’t harm a fly. But they know it’s actors and special effects. They would never take pleasure in real suffering. Actual “snuff” videos are different.
But if there wasn’t that desire to be scared and shocked, horror as a genre wouldn’t exist in the first place. If you know what I mean.
(Mano’s a special case as he likes his murders entirely bloodless 🙂 )
Could it have been that these people were somehow served the videos that had been algorithmically selected for moderation?
More to the point, where did that algorithm find those videos in the first place? On Meta pages that Fluckerberg never got around to policing?
And how does an algorithm malfunction that badly? Did someone type “killing videos” instead of “kitten videos” in some totally honest, understandable mistake?
People in the tech sector have been voicing concerns about explicit violence and other deranged crap floating around on the Internet for MANY years now. Weren’t there any sort of safeguards in place at Meta? Or did Fluckerberg take them all down to suck up to #KKKrasnov?
silentbob @#2,
I do not think these videos were made with actors. The impression I got was that these were real events.
@ #4: The problem is that content moderation just doesn’t scale very well. When you’ve got hundreds of millions of people uploading video 24/7, it only takes a tiny proportion of those to be uploading problematic content to overwhelm the moderators unless you have really, really large numbers of them.
Now, you can certainly make an argument that Meta should be employing more content moderators, and I’d also argue strongly in favour of treating them better (the stories of what content moderators have to endure are horrifying), but it’s still just a numbers game with the numbers tilted very, very strongly against successful moderation.
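To put rough numbers on the problem (every figure below is an illustrative guess, not anything Meta has published):

```python
# Back-of-envelope sketch of the moderation "numbers game".
# All figures are illustrative assumptions, not real Meta numbers.

uploads_per_day = 500_000_000      # hypothetical daily video uploads
reviews_per_moderator_day = 400    # assume roughly one video per minute

# If humans had to screen everything before it went live:
moderators_needed = uploads_per_day / reviews_per_moderator_day
print(f"{moderators_needed:,.0f} moderators to review every upload")
# -> 1,250,000 moderators

# So an algorithm has to filter first, and even a tiny problematic
# fraction is still a huge absolute number for humans to handle:
bad_fraction = 0.0001              # assume 0.01% of uploads are abusive
print(f"{uploads_per_day * bad_fraction:,.0f} abusive uploads per day")
# -> 50,000 abusive videos to catch, every single day
```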
Nobody has a good solution to this. It seems likely that there is no good solution.
See Masnick’s Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well and Content moderation, AI, and the question of scale for more detail on the problem.
With significant numbers of people reporting things like “I just saw at least 10 people die on my reels”, it’s clear that the problem is not one of under-resourced moderation being overwhelmed by sheer numbers, even from a tiny proportion of many millions of videos.
Either the proportion of snuff videos submitted suddenly became huge (in which case millions of people would have seen the same snuff parade), or the content-tagging algorithm is recognising a snuff video yet tagging it as something else entirely, so that only a smallish number of users have it served up within their algorithmic-interests feed.
The algorithm is supposed to pre-screen out the abusive stuff even before a human is tasked to review it, which only happens after lots of reports. And the algorithm failed.
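Roughly, the flow being described would look something like the sketch below; the thresholds, labels and report cut-off are entirely made up, but the point is that a wrong classifier score short-circuits every later safeguard:

```python
# Hypothetical sketch of a pre-screening pipeline like the one described.
# Thresholds and the report cut-off are invented for illustration only.

REMOVE_THRESHOLD = 0.95   # assumed: auto-remove above this score
REVIEW_THRESHOLD = 0.60   # assumed: queue for human review above this
REPORT_THRESHOLD = 10     # assumed: user reports that force a human look

def route(classifier_score: float, user_reports: int) -> str:
    """Decide what happens to a video given its abuse-classifier score."""
    if classifier_score >= REMOVE_THRESHOLD:
        return "auto-removed"
    if classifier_score >= REVIEW_THRESHOLD or user_reports >= REPORT_THRESHOLD:
        return "human review queue"
    # The failure mode in question: a snuff video scored or tagged
    # wrongly lands here, and humans only see it after lots of reports.
    return "served to feeds"

print(route(classifier_score=0.12, user_reports=0))   # served to feeds
print(route(classifier_score=0.12, user_reports=25))  # human review queue
```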
Meta changed its algorithm and didn’t test it. Meta is responsible for the change its owners and managers asked for, but they will of course hide behind their algorithm as an excuse.
Back to Mano’s topic, I think it’s fair to say that liking gore (consent implied) is categorically different to creating or enjoying a snuff video (consent explicitly excluded). You can’t trust someone like that to have any respect for you or anyone else you might conceivably care about.
This third comment is specifically to remove any temptation for me to reply to any grey-area reply guy. I’m with Mano’s OP. And I’m three and out.
OT -- I am posting this as a palate cleanser. You need brighter news!
“James Harrison (blood donor)” -- Wikipedia
https://en.m.wikipedia.org/wiki/James_Harrison_(blood_donor)
-His blood contained antibodies that saved children with Rh incompatibility. His blood donations are estimated to have saved the lives of 2.4 million babies. He passed away peacefully on February 17th at 88. He had a real-life superpower. No god was involved.
Dunc @6:
Don’t need stories about the moderators. Just a brief exposure, from life experience and reading, to what some humans are capable of. Whatever horrible thing one can imagine, some sociopath has done and filmed, and many more sociopaths are happy to purchase for their viewing pleasure. I wouldn’t last two minutes as a moderator.
Shit, even watching a (very good) dramatic film about the subject leaves me in a place I really don’t care for. And that’s without any explicit violence.
I would never make it as a moderator either. And like I told my children, if you search for disturbing things on the internet, you’ll find them. There are things you just cannot unsee. There is an image I saw taken from the Syrian war that will stay with me forever. I would describe it, only you can’t unthink things either, and mere thoughts can be traumatic too. I don’t see any benefit in sharing it.
-Might it be possible to recruit a ‘pro-social psychopath’, that is, a basically *good* person with a high tolerance for what would make other people very upset? What we are looking for is someone whose emotional response is like a flat line, yet who simultaneously has enough empathy to understand what things most people find distasteful.
I do not know if such individuals are common; I suspect not.
birger @12: I think ‘psychopath’ and ‘empathy’ are mutually exclusive. What might be possible is a ‘trained psychopath’. Like Dexter?
birger: Maybe a sociopath who is okay with seeing all manner of evil shit, but who also knows the rest of us don’t like it, and is willing to do the job just for the paycheck?
There are probably nowhere near enough people with whatever psychological peculiarities might make that job tolerable, and those who do exist are making far more money in corporate management. Remember, in the current model, this is a badly paid job done by poor people in the Global South.
EigenSprocketUK, @ #7: my comment wasn’t intended as an explanation of how this stuff is suddenly ending up in people’s feeds -- that’s obviously the result of some algorithmic cock-up, as you say. It’s an explanation of the difficulty of excluding it from the platform altogether. Again, scale is the enemy of any algorithmic solution, because when you’re dealing with very large volumes, even small error rates result in large absolute numbers of errors.
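A quick illustration, with made-up numbers, of how small error rates turn into large absolute failures:

```python
# Sketch of "small error rates, large absolute numbers".
# The volume, base rate and error rate are all illustrative assumptions.

uploads_per_day = 500_000_000   # hypothetical daily video uploads
bad_fraction = 0.0001           # assume 0.01% of uploads are abusive
false_negative_rate = 0.001     # assume the filter misses 0.1% of them

missed = uploads_per_day * bad_fraction * false_negative_rate
print(f"{missed:,.0f} abusive videos slip through per day")
# -> 50 a day, even with a filter that catches 99.9% of abusive
#    uploads -- and each miss can then be amplified by the recommender.
```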