“What are the roots that clutch, what branches grow
Out of this stony rubbish? Son of man,
You cannot say, or guess, for you know only
A heap of broken images, where the sun beats,
And the dead tree gives no shelter, the cricket no relief,
And the dry stone no sound of water.”—T.S. Eliot, The Waste Land
I logged onto Facebook for the first time in a while, and I was shocked by how bad it’s become.
My feed was a torrent of spammy ads and irrelevant “suggested” posts from groups I’m not in. I had to scroll and scroll and scroll to find even a single post from one of my actual friends – i.e., the people I chose to connect with, the people whose presence is theoretically the reason for me to use this platform. I almost gave up before finding one.
On subsequent logins, the content-to-ad ratio seemed better, but only slightly. The site still felt like a wasteland, populated mostly by ads and spam, and only secondarily by human beings. I had to actively fight against it, dismissing or scrolling past a blizzard of annoying algorithmic junk, to see the content I wanted to see.
Using Facebook now is like wandering through an abandoned casino. The people are all gone; the gaming tables are collecting dust. But, somehow, the electricity is still on, so the signage is a tawdry blur of neon and scrolling marquees and chasing lights, calling out to you to play games whose dealers have long since departed. It’s the ghost of human interaction, lingering after the humans themselves have gone away.
What happened?
For one thing, the enshittification cycle is complete. Facebook’s algorithm has become hostile to its users, serving them more and more of the content advertisers pay to push, rather than the content they want to see. (I stopped posting my own content to Facebook a while ago, when it became clear that the site was suppressing posts with external links. If I shared an article I wrote on my own site, no one would see it, not even the friends who chose to follow me.) Under constant pressure for higher profit, the algorithm gets more and more aggressive about pushing ads, until the noise drowns out the signal.
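To make that dynamic concrete, here’s a toy sketch in Python (not Facebook’s actual ranking code, which is proprietary; just a hypothetical illustration of how a single revenue-pressure knob can bury the signal under the noise):

```python
# A toy model of an ad-saturated feed ranker. Purely illustrative --
# Facebook's real ranking system is proprietary -- but it shows how
# one revenue knob can crowd organic posts out of the top of a feed.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    is_ad: bool
    relevance: float  # how much the user actually wants this (0 to 1)

def rank_feed(posts: list[Post], ad_boost: float) -> list[Post]:
    # Sort by relevance, plus a flat bonus for paid content. As revenue
    # pressure pushes ad_boost up, low-relevance ads outrank friends' posts.
    return sorted(
        posts,
        key=lambda p: p.relevance + (ad_boost if p.is_ad else 0.0),
        reverse=True,
    )

feed = [
    Post("a friend", is_ad=False, relevance=0.9),
    Post("spammy advertiser", is_ad=True, relevance=0.1),
    Post("suggested group you never joined", is_ad=True, relevance=0.2),
]

for p in rank_feed(feed, ad_boost=0.05):  # early days: the friend comes first
    print(p.author)
print("---")
for p in rank_feed(feed, ad_boost=0.9):   # enshittified: ads bury the friend
    print(p.author)
```

With ad_boost low, the friend’s post tops the feed; crank it up and the very same feed shows you nothing but ads before any human being.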
At the same time, they’ve given up on content moderation. Academic researchers and watchdogs who study social media have both noticed this:
Porn and nonconsensual imagery is easy to find on Facebook and Instagram. We have reported endlessly on the proliferation of paid advertisements for drugs, stolen credit cards, hacked accounts, and ads for electricians and roofers who appear to be soliciting potential customers with sex work. Its own verified influencers have their bodies regularly stolen by “AI influencers” in the service of promoting OnlyFans pages also full of stolen content.
…There are still people working on content moderation at Meta. But experts I spoke to who once had great insight into how Facebook makes its decisions say that they no longer know what is happening at the platform, and I’ve repeatedly found entire communities dedicated to posting porn, grotesque AI, spam, and scams operating openly on the platform.
Meta now at best inconsistently responds to our questions about these problems, and has declined repeated requests for on-the-record interviews for this and other investigations. Several of the professors who used to consult directly or indirectly with the company say they have not engaged with Meta in years. Some of the people I spoke to said that they are unsure whether their previous contacts still work at the company or, if they do, what they are doing there.
…Meta’s content moderation workforce, which it once talked endlessly about, is now rarely discussed publicly by the company (Accenture was at one point making $500 million a year from its Meta content moderation contract). Meta did not answer a series of detailed questions for this piece, including ones about its relationship with academia, its philosophical approach to content moderation, and what it thinks of AI spam and scams, or if there has been a shift in its overall content moderation strategy. It also declined a request to make anyone on its trust and safety teams available for an on-the-record interview.
It appears that Facebook decided that moderation was just too difficult to solve at scale – and more important, it’s an expense rather than a profit center – so they got rid of it. It’s a naive cost-cutting measure, and in the short term, it might’ve produced a small bump in the stock price. However, anyone who’s ever run a personal blog could’ve told you what happens next.
When you give up on moderation, you don’t get a flourishing garden of free speech and enlightened debate. Instead, the worst characters emerge from their slime pits, and when they find nothing to stop them, they take over the comment section. Any real discussion gets overrun by spam, abusive racist and sexist bile, and conspiracy blather. It’s like weeds taking over a garden. Eventually, people who put actual thought and effort into their contributions are driven away, and only the trolls remain. Facebook (and Twitter) are experiencing that on a broader scale.
And chatbot AI has made this problem far worse. It’s removed every barrier to entry for spammers, scammers, and astroturf influence-buyers. With no need for human beings to be involved, it’s trivial for them to churn out garbage content on a colossal scale. Whatever genuine human conversation remains is swamped by clickbait factories grasping at monetization or trying to manufacture fake consensus. The end state is a wasteland devoid of humanity. It’s a zombie business, staggering along on inertia until someone realizes that there are no people left, just endless hordes of bots advertising to bots.
It’s not a universal law that this has to happen to every online community. It’s the intersection of social media with capitalism that’s to blame. The profit incentive demands that social media companies push as much junk content as possible, in order to squeeze the most money out of every user they have. It compels them to do only the bare minimum for moderation and safety – or less than the minimum, if they can get away with it. (See also: Elon Musk firing Twitter’s entire moderation team.)
When social media is run for profit, overseen by algorithms that decide what users get shown, this is almost inevitable. That’s why I favor non-profit, non-algorithmic social media like Mastodon (follow me there!), where users are the customers rather than the product, and where you see only the content you choose to see. It’s not free of all these problems – there are still spammers and abusive jerks, as there always have been in every collection of humans – but they tend to get weeded out. More important, the network itself isn’t promoting them.
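For contrast, the non-algorithmic model is almost trivially simple. Here’s a minimal sketch of a chronological, follow-only timeline (the types and field names are my own invention for illustration, not Mastodon’s actual data model):

```python
# A minimal sketch of a chronological, follow-only timeline. The
# field names are invented for illustration, not Mastodon's actual
# data model. There is no relevance score and no ad slot: the feed
# is just your follows, newest first.
from dataclasses import dataclass

@dataclass
class Status:
    author: str
    posted_at: float  # Unix timestamp

def home_timeline(statuses: list[Status], following: set[str]) -> list[Status]:
    # Everything from accounts you chose to follow, in reverse
    # chronological order. Nothing boosted, demoted, or injected.
    return sorted(
        (s for s in statuses if s.author in following),
        key=lambda s: s.posted_at,
        reverse=True,
    )
```

There’s no knob to turn: the feed can’t be quietly re-weighted toward whatever pays best.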
Pierce R. Butler says
I have long longed to see Mark Zuckerberg sitting by the side of the highway, waving a handmade cardboard sign saying,
“Will sell your
most personal secrets
for food.”
– and to drive by with one finger raised.
Now I want to see, behind him, Musk and Trump in a bumfight.
LykeX says
Twitter is rapidly going down that road, too. There was a time when I could check Twitter and get suggestions about science and relevant news. Now, I can guarantee that my front page always has an Elon tweet at the top, followed by “promoted content” and a (not so) healthy chunk of far-right BS.
Adam Lee says
It’s always a great time to quit Twitter.
Katydid says
I’m wondering if it all devolves into chaos? Usenet’s archives were bought out by Google, and the network went mostly defunct and unusable. AOL had its chat rooms in the early 1990s, but the trolls invaded and people were getting doxxed and abused. Then came Yahoo Groups and then Facebook, and on and on it goes.
I never got involved with Facebook, but I was surrounded by those who did, and by 2010, everyone I knew who used it was busy fighting with everyone else I knew who used it. Person A didn’t like Person B’s post fast enough, or Person C took offense to something Person D said. Great Unfriending events happened regularly; friends, family members, it didn’t matter. Identities and contact lists were stolen. It just seemed like such a bad place to spend time.
I’m not surprised you think it’s become a wasteland of increasingly-desperate ads and sleaze.
JM says
@2 LykeX: Twitter/X isn’t doing it because it’s more profitable; it’s because Musk bought it, and he’s pretty right-wing himself. Since he bought it, Twitter/X has lost money and users. He bulk-unbanned the whole right wing without considering the consequences, then had to hurriedly re-ban some of the real right-wing nuts.
@3 Katydid: Any large active group will have some issues. It’s more a question of how fast they can be blocked/removed without the group’s administration getting too heavy-handed.
Reddit is an example of an organization that sometimes has the reverse problem: too much administration. Most of its moderators are volunteers, and in some subreddits they’re excessively zealous.
dangerousbeans says
Moderation is very much a hard problem. I help moderate a Mastodon instance, and while 90% of it is just obvious shit, the remaining 10% is the problem. Is someone dog-whistling, or do they just not know the right terminology? Is an argument racist, or just racist-adjacent?
And good moderation results in the dickheads leaving, which negatively affects user numbers. When the main metric is active user accounts, shitty right-wing troll bots look like success.
IMO, this is the same problem capitalism always has with maintaining infrastructure.
Adam Lee says
What’s your Mastodon instance? I’ll check it out – I’m daylightatheism@universeodon.com.
dangerousbeans says
I moderate for aus.social. Basically the same username.
Dunc says
This is pretty close to the nub of the problem, I think… The thing is, all of the objective, easy metrics are bad and lead to bad outcomes, while attempting to define and measure “good” is extremely difficult, if not impossible.
Personally, I’d settle for just a linear feed of people / groups / whatever that I’ve actually subscribed to, but that doesn’t really work very well for the ad-supported funding model.
My dream is that someone like the EU enforces open interoperability standards, so that you’re not tied to any particular service, but I’m not holding my breath for that.
Marcus Ranum says
Facebook decided that moderation was just too difficult to solve at scale
If I recall correctly, they used to pay Accenture something like $12 million a month to moderate content. My bet is that Accenture would have developed any technology used to scale that up in-house and not given it to Facebook, because capitalism. So, if Facebook eventually got tired of paying so much, they probably had to walk away from content moderation.
But, you know what that means: there will be an AI content moderation system and hilarity will ensue.
Marcus Ranum says
Pierce R. Butler@#1:
I have long longed to see Mark Zuckerberg sitting by the side of the highway, waving a handmade cardboard sign saying,
AI can do that. 🙂 They’ll even toss in some extra fingers for him to really grip that sign well.
cubist says
As I’ve often said: In the absence of active moderation, any online venue will degenerate into a troll-infested wasteland. The only question is how long the descent into total wasteland-ness is going to take.
Dunc says
The thing with modern platform enshittification is, it’s not just trolls. It’s the platform itself. I can live with FB groups being infested with trolls and idiots, but what really pisses me off is that FB itself refuses to show me the stuff I actually signed up for, while deluging me with other crap that I very definitely do not want.