
“What are the roots that clutch, what branches grow
Out of this stony rubbish? Son of man,
You cannot say, or guess, for you know only
A heap of broken images, where the sun beats,
And the dead tree gives no shelter, the cricket no relief,
And the dry stone no sound of water.”—T.S. Eliot, The Waste Land
I logged onto Facebook for the first time in a while, and I was shocked by how bad it’s become.
My feed was a torrent of spammy ads and irrelevant “suggested” posts from groups I’m not in. I had to scroll and scroll and scroll to find even a single post from one of my actual friends – i.e., the people I chose to connect with, the people whose presence is theoretically the reason for me to use this platform. I almost gave up before finding one.
On subsequent logins, the content-to-ad ratio seemed better, but only slightly. The site still felt like a wasteland, populated mostly by ads and spam, and only secondarily by human beings. I had to actively fight against it, dismissing or scrolling past a blizzard of annoying algorithmic junk, to see the content I wanted to see.
Using Facebook now is like wandering through an abandoned casino. The people are all gone; the gaming tables are collecting dust. But, somehow, the electricity is still on, so the signage is a tawdry blur of neon and scrolling marquees and chasing lights, calling out to you to play games whose dealers have long since departed. It’s the ghost of human interaction, lingering after the humans themselves have gone away.
What happened?
For one thing, the enshittification cycle is complete. Facebook’s algorithm has become hostile to its users, showing them more and more of the content advertisers want to show them, rather than the content they want to see. (I stopped posting my own content to Facebook a while ago, when it became clear that it was suppressing posts with external links. If I shared an article I wrote on my own site, no one would see it, not even the friends who chose to follow me.) Under constant pressure for higher profit, the algorithm gets more and more aggressive about pushing ads, until the noise drowns out the signal.
At the same time, they’ve given up on content moderation. Academic researchers and watchdogs who study social media have both noticed this:
Porn and nonconsensual imagery is easy to find on Facebook and Instagram. We have reported endlessly on the proliferation of paid advertisements for drugs, stolen credit cards, hacked accounts, and ads for electricians and roofers who appear to be soliciting potential customers with sex work. Its own verified influencers have their bodies regularly stolen by “AI influencers” in the service of promoting OnlyFans pages also full of stolen content.
…There are still people working on content moderation at Meta. But experts I spoke to who once had great insight into how Facebook makes its decisions say that they no longer know what is happening at the platform, and I’ve repeatedly found entire communities dedicated to posting porn, grotesque AI, spam, and scams operating openly on the platform.
Meta now at best inconsistently responds to our questions about these problems, and has declined repeated requests for on-the-record interviews for this and other investigations. Several of the professors who used to consult directly or indirectly with the company say they have not engaged with Meta in years. Some of the people I spoke to said that they are unsure whether their previous contacts still work at the company or, if they do, what they are doing there.
…Meta’s content moderation workforce, which it once talked endlessly about, is now rarely discussed publicly by the company (Accenture was at one point making $500 million a year from its Meta content moderation contract). Meta did not answer a series of detailed questions for this piece, including ones about its relationship with academia, its philosophical approach to content moderation, and what it thinks of AI spam and scams, or if there has been a shift in its overall content moderation strategy. It also declined a request to make anyone on its trust and safety teams available for an on-the-record interview.
It appears that Facebook decided that moderation was just too difficult to solve at scale – and, more important, that it was an expense rather than a profit center – so they got rid of it. It’s a naive cost-cutting measure, and in the short term, it might’ve produced a small bump in the stock price. However, anyone who’s ever run a personal blog could’ve told you what happens next.
When you give up on moderation, you don’t get a flourishing garden of free speech and enlightened debate. Instead, the worst characters emerge from their slime pits, and when they find nothing to stop them, they take over the comment section. Any real discussion gets overrun by spam, abusive racist and sexist bile, and conspiracy blather. It’s like weeds taking over a garden. Eventually, people who put actual thought and effort into their contributions are driven away, and only the trolls remain. Facebook (and Twitter) are experiencing that on a broader scale.
And chatbot AI has made this problem far worse. It’s knocked down all barriers to spammers, scammers, and astroturf influence-buyers. With no need for human beings in the loop, it’s trivial for them to churn out garbage content on a colossal scale. Whatever genuine human conversation remains is swamped by clickbait factories grasping at monetization or trying to manufacture fake consensus. The end state is a wasteland devoid of humanity. It’s a zombie business, staggering along on inertia until someone realizes that there are no people left, just endless hordes of bots advertising to bots.
It’s not a universal law that this has to happen to every community. It’s the intersection of social media with capitalism that’s to blame. The profit incentive demands that social media companies push as much junk content as possible, in order to squeeze the most money out of every user they have. It compels them to do only the bare minimum for moderation and safety – or less than the minimum, if they can get away with it. (See also: Elon Musk firing Twitter’s entire moderation team.)
When social media is run for profit, overseen by algorithms that decide what users get shown, this is almost inevitable. That’s why I favor non-profit, non-algorithmic social media like Mastodon (follow me there!), where users are the customers rather than the product, and where you see only the content you choose to see. It’s not free of all these problems – there are still spammers and abusive jerks, as there always have been in every collection of humans – but they tend to get weeded out. More important, the network itself isn’t promoting them.