Being an old geezer, I don’t use Facebook even though I have an account. But I have been seeing increasing reports of how it has become a pernicious influence, and not merely because it encourages wasting time. Facebook has become a haven for spreading false information and for generating hate and divisiveness. What makes it worse is that Facebook is not neutral when it comes to monitoring hate speech and taking steps to combat it.
The investigative website ProPublica has issued a report by Julia Angwin and Hannes Grassegger which says that Facebook is far quicker to censor hate speech targeting white men than hate speech targeting black children. It achieves this through the algorithms the company has developed that alert its employees to posts that may need to be deleted.
In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared U.S. Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”
Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.
But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.
“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed and her Facebook account was disabled for seven days.
A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook’s censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins’ incitement to violence passed muster because it targeted a specific sub-group of Muslims — those that are “radicalized” — while Delgado’s post was deleted for attacking whites in general.
While Facebook was credited during the 2010-2011 “Arab Spring” with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.
One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” The company’s workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea and Western Sahara.
One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.
The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at “protected categories”—based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about “subsets” of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.
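The rule as ProPublica describes it can be sketched as a small function. This is purely illustrative, based only on the report's description; the function and category names are my own, not Facebook's actual code:

```python
# Illustrative sketch of the rule ProPublica describes: a group is shielded
# from hate speech only if EVERY trait defining it is a protected category.
# Names here are hypothetical, invented for illustration.

PROTECTED_CATEGORIES = {
    "race", "sex", "gender identity", "religious affiliation",
    "national origin", "ethnicity", "sexual orientation",
    "serious disability/disease",
}

def is_protected_group(traits):
    """Return True only if all of the group's defining traits
    fall within protected categories."""
    return all(trait in PROTECTED_CATEGORIES for trait in traits)

# White men: race + sex, both protected, so the group is protected.
print(is_protected_group({"race", "sex"}))        # True
# Black children: race + age, and age is not protected, so unprotected.
print(is_protected_group({"race", "age"}))        # False
# Female drivers: sex + occupation, likewise a mere "subset".
print(is_protected_group({"sex", "occupation"}))  # False
```

This makes the asymmetry mechanical rather than deliberate in each case: any unprotected trait, however incidental, drops the whole group out of protection.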
A separate longitudinal study, published in the Harvard Business Review, argues that the more people use Facebook, the worse they feel.
Overall, our results showed that, while real-world social networks were positively associated with overall well-being, the use of Facebook was negatively associated with overall well-being. These results were particularly strong for mental health; most measures of Facebook use in one year predicted a decrease in mental health in a later year. We found consistently that both liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.
Our models included measures of real-world networks and adjusted for baseline Facebook use. When we accounted for a person’s level of initial well-being, initial real-world networks, and initial level of Facebook use, increased use of Facebook was still associated with a likelihood of diminished future well-being. This provides some evidence that the association between Facebook use and compromised well-being is a dynamic process.
Kevin Drum says that the negatives of Facebook outweigh the positives.
The casually brutal insults almost certainly outweigh the praise for a lot of people. It instills a sense of always needing to keep up with things every minute of the day. It interferes with real-life relationships. It takes time away from more concentrated activities that are probably more rewarding in the long run.
The more I read stuff like this, the more relieved I am that I never got hooked on Facebook, or Twitter for that matter.