Amanda Hess at Slate points out what a terrible, non-existent job Twitter does of preventing users from harassing people.
When CNBC invited Twitter users to ask questions of Twitter CEO Dick Costolo last month, thousands of people chimed in with queries like, “Why is reporting spam easy, but reporting death and rape threats hard?” and “Why are rape threats not a violation of your ToS?” According to CNBC, more than 28 percent of the 8,464 questions submitted to the network concerned harassment and abuse on Twitter. But when Costolo appeared on CNBC’s Closing Bell, he didn’t address the problem of online threats.
Sure enough, that sounds exactly like Twitter. It never does address the problem of online threats.
The company’s typical response to complaints about abusive and harassing behavior on Twitter is to advise users to fend for themselves. The network tells abused individuals to shut up (“abusive users often lose interest once they realize that you will not respond”), unfollow, block, and—in extreme cases—get off Twitter, pick up the phone, and call the police. Twitter opts to ban abusive users from its network only when they issue “direct, specific threats of violence against others.” That’s a criminal standard stricter than the code you’d encounter at any workplace, school campus, or neighborhood bar.
And the result is that Twitter is a playground for people who enjoy harassing others.
What this approach fails to recognize is that online harassment is a social problem (one that disproportionately affects the same folks who are marginalized offline, like minority groups, LGBT people, and women), and making the Internet a safe and equitable place to communicate requires a social solution.
But of course their goal isn’t to make the Internet a safe and equitable place to communicate. It’s to get as many people as possible using Twitter as much as possible. Obsessive harassers are great for that.
She talks about the Blockbot and other blocking apps, and points out the limitations.
But without Twitter’s cooperation, these developers are still focusing on selected users instead of addressing the problem on a site-wide level. Sharing my block list with my followers might alert a few people to a few bad apples, but all that will accomplish is offering a handful of people the option to block some vile tweets from view. This is, ultimately, in service of Twitter’s preferred solution—that users ignore abuse, pretend stalkers don’t exist, avert their eyes from harassment, and don’t bother Twitter HQ.
These apps won’t actually inspire Twitter to shut down the serial abusers who use their Twitter accounts to harass and threaten women. They won’t help attract serious legal attention to their crimes. And they won’t compel Twitter to instruct its brilliant developers to imagine new sitewide solutions for the problem, or else lend its considerable resources toward educating government officials and law enforcement officers about the abuses its users are suffering on its network. Right now, Twitter doesn’t even have the basics down: University of Maryland law professor Danielle Citron, writing about a recent lawsuit filed against Facebook for ignoring revenge porn on its site, suggests that social networks can begin to serve harassed users by hiring more employees to sift through complaints instead of assigning the task to robots; prioritizing reports of threats over reports of spam; notifying users of the outcome of their complaints; and—above all—actually communicating with users on this issue.