Twitter: Fixed it for you

Dear Twitter,

I noticed you’re having a hard time balancing user concerns about harassment against user concerns about privacy. I further noticed you had a good idea for a change to a function that solves one class of issues, but that had side-effects that made another class of issues dramatically worse.

I think I have found a reasonable solution for your problem.

Do both.

Oh, and there are some more tweaks I can offer to help fix other outstanding problems, if you’ll listen.

To catch everyone up: Twitter recently changed their Block button into a Mute button. When a large segment of the Twitter population started pointing out the problems with that approach and/or saying they felt significantly less safe on the service, Twitter called an emergency after-hours meeting of their executives, rolled the change back, and issued an apology.

The specific change they attempted to make was turning the Block button’s functionality — which stops a specific user from interacting with your timeline in any way, disallowing built-in Retweets or Favorites, forcing them to unfollow you, and preventing them from following you again — into more of a Mute button, keeping those users from realizing that they’d been blocked. The idea was, there’s apparently a segment of the Twitter-using population for whom discovering they’ve been blocked triggers a sort of road rage that causes out-and-out retaliation. Mind you, that retaliation sometimes comes from something as innocent as unfollowing someone — I’ve had someone yell at me for unfollowing them when I was doing a housecleaning of people saying terrible things about me or my friends, to the point of absolutely laughable hyperbolic accusations of censorship for curating my digital experience. Not that this hyperbole is a foreign concept to folks around here.

This is what happened when The Block Bot was targeted by misinformed (or disinformed) Anonymous members who, despite Twitter explicitly saying that such blocks do not factor into their decision to ban abusive users, went after TBB and James Billingham (Ool0n) who created the service. The service was an easy scapegoat — despite there being very little correlation between having your name on the service and having your account blocked, Anonymous still blamed “Trollpocalypse” (an event involving literally dozens of trolls being banned! Surely that merits an apocalyptic name!) on them.

I understand where Twitter came from with this. The change had noble intentions. But good intentions do not excuse pushing the change without user feedback, and as a lot of people pointed out, the larger issue here is not retaliation but how easily harassers can get around blocks. This change meant that fixated individuals who follow people on Twitter could more easily and more readily sic their own followers on people, amplifying any harassment they were already doling out. It further meant that the recipients of such treatment had less control over who has that level of access to their public stream, short of going into Private mode — the Twitter equivalent of becoming a hermit, where you have to approve every follower and your tweets are removed from the public eye.

My solution to this problem is simple — make another state you can apply to others that works the way your new block button was going to work, and call that Mute. Explain to users that they now have the choice of allowing people to continue to read, retweet and otherwise interact with your account without you ever seeing any of it. Perhaps you could add a tiny grey note to your timeline to indicate that someone who has otherwise been muted has interacted with you — or make that a display option, in case you really don’t want to know when some particularly fixated individuals (like ElevatorGATE or his two-dozen-odd parody accounts) spend all day talking about you or trying to sic their followers on you.

Some more improvements can be made with minimal effort, as well. For instance, you would certainly benefit from offering end users more choice of who they share their tweets with. While a knock-off of Facebook’s Privacy Settings scheme is probably over-ambitious and an overengineering of the problem, a simple hybrid of private and public could be useful. You could very easily create a class of semi-private account, where tweets are protected, but followers are automatically added unless and until they’ve been explicitly refused — like having a private account whose “whitelist-only” scheme has been reverted to the normal “blacklist” scheme, only with a memory of whom you’ve decided has abused that privilege. You could also allow for a semi-public account whose tweets remain public, but can’t be interacted with directly unless you’re a follower. Or you could offer an a-la-carte selection of privacy settings, letting you mix and match these functionalities as you please, rather than giving only a locked-down Private and a wide-open Public option.

Another issue — part of why I still feel that the balance of power tips to the harassers, and a simple way to right some of that balance — is that when you’ve blocked someone, they can immediately switch to an alternate nickname and carry on harassing. They can open a new window, copy tweets, then paste them into their original account to retweet manually (something like the “analog hole” of recording DRM-protected music or videos). When someone has been blocked, Twitter must know the IP address of the blocked user, and they must be able to rapidly correlate a new account coming from that IP in order to create a map of all of a user’s (sometimes dozens of) sockpuppets or alternate accounts. When a user blocks another user, it is safe to assume that if a different account jumps onto the same IP and starts interacting with the original blocker, that interaction is probably also unwanted. Preventing that behaviour should be simple enough — you can’t tweet at someone who’s blocked an account at your IP for 24 hours, and attempts to do so would automatically flag the interaction as potentially abusive. You could take it a step further and keep blocked users from even seeing the blocker’s account while logged out or while using an Incognito browser window.
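To make the heuristic concrete, here is a minimal sketch of the correlation rule described above — block an account, remember the (blocker, IP) pair, and flag any interaction from that IP toward the same blocker within a 24-hour window. All the names here (`BlockCorrelator`, `BLOCK_WINDOW`, and so on) are invented for illustration; this is emphatically not Twitter’s actual implementation, just one way the proposal could work.

```python
from datetime import datetime, timedelta

# Hypothetical cooldown from the proposal: interactions from a blocked IP
# toward the blocker are flagged for 24 hours after the block.
BLOCK_WINDOW = timedelta(hours=24)

class BlockCorrelator:
    def __init__(self):
        # Maps (blocker, ip) -> time the block was recorded.
        self.blocked_ips = {}

    def record_block(self, blocker, blocked_ip, when):
        """Remember that `blocker` blocked an account seen at `blocked_ip`."""
        self.blocked_ips[(blocker, blocked_ip)] = when

    def is_suspect(self, target, sender_ip, when):
        """True if `sender_ip` was recently blocked by `target`, meaning a
        tweet from ANY account at that IP (a likely sockpuppet) should be
        auto-flagged as potentially abusive rather than delivered."""
        blocked_at = self.blocked_ips.get((target, sender_ip))
        return blocked_at is not None and when - blocked_at <= BLOCK_WINDOW

# Example: alice blocks a troll; a "new" account at the same IP tweets at her.
correlator = BlockCorrelator()
t0 = datetime(2013, 12, 13, 12, 0)
correlator.record_block("alice", "203.0.113.7", t0)
correlator.is_suspect("alice", "203.0.113.7", t0 + timedelta(hours=6))   # flagged
correlator.is_suspect("alice", "203.0.113.7", t0 + timedelta(hours=25))  # window expired
```

Note the design choice: this never bans an IP outright — it only raises a flag on one specific (IP, blocker) pairing, which is why the false-positive problem with shared IPs is far smaller than with IP-level bans.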

Twitter has in the past said they don’t ban people based on IP address, because those are very mutable, and therefore not a good identifier of a particular user’s identity. Doing IP-level blocks when a user is banned would certainly catch a lot more false positives than it would fix the underlying problems. However, preventing certain users from being repeatedly harassed from the same IP address via different accounts seems relatively doable, with precious little actual fallout. And it’s not like motivated harassers couldn’t further get around this by using proxying services — though proving such activity, circumventing blocks to harass people who’ve blocked them, could itself be a bannable offense. It’s certainly a reportable offense now, what with the ability to report people for repeatedly sending you @-messages despite your having blocked them — and people apparently get banned for that activity now, including both the “Trollpocalypse” users and even more recently some atheists.

Something else that could help is a generalized shared blocking function. Building a community often involves identifying toxic elements and removing them as having been abusive or damaging to discourse. Building a “free market” likewise doesn’t mean total deregulation — “free markets” with no rules quickly become autocracies, so a real free market has to have careful regulations to prevent any one entity from acting against the interests of everyone else involved and concentrating too much power, spoiling the market for everyone but themselves. Likewise with communities. The Block Bot is a novel way of helping users share block lists of people who’ve been abusive, harmful, or even just annoying. Having that functionality cooked right into Twitter would be easy enough — you could subclass the Shared Lists functionality, which lets you share whole lists of users to follow, and turn a list into a Shared Block List. That would allow any user to share a block list with any other, would remove the centralized-control-by-necessity that The Block Bot has now, and would keep people from targeting a completely and wholly unrelated service for such retaliation-demanding atrocities as “antifeminists getting blocked for abusive behaviour” (which is literally like Kristallnacht!!!).
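As a sketch of what subclassing Lists into Shared Block Lists might look like, here’s a toy model: a list has an owner who curates it, any number of subscribers who opt in, and a lookup that decides whether a given account is blocked *for a given user*. Every name here is hypothetical — Twitter’s actual Lists feature has no such API — but it shows why no central bot account is needed once subscription is built in.

```python
class SharedBlockList:
    """Toy model of a block list shared the way Twitter Lists are shared:
    the owner curates it, and other users subscribe to apply it to their
    own timelines. (Hypothetical design; not a real Twitter feature.)"""

    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.blocked = set()      # accounts on the list
        self.subscribers = set()  # users who have opted in to the list

    def add(self, account):
        self.blocked.add(account)

    def remove(self, account):
        self.blocked.discard(account)

    def subscribe(self, user):
        self.subscribers.add(user)

def is_blocked_for(user, account, all_lists):
    """An account is blocked for a user if any list the user subscribes to
    contains it — each user opts in individually, so there is no single
    central service to scapegoat."""
    return any(user in bl.subscribers and account in bl.blocked
               for bl in all_lists)

# Example: alice subscribes to a shared list; bob does not.
abuse_list = SharedBlockList("level-1-abusers", owner="oolon")
abuse_list.add("troll123")
abuse_list.subscribe("alice")
is_blocked_for("alice", "troll123", [abuse_list])  # blocked for alice
is_blocked_for("bob", "troll123", [abuse_list])    # bob is unaffected
```

The key property is that subscription is per-user: unsubscribing instantly restores the old behaviour, and nobody is blocked for people who never opted in.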

I need to make careful note, though, that all of these rules I’ve outlined would also affect those atheist Twitter users who think it all good sport to seek out and harass Christians, going to their Twitter feeds based on searches for Christian tropes and challenging people directly on them. These atheists often search for topics to confront people about, retweet things that are particularly stupid, and otherwise tacitly encourage (even while they pay lip service to the contrary) their followers to further harass these individuals. To be absolutely clear, I don’t have any problem with curtailing this behaviour whatsoever.

Regardless of the fact that Christianity needs to be challenged publicly, individual Christians should no more be harassed by individual atheists (or these atheists’ followers) than feminists or humanists need be harassed by antifeminist assholes like ElevatorGATE or members of the slime pit. Their interactions are unwanted, and Twitter should allow people to choose to deal with those unwanted interactions with greater granularity and greater certainty of their ability to maintain what level of privacy they so choose.

Any or all of these recommendations would certainly make MY personal Twitter experience feel a lot more like I’m in control, rather than swept along in waves upon waves of hateful interactions. And I’m sure that my advocating for the ability to plug your ears more effectively will be seen as against “free speech”, but don’t worry, trolls — it’s not like your tweets are being removed from the public firehose. Giving someone the ability to shut the door to their household on your face does not stop you from ranting, it just stops you from ranting AT THEM.


7 thoughts on “Twitter: Fixed it for you”

  1.

    The Block Bot’s main account (@The_Block_Bot), which gives notice of new additions, has been blocked for “targeted abuse”. (That said, the bot itself is still working properly.)

    While this Anonymous campaign may have had a role (together with Lazy Savant, the guy who publicly fantasized about dismembering Stephanie Zvan), I think the real culprit is Trans Exclusive Radical Feminists. These are the people who relentlessly stalk transwomen on Twitter, deliberately misgendering them, falsely accusing them of rape, calling them mentally ill, and jeering at their stories of being publicly assaulted.

    Disturbingly, they were boasting about having a friend who works for Twitter, and they appeared to know about the outcome of the suspension before we did. In other words, it is likely that Twitter employees are colluding with harassers, and obstructing people who try to stop the abuse.

  2.

    Having administered IP-address-based security systems both currently and in the past, I can confirm that Twitter is not just making excuses about the issues with blacklisting based on IP address. It is very problematic these days.

    First of all, there are lots of user situations (such as my own when I am at work, and many schools and so on) where hundreds of users share an IP address behind a NAT firewall or proxy of some kind. That means one bad apple at that site, and suddenly nobody can create a Twitter account there (for some time period).

    Secondly, like the “log out and view” dodge for viewing someone’s tweets, these types of IP-based blocks are easily avoided by anyone with a modicum of technical skill. All you have to do is switch to a different proxy server and boom, you have a new IP address. Our Canadian friend “David Mabus” started doing that after we got him arrested a couple of times, and we could no longer match his posts to his location in Montreal. And this is a guy we know was not all that technically adept — he once generated the titles on a YouTube video he posted by screen-capping himself typing into a Microsoft Word document.

  3.

    IP address-based blocks for a user-based problem really don’t work.

    There are all kinds of other approaches that can and do work, one of which is to make accounts valuable, so that having your account closed for misbehavior is actually something people will want to avoid. I wish sites would experiment with that kind of system. Unfortunately, a lot of the design work for the kind of thing I’m talking about has to be done before the system is loaded up with a zillion users and their established expectations.

    So, for example, you might be expected to pay $50 for membership, which is refunded at the end of the year unless your account is closed for misbehavior. Another technique that works well is having all of your contributions to the site vanish if your account gets closed for misbehavior — so the longer you are on there, the more valuable your comments get, to you, and the less likely you’ll be to flush them all in a moment of frustration. Other models are sites where your ability to post increases over time, so a person is less likely to create sock-puppets because they’re not very valuable until they’ve been “levelled up” a bit.

    Twitter. Meh.

  4.

    I absolutely agree that there are ways to make people prioritize account longevity by giving accounts value, and I agree that that’s better than IP-based bans. However, what I advocated in the original post was not IP-based bans, but rather correlating IPs between accounts: when one person gets blocked for bad behaviour and a different account immediately interacts with the blocker from the same IP, that’s a damn good tell that this new interaction might not be wanted either and should be automatically flagged for review. It would make sockpuppeting at someone marginally harder, forcing the sockpuppeteer to use Tor or some other proxying service. Take away the lowest-hanging fruit and you lower the overall amount of abuse.

  5.

    I think Twitter are already doing something like your IP correlating, as the reports from the “trollocaust” suspensions were that when they created a new account they were suspended in minutes — often for doing nothing. Twitter could do it on a number of things: IP, cookies, multiple accounts (flipping from one to another on your mobile or in your Tor session could help them link accounts).
