Trolls, griefers and flamers


Things get more interesting every day.

BBC Newsnight talked about the Twitter harassment tonight (that is, two and a half hours ago). Paul Mason wrote a related article about it. It traced the harassment back to none other than Rebecca’s “guys, don’t do that” video, and showed a bit of said video. Then it talked to oolon about the block bot.

Paul Mason sets the stage:

Since I’ve been on the trail of the people threatening high-profile women with rape on Twitter I’ve learned a lot. I get a fair amount of grief on social media, usually from the kind of people who get driven to using the F-word about Keynesianism, or the Laffer curve.

Now my timeline’s been flooded with abuse – and its alter ego, gentle condescension laden with malice – from all kinds of trolls, griefers and flamers (the latter, one of the trolls explained, are not serious, adding that it is they, the trolls, who are the kings when it comes to ruining people’s lives online).

Proposed solutions range from forcing Twitter to suspend the account of anybody reported for threats or violent abuse, to forcing all users to sign up with a verifiable e-mail address. But who are the “trolls” and can anything be done to stop them?

He talked to a technical person who thinks blocking is all one can do.

Since I did this report I’ve been flamed by a particular community. I put it to Ms Norton that if I sanitise my own experience by blocking individuals and – unlike with Stella Creasy MP or Caroline Criado-Perez – they do go away, this just leaves me trapped in a sanitised reality while a “dirty” reality goes on around me.

She says “Throwing that ‘dirty’ conversation off Twitter doesn’t end it. And actually knowing where it is, I think that’s helpful. I think knowing that those dirty conversations are out there, we can choose to engage with them or choose to ignore them depending on the time and energy and motivations that we have.”

If it’s a social problem and not a technological one, what is the root of it? Ms Norton believes it is stark:

“The social problem is that men are raised to hate women and technology is not going to fix that. What’s going to fix that is a societal conversation about why that is and why it shouldn’t be, and why women aren’t a threat to men. And the technology gives us the opportunity to have that conversation. It’s not always a pleasant conversation, but we need to have it. Just shutting down the voices we don’t like doesn’t make the sentiments go away.”

I don’t think so. I think having extra places and ways to have the “conversation” just makes that way of seeing women more entrenched and more feverish. I think shutting it down on various media would help.

Personally, as I get enough great conversations from the people who are prepared to debate ideas without abuse, I’ve resorted to the “shared block list” strategy. This focuses the wisdom of the Twitter crowd onto the most notorious idiots and enables those who sign up to engage in a collective block, without necessarily banning the perpetrators from the internet.

I’ve installed The Block Bot and I’ll be talking to the man who coded it tonight about the strange online community that revels in the belittling of women. Though I’ve been aware of trolls, sexism and the flaming of fellow women journalists for years now, what this has taught me is that violent misogyny is probably the defining fault line of the internet, and has a better chance of killing social media than Ayatollah Khamenei and Kim Jong-un ever could.

You can already feel cyberspace divided into a world that hates women and one that does not. Fortunately the former is small, but incredibly powerful – and underestimated at its peril.
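The “shared block list” strategy Mason describes – a community-curated list of accounts that subscribers collectively block – could be sketched roughly like this. This is only an illustration: the Twitter call is stubbed out (a real client would hit the `POST blocks/create` endpoint), and all names are invented.

```python
# Sketch of a shared block list: a subscriber pulls the community-curated
# list of offending accounts and applies any blocks they don't already have.
# The actual network call is stubbed; names below are illustrative only.

def apply_shared_blocks(shared_list, already_blocked, block_user):
    """Block every account on the shared list that isn't blocked yet.

    shared_list     -- iterable of screen names curated by the community
    already_blocked -- set of screen names this subscriber has blocked
    block_user      -- callback performing the actual block API call
    Returns the list of newly blocked names, in order.
    """
    newly_blocked = []
    for name in shared_list:
        if name not in already_blocked:
            block_user(name)          # e.g. POST blocks/create in a real client
            already_blocked.add(name)
            newly_blocked.append(name)
    return newly_blocked

if __name__ == "__main__":
    blocked = {"troll_one"}
    applied = apply_shared_blocks(
        ["troll_one", "griefer_42", "flamer_99"],
        blocked,
        block_user=lambda name: None,  # stub: no network call in this sketch
    )
    print(applied)  # → ['griefer_42', 'flamer_99']
```

Note that the subscriber keeps their own block set, so the shared list adds to, rather than replaces, individual blocking decisions.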

And that man is our very own oolon – whom I used to rebuke for his frivolity about the whole thing, but he (obviously) took it more seriously as time went on.

Toby Young was also on Newsnight. He boasted about having tweeted about an MP’s cleavage. The guy is a complete asshole.

Update

A sock puppet commenter pointed out that Jeremy Stangroom is talking about suing oolon for defamation. It’s true.


Comments

  1. says

    I can see Mason’s point. Judging by what happened on YouTube, any sort of process to ban people who are reported as abusive will lead to mass false flagging to silence unpopular opinions. The likely result is that Thunderf00t’s fan base would mark everything you and Amanda Marcotte post as abusive until you are gone. Human review can prevent this, but it would raise costs, possibly to the point of being unfeasible.

  2. MarkShaw1965 says

    Oolon might be facing legal action, if Twitter is anything to go by.

    The report referred to the people on the bot as a “shared list of abusers”.

    That is defamation, as Jeremy Stangroom has rightly claimed. He has already sent in a complaint. Legal action could follow as well.

    Still proud of your association with Oolon, Ophelia?

  3. Chaos Engineer says

    There are ways to detect false-flagging, though.

    If a video gets flagged and blocked, and the person who posted it appeals the block, then someone’s going to need to review it. (Otherwise, the trolls will immediately flag and block every video.) An account with a history of flagging videos for no good reason can have future flag requests quietly ignored.

    To speed things up even further, some sites (like Slashdot and League of Legends) go to “community review” – if you have a good account reputation, then you have the option of seeing a random selection of flagged entries and you can vote on whether the flag was valid or not. If there’s a general consensus, then the site administrators don’t even need to get involved. (Because the selection is random, you don’t have the opportunity to vote down based on personal grudges.)

    The only downside is that there’s a risk of creating an echo chamber where people who are out-of-step with the majority get forced out, but if the administrators don’t want that to happen, they can stop it with a little fine-tuning.
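    The review scheme described above could be sketched roughly like this – a hedged illustration only, with every threshold, field name and function invented for the example:

    ```python
    import random

    # Sketch of the community-review idea: flags from accounts with a bad
    # flagging history are quietly ignored, trusted users vote on a random
    # sample of flagged items, and ties escalate to the administrators.

    BAD_FLAGGER_RATIO = 0.5  # ignore flaggers whose past flags were mostly invalid

    def flag_is_trusted(flagger_history):
        """flagger_history: list of booleans, True = a past flag judged valid."""
        if not flagger_history:
            return True  # no history yet: give the benefit of the doubt
        return sum(flagger_history) / len(flagger_history) >= BAD_FLAGGER_RATIO

    def sample_for_review(flagged_items, k, rng=random):
        """Hand each reviewer a random selection, so grudges can't steer votes."""
        return rng.sample(flagged_items, min(k, len(flagged_items)))

    def flag_upheld(votes, quorum=3):
        """A flag stands only on a clear majority of a quorum of reviewers.

        votes: list of booleans, True = reviewer agrees the flag was valid.
        Returns True/False on consensus, or None to escalate to admins.
        """
        if len(votes) < quorum:
            return None  # not enough votes yet: admins must get involved
        return sum(votes) > len(votes) / 2
    ```

    The random sampling is what blocks grudge-voting, and the reputation check is what makes mass false-flagging self-defeating.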

  4. throwaway, feels safe and welcome at FTBConscience! says

    “Abusers” is too general a term to be libelous. Abusers of what, precisely? Abusers of hospitality? Abusers of logic? Abusers of good-will? Perhaps the abuse of a platform from which they can make unfounded derogatory remarks about a person or persons in order to disparage their name and alter public opinion of them? Yeah, possibly that last one, just for the irony.

  5. says

    Ophelia, from what I’m reading in those tweets, the person (and organisation) that uttered (and broadcast) the words “shared list of abusers” was Paul Mason himself (and thus, the BBC Newsnight program)… so I’m guessing that Jeremy Stangroom’s axe to grind is aimed at him (and the BBC) – not ool0n / James.

    The worst thing that @the_block_bot documentation accuses people such as Stangroom and Drescher of being is obnoxious and tedious. There is no ambiguity about the fact that the_block_bot uses Twitter’s own API and assesses various Twitter accounts as being at varying levels of incivility / abuse.

    (Also, perhaps Stangroom’s supporters should counsel him on the Streisand Effect.)

  6. A. Noyd says

    Well, maybe oolon can start calling it the “shared list of abusers and Jeremy Stangroom, who’s only bestest Twitter buddies with several of the abusers, but totally not an abuser himself even if he does retweet plenty of their abusive shit, but that doesn’t qualify as abuse because, oh look, a hummingbird.”

  7. says

    I have a twitter list of hundreds of homophobes.
    I have put some famous people on there, and any others I run across.

    I’ve had people demand to be taken off, I’ve been threatened with lawsuits, etc.
    Showed those threats to Ray Beckerman and another lawyer who laughed at the suggestion that putting someone on a list with a title they don’t like is legally defamation.

    Of course, that’s the US. The UK has horrible law when it comes to defamation, so I can’t speak to that. But if what Oolon and others are doing IS legally considered defamation in the UK, that is something to question the law about, NOT something to disparage Oolon for or Ophelia for.

    Blocking people is not immoral. Sharing a list of people who have behaved like asses with others who don’t want asses harassing them online, allowing others to block people who are asses – that is also not immoral.

    You do not have a right to force someone to listen to you.

    “Abusers” is a title you don’t like for a shared list of people to avoid? Fine. Let’s call it a Shit List.
    Now whine about how horrible it is that you can’t tweet to people who don’t want to see your tweets because you’re on a shared shitlist.

    Speaking of pride of association… MarkShaw, are you proud of the people you’re defending?
    Are you proud of the “right” you’re imagining and then standing up for?

  8. brucegee1962 says

    Aside from helping the revolutionaries in Egypt, I have so far never heard of a single positive contribution that has ever been made by anything connected with Twitter. I liked the description someone here gave it a while ago, that it’s “graffiti for grownups.” Actually, I think there’s more genuine wit in most stalls in a bus station men’s room than there is in those 140-character outbursts.

    My suggestion for solving this problem is for the grownups to return to media that demand the ability to hold and develop a sustained thought, and abandon the Twitterverse to the sub-adolescents for whom the character limit is no hindrance because they have nothing of value to say.

  9. hjhornbeck says

    There’s a simple solution: move the block list out of the UK. Few people in Canada or the US would give a shit if they received a court summons from the UK, and the latter has legal protection anyway.

    I doubt Stangroom’s serious about this. Skeptic and atheist conferences are starting to rely on The Block Bot to filter their streams. Does he really want to be the one to snatch that tool away, and get on their bad side?

  10. latsot says

    I’m sure that Stangroom is talking about suing the BBC, not Oolon. Oolon didn’t make the remark about “abusers”. But I don’t think he’ll have any luck suing the BBC either. No specific claim has been made about any named person. Even our libel laws aren’t that stupid.

  11. A. Noyd says

    @hjhornbeck (#11)

    I doubt Stangroom’s serious about this. Skeptic and atheist conferences are starting to rely on The Block Bot to filter their streams. Does he really want to be the one to snatch that tool away, and get on their bad side?

    Do you think people get on that list by having a reasonable sense of proportion?

  12. Bjarte Foshaug says

    Jeremy Stangroom is talking about suing oolon for defamation.

    What a thin-skinned, hyper-sensitive, whining, professional victim. What happened to Freeze Peach?

  13. latsot says

    What a thin-skinned, hyper-sensitive, whining, professional victim. What happened to Freeze Peach?

    He’s not offended, he has delusions of mattering.

  14. says

    LOL woke up to this, I wound Rich Sanderson up a little last night and it seems he has managed to troll Stangroom and Drescher. Paul clearly signed up to Level1, they ain’t on L1! Then we spoke some hypotheticals about shared lists of abusers for each community. I made it clear in the interview that he wouldn’t want to sign up to L2 or L3 or he’d be blocking some random atheists that we see as sometimes abusive, annoying, boring or some combination of these. That bit ended up on the cutting room floor – but still obvious that we were talking about L1.

    So good luck with that, especially as the bit he objects to was said by the BBC, I’d like to see that libel case! How anyone can take someone seriously who “seriously asks” that question about libel is beyond me.
    https://twitter.com/PhilosophyExp/status/362366072667648000

    More interesting was the convo we had about how there is a relatively small core of abusers but then a LOT more people motivated to minimise the harassment. They helpfully pop up to say ignore the trolls and it’s your fault if you react. Grow a thicker skin, censorship, free speech etc etc etc … They are a bigger problem than the out-and-out misogynists as they give them cover and “freeze peach” legitimacy.

  15. opposablethumbs says

    Watched the whole thing last night – it was excellent! Wish there had been more of oolon’s interview, it was a bit condensed, but it was great to see it – and good to see Rebecca Watson’s “guys – don’t do that” referenced in this context. And I loved it when Creasy so elegantly and comprehensively skewered the odious Young – twice in quick succession on one point alone. She was absolutely outstanding (and single-handedly restored a substantial proportion of my faith in the Labour Party (well, willingness to vote for them happily rather than with teeth gritted, that is)). I wish she were PM, she’d be blinding.

    Of course it resulted in a heated discussion for the rest of the evening … about whose hands the power of troll-control could end up in, depending on how exactly it was implemented.

    there is a relatively small core of abusers but then a LOT more people motivated to minimise the harassment. They helpfully pop up to say ignore the trolls and it’s your fault if you react. Grow a thicker skin, censorship, free speech etc etc etc … They are a bigger problem than the out-and-out misogynists as they give them cover and “freeze peach” legitimacy.

    Exactly. 🙁

  16. latsot says

    And I loved it when Creasy so elegantly and comprehensively skewered the odious Young – twice in quick succession on one point alone. She was absolutely outstanding

    Yeah, she was seriously good. I loved the way she took control of the whole interview. And Young was not in the least interested in listening to answers. His points were just so unfathomably blithering. For that and many, many other reasons, I’m glad he got his arse handed to him. He’s since written an article here: http://blogs.telegraph.co.uk/news/tobyyoung/100228904/twitter-abuse-stella-creasy-has-over-stepped-the-mark/ where he rehashes the same arguments as though Creasy hadn’t answered them perfectly well, then explains that she’s ‘stepped over the mark’ before ending with what looks very much like a veiled threat.

  17. Nomit says

    “A sock puppet commenter pointed out that Jeremy Stangroom is talking about suing oolon for defamation. It’s true.”

    No, it isn’t true. He has only said he will complain (without hope of satisfaction) to the BBC because he has been defamed. He isn’t talking about suing. I do think he is right that the comment was defamatory, though; an action would be likely to succeed in the UK, and a quick apology would be advisable in case someone more litigious decides to go for it.

  18. latsot says

    He isn’t talking about suing.

    Even though his (Stangroom’s) quote asked whether it would be possible to sue for libel?

  19. says

    It occurs to me, having read some ignore-the-trolls tweets from the dark side (that’s just a passive-aggressive form of blaming the victim), to wonder whether they would give this piece of advice to Sam Harris, as he seems to take the threats he’s received EXTREMELY seriously.

  20. Nomit says

    “Even though his (Stangroom’s) quote asked whether it would be possible to sue for libel?”

    Yes, he asks if it would be defamatory under the law, but doesn’t suggest he is going to sue. He is complaining to the BBC through the usual channels. I think a suit would succeed, though; there is no obligation under UK law to prove damage. Not that I think it is wrong to sue if you are defamed – I am surprised that so many people think it is. Some speech acts need to be policed; isn’t that the point of this whole Twitter controversy, even though the acts in question are of different kinds?

  21. says

    Nomit – would you please read the god damn tweets? He does ask about suing. Twice. In the tweets that you can look at by looking at the post on this very page. Two of the four tweets ask about suing.

  22. says

    Hmm, a wild “Nomit” appears – who are you, a sock? I’d like to see how he expects to win anything; the BBC clearly referred to Level1 and I only agreed that Level1 was “the worst of the worst” … He wasn’t mentioned, and everyone on levels 2/3 was off the table. Stangroom is on L3 for boring people and fools, which seems apt. Sue me for that! The irony is that the “freeze peach” crowd are pursuing a legal solution that, if it had a gnat’s chance in hell of working, would make it defamation to call someone an annoyance, fool or bigot on Twitter. Free Speech!

    However I did speak to my friend who is an ex-barrister and teaches Law at Portsmouth Uni and she suggested I formally refer Mr Stangroom to Arkell v Pressdram as my official legal response.

  23. EllenBethFlorida says

    Ophelia’s response at 25 shows what a vindictive, nasty little person she is.

    Ophelia was squealing and crying over getting called a liar, and saying that was sooooo “libellous”. Now her attitude is different. A hypocrite of the worst kind.

    By agreeing with and laughing at the labelling of the victims of bullying who happen to be on the bot, you are victim blaming.

    Ophelia – you are simply a piece of scum. Don’t ever, ever complain about people calling you a liar. With your attitude, you deserve no such respect.

  24. lippard says

    In the U.S., the legal precedent is very much on the side of the publisher of a blocking list. I think if the list is (a) clearly based on objective factual criteria, (b) clearly based on opinion, and/or (c) user-generated and voluntarily subscribed to, there is unlikely to be any legal ground for a successful lawsuit.

    There have been multiple lawsuits on the grounds of defamation or tortious interference against spam blocking lists (e.g., e360 Insight v. Spamhaus, Experian v. MAPS, Harris Interactive v. MAPS, Pallorium v. Matthew Sullivan (SORBS)), against web content rating services (Career Network et al. v. WOT Services), and, somewhat differently, against online review sites (e.g., Ripoff Report, Yelp). All of the spam block list suits and the WOT suit were won by the publisher of the blocking list.

    If the data is generated by the users (as with online reviews), the Communications Decency Act’s safe harbor from defamation liability for online providers would also be likely to apply in the U.S.

    Tortious interference would only be relevant if a business were being damaged.

    Things might be different, of course, outside the U.S., such as in the UK.

  25. says

    Hmm. I’m confident that @26 is the same reliable predictable sock who’s been trying to comment here for months, but this time the sock signs in as EllenBeth. That seems extra-special sleazy.
