What are we to do with online comments?


The Guardian takes a look at the problem of online comments, using an enviable database of 70 million comments, which they’ve dug into to try and tease out the sources of the conflicts. I have a database of a bit over 900,000 comments here (and another 800,000 at the sadly gutted comment database at scienceblogs), but unfortunately the way blocks are handled in wordpress means blocked comments are eventually completely purged, so I can’t compare them as the Guardian does. They report that about 2% of all comments are abusive, trolling, or otherwise blockworthy, which sounds about right — that’s probably in the high end of the ballpark of the percentage of filtered comments here. When you look at it through that lens, just the percentage of all discussions of all types that are abusive, you’re typically going to get a very small number.

It’s also the case, although the Guardian didn’t look at this, that the number of abusers is even smaller. A relatively small number of obsessive, dedicated individuals do their damnedest to poison conversations all over the place — I see pretty much the same tiny rat’s nest of tedious trolls popping up on the sites I like to read — so it’s safe to say the majority of humanity is really decent online. Unfortunately, it doesn’t take many to wreck a discussion thread.

That’s especially true for the targets of abuse. Another thing the Guardian finds is that the trolls are focused: they tend to be racist and sexist.

Although the majority of our regular opinion writers are white men, we found that those who experienced the highest levels of abuse and dismissive trolling were not. The 10 regular writers who got the most abuse were eight women (four white and four non-white) and two black men. Two of the women and one of the men were gay. And of the eight women in the “top 10”, one was Muslim and one Jewish.

And the 10 regular writers who got the least abuse? All men.

It’s an interesting series. They’ve made a good effort at identifying the problem, but then they go looking for a solution, and unfortunately, their answer is that they don’t have one. So they throw up their hands and ask their readers to leave a comment suggesting one. Unless that’s a trick to get some more comments to analyze, that doesn’t sound like a good approach. It’s a bit like polling a cancer to ask it how we can make our body a little more pleasant to live in.

Comments

  1. bittys says

    I’m not really enamoured of that final metaphor, especially as they point out that somewhere under 2% of the commenters are abusive.

    Surely it’s more like polling the whole of your body to ask how it can get rid of the (relatively small, but disproportionately dangerous) cancer growing in it?

  2. says

    Another thing the Guardian finds is that the trolls are focused: they tend to be racist and sexist.

    No surprise there, unfortunately. The overall percentage of abusive asses found in comments may be small, but I’ve found a much larger number will come out now and again, as long as they feel they have support. These people tend to stay at one site, or a select few, that reflect their views. When they do decide to come out en masse, it can be overwhelming.

  3. says

    Bittys:

    Surely it’s more like polling the whole of your body to ask how it can get rid of the (relatively small, but disproportionately dangerous) cancer growing in it?

    I don’t think so. Personally, I suspect the number of toxic people in comments is larger than what the Guardian finds. It’s not just the outright abusive people; many people come across as polite and non-abusive, but their posts are amazingly poisonous. Then there are all those whose “sincerely held beliefs” hold a great deal of bigotry and hatred.

  4. says

    I don’t agree. I think the number of assholes is small, but that they are exceptionally loud, obsessive, and persistent, so they make much more noise than their numbers warrant, and they also take advantage of the fact that it takes only a few to completely disrupt the many.

  5. Lofty says

    Extreme assholes live at one end of the bell curve, but on the upslope there are plenty more people who suffer from the banality of evil.

  6. Siobhan says

    I’ll file this under “Things I’m Glad Have Been Measured But Seriously We Knew This All Along.”

  7. Raucous Indignation says

    I am a grossly over-privileged lurker here, so please take this for what it’s worth. It seems this topic comes up again and again. I understand why a blogger might wish to have an anonymous or pseudonymous blog, but the regard given to a commenter’s anonymity baffles me. They are not equivalent, are they? Why let the toxicity of the trolls foul your site? Why don’t you expose the worst of trolls? I can only think of one case where you did, and then only after years of death threats. Make it a condition of commenting that you reserve the right to expose a commenter’s identity if their behavior does not conform to your code of conduct. Maybe that would change the dynamic.

  8. Dunc says

    Make it a condition of commenting that you reserve the right to expose a commenter’s identity if their behavior does not conform to your code of conduct.

    How is anyone supposed to establish a commenter’s identity in the first place?

  9. Raucous Indignation says

    Not sure how that would work. I don’t work in IT. But we’re not truly anonymous online, are we?

  10. rietpluim says

    Occupy a small country, drop all men there, and build a wall around it. That’s what Trump would do, if it were not men but some random minority.

  11. says

    Raucous Indignation @ 9:

    Not sure how that would work. I don’t work in IT. But we’re not truly anonymous online, are we?

    You know, demanding people use their legal name or threatening to dox people comes up all the time. It’s not a solution, in any way. A good number of assholes do use their legal name, or would not mind doing so (and how would you possibly check on whether a name is a true name? If I tell you my name is Indigo Jones, how do you know if I’m telling the truth?) As for doxxing people, that’s bloody dangerous, and I think only certain circumstances might warrant such an action.

  12. says

    There are people who have legitimate fears if their identity is revealed. These are not generally the over-privileged assholes who make trolling comments, and it’s the oppressed targets of trolling who are often the most afraid of exposure. It makes for a complicated balancing act.

  13. Dunc says

    Not sure how that would work. I don’t work in IT. But we’re not truly anonymous online, are we?

    I do work in IT. While you’re arguably not “truly” anonymous online, you’re not readily identifiable either – even if you’re not taking any active steps to avoid identification. Sure, even when commenting systems merely ask you to supply an email address, most people give a real one – but those aren’t the people you’re concerned with. Even with systems that require registration, setting up a burner email address is trivial. IP address is not reliable either, even without using any of the readily-available options to obfuscate it, as PZ pointed out recently.

    Short of tying your internet access to a government-issued ID, or requiring people to register to comment with notarised documents verifying their identity, there is no solution here that can’t be fairly easily circumvented.

  14. says

    This won’t cure all ills, but one thought is letting people post on a probationary basis, or having them apply for permission much the same as applying for a job. Genuine and positive commenters are more likely to be patient, more interested in building a positive reputation for themselves, and may even have a resume of constructive comments they’ve posted elsewhere over time (which makes account systems like WordPress and Disqus invaluable).

    Trolls are unlikely to apply, explain why they want to post, and then wait for approval. Even if some are willing, going through that process just to post one obnoxious comment that immediately gets them banned takes more effort than most are willing to put in. It’s hard to be a “hit and run” troll when neither running nor hitting is possible.

  15. says

    There’s an aspect to online identities which can – and should – be leveraged against spammers and abusers: avatar longevity or digital ID longevity.

    The problem is that identities have no cost associated with them, so getting banned is no loss. In subscription services like online games there are still plenty of trolls, but if the game service wants to (most don’t), abusers, gankers, trolls, racists, homophobes, etc. can be banned, and it costs them $45 each time they want to come back.

    In a free blog forum the way to replace cost is to make the longevity of your ‘nym have some value. For identities that are newer than 6 months (pick a number) there might be an approval loop on posting, and aggressive curation of comments. For identities that have been around longer than a year, posting links or URLs or text decorators might be allowed. A feedback rating on the identity would help too (some of the gaming forums where I hang out have this), and it should be coupled with an option to mask off new commenters. Sort of like the old days, when the “Order of Molly” was an indicator that a given commenter’s ‘nym had value – you reduce the tendency for people to be willing to just throw away their account.
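
    To make that concrete, here’s a minimal sketch (Python, with made-up thresholds and field names; no real blog platform works exactly this way) of a longevity-gated permission check:

        from datetime import datetime, timedelta

        # Hypothetical thresholds -- pick numbers that suit the site.
        PROBATION = timedelta(days=180)   # under ~6 months: comments go through an approval loop
        TRUSTED = timedelta(days=365)     # past a year: links and text decorators allowed

        def comment_policy(account_created, now=None):
            """Return what a 'nym of a given age is allowed to do."""
            now = now or datetime.utcnow()
            age = now - account_created
            return {
                "needs_approval": age < PROBATION,       # aggressive curation for newer identities
                "may_post_links": age >= TRUSTED,        # earned just by sticking around un-banned
                "visible_by_default": age >= PROBATION,  # readers can mask off newer commenters
            }

        # Example: an identity created five months ago still goes through the approval loop.
        print(comment_policy(datetime.utcnow() - timedelta(days=150)))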

    I suspect that more systems in the future will use longevity. For example, if your email account has no history of being a well-behaved gmail account, your spamicity score can trigger blocks very quickly, whereas an account that has existed for years without sending any reported spam will be less likely to get blocked. These are familiar techniques in online fraud management, because they work fairly well. As with online fraud, there will be people who attempt to farm synthetic identities. In those cases you have to burn through the identities and have ways to detect multiple sign-ups, or identities that are being created and groomed. Either way, banhammering an identity still has an impact on the owner of the identity, which is something that sites generally don’t do today (except for the ones that matter, like eBay…)

    I’m fairly sure that one reason many wankers dislike John Scalzi is that he fearlessly plies that mallet of loving correction. He also removes offensive comments, which makes them a waste of time for the commenter (while wasting some of Scalzi’s time in return). There seems to be a consensus among the better bloggers I follow that curated content is better and more troll-free, and that it’s the unrestricted, easy-to-generate-a-‘nym comment pools that turn into cesspools like reddit and 4chan. Oddly, even in the cesspools, a persistent long-term ‘nym has value: they could do a lot to clean things up there if that value were leveraged.

  16. says

    A few years ago, at a meeting with some Washington mucky-muck policy-makers, I got to pitch the idea that there should be government-issued fake ID for online identities (and also government-backed online identity). A lot of the infrastructure to do that is in place; the USPS could do a pretty good job of it just by leveraging mailing addresses plus government-issued ID. The IRS could do it better (and it would give them a positive purpose as well…), as could the SSA. There’s such a lack of vision about online stuff in Washington, though. It’s a really sad state of affairs.

    Eventually the internet will wind up with a handful of federated identity systems. That’s what the real battleground between google and microsoft and facebook is. And preventing that battleground from becoming a great reservoir of pain in the ass is exactly what government should be doing. But they won’t.

  17. says

    How is anyone supposed to establish a commenter’s identity in the first place?

    The easiest way:
    tie it to a credit card

    Imagine a website where you sign up and pay $100 to open your account. At any time, if you’re in good standing and close your account, you get your $100 back. Under the terms of service, if you get banned for (list of infractions) your account gets closed and you forfeit the $100. Money that is forfeited is paid against advertising revenue; the more spammers and trolls that get clobbered, the more ad-free the site gets.
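
    Skipping the actual payment processing, the bookkeeping side is simple; a sketch, with every name and number in it purely illustrative:

        class BondedAccount:
            """Toy model of a refundable commenting bond."""

            BOND = 100  # dollars held while the account is open

            def __init__(self, name):
                self.name = name
                self.banned = False
                self.bond_held = self.BOND

            def ban(self):
                """Banned for a listed infraction: the bond is forfeited to the site."""
                self.banned = True
                forfeited, self.bond_held = self.bond_held, 0
                return forfeited  # credited against advertising revenue

            def close(self):
                """Closed in good standing: the full bond comes back; nothing if banned."""
                if self.banned:
                    return 0
                refund, self.bond_held = self.bond_held, 0
                return refund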

  18. says

    All comments sections should have a “block” button like on Facebook. And/or the site owner should have a “Nasty User” function that allows the asshole to keep posting while no one but them sees their posts.

  19. Vivec says

    Personally, I’m kind of fond of the somethingawful method of rolling periods of a paywall and the ability to buy your way out of being banned proportional to how long you’ve been banned. If you’re going to get trolls and shitposters either way, might as well have them contribute to your bottom line for the privilege.

  20. wzrd1 says

    There are reputation-based systems, where hidden posts/responses and blocks are tabulated, but those are also manipulated by trolls and their sock puppets.
    There are moderated forums, but those are maintenance intensive.
    The simple reality of it is that no matter what you do, a comment section is both a tremendous pain in the ass and an asset, and there is no magic formula to give one a clear path away from that annoying damned 2%. They simply create a new account with one of their many, many socks and continue; they use Tor or other anonymizers; they use VPN services. All while legitimate users also use Tor or other anonymizers or VPN services.
    So we end up with one variety or another of a moderator and ban hammer.

  21. says

    Marcus: I like the idea of avatar longevity. I wonder how easy it would be to have something like sign-up date or # of comments attached to a commenter’s name…and also whether it would be a new source of cliquishness or pseudo-merit.

  22. wzrd1 says

    As I recall, DailyKos does something similar. They still have hide/block, timeouts and ban hammers.

  23. brett says

    @#14

    I like the “probationary” idea as well, although I don’t know how well it would stop the obsessive trolls that PZ mentioned above (or the more dangerous online stalkers that women being harassed on the internet have to deal with). These are folks who might fixate on the same target for months and years at a time.

    It might help, though, and at the very least it would make it much harder for them to consistently troll. They’d have to be on good behavior for months at a time just to get in a minor amount of trolling before being banned.

  24. says

    These are folks who might fixate on the same target for months and years at a time.

    The beauty of a longevity-based system is that they’d have to lurk for months and years and then burn the account.

    Another thing that can be done is to completely unroll the commenter’s existence. That also makes a strong disincentive for trolls: if you get banned, everything you’ve ever said is gone and all your past efforts are wasted.

    I do agree with PZ’s concern that reputation systems can become cliquish. That’s why I like survival-based longevity systems. It doesn’t say “so-and-so is an awesome person”; it says, rather, “so-and-so has survived for 2 years on this site.” I wouldn’t want to put a big green arrow pointing and saying “NOOB!” at first-time posters’ comments, nor “WORSHIP ME” at long-term commenters’ comments.

    One of the forums where I hang out uses a system where you don’t get to start a thread until you’ve survived un-banned for a number of months. You can comment on other people’s threads after you’ve been on the site for a number of days. Your first few days the site is read-only and commenting is unlocked only after you’ve read (or at least accessed) the site rules, and a few of the FAQ threads.

    Scalzi and Charles Stross both have pretty well-behaved commenters because they’re both unafraid to use the banhammer and unroll annoying commenters’ complete existence. I have often wondered how much that has to do with certain trollish people really, really hating them. A troll can spend hours arguing and arguing and – poof – hours unspent.

  25. wzrd1 says

    Again, at DailyKos, one’s comments are easily tracked by one’s moniker/account: *everything* you commented upon or posted is shown on your account page.
    Hence, the longevity-based troll, lying low and only occasionally coming out to play, becomes readily apparent.

  26. says

    BTW, having things like “you must read the FAQ and site rules before you can comment” really goes a huuuuuge distance toward preventing robo-spam. There are loads of WordPress bots that know how to create an account, log in, and drop their nonsense. There aren’t any bots (yet) that know how to read and parse membership rules and figure out how to comply with a commenting policy long enough to be able to post.
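
    The gate itself can be dumb as a post; a sketch, with made-up field names (a real implementation would hang off whatever the blog platform actually records):

        def may_comment(account):
            """Unlock commenting only once the rules page has been opened and explicitly
            acknowledged -- trivial for a human, not (yet) something the stock
            comment-spam bots bother to do."""
            return account.get("viewed_rules_page", False) and account.get("accepted_rules", False)

        # A freshly registered bot account that goes straight to posting fails the check.
        print(may_comment({"accepted_rules": True}))                             # False
        print(may_comment({"viewed_rules_page": True, "accepted_rules": True}))  # True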

  27. says

    I know The Guardian would prefer not to institute a paywall, but I have long thought that requiring a low-cost annual subscription — say, something around $10-$15/year — before you can leave comments, would be worth trying, especially if coupled with a permanent ban (without refund) for abusive comments.

    If someone wants to give The Guardian $10 or $15 just for the chance to troll or abuse someone in the comments before being banned, then at least the site will have made some money out of them.

  28. malta says

    @Marcus, #15:
    The avatar longevity idea is fascinating. I’ve also spent time on forums that included information about how long someone has been a member and number of posts, and it is quite useful for sorting the wheat from the chaff.

    I also like the systems that give you a chance to sort by upvote, but they pretty quickly turn into a good first post and then a long chain of angry trolls replying to the first post. (In particular, I’m thinking of the NPR website. There’s a shocking number of people who like to comment on NPR articles about how much they hate NPR.)

    The credit card idea is also interesting. I remember reading once that spam could be eliminated if email cost a penny to send to a new email address. Would a dollar to create a new account be enough to discourage trolls? If not, gmail has learned to do a great job of sorting spam, so maybe the solution will be troll filters.

  29. DonDueed says

    How about pay-to-comment? A user could establish an account and deposit any amount in one-dollar increments. Each comment decrements the account by ten cents. Fees go to the site to cover costs (including the cost of keeping track of the charges). Perhaps some hardship accounts could be subsidized for those who can’t afford the fees or don’t have credit cards or other means of payment.
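
    The arithmetic side is trivial; a sketch, with the fee and deposit sizes from above and everything else made up:

        COMMENT_FEE_CENTS = 10   # ten cents per comment
        TOP_UP_CENTS = 100       # deposits in one-dollar increments

        def post_comment(balance_cents, text):
            """Deduct the per-comment fee; refuse the comment if the deposit is exhausted."""
            if balance_cents < COMMENT_FEE_CENTS:
                raise ValueError("deposit exhausted -- top up to keep commenting")
            return balance_cents - COMMENT_FEE_CENTS

        balance = TOP_UP_CENTS                        # a one-dollar deposit
        balance = post_comment(balance, "me too!")    # 90 cents left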

    A scheme like that would have a couple of effects. It would reduce the number of “me too” and other non-helpful comments, and it would remove anonymity, since every commenter would have to disclose personal information.

    Top-notch cybersecurity would be needed to protect the commenter database, though.

  30. says

    How about pay-to-comment?

    That disincentivizes commenting. What you want to do is disincentivize spamming or trolling.

    So you have someone put up a $100 ‘bond’ and, as long as they don’t ever spam or troll, they get their money back any time they want to disable their account. The beauty of that system is that it doesn’t actually cost the user anything (they get their money back, and in the meantime they don’t see ads) and they get a site that is paid for by trolls. Win! WIN!!! WIIIIN!

    I know that variations of this idea have kicked around a few places and may eventually happen. Imagine if you had the ability to tell your gmail account to only accept messages from accounts that have been in good standing for more than 2 years, or which are bonded.

    There are 2 “gotchas”: it works only with ‘in-system’ messages. If you accept messages from the outside, then you have a huge forgery problem. But gmail could use account longevity as a control for gmail-to-gmail messages with no problem, with the added benefit of discarding as spam, by definition, all incoming messages from outside gmail that claim to come from inside gmail. The other gotcha is that it creates an incentive for spammers and hackers to begin hijacking and reselling hijacked accounts. If you think about it, that’s exactly the problem credit card companies have: attackers trying to leverage established reputation. That would need to be accompanied by authentication better than the usual six-characters-of-your-dog’s-name password.

    Top-notch cybersecurity would be needed to protect the commenter database, though.

    That should always be a concern.
    That’s actually (again) why I recommended government-issued fake ID. Not that I think the government is particularly good at security (actually, they suck unbelievably badly; I would really not want to be in the witness protection program, knowing that some FBI idiot doubtless has my ID on his laptop…). But if the government issued the fake IDs, then websites would not have to worry as much about protecting IDs. Imagine if someone dumped a site’s userbase and discovered that the IDs were all references to official pseudonyms. Instead, each site effectively becomes an anonymizing service of sorts, and the site’s users have to trust the site.*

    About 15 years ago I started thinking about this stuff and realized that we’re effectively trusting way too many people to anonymize our pseudonyms. The stalkerish creepy assholes are (ironically) particularly way too trusting. So the only conclusion I could come to was to do everything under my real name and not do anything I was afraid to acknowledge.

    (* I am not suggesting that anyone set up a fake hate site with the intention of growing it for 4 or 5 years then publishing the user database; that could even be buried in the terms of service. … it would be soooooo wrong)

  31. ck, the Irate Lump says

    PZ Myers wrote:

    Marcus: I like the idea of avatar longevity. I wonder how easy it would be to have something like sign-up date or # of comments attached to a commenter’s name…and also whether it would be a new source of cliquishness or pseudo-merit.

    It definitely would be a source for those kinds of things (Slashdot was famous for this with low IDs). You could probably blunt that a little by making big categories that contain ranges of account ages or post counts, and by making the maximum relatively low (maybe one year or a few hundred posts maxes things out). This might still strongly discourage newbies, though.
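
    A sketch of that kind of bucketing (the cut-offs are invented, and deliberately coarse):

        def seniority_badge(days_registered, post_count):
            """Coarse buckets only, capped low, so the badge never turns into a rank."""
            if days_registered < 30 or post_count < 10:
                return "new"
            if days_registered < 365 or post_count < 300:
                return "regular"
            return "established"   # everyone past about a year looks the same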

  32. Dunc says

    Marcus @17: Are you OK with excluding anybody who doesn’t have a good credit rating and a spare hundred bucks? I’m not. That would exacerbate a lot of the existing problems which bias participation against marginalised people.

  33. starfleetdude says

    If he was still here, I’m sure Chris Clarke would disagree about the comments only having problems because of a few abusive trolls.

  34. says

    Marcus @ 17:

    The easiest way:
    tie it to a credit card

    You’d certainly end up eliminating a fucktonne of people if you insisted on a credit card. That’s some serious white privilege talking.

  35. says

    starfleetdude @ 34:

    If he was still here, I’m sure Chris Clarke would disagree about the comments only having problems because of a few abusive trolls.

    Made that point @ #3. Wasn’t terribly popular.

  36. says

    Another problem (or potential benefit) of a longevity system is that, having just taken a look at the user database, there are a tremendous number of people who registered but have not posted, ever. Some signed up the day we went live with FtB. How do they count?

    Also, a lot of the regulars here do not have an entry in the Pharyngula database — you signed up through FtB. Or, I think, some signed up with a generic wordpress registration. Tracking all that might be really difficult and unfair to some people.

  37. says

    PZ:

    Or, I think, some signed up with a generic wordpress registration.

    I was signed up at Pharyngula Sciblogs; when the move was made to FTB, I just used my WP registration, and so did a whole lot of other people. A lot of people come in through FB and G+ now.

  38. says

    Also, trolls and asses in general tend to come in through G+ because it’s an easy sign up, and they don’t use the account for anything except blog posting.

  39. says

    PZ Myers (#21, #37) –

    Marcus: I like the idea of avatar longevity. I wonder how easy it would be to have something like sign-up date or # of comments attached to a commenter’s name

    A change to the FTB search engine could obviate that (or making everyone’s past comments available to read, as wzrd1 says in #25), since all the old content and comments are still in the system. Currently, the search only works on the bloggers’ titles and content, not the readers’ comments. Being able to search through readers’ past comments would make identifying problem people easier. And it would be nice to be able to prove that someone was lying, or being a hypocrite, in their past statements.

    Searchable comments would also cut both ways, helping both to condemn and to defend people. A person who says something regrettable once after years of posting could be forgiven, but a person who baits others constantly could be identified and removed.
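
    On the back end it wouldn’t need to be fancy; a sketch (the column names are the standard WordPress ones, but the sqlite stand-in and everything else here is purely illustrative):

        import sqlite3

        def comments_by_author(db_path, author, pattern="%"):
            """Pull every stored comment by a given 'nym, optionally filtered by a
            keyword, so a pattern of behaviour is visible at a glance."""
            con = sqlite3.connect(db_path)
            try:
                return con.execute(
                    "SELECT comment_date, comment_content FROM wp_comments "
                    "WHERE comment_author = ? AND comment_content LIKE ? "
                    "ORDER BY comment_date",
                    (author, pattern),
                ).fetchall()
            finally:
                con.close()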

    Caine (#35) –

    You’d certainly end up eliminating people if you insisted on a credit card.

    Even if money weren’t an issue, the loss of anonymity due to identifiable CCs would cost some readership.

  40. Owlmirror says

    A change to the FTB search engine could obviate that (or making everyone’s past comments available to read, as wzrd1 says in #25), since all the old content and comments are still in the system. Currently, the search only works on the bloggers’ titles and content, not the readers’ comments. Being able to search through readers’ past comments would make identifying problem people easier.

    Blog administrators already have that capability (see all comments by some given commentator).

  41. rorschach says

    These days, since I don’t spend all my time at Pharyngula anymore and have become something of a travel tragic, I write hotel and airline reviews. These sites show your “achievements” next to your profile name, so everyone who reads a review can see how many reviews the person has written, how many “helpful” votes they have received over time, and so on.
    In a sense that’s the avatar longevity principle, and I think it works well.
    You don’t want to disincentivize people from commenting, as Marcus points out, and we don’t want any of that evil clique stuff happening, but I don’t think it’s too much to ask of a new commenter to be made to read the FAQ and commenting rules and tick a box before they are allowed to post somewhere. Credit cards are probably a less ideal route, simply because not everyone has access to one.

    As to the FtB signup, I think I still log in with my WordPress credentials? That’s why I have to log in to Pharyngula before I can comment on any other FtB site, I think.

  42. danielwalter says

    @Marcus Ranum
    Another problem with a credit-card-tied approach is that it might exclude many commenters from countries in which payment via credit card is unusual. In Germany, for example, most people don’t even have a credit card, because most cashless payment is done via debit card.

    Daniel