The same sort of standards


Lots of people are talking about Laura Hudson’s article in Wired on how to curb online abuse. I liked this bit in particular:

Really, freedom of speech is beside the point. Facebook and Twitter want to be the locus of communities, but they seem to blanch at the notion that such communities would want to enforce norms—which, of course, are defined by shared values rather than by the outer limits of the law. Social networks could take a strong and meaningful stand against harassment simply by applying the same sort of standards in their online spaces that we already apply in our public and professional lives. That’s not a radical step; indeed, it’s literally a normal one. Wishing rape or other violence on women or using derogatory slurs, even as “jokes,” would never fly in most workplaces or communities, and those who engaged in such vitriol would be reprimanded or asked to leave. Why shouldn’t that be the response in our online lives?

Why indeed?

What would our social networks look like if their guidelines and enforcement reflected real-life community norms? If Riot’s experiments are any guide, it’s unlikely that most or even many users would deem a lot of the casual abuse, the kind that’s driving so many people out of online spaces, to be acceptable. Think about how social networks might improve if—as on the gaming sites and in real life—users had more power to reject abusive behavior. Of course, different online spaces will require different solutions, but the outlines are roughly the same: Involve users in the moderation process, set defaults that create hurdles to abuse, give clearer feedback for people who misbehave, and—above all—create a norm in which harassment simply isn’t tolerated.

Let’s do that.

Comments

  1. Martin Cohen says

    But how do you prevent the abusers from labeling the abused as abusers?

  2. says

    Examine their claims. It’s implicit in one of the solutions they mentioned.

    At the time, players were informed of their suspension via emails that didn’t explain why the punishment had been meted out. So Riot decided to try a new system that specifically cited the offense. This led to a very different result: Now when banned players returned to the game, their bad behavior dropped measurably.

    They can’t do this without being able to look at the situation in enough detail to describe the problem behavior. With the help of the community this gets easier, because people invested in the community will help to document that behavior, and more.

  3. Jafafa Hots says

    They don’t want to build communities.
    They want to aggregate user data.

    That’s it. That’s all.
    The rest is marketing.
    “Fair and Balanced” etc.

  4. leni says

    But they kind of do want to build communities, Jafafa Hots. They can’t aggregate as much data if they let their worst users drive mainstream customers off.

    They don’t have to be benign, community building angels to be better than the worst of us. Low bar, I know, but still higher than it was.

    League of Legends has a bad reputation for a reason and it wasn’t accidental. Riot basically let trolls exact a portion of their profits from every person who crossed into their kingdom. I don’t have to think they are the nicest people in the world to reap the benefit of fewer trolls on the road.

  5. says

    That’s funny. The comments here at FTB can be considered “aggregated user data” in ways that let the blogger and readers make decisions on community issues.

    Can you be a little more specific, Jafafa? Why is the specific data they are aggregating, and the way they are using it, unhelpful?

  6. quixote says

    Sounds rather similar to the Slashdot system. They have a huge readership, which I suspect is important for that system to work since it’s based on crowdsourcing small dollops of moderation to large numbers of readers. If you ever wade through the zero-rated crap, moderation does make a huge difference to the tone of the place.

    I think more support for community standards could make a real difference.

    I also think Facebook knows that perfectly well, so there has to be some reason why they’re okay with rape posts but not with information about breastfeeding. Jafafa has a good point, but it has to go deeper than that for them to be that bad.

  7. leni says

    Can you be a little more specific, Jafafa? Why is the specific data they are aggregating, and the way they are using it, unhelpful?

    Can anyone?

  8. says

    Data will be aggregated, and the national discussion is over who gets access, how it’s used, how transparent it is, what oversight exists, and what the consequences for misuse are. Don’t get me wrong, I’m not saying that those issues shouldn’t be kept in mind. But a scary horror story will not change the reality. Specific suggestions for how we might go about solving these problems are better. At least Riot is doing the experimentation necessary to start finding solutions, and I love how they are using people from psychology, cognitive science, and neuroscience. That is precisely how this should be started.

    I like the idea of online communities that work with community authorities to shape their culture with respect to behavior, and data collection will be needed at some level to support that.

  9. Blanche Quizno says

    I suspect that what Jafafa is referring to is the concept of creating a space that’s sufficiently neutral and unregulated that people will be induced to reveal basically *ALL* their personal information in public, where any 3-letter agency that wishes can collect and tabulate such data. Leaving it open enough for people to reveal their inner asshole is useful to these ends, because such data might well come into play if one of these assholes is implicated in any sort of crime or plot. So there’s definitely an incentive to let it be a wild wild west with no protections for normal people. That’s not what they’re really there for.

  10. says

    I thought Jafafa was talking about the cynical commercialism of Facebook, Twitter, etc. with respect to user data aggregation for on-selling, and about marketing the idea of communities rather than actually building communities.

    I presume Riot’s game operates on a subscription model? Keeping repeat subscribers happy by building a community with standards would therefore be commercially important to them, and the data they aggregate is specific to their game, so they don’t have much incentive to onsell it. By comparison, Facebook, Twitter, YouTube etc. are free, and the only way they make money is by serving their users’ eyeballs and data up to advertisers, so they have no commercial incentive to moderate their users at all.

  11. latsot says

    Social networks could take a strong and meaningful stand against harassment simply by applying the same sort of standards in their online spaces that we already apply in our public and professional lives.

    Indeed. And people tend (incorrectly) to expect that the rules they happen to be used to will hold in new situations. Social media providers exploit this tendency in various ways. But it’s not just social media: Amazon and your local supermarket do things that are at least as bad, and sometimes worse.

    It’s unrealistic to expect such providers to make spaces that behave the way people expect them to, because data is valuable. But they don’t have to lie about it. They don’t have to use horrible, legalese-stuffed privacy policies. They don’t have to do this just because we are all terrible at making decisions about how to exchange privacy for convenience. We don’t have anything approaching an intuitive understanding of the exchange rate.

    The answer is about letting people decide how their data are collected and used. It requires providers to have clear policies about how data are used, to be straightforward about what they’re going to use it for, and to enable users to interact in spaces they understand. Doable.

  12. hm says

    Well, if they enforced their terms and conditions consistently, that would be one way of getting a baseline standard of behaviour. One of the issues that Facebook has is that there seems to be no rhyme or reason to bannings or the removal of bans.

    I think what Riot has done with citing specific incidents makes it easier to reject or allow users. My 2¢.
