Hate doesn’t pay, but it can be subsidized


Jim Rutenberg is concerned about the flood of hate speech that’s been accelerating over the last few years. He has examples, including himself — I guess you shouldn’t use Twitter if you have an obviously Jewish name. Or a woman’s name. Or a black name. It’s a medium that’s only safe for us True Aryan Males, apparently, and that’s a problem that’s affecting Twitter’s bottom line.

Now that Twitter is contemplating putting itself up for sale, we can only wonder what lucky suitor is going to walk away with such a charming catch.

Twitter is seeking a buyer at a time of slowing subscriber growth (it hovers above the 300 million mark) and “decreasing user engagement,” as Jason Helfstein, the head of internet research at Oppenheimer & Company, put it when he downgraded the stock in a report last week.

There’s a host of possible reasons for this, including new competition, failure to adapt to fast-changing media habits and an “open mike” quality that some potential users may find intimidating.

But you have to wonder whether the cap on Twitter’s growth is tied more to that most basic — and base — of human emotions: hatred.

Yes. I suspect the answer is “HELL YES.” Twitter might look to its competition, 4chan, which is also experiencing problems and might be up for sale.

All good things must come to an end and as it stands now, 4chan will probably be gone before the end of the month. Or at least several of its boards will.

Ever since 4chan was sold by Christopher Poole (Moot) to Hiroyuki Nishimura about a year ago, the new owner has come to realize that paying several million dollars for an anonymous image board probably wasn’t a very good idea. 4chan is good for trolling, raiding, shit-posting and doing basically anything; it just isn’t a good business venture. No corporate brand wants to advertise its products on a website where users nonchalantly joke about rape, death and every other politically incorrect topic you can think of. Even as the owner of 4chan, Moot stated several times that 4chan was, in several ways, a liability. It costs more to maintain the site than the revenue generated from it.

Heh. Surprise!

But don’t cry for 4chan. There might be a white knight riding to the rescue. Martin Shkreli. They belong together.

The problem is that we have advocated free speech, as in free from all responsibility, rather than free speech, as in free of political and economic restrictions. We want a medium where Exxon and North Korea don’t get to control what people say about them, but instead we have a medium where racists and misogynists and shitlords get to abuse everyone, and we don’t yet have a tool that strikes a balance between permitting criticism and permitting open hatred, or even between truth and lies. I’ve been watching the growth of so-called “satire” sites that follow the rule of anything goes — you can lie in a clumsy, ugly way about anyone or anything if you slap a “satire” label on it — and I’m not the only one who finds them to be a threat to the integrity of information on the net.

It is easy to be a free-speech fundamentalist. I’ve been one as long as I can remember without ever breaking a mental sweat. It requires belief in only two basic tenets, the one more feel-good than the other: that people are essentially decent and smart, and that truth always wins over lies in the long run.

The internet has proven both to be wrong. Social media shows that people are essentially a mob of thoughtless arseholes, and the “post-truth” political era shows that the dark side is, in fact, the more powerful.

The dark side is only more powerful if we uncouple free speech from incentives for honest speech, which is the status quo right now.

Unfortunately, that’s a very hard problem, and there is no easy solution. It is, however, easy to see that the balance is totally out of whack right now.

Comments

  1. says

    The problem is that we have advocated free speech, as in free from all responsibility, rather than free speech, as in free of political and economic restrictions. We want a medium where Exxon and North Korea don’t get to control what people say about them, but instead we have a medium where racists and misogynists and shitlords get to abuse everyone, and we don’t yet have a tool that strikes a balance between permitting criticism and permitting open hatred, or even between truth and lies.

    […]

    The dark side is only more powerful if we uncouple free speech from incentives for honest speech, which is the status quo right now.

    Unfortunately, that’s a very hard problem, and there is no easy solution.

    Recently I’ve been working on some ideas for my ideal social website. I’d like to think its features could help to combat these issues.

    Unfortunately, making such a website will be a huge undertaking. Sigh. Maybe at some point I’ll be able to do a kickstarter or something.

  2. ikanreed says

    Financially gouging AIDS patients to pay for the people who treat AIDS like a joke. America.

  3. erichoug says

    I always think that the real problem is the anonymity that the web affords. You used to see a lot of the same stuff before telephones had caller ID. And even now, you see some calls from blocked numbers, which I never bother to pick up.

    Maybe having some sort of verification or non-anonymity is the way to go. Not necessarily having your real name and home address shown to everyone but making sure you are who you say you are and making sure you understand that you can be found and prosecuted if necessary.

    The reason these people don’t do this same stuff IRL is that they know they will suffer real and dramatic consequences. We need something similar online.

  4. raven says

    There is an internet law that says any open forum will be overrun by trolls. And without troll control, it will eventually die. It’s Gresham’s law applied to trolls.

    I’ve seen it since the DARPA/USENET days. First it was USENET. Then AOL, Yahoo.
    Now I guess it is Twitter and Reddit. Maybe Youtube. I’m guessing, because I refuse to waste time on troll sites.

    This is an empirically derived rule. I’ve never seen it fail yet.

  5. raven says

    Maybe having some sort of verification or non-anonymity is the way to go.

    That isn’t going to fly.
    Wherever you have hate speech, you have hate violence and hate murder.
    Wait until your first 100 death threats and try again.

  6. erichoug says

    @Raven I don’t think that publishing people’s names and addresses is the way to go. But if the forum had that information available to law enforcement, or there were some other consequences, I’m not sure the trolls would even bother. But then, isn’t that the point? So, maybe something where you can only join the forum with verified user information.

    Not quite at 100 yet. I would say I am in the low double digits. But I’ll let you know when I get to 100.

  7. drowner says

    @4 raven:

    I wouldn’t include Youtube in that group because the comment section–toxic as it may frequently grow–is easily avoided by viewers, or disabled by posters. The site’s purpose is video/audio storage and playback (and advertisement, of course). While there are certainly countless examples of despicable videos accessible by one click, perhaps even finding their way to the “front page,” the viewer must consciously confirm their playback. There is no such gate-keeping mechanic on the other sites; users are inherently subject to harassment by tweets or 4channers.

    I don’t see any realistic competitor to behemoth-Google-owned-Youtube, either. Before Facebook, there was Myspace, Friendster, etc. Youtube has… LiveLeak?

  8. raven says

    My introduction to trolldom.
    A doc set up a discussion site for an autoimmune disease. This particular one is undertreatable. These patients are also at a much higher risk of suicide.

    It wasn’t long before the trolls showed up. And started advising everyone to…commit suicide.

    The forum still exists. But it is invitation only and hard to find.

  9. erichoug says

    A lot of the trolls will target specific people for their vitriol, bringing a whole bunch of people to pile on. Like the 4Chan morons all spoofing the debate polls. What about groups that did the same thing to them? Not with death threats and racist/sexist/homophobic comments but with something more positive. Not real sure what would be effective, but I’m reminded of the people who counter-protest the WBC idiots. I still believe that there are more good people than bad people, so how about making the trolls afraid for a change?

  10. starfleetdude says

    From what I can tell, Facebook is the only social media service that’s working to discourage hate speech, and it does seem to be one of the few places where people can disagree, sometimes vehemently, without it getting vile. It helps that it’s fairly easy to block people there who are obviously trolls.

  11. Rich Woods says

    @erichough #9:

    Yeesh, I think I’ll just go to a bar. Wait…I don’t drink! DAMN IT!

    The situation is almost so bad that it would drive you to drink.

    #TrollsPoisonedMyLiver

  12. says

    I mentioned this on the political madness thread last night.

    I feel like the rest of us need to get behind a canonical pluralism frog to turn back the tide, re-appropriate what’s been lost.

    It makes me smile.

  13. davidnangle says

    Anti-troll idea:

    Every user has stats that live forever (and a stat that tracks the age of the account, to discourage multiple accounts.)
    Up-votes, down-votes (only a few from any other individual account possible,) tracked permanently.
    Up-votes and down-votes weighted by the voter’s relative standing.
    Every user can set filters so you won’t see posts by users that don’t match your specs: Total down-votes, up-votes per down-votes, down-votes per post, down-votes over account age, etc.

    This way, shit-gibbons can wallow together, nice people don’t have to hear from them, and angels only chat with angels. One forum that can handle the worst of 4-chan as well as the best of humanity.
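    A quick sketch of how that stat-and-filter scheme might look in practice. This is not davidnangle's actual design; the field names, the weighting formula, and the filter thresholds are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class UserStats:
    upvotes: int           # lifetime up-votes received
    downvotes: int         # lifetime down-votes received
    account_age_days: int  # tracked to discourage throwaway accounts

def vote_weight(voter: UserStats) -> float:
    """Weight a vote by the voter's own standing (an invented formula)."""
    total = voter.upvotes + voter.downvotes
    if total == 0:
        return 0.5  # brand-new accounts carry a neutral, low weight
    return voter.upvotes / total

def passes_filter(author: UserStats,
                  max_down_per_day: float = 0.5,
                  min_up_per_down: float = 2.0) -> bool:
    """Reader-side filter: hide authors whose stats miss the reader's specs."""
    down_per_day = author.downvotes / max(author.account_age_days, 1)
    up_per_down = author.upvotes / max(author.downvotes, 1)
    return down_per_day <= max_down_per_day and up_per_down >= min_up_per_down
```

    Since each reader sets their own thresholds, the shit-gibbons stay visible to one another while anyone with stricter specs never sees them.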

  14. says

    starfleetdude

    From what I can tell, Facebook is the only social media service that’s working to discourage hate speech

    What alternate universe do you live in? FB happily lets hate speech flourish, and if you complain they tell you there’s nothing against their rules in it. But if you complain about that response, you’ll be locked out:

    Auslöser für meine Recherche war der Fall der Berliner Schauspielerin Jennifer Ulrich, die auf ihrer Facebook-Seite die Drohung eines Nutzers fand, ihr Gesicht mit einer Kettensäge zu zerkleinern. Nachdem Ulrich den Kommentar gemeldet hatte, erhielt sie zur Antwort, dass der Beitrag nicht gegen die Standards des Unternehmens verstoße. Als sie diese Antwort auf ihre Seite stellte, verbunden mit der Frage, was jemand schreiben müsse, damit Facebook es als löschenswert empfinde, wurde erst ihr Eintrag entfernt und dann ihre Seite gesperrt.

    Translated from here
    “My research was triggered by the case of the Berlin-based actress Jennifer Ulrich, who had found a user’s threat to shred her face with a chainsaw on her Facebook page. When she reported the comment, Facebook replied that it did not violate the company’s standards. When she posted that reply on her page, asking what somebody would have to write for Facebook to consider it worth deleting, first her post was removed and then her page was blocked.”

  15. starfleetdude says

    I live in this universe, where you can find this statement of policy on hate speech from Facebook:

    Community Standards – Encouraging Respectful Behavior

    Organizations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook. As with all of our standards, we rely on our community to report this content to us.

    People can use Facebook to challenge ideas, institutions, and practices. Such discussion can promote debate and greater understanding. Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others about that hate speech. When this is the case, we expect people to clearly indicate their purpose, which helps us better understand why they shared that content.

    We allow humor, satire, or social commentary related to these topics, and we believe that when people use their authentic identity, they are more responsible when they share this kind of commentary. For that reason, we ask that Page owners associate their name and Facebook Profile with any content that is insensitive, even if that content does not violate our policies. As always, we urge people to be conscious of their audience when sharing this type of content.

    While we work hard to remove hate speech, we also give you tools to avoid distasteful or offensive content. Learn more about the tools we offer to control what you see. You can also use Facebook to speak up and educate the community around you. Counter-speech in the form of accurate information and alternative viewpoints can help create a safer and more respectful environment.

  16. says

    starfleetdude:

    I live in this universe, where you can find this statement of policy on hate speech from Facebook:

    Oh FFS, you are some kind of a super-idiot. They can say all that, it doesn’t mean jack shit when it comes to implementation, and FB has a long history of allowing hate speech, and vile, hateful users.

  17. says

    This poem accurately sums up the experience an awful lot of people have with Facebook moderation.

    I’d be surprised if Shkreli went through with openly backing 4chan, though. People underestimate just how effective a legal defense not having any money is. The next time there was a death associated with the site, lawyers across the country would add “nine figure net worth” and “least sympathetic defendant imaginable” together, and get a lot of dollar signs.

  18. John Morales says

    Paladynian @7, thanks for the reference. I found it really interesting and informative.

    starfleetdude, FB also claims this:

    What names are allowed on Facebook?

    Facebook is a community where people use their authentic identities. We require everyone to provide the first and last names they use in everyday life so that you always know who you’re connecting with. This helps keep our community safe.

    Heh. It is the only platform where I don’t use my real name.

    (Rules and standards that are either not enforced or unenforceable are worse than pointless)

  19. Crimson Clupeidae says

    Can we start a gofundme to buy twitter? :D

    How fun would it be to set it up and actually enforce some minimal community standards?

    It would be worth the tears of all the whiny white manbabies.

  20. says

    John @ 21:

    Heh. It is the only platform where I don’t use my real name.

    Back when I had a FB, my name there was Caine Only.

  21. taraskan says

    Twitter is a cesspool, and it would continue to be a cesspool even if its user base were entirely made up of middle-aged urbanite Canadians. The problem is there is nothing worth saying in only 140 characters or less – nothing. “Big think” is beyond its post capacity. By intention the only thing you can use it for is bullshit smalltalk, small-business advertisements, derailing links, and big dumb blanket statements about anything newsworthy, all according to kindergarten playground etiquette. Personally, I don’t want to hear or partake of any of those things, even in the friendliest of circumstances, and will never, ever, consider it a legitimate form of social intercourse. I have a revelatory bathtub of tequila buttercream set aside for the day it, too, goes the way of Napster.

    But the cold, hard reality of social media at large is that free speech is incompatible with anonymity. All public-access fora or anything with a commentary section must, eventually, be linked to real-life information about that person. My guess is some kind of pseudo-FB registry when the larger world goes all UK-database with their ones and zeroes. If you want in on a discussion, your page is going to become available to everyone in that discussion; that’s just the only way you can get humans to talk to each other online in exactly the same way they talk to each other out on the street.

    And it won’t eliminate hate speech, not by a mile, but it would level the playing field. Hopefully we’d see the return of society’s greatest ally: fear of embarrassment. It won’t happen, though, until law enforcement starts taking online matters as seriously as offline ones. For that matter, it won’t happen until offline matters are taken more seriously than they are.

    At some point in the future, the internet is going to be a lot less free, but that doesn’t have to be a disadvantage, so long as it is centralized in public access and not maintained by the offspring of Google and Viacom and helmed by neo-libertarian trustafarians.

  22. says

    John Morales@#21:
    No, really, my name actually is “Surly Badger”

    With that many people, there’s no way they can verify names. It’s stupid.

  23. Vivec says

    @24
    I’unno, me and a lot of my friends in the fandom side of twitter have a good amount of fun, and it’s a pretty decent art hosting site now that Tumblr and Devart have put more restrictive guidelines in place.

  24. Vivec says

    @18
    Facebook lets hatemongers like Cathy Brennan run wild, and bans/punishes people that criticize or joke about her (the “Fake goth” incident, for example).

    Hardly a beacon of opposition to hate speech.

  25. says

    @16, davidnangle

    Anti-troll idea:

    Every user has stats that live forever (and a stat that tracks the age of the account, to discourage multiple accounts.)
    Up-votes, down-votes (only a few from any other individual account possible,) tracked permanently.
    Up-votes and down-votes weighted by the voter’s relative standing.
    Every user can set filters so you won’t see posts by users that don’t match your specs: Total down-votes, up-votes per down-votes, down-votes per post, down-votes over account age, etc.

    Close to parts of my idea! But I think we could do far better than measuring numbers of votes.

    @25, Marcus Ranum

    With that many people, there’s no way they can verify names. […]

    I think I could make it possible and practical. Though I’d make it optional, it could still be a useful piece of info about an account.

  26. John Morales says

    Brian, Marcus refers to the sheer size of the FB userbase — around 1.7 billion accounts.

  27. mostlymarvelous says

    taraskan

    All public-access fora or anything with a commentary section must, eventually, be linked to real-life information about that person.

    Why? There are thousands upon thousands of people, millions worldwide, who cannot use their real names/identities online. There are police, teachers, public servants and employees of private companies with restrictions on what they can publicly say if they are identifiable in their jobs. There are people escaping abusive families who could be in real danger if their new locations were made public. There are stalking victims – who cannot be sure of the identity of the people they’re in danger from – who must conceal their names and locations in order to be safe. Add to this people who’ve been subjected to online mob abuse.

    Then, this is where the numbers increase exponentially, there are the spouses and children of all those categories of people who could be used to identify or track down those individuals. Does this mean that a person who’s been abused by a spouse or other family member or complete stranger has to ensure that their children cannot use online communities? Ever?

    Because that’s the destination where a blanket denial of online access without real identity links will take you.

  28. karpad says

    @16 davidnangle

    Anti-troll idea:
    Every user has stats that live forever (and a stat that tracks the age of the account, to discourage multiple accounts.)
    Up-votes, down-votes (only a few from any other individual account possible,) tracked permanently.
    Up-votes and down-votes weighted by the voter’s relative standing.
    Every user can set filters so you won’t see posts by users that don’t match your specs: Total down-votes, up-votes per down-votes, down-votes per post, down-votes over account age, etc.
    This way, shit-gibbons can wallow together, nice people don’t have to hear from them, and angels only chat with angels. One forum that can handle the worst of 4-chan as well as the best of humanity.

    not to be a negative nancy, but this is a genuinely terrible idea for a whole host of reasons. I understand the appeal and the reasoning, but allow me to spell out a scenario that would play out:
    a relatively small but incestuous echo chamber whose members only interact with one another will continually upvote one another, which will in turn give each other more power for upvotes or downvotes. It would be relatively hard to game deliberately, but simply by using the forum continuously and applauding themselves in groups, they would soon have the power to destroy people’s vote levels. I would not, for example, want to give TERFs a more powerful upvote or downvote just because they keep agreeing with each other, while at the same time they’d only truly say something monstrous and exclusionary on occasion, while most of their posts would be relatively benign things like anti-DV or pro-choice opinions.

    You know who else would get unreasonable levels of power? Actual celebrities. Right now on twitter, they can tweet and get people to harangue a single @ for a day or two. Now imagine if Dawkins could express his displeasure over “Elevatorgate” not simply by obliquely encouraging people to harass a woman almost none of them had ever met or heard of, but by simply, with the click of a button, relegating her to The-Worst-People-In-The-World post tiers, where no one would see or hear from her again.

    Verifying actual names is also a bit of a problem. Facebook and Google have done something similar, and this has had two major effects: banning of accounts of people who have unusual but legitimate names, and destroying the support network of gay and trans artists who attempted to use the network professionally, because they don’t use their “real” name.

    Honestly, you know what would be a better model? 4chan with aggressive and effective moderators. Completely pseudonymous or anonymous, might allow sockpuppets, but it also protects everyone involved and limits the effectiveness of cyberbullying behaviors. Ello, for example, was half baked and incomplete at launch, but that’s what an ideal social media presence really should look like.

  29. John Morales says

    Well then, Brian. Presumably, you also accept that there is a distinction between the near-real-time policing of activity and the vetting of users.

    Note too that FB’s fundamental and primary objective is to profit, not to provide a civil space.

    Not evil, but not saintly, either.

    It mainly profits by on-selling information gathered from its userbase* and by selling access to users’ screens, so they have incentive to increase (or at least maintain) their userbase and its degree of engagement with it. And it’s doing damn well at it, so far.

    (Or: it depends on one’s criteria how satisfactory is its policing of its own standards)

    ** Often value-added: they have pretty datamining and pattern-detecting tools, and the lode is quite a few years old by now.

  30. snuffcurry says

    @taraskan 24

    Twitter is a cesspool, and it would continue to be a cesspool even if its user base were entirely made up of middle-aged urbanite Canadians. The problem is there is nothing worth saying in only 140 characters or less – nothing. “Big think” is beyond its post capacity. By intention the only thing you can use it for is bullshit smalltalk, small-business advertisements, derailing links, and big dumb blanket statements about anything newsworthy, all according to kindergarten playground etiquette.

    That’s a really long, hyperbolic way of saying you’re the kind of person for whom Black Lives Matter, for example, means nothing and has achieved nothing (possibly because you haven’t personally benefited from it or are leading a comfortable, insular existence which metaphorically blinds you to its purpose and utility). On-the-ground journalism and citizen journalism, in real time, is enormously important in the fight to maintain and expand civil rights throughout the world, and Twitter is one of its many tools (accessible to many; easy and free to use; encourages safe anonymity; allows for video and images; can serve as an historical reference or potential evidence in the investigation of a crime; a means of immediate communication of often vital information across vast distances to millions of people without much censorship; a means of achieving a virtual kind of unfettered assembly where assembly in person is denied or not feasible; and so on).

    Your lack of imagination and experience doesn’t make something a “cesspool,” for fuck’s sake.

  31. davidnangle says

    karpad, I came up with the idea literally as I typed it.

    I think the key to preventing cliquish abuse is a limit on your ability to vote. Lifetime cap or a set number per week, either of which could be increased or reduced based on your own performance…

    Perhaps a ban would remove all your past votes.

    Infinite possibilities. It would take some experimentation.

  32. says

    starfleetdude

    I live in this universe, where you can find this statement of policy on hate speech from Facebook:

    Obviously you believe anything someone says on their official website. I mean, you believe whatever shit Dakota Access posts, so I’m not surprised.
    It’s of course entirely possible that you also believe that threatening to chop up someone’s face is some sort of legitimate free speech, satire or social commentary….

  33. Dark Jaguar says

    300 million users… Maybe the dark side is stronger, because no matter what policy changes Twitter makes, how can they ever even HOPE to actually regulate all that? There’s no way their department could actually keep up with that flood, even if they quadrupled the employees in whatever department manages user complaints.

    And if you think algorithms could solve this, there’s no way. Chat bots can’t possibly understand context, and basic “key word” filters are easily defeated (sometimes by accident, thanks to the general lack of care for correct spelling online).

    The best we can hope for is that anyone who posts hate speech is “at risk”, just as soon as someone in charge gets around to noticing that particular example among the flood they’ve already dealt with that day. I suppose that’s something, just as soon as their policy shifts to reflect that.

    Scale really does change everything… including notions of morality and responsibility… I’m sad now…

  34. says

    @40, Dark Jaguar

    I think there is a way, and it involves division of labor. Forums often have moderators, and OkCupid already lets users (like me) review items that were reported as violating rules. So it’s doable.

    That cuts down the work for the employees.

    I think it can even be cut down further by taking the idea further. I think it can be given over almost entirely to users, rather than employees (algorithms can help, but wouldn’t need to be relied upon).

    The “block bots” for twitter are a similar idea.

    Also, an easy starting point is to have everyone “blocked”. Then users can un-block individuals and groups (or they could unblock everyone if they really want to).
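    A minimal sketch of that blocked-by-default idea. None of this is how any real site works; the two opt-in lists (a reader’s “unblock” list and an author’s “can view me” list) are modeled here as plain dictionaries with invented names:

```python
# Hypothetical model of "blocked by default": a post is visible to a reader
# only if the reader has unblocked the author AND the author has approved
# the reader. Both lists start empty, so everyone starts out mutually blocked.

unblocked = {             # reader -> authors the reader has opted in to see
    "alice": {"bob"},
    "bob": {"alice", "carol"},
}
can_view_me = {           # author -> readers the author has approved
    "alice": {"bob"},
    "bob": {"alice"},
    "carol": {"bob"},
}

def visible(reader: str, author: str) -> bool:
    """True only when both sides have opted in (the default is blocked)."""
    return (author in unblocked.get(reader, set())
            and reader in can_view_me.get(author, set()))
```

    Requiring both sides to opt in is what makes the "friends unblock each other" step work: adding a name to your own list does nothing until the other person reciprocates.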

  35. Dark Jaguar says

    The notion of having moderation be handed to users is interesting, but that particular implementation might have some flaws that would need to be addressed.

    For starters, how do minority groups protect themselves from a majority consensus that, depending on the season, swings against them? If everyone is moderating everyone else, majority rule is inevitable.

    As for everyone starting with “blocked” status, I’m unsure exactly how someone manages to get themselves unblocked at that point, since no one’s ever going to see them.

    There’s also the potential for abuse in such a system. We’re already seeing it with the Youtube Heroes program, but if everyone can report everyone else, a few bad apples can spoil it for everyone else. I suppose it could take the form of a “like/dislike” voting system, where passing a certain threshold of user complaints results in an automatic ban, but then there’s issues with bandwagon effects (like Anita, say, being banned thanks purely to large numbers of haters reporting her).

    Also, it seems people would be more likely to get banned than otherwise, because the natural tendency of people is to say nothing when things are going fine and only speak up when something’s wrong (or when someone else is complaining that something is wrong, it seems). In that state, unless someone’s fairly well liked, the natural tendency of the system will be a drift towards banishment. It might also be a good way for oppressive regimes to snuff out all the little fires of resistance before they can swell up into an inferno.

    It’s good that we’re coming up with ideas, but it pays to think about the ways those ideas might go wrong. I think the overall idea, that of community selected moderators, is a good one. However, I think there’s got to be a good system in place to select those moderators to prevent abuse.

  36. says

    @42, Dark Jaguar

    For starters, how do minority groups protect themselves from a majority consensus that, depending on the season, swings against them? If everyone is moderating everyone else, majority rule is inevitable.

    I think actually banning people completely from a website could be rare, a last resort. But even then, majority rule isn’t inevitable. Instead, like with most websites already, it’s the owner of the website who rules. The owner of a website chooses who to trust with the task of banning people.

    My idea is that those people could then choose to trust others with the task, thus making the work lighter for themselves. And so on. Any mistakes or abuses could be investigated, and (as is already the case on websites) the will of the owner, not the majority, will ultimately rule.

    As for everyone starting with “blocked” status, I’m unsure exactly how someone manages to get themselves unblocked at that point, since no one’s ever going to see them.

    Ya…but I think there’s a way. You could add the user name of a friend to your “unblock” list. They’d still be invisible (depending on their own settings that determine who is allowed to see them) until they add your name to their “can view me” list. Easy enough between friends.

    From there, I’ve got ideas to make this task (of unblocking/blocking people etc.) easier too, using a method similar to the one I described for banning. The user could access the lists that other people have made available. So if you trust someone else to have a good list, you can adopt it (either entirely, or in any selective way you wish). This is similar to the block bot things on twitter. This can scale up easily enough, I think, as people can aggregate the lists of people they trust, and other people can aggregate the aggregations that they trust, and so on. And any of these list creators can modify their list however they wish, either with Venn-diagram-type tools for easily comparing a whole list to another, or by selecting individuals.
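    That list-adoption step can be sketched in a few lines. This only illustrates the aggregation idea; the curator names and helper functions are invented, not any real service’s API:

```python
# Sketch of shareable block lists, in the spirit of Twitter's third-party
# "block bots". A user can adopt a curator's list wholesale or selectively,
# and curators can themselves aggregate the lists of curators they trust.

published_lists = {
    "curator_a": {"troll1", "troll2"},
    "curator_b": {"troll2", "troll3"},
}

def aggregate(trusted: list) -> set:
    """Union of every trusted curator's published block list."""
    out = set()
    for name in trusted:
        out |= published_lists.get(name, set())
    return out

def adopt(my_blocks: set, shared: set, exceptions: set = frozenset()) -> set:
    """Adopt a shared list, minus any accounts I choose to keep unblocked."""
    return my_blocks | (shared - exceptions)
```

    Because an aggregated list is itself just a set, it can be republished and aggregated again, which is the scaling property described above.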

  37. Dark Jaguar says

    You’re suggesting everyone is cut off until added to another’s “circle”? I think that might undercut the entire purpose of Twitter, ultimately, since Twitter’s greatest success at the moment comes from everyone seeing everyone else all the time. Such a change would make cops killing minorities “disappear” again, I’m afraid. I mean, at least on Twitter. Youtube would still provide that monthly dose of depression. I also think that, currently, far too many people online tend to cloister themselves away from any group that doesn’t share their point of view, resulting in the formation of an “alt right” not one of us outside that group even knew was forming. This could be one of those “risk/reward” tradeoff situations…

    As for moderation, that’s a fair point, but it’s also why I think companies like Facebook and Twitter are ultimately flawed. It would be better if social media was an online standard rather than something single companies owned. But, that’s neither here nor there at the moment. Fundamentally, I think scale really will prove to be a major hurdle. Scale changes everything, and one of those things is how well a moderator can handle infractions. Ultimately, they’ll end up “sticking to a script” to control for renegade agents acting unilaterally.

    Still, if implemented well, it’s a start. Just remember that there won’t just be “one person” at the top. Most major companies just don’t work like that any more.