An argument from false authority

Our friend Tom at Dubito Ergo Sum has an excellent, thorough post about Tim Farley’s objections to the block bot.

He too quotes the list of credentials and then comments on it.

This would make for a great game of spot the fallacy, wouldn’t it? Farley lists all these qualifications, but none of them are “noted anti-spam crusader” or “longtime anti-bigotry activist,” not that those would be excuses either. See, none of these qualifications are inconsistent with “abusive […] anti-feminists, MRAs, or all-round assholes” or “annoying and irritating”3. It’s possible to be an Emmy and Golden Globe award-winning comedian and also be an annoying asshole who delights in baiting feminists with disingenuous arguments, just as it’s possible to be a Ph.D. biochemist who believes in intelligent design. This is a pro hominem argument, an argument from false authority, that these people’s lofty credentials make them somehow incapable of being bigots, jerks, trolls, abusers, or just antagonistic assholes to specific groups of people.

The last paragraph there is a doozy of arguments from ignorance and unstated major premises. “I see little evidence” is very different from “there is no evidence,” and the mechanics of Twitter mean that offensive tweets are often lost to the depths of a person’s timeline after a relatively short amount of time. But there’s plenty of evidence that prominent skeptics are capable of being petty, antagonistic, obtuse, bigoted (both in overt and unintended/unconscious ways), and asshole-ish. Some skeptics love poking various hornets’ nests, some love directing snide comments and thinly-veiled insults at people/groups they disagree with on social media, some keep dredging up sexist/racist/homophobic arguments and tropes time and time again even after hearing repeated responses/debunkings, some hyperbolically respond to the slightest criticisms with howls of NaziCommieStasi witch-hunt inquisitions. Farley’s right, they’re probably not going to be arrested anytime soon, but that’s because being an annoying, antagonistic asshole isn’t a crime.

The unstated major premises here are that “only anonymous trolls (and certainly not people I consider friends) behave in ways that would merit mass blocking,” which I dealt with above, and “only behavior that is illegal merits mass blocking,” which is the usual response to those complaining about harassment: if it’s not illegal, it’s not really harassment; if it was real harassment, why didn’t you call the police? I’ve responded to this notion, so has Stephanie Zvan, and the fact that Farley is able to spout off with it in such a casual manner shows just how insulated from this stuff he really is.

Yes. Very well said. Read the whole thing including comments, some of which are from Farley.




  1. leni says


    I think you might be *that* Tom but I’m too lazy to hit the back button to find out for sure 😉

    I think there is a kernel of truth in the broader point about insulating yourself from criticism or challenge. I’m thinking ultra-conservative Christian homeschooler types. But I think Tom, and probably several hundred other people several hundred times each, have answered that.

    Aside from the fact that there is no moral imperative to listen to other people’s bullshit if you don’t want to, there is also no bubble, at least that I’m aware of, where feminists are free of criticism. Just, you know, walk outside. Go to work. Go about your daily business. Go online. It’s there waiting patiently for you even when you don’t expect it.

    So I don’t need some jackass on Twitter to tell me what I already know and experience daily. In any case, Block Bot is about the last thing I’d turn to if I really wanted to create a bubble. If I wanted an effective lady bubble, I’d live in a feminist commune totally off the grid and I’d probably own a lot of guns. That’s never, ever going to happen. Guns and communes make me nervous. Maybe if there are zombies, though, maybe.

    Anyway, I enjoyed the fisking even if Farley was too lazy to read it.

  2. says

    So apparently Richard Sanderson is on Tim Farley’s list of people who can’t possibly be annoying and abusive and trollish? Am I reading that correctly?

  3. Maureen Brian says

    Thanks, Ophelia, for sending me back to read the comments – saw the OP yesterday.

    According to Tim Farley, the world is perfectly well explained by three arguments:

    * if it hasn’t happened to him then it hasn’t happened to anybody

    * lesser mortals, by which he means all the non-Tims, don’t get to choose with whom to converse

    * if someone has reached any quantifiable level of eminence, visibility even, then he or she can’t possibly also be either an arsehole or a bore.

    Right. So I won’t be looking to Tim as a beacon of human progress. Wasn’t Asimov a serial groper and Koestler a rapist? Come to that, wasn’t Caravaggio a murderer? And isn’t there an ex-Mayor in San Diego complaining that the city did not train him in non-abusive behaviour, even though he already had to scuttle away from Congress for the very same reasons?

    I rest my case.

  4. HappyNat says

    It goes back to the Authoritarian mindset, doesn’t it? These are “Very Important Skeptics” and have done good work in the past, so it’s an insult to block them. People can do a good job skewering some aspect of Woo or Religion and I can appreciate that, but if they say something dumb and insensitive later it’s not my fault if I tune them out. I’ve enjoyed Dawkins, Shermer, Harris, Maher, etc. in the past; however, they have shown me they don’t “get it,” so I’ve moved on. That’s my right, I shouldn’t be forced to still respect or listen to them.

  5. Al Dente says

    Wasn’t Asimov a serial groper

    Asimov’s first wife divorced him for adultery. Apparently he had a series of one-night stands when he was a professor in Boston. He considered himself a feminist but that didn’t stop him from pinching the ass of any woman within reach. Gene Roddenberry had a similar reputation. Harlan Ellison is notorious for his antics. Ellison grabbing Connie Willis’s breast at the 2006 Worldcon was just the most famous example of his disrespect for women.

  6. Beth says

    The Ideology of Anonymity and Pseudonymity by Malcolm Collins:

    I’d be interested to hear your opinion of the above article. In particular, what’s your take on this paragraph:

    Not only can trolling be useful in exposing hypocrisy or concealed extremism, but it can act as a barrier to participation in a debate by individuals not intelligent enough to discern real points from those made by trolls, individuals who are overly invested in the topic (which can prevent an individual from considering other points of view), and individuals whose sense of entitlement prevents them from participating in a discussion that involves teasing or vulgar, childish language. Despite the disastrous effects of purely malicious trolling, tactfull trolling has proven itself to be invaluable when it comes to preventing outsiders from influencing online political ideology.

  7. says

    [gestures wildly]

    Big name famous people can be horrible! People who are famous for good reasons, people who have done and continue to do great things, can also be horrible! Our lives are of a mingled yarn, people – doing great things doesn’t magically make anyone incapable of doing horrible things in addition. It provides some motivation not to, but at the same time, it also provides some motivation to – feelings of entitlement or immunity or grandiosity or all those.

    Shelley! Horrible to his first wife. Byron! Horrible to many many people and especially to women. Jefferson! Owned slaves. Owned his own children as slaves.


  8. says

    I’ve been really impressed today with how quickly the usual suspects have gone from “How dare they accuse innocent people of being annoying, without evidence!” to “How dare Stephanie be so thorough and obsessive in compiling evidence that one of those innocent people was annoying!”

  9. says

    Great post, and great responses in the comments.

    Apparently I must spend an equal amount of time on every aspect of the argument in order for my response to be valid, rather than addressing one section that’s completely fallacious in its attempt to make the point that the bot isn’t just inadequate, but harmful.

    Strange, that. In any case, someone should, because there are several weird aspects of the argument.

    As a preliminary, I’ll note that while Farley describes the “harassment problem” generally, he offers nothing useful himself in this context. I’ll also note that I’m not on Twitter, never have been, and have no intention to be.

    His issues seem to boil down to: potential harms to those opting in to the BB, potential harms to those learning about or considering the use of the BB, potential technical difficulties, and potential harms to those blocked. They’re confused and unconvincing.

    Strong technical measures like this demand strong procedures around them, to guard against abuse.

    He doesn’t really spell out what he means by “abuse” here, or what makes this a “strong technical measure,” or what that would mean. We’re talking about a voluntary method of blocking people from your personal Twitter feed. “Abuse” seems an odd term. If you want the term to have value, you have to describe the meaningful harms of having some group of people decide to block you from their Twitter feeds. Until you can, the discussion of abuse, transparency, audit trails, and so on is misplaced.

    The core problem here is this tool was developed for specific needs of a very specific community (namely, those who identify with “Atheism+”). Therefore the operators of the bot assume knowledge or attitudes on behalf of the user base that may not be held by the average Twitter user. Essentially, if you are good friends with Billingham and agree with him on most issues, the bot may well operate exactly the way you expect it to. This specificity of the bot to a particular community is completely glossed over in the BBC TV report.

    Since the user base is people who’ve chosen to opt in to the A+ bot, I don’t see this as a problem. It’s obviously not a problem that it was developed for a specific community – that’s the point, in fact. It serves the needs of people in that community, and offers a model for other communities. (It’s not a problem that those other communities are a diverse group, either.)

    If the argument is that the “average Twitter user” might jump to sign up without realizing that it was developed for a specific community, I don’t think it holds up. It’s clearly labeled and its specificity, including terms Farley himself cites, would suggest to most intelligent people that it isn’t supposed to cover all of Twitter.

    If he thinks it’s being promoted as a more general tool in the media, then his issue is with them.

    For the operating bot, I could find no documented list of authorized blockers on the site.

    So what? If you don’t know or don’t trust the authorized blockers, don’t use the bot. If you start using a bot and then think the blockers aren’t making the right choices, then opt out. The decision affects only people’s personal Twitter feeds. They can put their trust in whoever the hell they want.

    And remember, because of the way the Twitter API and services like this work, when they make a decision to block someone, the actual block is happening using your account credentials – pretty much exactly as if you had pressed the button yourself. Suppose the operators of The Block Bot were to select a series of accounts to block that looked suspicious to Twitter HQ? Twitter might take action to suspend the application (i.e. turn off the block bot), or they might take action against you, since it was your account that did the blocking.

    OK, part of this comes under potential harm to users. He seems to be suggesting that blocking a set of people might be seen by Twitter as “suspicious,” even possibly leading to a suspension of the accounts of those opting in. Is there any evidence to suggest that this is plausible? I would be surprised. But even if there is, that would be totally unacceptable on the part of Twitter and something that should be fought, not trotted out as a scare tactic to discourage people from controlling their Twitter accounts.

    The other part appears to be suggesting that Twitter could turn off the bot for some reason. Again, this would reflect a serious problem with Twitter if it were to happen. Twitter should encourage the strongest possible control of people over their own interactions.

    The Block Bot has very little in the way of an audit trail. Nothing is recorded in the back-end database to indicate who told it to do something, when or why. When commands are sent via Twitter, that could leave some evidence scraps behind. But it can’t be totally trusted. For instance, a user could send a command to the bot, wait for it to be acted upon, then delete the tweet.

    The lack of auditing means that someone could end up on the block list and there might be no good way to figure out why they were put there.

    My response is: So?

    So the person affected would have no way to know about the mistake so they could call it to anyone’s attention.

    So what? You’re being blocked from some people’s personal Twitter feeds. How does that harm you?

    As a result, even if an effective appeal procedure is put in place, it can’t undo some of the damage done by the bot.

    What damage? What’s this nonsense about an appeal procedure? It’s not a trial; it’s some people opting in to a bot that blocks some other people from their Twitter feeds.

    Like the previous issue with who gets blocked, this text is entirely unclear to me. Who defines “deeply unpleasant” or “tedious and obnoxious”?

    The people doing the blocking, and the people choosing to use the bot. If you determine many of the people blocked to be unpleasant, tedious, or obnoxious, then you can opt in to that level of blocking. If not, not. (The concern about not getting retweets that open new intellectual vistas is touching. And by touching I mean hilarious.)
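    The mechanics described here and in Farley’s quote above — subscribers opting in to a level of blocking, and the bot then issuing blocks under each subscriber’s own credentials — can be sketched roughly as follows. This is a hypothetical simulation for illustration only, not the Block Bot’s actual code; the `Subscriber` class, the level numbers, and the account names are all invented for the example (in the real service, the `block` step would be a call to Twitter’s block endpoint authorized with that subscriber’s own OAuth token, which is why the block appears to come from their account).

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Subscriber:
        handle: str
        level: int                      # highest block level this user opted in to
        blocked: set = field(default_factory=set)

        def block(self, target: str) -> None:
            # Stand-in for an API call made with this subscriber's own
            # credentials -- the defining feature Farley is describing.
            self.blocked.add(target)

    # Listed accounts, each tagged with a level
    # (1 = worst offenders, 3 = merely tedious). Names are invented.
    block_list = {"spammer": 1, "abuser": 1, "tedious_guy": 3}

    def apply_block_list(subscribers, block_list):
        """Apply each listed account to every subscriber whose opt-in level covers it."""
        for sub in subscribers:
            for target, level in block_list.items():
                if level <= sub.level:   # subscriber opted in to this level
                    sub.block(target)

    alice = Subscriber("alice", level=1)   # only wants the worst blocked
    bob = Subscriber("bob", level=3)       # opted in to all levels

    apply_block_list([alice, bob], block_list)
    print(sorted(alice.blocked))   # ['abuser', 'spammer']
    print(sorted(bob.blocked))     # ['abuser', 'spammer', 'tedious_guy']
    ```

    The point of the sketch is the one made in the surrounding comments: each subscriber’s feed is affected only according to the level they themselves chose, and opting out simply stops further applications of the list.
    
    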

    This is a bit clearer, but still quite confusing. It uses a number of terms that are not defined here (such as “Deep Rifts”, D0X, MRA and frozen peach) and scare quotes on some terms to further confuse the matter. (Yes, I’m fully aware of what those terms mean, but is everyone?). Again here the tool suffers from having been coded for a specific community for whom this text probably makes more sense.

    It doesn’t “suffer from” that. That’s who it exists for.

    But I feel a general Twitter user will be confused here and probably make the wrong choice. I know I am not familiar with the norms of Atheism+, and I can’t fully interpret the above text. For instance, what do they mean by parody accounts?

    The suggestion appears to be that a general Twitter user is quite unthinking. But again, so what? So that impulsive, reckless Twitter user who opts into a bot without understanding what it’s about will have some people – with whom they’re not especially likely to have contact in any case – blocked from their feed. Big deal.

    Blocking and reporting for spam on Twitter absolutely have consequences for the reported account including potential suspension.

    I don’t like the joining of “blocking and reporting” here. Again, I’m not particularly familiar with Twitter, but is there good evidence that one or several people simply blocking – but not reporting – someone either raises flags or results in suspensions from Twitter? The impression I get is that even reporting people for real abuse often doesn’t result in suspension of the abusers, not that Twitter is trigger-happy when it comes to suspending people.

    But even if this were a real danger, it would be a major problem with Twitter that would need to be addressed. Blocking alone shouldn’t be cause for suspicion of the blocked person any more than it should be a cause for suspicion of the blocker.

    I cannot recommend this online tool for anyone who is not already very closely allied with the Atheism+ community

    Since it’s an online tool explicitly for that community and those sharing its goals, this is a goofy, pointless recommendation.

    One of the reasons I’ve not written about this bot until now is that I’ve long been expecting Twitter to cut it off as a violation of their automation policies (specifically: mass unfollowing). It remains to be seen if the media attention causes Twitter to take action.

    Amusingly disingenuous, since you’ve criticized the bot extensively (and fallaciously and inanely) – “I don’t like your bot, but I’m terribly concerned Twitter’s going to cut it off!” (Reminiscent of “I’m not a feminist, but you feminists are alienating potential allies!”) In any case, you state at least twice in your post that you support people’s individual and collective right to block people from their feeds, so given that principled position you should oppose this decision should Twitter make it.

  10. says

    So the person affected would have no way to know about the mistake so they could call it to anyone’s attention.

    This business about “mistakes” and “appeals” is fascinating. Like it’s some cosmic verdict of assholery condemning you in every aspect of your being for all time.

  11. says


    As a preliminary, I’ll note that while Farley describes the “harassment problem” generally, he offers nothing useful himself in this context.

    From his comment on my post, it seems like he thinks his dealings with Dennis Markuze give him some expertise on the kinds of harassment that we’re talking about from the anti-feminist/MRA/TERF types.

    If he thinks it’s being promoted as a more general tool in the media, then his issue is with them.

    That’s just it, and it’s why his whole article is so bizarre. Farley saw a commercial for a toaster and wrote 4,300 words complaining about how it wouldn’t satisfy all of his cooking needs. He thinks (1, 2) that because Oolon talked about the Block Bot on TV, it means he meant that the existing Block Bot would be the perfect solution for all of Twitter’s harassment problems. This despite it still being clearly labeled and shown as “The Atheism+ Block Bot,” despite the fact that what Oolon actually promoted in the interview was the ability to share block lists, and despite the fact that the accompanying article refers to the “‘shared block list’ strategy.”

    To return to the metaphor, what Oolon (and the article, and the interview) are actually promoting is the concept of using heat to cook edible things; the toaster is one specific tool that operates on that principle for a specific subset of edible things. The Block Bot is a model of a strategy that could be adapted by other groups for their specific needs/desires, or even by Twitter as a whole. By assuming that Oolon and Mason instead meant the absurd notion that a handful of people should be responsible for moderating all of Twitter, I think Farley was violating that same principle of charity which he thought relevant enough to link to earlier.

    So Farley’s whole post rebuts an argument no one made, makes a point no one really disagrees with (“the Atheism+ Block Bot isn’t for everyone”), and digresses into fallacyville in order to make his dubious claim that it’s not just inadequate but harmful. And next week, we get to look forward to the follow-up.

    The impression I get is that even reporting people for real abuse often doesn’t result in suspension of the abusers, not that Twitter is trigger-happy when it comes to suspending people.

    Yeah, this notion that Twitter would monitor accounts that get blocked too much but not reported and shut down the block bot or the blocking accounts seems to fly in the face of Twitter’s laissez-faire policy toward even explicit rape threats.

  12. sailor1031 says

    Like all other incoming communications it’s my decision whether or not to accept tweets. And I frankly don’t care about the validity or otherwise of your views. It is not my function to be a sounding board for you, unless I operate a blog with a liberal comment policy and just happen to permit it.

    Just because you send me something does not obligate me to read it and respond. So I ignore a lot of incoming phone messages; I delete a lot of email unread; my cellphone is off a lot of the time; I recycle a lot of mail unread; why, just because you tweet at me, are you different and deserving of my attention? If there were a USPS Bot I could set to not deliver unwanted mail, I’d use it. I ruthlessly filter email according to sender.

    My communications are for my convenience. Why is this so difficult to understand? BTW I don’t know Tim Farley or care about Tim Farley or ever expect to get a message from him but he’s on my list now anyway.

  13. says

    I’ve been reading the Twitter rules, which is leading me to question some of Farley’s “concerns” even more:

    One of the reasons I’ve not written about this bot until now is that I’ve long been expecting Twitter to cut it off as a violation of their automation policies (specifically: mass unfollowing). It remains to be seen if the media attention causes Twitter to take action.

    The policies on mass following/unfollowing are based on the recognition of how mass/automated following or unfollowing is used and affects people’s experience. The problems with aggressive following and follow churn derive from the nature of following on Twitter. I don’t see how mass/automated blocking is comparable, even potentially. It could be because of my limited knowledge of and experience with Twitter that I’m not seeing this as a form of automation with similar effects or that could be abused in similar ways. It’s entirely possible that I’m failing to recognize some potential for abuse. Perhaps Farley could explain the rationale?

    Blocking and reporting for spam on Twitter absolutely have consequences for the reported account including potential suspension.

    It appears that having several people block you is one of many signs they use to identify someone as a spammer (they provide a list of about 20 criteria, and that’s not everything). So it doesn’t look like having several people block you – especially if they’re also blocking several other people – would itself be a realistic cause of suspension absent other factors. And as I said, if it would, that would be a huge problem with Twitter. Reporting for spam or another violation would of course likely have consequences for the reported account – that’s why people do it. It seems strange that Farley would discuss them together like this unless he’s trying to make people think the bot has consequences it doesn’t.

  14. says

    When I first saw that several writers were commenting on Farley’s piece, I dialed up his piece to read first before reading the commentary.

    As I made my way through, it seemed to me a reasonable technical critique from a security and development perspective. After all, the initial report on the BBC rather did gloss over the niche application of The Block Bot and gave the impression that it might be something the average user could fire up. Given that context I would expect someone involved in application development to comment on it the way Farley did. Sure, while reading that I could see where objections could be had that he was missing the point, but I felt technical commentary was fair game.

    Then I got to his point 5. Jeezum crow, did he go off the rails. If Farley had an editor and I was that editor and I couldn’t convince him not to write that part of the post, I’d at least have endeavoured to convince him to cut that material and use it as the basis for a second post. The reaction to the second post would still have overshadowed the first, but at least the technical dissection could stand on its own.

    If your point is technical commentary, don’t give yourself a coup de grace by stabbing off in an almost entirely non-technical direction. About as welcome as a turd in a punchbowl.

  15. says

    Tom Foss,

    I just saw your last comment.

    That’s just it, and it’s why his whole article is so bizarre. Farley saw a commercial for a toaster and wrote 4,300 words complaining about how it wouldn’t satisfy all of his cooking needs.

    It is bizarre. (I liked oolon’s response to one of those tweets: “Its not called oolons big block bot of blocking everything imaginable on twitter.”) Farley really appears to think that talking about or promoting it on national television necessarily means portraying it as some sort of global solution, which of course it doesn’t at all. People talk about local measures – from technical to social and political – to deal with broader problems all the time on national TV. In many cases, including this one, the people talking about the measure hope and expect that other communities might want to adopt the model and adapt it to their particular needs and circumstances. I’ve been reading recently about how doctors in New York City will be able to prescribe fruits and vegetables, to be purchased at local farmers’ markets. I don’t think anyone promoting the idea thinks that it will solve all health problems or all problems related to the Standard American Diet, that the specific program will be adopted in its exact NYC form by every other city regardless of differences, or that offering or promoting the model nationally means that any city that chooses to adopt it should send people to New York markets to buy their produce.

  16. says


    People talk about local measures – from technical to social and political – to deal with broader problems all the time on national TV.

    Another relevant comparison is the matter of convention anti-harassment policies. Various anti-harassment policies have been floated as solutions that could be adapted by different groups (the Geek Feminism Wiki one, for instance); Farley’s complaints read like someone saw that promoted and said “Yes, but it’s a clear weakness that the con is never named, except for calling it ‘$CONFERENCE,’ and no specific information is given,” as though someone were just going to copy-paste the whole thing to a website somewhere without alteration.

    More realistically, it’s like the goons who complained about the “Booth staff (including volunteers) should not use sexualized clothing/uniforms/costumes, or otherwise create a sexualized environment” clause as meaning a draconian dress code, rather than a prohibition against booth babes that A) could easily be excised/altered and B) wasn’t relevant to most of the skeptical conventions we’re talking about anyway.

    I think HappyNat’s right, that it comes down to that authoritarian mindset. “Here is a set of rules handed down from some authority. They are to be followed to the letter. They cannot be altered. They are not up to interpretation. Context is irrelevant.” We saw it when the ‘Pitters were going all tattletale on people using their phones to text at WISCFI (I think?), when the rules clearly stated phones should be turned off, ignoring the actual context/reasoning behind the rule (e.g., that it’s rude when a phone makes noise during a speech).

    I would think that a bunch of skeptics/atheists would understand that good rules aren’t arbitrary, but exist for practical reasons, and that if reasons and contexts differ from location to location, the rules should as well. But then they wouldn’t be able to play this rules-lawyering gotcha game, and that’s basically the only way they can feel any sense of victory at this point.

    Ophelia: It’s okay, it’s a consequence of my wanton, overlong fisking :).

  17. says

    I think HappyNat’s right, that it comes down to that authoritarian mindset. “Here is a set of rules handed down from some authority. They are to be followed to the letter. They cannot be altered. They are not up to interpretation. Context is irrelevant.”

    Yes, that’s an interesting idea. I had two readings of Farley’s comments about what he calls technical problems with the bot. Thinking about it more, they’re complementary. The first was that he doesn’t like the BB because the list of blocked accounts includes some people he likes or respects and doesn’t want to see there, so he’s kind of passive-aggressively trying to make a case for why Twitter should shut it down. His expressions of concern about the suspension of users’ accounts and his stated expectations that Twitter would shut it down, then, look like a subtle way of saying “Psst, Twitter! Did you know this is what these people are doing? Don’t you have a problem with that?” (Granted, this is a suspicious and uncharitable reading.)

    At the same time, given that he says a couple of times that he supports people’s right to block whoever they want, his presenting the possibility of Twitter’s suspending users or shutting it down as a technical issue with the BB itself perplexed me, as I mentioned above. It was like he had no problem with Twitter (hypothetically) setting arbitrary rules with no practical rationale, including rules that interfere with people’s control over their own interactions. He doesn’t provide any reasoning for how this would constitute a violation (which could conceivably exist, but he doesn’t think it necessary to even mention) and he doesn’t suggest that he would have any problem with them disallowing the BB without making a case for why. He doesn’t ask: “If the people at Twitter did act that way, would it be justified? Should I support or oppose it?” Like Twitter’s rules just form part of the natural technological ecosystem rather than resulting from human decisions.

    When he suggests that it could be a violation of their existing policy, he doesn’t stop to question how this would work – how the BB is comparable in action, in its effects and potential for abuse, to mass following/unfollowing. This could be read either way: he’s reaching to find a reason for Twitter to stop this BB because he doesn’t like its specific content, or he doesn’t recognize rules as political and doesn’t care if they’re arbitrary and don’t serve real people’s needs. I suppose both of these could reflect an authoritarian sensibility….

