Argument Clinic: Supplemental


Charles Stross’ book Accelerando has a large number of absolutely brilliant projections of the future of trolling.

Is this the right room for an argument?

This posting will not contain spoilers, so don’t worry. There are weaponizable ideas in this posting, and I am not encouraging you to act on them, because not all things that can be done are worth doing.

Perhaps the most ruthless response ever against a sealion was by a certain information security practitioner who was being annoyed by a Facebook commenter who simply would not go away. Suddenly, the Facebook commenter’s profile was constantly being bombarded by criticisms from about 20 different people in various places. The criticisms were abrupt, rude, abusive, badly written, and not particularly coherent. Everything the target did was badly but thoroughly swamped. Worse yet, when the target stopped posting anything, the attackers would show up on existing comment-threads and keep hammering incoherently away. After a few weeks of this, the storm of complaints began appearing with a subtext reading (“this comment was paid for by xxxxx via Amazon Mechanical Turk”). It was the first instance of a Meat Cloud attack, and it cost surprisingly little. When the target invested hours in getting someone at Amazon to decide that trolling jobs were ‘offensive’, the meat cloud shifted to another tasks-for-hire site.

Any of you who are entrepreneurial nihilists: there is a market out there for “trolling as a cloud service.” That will inevitably lead to “troll filtering as a cloud service.”

Over at Charles Stross’ blog, there is a thread by Hugh Hancock (vibrant with weirdness from Charles’ amazing commentariat!) [stross] on “The Rise of the Trollbot”:

In “Accelerando”, Charlie posited the idea of a swarm of legal robots, creating a neverending stream of companies which exchange ownership so fast they can’t be tracked.

It’s rather clear to me that the same thing is about to happen to social media. And possibly politics.

I will note that Hugh’s posting is from April 2016 – well before certain things happened that brought trollbots into the frame of politics. It is the job of science fiction writers to hypothesize things that might happen. Now, we have a troll POTUS. Or, perhaps we have a neural network that has been trained to emulate a troll POTUS: the beauty of statistical machine learning models is that the garbage they output is statistically similar to the garbage you input, only slightly garbagier.

Back in the day, Rob Pike wrote two Markov chain drivers and trained them to respond to the “zumabot” – Serdar Argic. [wikipedia] Argic was fixated on Turkey, and could be relied upon to produce massive frothy postings whenever anyone said anything about that country. Pike’s robo-poster, Mark V. Shaney, [wikipedia] was not a whole lot more coherent, and was just as enthusiastic. Pike trained another data-set to produce “Bimmler,” a right-wing trollbot. My personal favorite bit of Shaney’s writing was the FAQ that Shaney wrote for Bell Labs’ Plan 9 operating system: [bell]

 

  • Question: What is Plan 9?
  • Answer: Plan 9 is a new user interface in which any word on the screen can be imported from another machine.
  • Answer: Plan 9 is a programmable debugger that understands multiple-process programs, and except at its own console, it doesn’t run as an exercise in understanding the principles and mechanisms useful in designing operating systems, and not as a product as such. In this way it is analogous to the Unix operating system. In the most general configuration, it uses three kinds of networks, including Ethernet, Datakit, specially-built fiber networks, ordinary modem connections, and ISDN. In Plan 9, each network presents itself as a product as such.
  • Question: What GUIs does it use?
  • Answer: The standard interface doesn’t use icons; Plan 9 people tend to be local files.

 

That ought to answer the question “how did the republicans produce their budget?” in case anyone was wondering. I think Pike should not have given them his code.
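Shaney-style text generation is almost embarrassingly simple: record which words tend to follow which in a corpus, then take a random walk through those statistics. Here is a minimal sketch in Python – the order-2 window and the training-corpus filename are my assumptions for illustration, not Pike’s actual code:

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        """Record, for every run of `order` consecutive words, the words that follow it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
        return chain

    def babble(chain, length=50):
        """Random-walk the chain: output statistically similar to the input, only garbagier."""
        key = random.choice(list(chain.keys()))
        output = list(key)
        for _ in range(length):
            followers = chain.get(key)
            if not followers:                       # dead end: jump to a random spot in the corpus
                key = random.choice(list(chain.keys()))
                followers = chain[key]
            word = random.choice(followers)
            output.append(word)
            key = tuple(output[-len(key):])         # slide the window forward
        return " ".join(output)

    if __name__ == "__main__":
        # "usenet_postings.txt" is a hypothetical training corpus, not anything Pike shipped.
        corpus = open("usenet_postings.txt").read()
        print(babble(build_chain(corpus)))

That is the whole trick: no grammar, no model of meaning, just frequency counts – which is why the Plan 9 FAQ above reads the way it does.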

The point of all of this: it’s been done. Trolling has no future, because eventually someone will devise a victim-bot that simply accepts infinite abuse and whines realistically; meanwhile, sociopathic internetters will unleash their trollbots, not realizing that they are attacking a victim-bot.

Perhaps you are thinking that this is sarcasm and silliness: there are already bots from which you can purchase “favorites.” Comments are not far behind. From some of the comments I see on some blogs (not mine – stderr’s commentariat are wonderful, thank you all) I wonder if it’s just a Markov chain of previous comments getting fed through to create “new” comments. Who’d know? It has already occurred to people I know to write an “SJW bot” to go argue on misogynist YouTube postings – I talked them out of it, but we may not be so fortunate the next time, or the time after that.
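If you are wondering how much effort that would take: not much. A hypothetical comment-regurgitator is just the sketch above pointed at a different corpus (the filename, and the reuse of build_chain and babble from that sketch, are assumptions for illustration):

    # Reusing build_chain() and babble() from the sketch above.
    comments = open("scraped_comments.txt").read()   # hypothetical archive of harvested comments
    chain = build_chain(comments, order=2)
    for _ in range(5):
        print(babble(chain, length=40))              # five freshly minted "contributions"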

For the record: at Argument Clinic we do not approve of weaponizing or commercializing trolling. The very purpose of an argument is to exchange information and to influence people’s convictions. If arguments turn into a sea of robots, then nobody will bother to argue anymore – they’ll assume that Sam Harris is just another Markov chain, and ignore it. That’s not fair to Sam. Imagine if everyone suspected that anyone they talked to was a robot: it would be just like picking up a telephone nowadays. Worrying about robo-trolling is, however, going to be necessary unless you control the vertical and the horizontal.

I believe that robo-trolling and meat clouds will eventually drive a return to curated content – like this blog. You may not like me, or agree with me, but you can be pretty sure that you’re not going to be talking to a robot using either my name or any of the commentariat’s. If I am right about this, curated content may well drive a move away from mega-aggregators like Facebook, which has such a huge volume of gunk that it cannot possibly filter it all, toward smaller blogs and journals with curated comment sections. If you look at John Scalzi’s blog, or Charles Stross’ blog, or PZ Myers’ blog, you’ll notice commenters who are engaged, interesting, and knowledgeable: exactly the kind of commentariat you will not find on mega-aggregators that are just vehicles for increasingly stealthy, irrelevant banner ads.

------ divider ------

Serdar Argic – Argic’s schtick was to post lengthy screeds about Armenian massacres of Turks. You read that right: Armenian massacres of Turks (not the other way around!) Initially he was jokingly called “zumabot” because his postings were robotic… because they were. Serdar Argic was probably an awk script. The history of zumabot is here, [zuma] including an interesting tidbit that I did not know before: apparently the poster was one Achmed Cosar, of the Turkish secret police; zumabot was a deliberate disinformation campaign (aka: “fake news”) that only ended when the US Government cancelled his H-1B visa. This all took place in the early 1990s.

Some attribution regarding Argic: [zuma] it’s amusing to see some of the folks I hung out with in those days cropping up again. I wonder what they’re up to now… or if they ever existed at all.

Comments

  1. Pierce R. Butler says

    Which came first: bots writing at Turing* test levels, or humans functioning at bot levels?

    Which predominates now?

    *In present context, Tpyos kindly guided my fingers to write “Turking”…

  2. polishsalami says

    “SJW bot”

    Here’s one from Twitter: https://twitter.com/arguertron

    It only takes a modicum of intelligence to work out this is a bot, but people still argue with it: some gamergater shouted at it for two whole days, according to its creator.

  3. Dunc says

    Any of you who are entrepreneurial nihilists: there is a market out there for “trolling as a cloud service.”

    I believe it’s called “Twitter”.

    Re: Facebook, I’m starting to get the feeling that I must be using it in a very different way to a lot of people… The only people I bother to interact with on FB are people I actually know from meatspace, and their immediate circle of friends (who I presume are also people they actually know from meatspace). I have long since given up interacting in any way with public posts from pages that I follow (not that I follow many of those in the first place) on the grounds that I don’t enjoy having pointless arguments with idiots. This approach works very well for me.

    Funnily enough, FB is also the only targeted advertising venue that I’ve ever found to actually work… I bought my (now) beloved manual espresso press as a result of targeted advertising on FB, and ended up subscribing to the maker’s crowdfunding campaign for their matching manual burr grinder. They really hit the bullseye there…

  4. says

    Pierce R. Butler@#1:
    Which came first: bots writing at Turing* test levels, or humans functioning at bot levels?

    In the beginning, there was Turing, and Turing tested himself and found himself wanting. Can Turing design a test that he cannot pass?

  5. says

    Dunc@#3:
    I believe it’s called “Twitter”.

    Re: Facebook, I’m starting to get the feeling that I must be using it in a very different way to a lot of people…

    That’s what I used to do, too: I’d prune anyone who posted cat or kid pictures or said “bless” or anything like that. Soon I realized I wasn’t seeing anything at all, so I closed my account for a couple years.

  6. says

    polishsalami@#6:
    Facebook is for normies; Twitter is for the demented, such as myself.

    It’s probably a bit out of bounds to liken social media use to mental illness. Mental illness is not the sufferer’s choice, but social media is.

  7. lanir says

    Had two thoughts about this.

    1. Once you “robo” any point of view you’re giving up on letting it mean anything. That’s why it’s such a bad idea. It’s an improvement on the sea-lioning tactic because it wastes the bot’s time, not yours. Essentially, bots and sea-lioning are both just lengthy ways of telling the other person you don’t care about their views, you just want them to go jump in a lake (or other, less polite variants of same). After you’re done punishing them for having the “wrong” view by wasting their time, of course.

    2. Kept thinking about AI. Making one that thinks like we do is impossible for the foreseeable future. Making one that can interact with us in limited but useful ways as a tunable solution for various general tasks… That’s probably not far off. I think the primary thing lacking for that (from what I’ve seen) is a good UI for programming the thing for the task you want it to do. All I ever hear about is how smart they are, not how easy they are to program, so if you want (probably bad) entrepreneurial advice… :)