The Paradox of a Weapon You Give to Your Target


I pointed this problem out during my “cyberwar is bullshit” talk at the RSA Conference in 2012: once you begin using your cyberweapons, they become subject to commercial pressures and competitive analysis.

This guarantees that cyberweapons will have (relatively) short lifespans, and they’ll have the same problem that copy-protection and other digital rights management systems have: in order to work, you have to give them to the enemy, which means they are subject to examination and dissection. The cost of innovation is borne by the designer of the system, and once the system is widely fielded, it can be completely mooted by a single attacker.

Note, the language of “attacker” and “defender” gets confusing in this situation: is the person trying to analyze the malware “attacking” the malware architecture that is “attacking” their system? Our perspective flips around depending on who we are.

I just stole this from the [bitdefender] paper because it looks cool

Ars Technica has a nifty description [ars] of a piece of Russian malware’s command and control system. Think about the problem: the author of the malware needs a stealthy and reliable way to get commands into the target’s network. Once the target learns how the commands are being passed back and forth, it’s a problem in pattern-matching to be able to detect it as it goes into and out of the network. It gets even more fun: if your command and control isn’t stealthy enough and hard enough to reverse-engineer, the target can take over your command stream and take over other systems you control. That’s one of the stealthy games that is being played in cyberspace: there are anti-malware companies that try to set up malware traps, collect traffic and reverse-engineer how botnets work, then take them over and shut them down. There are also bot-herders who are trying to outright steal other bot-herders’ botnets. Early botnets might share a common encryption key for the command channel, so if you cracked the key and had a compatible control client, you owned the entire botnet.

source: IEEE analysis of torpig botnet takeover
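That shared-key weakness is easy to make concrete. Here’s a toy sketch (all names hypothetical, and real early botnets were generally cruder than even this): every bot authenticates commands with the same baked-in key, so extracting the key from any one captured bot lets you forge commands to all of them.

```python
import hmac, hashlib

SHARED_KEY = b"botnet-master-key"  # hypothetical: one key baked into every bot

def sign_command(key, command):
    """Herder signs a command; every bot verifies with the same shared key."""
    return hmac.new(key, command, hashlib.sha256).digest() + command

def bot_accepts(key, blob):
    """A bot executes any command whose MAC checks out; returns it, or None."""
    mac, command = blob[:32], blob[32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return command if hmac.compare_digest(mac, expected) else None

# Legitimate herder issues a command:
blob = sign_command(SHARED_KEY, b"DDOS target.example")
assert bot_accepts(SHARED_KEY, blob) == b"DDOS target.example"

# An analyst who pulls SHARED_KEY out of one bot can now command the fleet:
takeover = sign_command(SHARED_KEY, b"UNINSTALL")
assert bot_accepts(SHARED_KEY, takeover) == b"UNINSTALL"
```

One key, one reverse-engineering effort, whole botnet: that’s exactly the takeover game the anti-malware companies were playing.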

The NSA’s TAO (“Equation Group”) used some very clever techniques to generate encryption keys that were specific to targeted systems, so that the botnet’s command channel couldn’t be completely compromised – which means the control client has to have a key management framework embedded in it: remembering the exchanged keys for every remote, individually, and probably also watching for authentication errors that might indicate someone is trying to break into the channel.
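A minimal sketch of what per-target keying buys you (hypothetical names and scheme – this has nothing to do with Equation Group’s actual cryptography): derive each implant’s channel key from a master secret that never leaves the controller, so dissecting one implant burns only that one implant’s key.

```python
import hmac, hashlib

MASTER_KEY = b"controller-master-secret"  # hypothetical; never ships with the implant

def per_bot_key(master, bot_id):
    """Derive a unique channel key per implant (HKDF-flavored, via HMAC)."""
    return hmac.new(master, b"channel-key|" + bot_id, hashlib.sha256).digest()

# The controller's key-management table: implant id -> derived channel key.
fleet = {bid: per_bot_key(MASTER_KEY, bid) for bid in (b"implant-01", b"implant-02")}

# Every implant gets a different key, and derivation is deterministic,
# so the controller can re-derive keys instead of storing them:
assert fleet[b"implant-01"] != fleet[b"implant-02"]
assert fleet[b"implant-01"] == per_bot_key(MASTER_KEY, b"implant-01")
```

The price of this design is exactly the bookkeeping described above: the control client has to track (or re-derive) a key per remote, forever.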

The Russian malware Ars describes:

[A] recently discovered backdoor Trojan used comments posted to Britney Spears’s official Instagram account to locate the control server that sends instructions and offloads stolen data to and from infected computers. The innovation—by a so-called advanced persistent threat group known as Turla—makes the malware harder to detect because attacker-controlled servers are never directly referenced in either the malware or in the comment it accesses.

So, the malware is watching a specific channel (doubtless selectable) and looking for a specific pattern in a comment, which might then be used to decode the command. Suppose my command channel is coded as words mapped to a dictionary. The order to collect a screenshot might look like “HAVE A A++ DAY! COVEFFE! Word.” where “HAVE ${something} Word.” is the framing around the command and the command’s bytecodes are embedded in “A A++ DAY!” Imagine ‘A’ is the opcode for ‘grab a screenshot’, ‘A++’ is the duration, and ‘DAY!’ means to exfiltrate the screenshot to a private image profile somewhere in the cloud.
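My toy scheme might be parsed like this (purely illustrative – these opcodes and this framing are the made-up ones from my example, not Turla’s actual encoding; unrecognized words like “COVEFFE!” are just chaff):

```python
import re

# Toy opcode dictionary from the example above (entirely hypothetical):
OPCODES = {"A": "grab_screenshot", "A++": "duration_long", "DAY!": "exfil_to_cloud"}

def extract_command(comment):
    """Look for the 'HAVE ... Word.' framing and decode the tokens inside it."""
    m = re.search(r"HAVE (.+?) Word\.", comment)
    if not m:
        return None  # just an ordinary comment; the implant ignores it
    # Keep only tokens that map to opcodes; everything else is chaff.
    return [OPCODES[t] for t in m.group(1).split() if t in OPCODES]

assert extract_command("HAVE A A++ DAY! COVEFFE! Word.") == [
    "grab_screenshot", "duration_long", "exfil_to_cloud"]
assert extract_command("love this pic!!") is None
```

The defender’s pattern-matching problem is right there: until you know the framing, that comment is indistinguishable from the usual Instagram noise.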

Somewhere, someone’s pretty tool just burned. It’s a new version of a tool that Bitdefender did an analysis of back in 2016. [bitdefender]

Somewhere, a trojan horse coder is thinking “maybe I need to steganographically encode the commands in the least significant bits of JPEGs, with the X,Y permuted using a secret key exchanged when the malware first goes live… yeah, that’s it!”  Each generation of this stuff that gets fielded is going to burn the previous generations, and will also serve as a “smoking gun.” Consider Duqu: we now know it was the ancestor of Stuxnet. We now know Stuxnet was an NSA TAO cyberweapon. Therefore, we can conclude that any systems prior to December 1, 2011 that showed prior infection by Duqu were infected by NSA TAO, Q.E.D.
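That daydream is easy enough to sketch. Here’s a toy version that hides a payload in the low bits of a flat array of “pixel” bytes, visited in a key-permuted order – a stand-in for the real thing, since actual JPEG steganography has to survive DCT compression, which this cheerfully ignores:

```python
import random

def embed(pixels, payload, key):
    """Hide payload bits in pixel LSBs, visiting pixels in a key-permuted order."""
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)      # secret traversal order from the key
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = pixels[:]
    for pos, bit in zip(order, bits):
        out[pos] = (out[pos] & ~1) | bit   # overwrite the least significant bit
    return out

def extract(pixels, nbytes, key):
    """Regenerate the same permutation and reassemble the payload bytes."""
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)
    bits = [pixels[pos] & 1 for pos in order[:nbytes * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(nbytes))

cover = list(range(256)) * 4               # stand-in for an image's pixel bytes
stego = embed(cover, b"run", key=0xC0FFEE)
assert extract(stego, 3, key=0xC0FFEE) == b"run"  # right key recovers the command
# The wrong key reads LSBs from the wrong pixels and gets noise.
```

Without the key you don’t even know which pixels to look at, which is the whole point.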

Back in the “Orange Book” (Trusted Computer Systems Evaluation Criteria) days, we used to worry about “covert channels” – the idea being to calculate the maximum amount of information that could be exchanged across a trust boundary. I remember great fun arguments with the trusted systems folks, who thought that a “firewall” was a terrible idea unless it was able to do very detailed content analysis, sorted by data type, so that the maximum bandwidth of each channel could be assessed. At that time I was so annoyed with the theoretician-gabble that I lashed a tunnel driver together with some shell scripts to uuencode/uudecode UDP packets via email, and mounted some NFS filesystems through a “secure” email guard system. The internet of today is not a “covert channel” and the firewalls are more or less ‘bumps in the data-flow that slow things down a bit.’
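For the curious, the uuencode trick amounts to little more than this round-trip (a toy sketch in Python rather than my original shell scripts, and minus the tunnel driver and mail plumbing that did the actual work):

```python
import binascii

def packet_to_mail(pkt):
    """Wrap a raw datagram (up to 45 bytes per uu line) in mail-safe text."""
    return "begin 644 pkt\n" + binascii.b2a_uu(pkt).decode() + "`\nend\n"

def mail_to_packet(body):
    """Pull the datagram back out of the message body."""
    return binascii.a2b_uu(body.splitlines()[1])

pkt = bytes(range(40))            # pretend this is a captured UDP datagram
assert mail_to_packet(packet_to_mail(pkt)) == pkt
```

If the guard passes text email, it passes this, and your “trust boundary” is now an IP link.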

All possible futures in this game point toward more and more complexity, to the point where I do think that in another 50 years the systems will be so complex and stochastic that they appear life-like. Imagine a piece of controller software and a piece of malware that you have to train together for years so they learn and figure out preferred ways of communicating that are so subtle even the owner can’t follow them? Meanwhile, of course, the game in detection is machine-learning algorithms that are trying to learn what “normal” traffic looks like so they can trigger on everything else. The endgame will be increasingly tightly specified definitions of “normal” – which is exactly what the old-school trusted guard software was trying to do.

These cyberweapons are expensive and take a lot of R&D and testing. Then, they get “burned” and become evidence that is sitting on someone else’s computer.

Now, about the Russian angle here: I’m willing to believe that this was probably developed by a nation-state. It’s expensive and complicated and is clearly part of a long-term development process. It is far more sophisticated than what we’ve been told was used on the DNC: I’d say there’s a high likelihood that if this skill-level of attack were being run on the DNC, they’d still be leaking data like a sieve. Maybe they are! Maybe the whole tool-chain that was “burned” is just a loss-leader – but if someone starts demonstrating that kind of tradecraft, then I’m more and more likely to believe they’re a state-sponsored actor. Of course, if we jam our tinfoil hats firmly on our heads, we can consider that there’s a very special “tradecraft of no tradecraft,” like Bruce Lee’s art of fighting without fighting.

------ divider ------

RSA talk link: [rsa] The good bit is at 14:00 – Not the best talk I’ve ever given – I was up too late the night before and was exhausted and jet lagged.

One other point: the really good malware is presumably built into firmware of devices: network physical interfaces, WiFi controllers, hard drive controllers, etc. My strong suspicion is that the stuff we see is a great big sideshow of amateur hour. But obviously, I have no way to verify that suspicion. It’s obvious, though: reverse-engineering a microchip is a whole lot harder than reverse-engineering a piece of software, and the software isn’t always easy.

More on Attribution: As I’ve said before, when/if the US Government starts publishing attribution of similar quality to the stuff that the Russians (yes, I believe it is them!) are leaking about the NSA’s and CIA’s tools, then I’ll start accepting that “Russians did X” for some of the breaches that involve attributable tools. For example, until Stuxnet/Flame/Duqu leaked out, if someone said “this attack used Duqu, therefore it’s the US,” that’s a pretty fair attribution, because only the US (and Israel) had Duqu at that time. If someone can conclusively link a piece of malware to the Russians, and it’s being used only by them, then of course it’s them. The problem is: I think that the Russians’ tradecraft is better than the CIA’s and NSA’s – if anything, they’re smart enough not to sign their work by using their custom shiny toys. But as soon as you start accepting “does not use the shiny toys” as an example of good tradecraft, then, as Jeff Foxworthy in cyberspace would say: “here’s your tinfoil hat.”

Why hasn’t the US done that kind of attribution on the Russians? Maybe it’s not the Russians and they haven’t hit on the bright idea of manufacturing evidence, yet? Maybe it is the Russians and they just know that the American people are clueless gits who will just suck up whatever they’re told? I am genuinely puzzled.

“Guard” was old TCSEC-speak for the analysis module of a “cross-domain solution” – basically a ‘firewall.’ But calling a cross-domain solution a “firewall” was guaranteed to get an eye-bugging response out of the classified computing folks. Cross-domain solutions were very expensive and required a lengthy analysis and certification process, back in the day. Now – judging by the kind of leaks that are going on – the intelligence community’s network security posture has loosened considerably.

No, I don’t think Donald Trump’s tweets conceal command/control between him and his Russian handlers. He’s too stupid to have such good tradecraft.

Comments

  1. Pierce R. Butler says

    All this analysis and meta-analysis – phooey.

    Just lob 59 Tomahawk cruise missiles into Britney Spears’s house and have done with it!

  2. says

    Comey appears to have dropped an interesting hint:

    As the nearly three-hour hearing drew to a close, Comey mused about the “vital” importance of special counsel Robert Mueller’s inquiry, owing to the persistent threat from Russian electoral interference. “I know I should have said this earlier, it’s obvious, but if any Americans were part of helping the Russians do that to us, that is a very big deal,” Comey said. “And I’m confident that if that is the case, Director Mueller will find that evidence.”

    [beast]

  3. Dunc says

    Somewhere, a trojan horse coder is thinking “maybe I need to steganographically encode the commands in the least significant bits of JPEGs, with the X,Y permuted using a secret key exchanged when the malware first goes live… yeah, that’s it!”

    Damn it, that was my idea! Post ’em on a public Instagram or Tumblr feed, only use photos containing certain visual elements (say, a particular face you can recognise using MS’s Azure Face API), and make sure the content of the stream is good enough to be widely followed…

  4. Pierce R. Butler says

    Siobhan @ # 5: … the Syrians are tired of being “saved.”

    Not nearly so fatigued with that as, say, the Iraqis.

    Yet neither of them knows a fraction as much about that as the Haitians, practically comatose with boredom after their multiple USMC liberations.

  5. bryanfeir says

    Re: the ‘orange book’ guys:
    I’ve got a friend who used to MUD from inside a ‘secure’ facility by tunnelling his telnet session over DNS request/responses to an outside DNS server he controlled. Low bandwidth, sure, but that’s all you need for what is pretty much a telnet session, and most firewalls had holes for DNS at the time. And this was mostly done to prove that he could. (Can be fixed by running a caching DNS server on the firewall, and blocking any DNS requests that go outside, of course.)

  6. says

    bryanfeir@#8:
    I’ve got a friend who used to MUD from inside a ‘secure’ facility by tunnelling his telnet session over DNS request/responses to an outside DNS server he controlled.

    OK, now that’s funny. If you tell me it was an ubermud or untermud (I coded those servers) I’ll fall over dead laughing.

  7. John Morales says

    From the OP:

    This guarantees that cyberweapons will have (relatively) short lifespans, and they’ll have the same problem that copy-protect and other digital rights management systems have: in order to work, you have to give them to the enemy, which means they are subject to examination and dissection. The cost of innovation is borne by the designer of the system, and once the system is widely fielded, it can be completely mooted by a single attacker.

    In a way, they’re zero-day exploits.

    Surely those who employ whatever malware recipe innoculate their own systems against that very hack.

    (Yeah, the wider computational ecosystem bears the cost)

    PS Wow! Preview works.

  8. says

    John Morales@#12:
    Yes, the way that information about new attack frameworks propagates through the field is pretty much the same as for 0-days.

    My understanding is that the people who build this stuff tend to have all the hallmarks of bioweapons researchers: obsessive cleanliness, strict separation of systems, etc. There is a story about Bell Labs back in the 80s, Tom Duff wrote a UNIX virus as a shell-script that appended itself into other shell-scripts – it was just a proof of concept – but very quickly everyone was infected with it. It didn’t spread beyond the research group.

  9. says

    bryanfeir@#10:
    Don’t think so, this guy mostly played TinyMUD.

    So did I. There are a couple other TinyMUDders who occasionally hang around here, too. I hung out a lot on Tiny and got into coding servers because people kept insisting you couldn’t build a virtualized disk-based server. I’m sure that code’s out of the gene-pool by now but for a long time, MUDs were running the disk layer from UnterMUD. Yipe, that was a long time ago!
