They Were Already Unrestrained


For the last decade, the US military has been hinting that it would like a freer hand to be more aggressive in cyberspace.

Naturally, since this is the US, and the US military, offensive operations are cast as ‘defense’; that is in keeping with the long-standing tradition that history’s deadliest military force exists to keep the peace by destroying potential threats to its supremacy. People who are capable of thinking well, and accurately, immediately recognize this as ‘offense,’ not ‘defense,’ but it’s hard to correct the language of what is, basically, a war-fighting machine with a government attached to it that collects the taxes that feed it.

We have already seen that in “cyber” the NSA and CIA have spent a tremendous amount of effort on offensive operations; they have built overlapping ground-work for compromising systems and software on a vast scale and have dramatically weakened the security of computer systems as part of an ongoing project to make it possible to gain illicit access to whatever they want, when they want it. Note that this is exactly what the US complains bitterly that China, North Korea, Iran, and Russia are trying to do – there’s nothing as fun as ‘helping’ to ‘solve’ a problem you helped make immeasurably worse. It’s difficult to get anything like an accurate figure, but I’d say, conservatively, that the US government’s expenditures on computer security are about offense=80%, defense=20%. There is some innovation going on, on the defensive side, but it’s mostly happening in the commercial sector, for commercial reasons. The government buys its defensive technologies “off the shelf” and its offensive tools are expensive, secret, bespoke stuff.

Expensive, secret, bespoke stuff that they cannot seem to keep from leaking regularly. The current crop of malware uses leaked implementations of top secret US government-funded attack tools; when the US complains about North Korean malware, it is complaining that the North Koreans are using US-developed malware. It would be ridiculous, except that the huddled masses appear to accept the US government’s position relatively uncritically.

The spooks have been having so much fun, the grunts want to get in on the party. No doubt they will develop another parallel stack of tools and techniques, which will – in due course – leak and be used against everyone.

[CNN] The US military is taking a more aggressive stance against foreign government hackers who are targeting the US and is being granted more authority to launch preventative cyberstrikes, according to a summary of the Department of Defense’s new Cyber Strategy.

The Pentagon is referring to the new stance as “defend forward,” and the strategy will allow the US military “to disrupt or halt malicious cyber activity at its source, including activity that falls below the level of armed conflict.”

“Defend forward” is another one of those Orwellian terms that Washington likes to come up with: it means “attack.”

The notion of “launching a preventative cyberstrike” makes no sense at all, but perhaps that’s because I’m making the mistake of trying to understand it. It parses as “optional combat” or “attacks we say we had to do in order to defend ourselves, but we didn’t actually have to defend ourselves.” Let’s be honest – it’s a pre-emptive strike. Pre-emptive strikes are an ancient military concept: it’s when you hit the other guy and say “he was about to hit me!” And there’s always a problem with that: if you know that I am about to attack you, that means it’s still time for Diplomacy. “Hey, Marcus? Why are you walking toward me with that baseball bat? The sniper I have positioned 300 yards away is worried that you are thinking of hitting me with it, and I think we should discuss the situation before you get within, say, 30 feet of me, as that will avoid your potential involuntary dissolution.”

The new military strategy, signed by Defense Secretary James Mattis, also emphasizes an intention to “build a more lethal force” of first-strike hackers.

The “defend forward” initiative wasn’t included in the 2015 strategy and further enables the United States to carry out offensive hacking operations to defend against cyberattacks on critical US infrastructure, such as election systems and the energy grid.

We all saw that coming, didn’t we? Since the US doesn’t know how to defend anything, it’s going to adopt a policy of pre-empting any threat that looks like it may emerge. Basically, that has been the US military strategy all along: destroy any possible threat and blame them – after all, it’s their fault because they were threatening! And if you attempt to defend yourself, well, that’s threatening too!

In effect, it gives the US military more authority to act on its own — even against computer networks based in friendly countries.

Generally I am not a fan of quoting The Founding Fathers (all in caps!) as though they were a source of political wisdom, but: The Founding Fathers specifically tried to prevent the nation’s military from being able to decide when to act on its own. The Founding Fathers knew that soldiers tend to be nationalistic thugs who see violence as their preferred solution – they are not the ones to be carrying out foreign policy on their own initiative. That was why The Founding Fathers put war-making authority in the craven and feeble hands of Congress… Oh, nevermind.

Until recently, if the US National Security Agency observed Russian hackers building a computer network in a Western European country, the president’s National Security Council would need to weigh in before any action is taken.

Now, the NSC won’t have to give its seal of approval, according to Jason Healey, a senior research scholar at Columbia University and former George W. Bush White House cyber official.

We’re in some real pretty shit now. You saw how that just happened, without any oversight on the part of Congress or the people? Our military has decided that it is acceptable to attack whoever they want, whenever they can convince themselves that it’s justified. And they’re really good at convincing themselves that it’s justified. When people said it was dangerous to have Mattis in a policy-making role, because the military likes to make policy that says “attack stuff” – this was exactly what they were worried about. This has happened. They are not asking for more authority; they took it.

Comments

  1. Curt Sampson says

    Typo in paragraph 9, I think: ought ‘launching a preventable cyberstrike’ not say preventative?

    I suppose it’s sort of ironic that ‘[building] overlapping ground-work for compromising systems and software on a vast scale’ is actually one of the things you’d want to do to have good defense; it’s a long-accepted technique to have a bunch of hackers put on their black hats and show you the problems you need to fix. It’s only that, for this to count as defense rather than turn into a future own goal, you then need to use that knowledge to secure the software against those attacks, rather than leave the software vulnerable in the hope of using those attacks later.

    Here’s the (or a) basic problem, as I see it: the NSA finds a vulnerability, builds an exploit for it, lets that very exploit get stolen and then used against Americans, and then they’re not held responsible for that. It’s almost as if defending Americans against cyber-attack just isn’t their problem. (This isn’t a new thing; we’ve seen since the 90s or earlier that the NSA has influenced Americans to use, e.g., inferior encryption technologies in the hope that they can then exploit that to hack non-Americans. Or even Americans, not so infrequently.)

    It would be great to be able to file a tort suit against an organization that came up with an attack that harmed you, even if they weren’t the ones that used it. But yeah, right, like that’s ever going to happen.

  2. Marcus Ranum says

    Curt Sampson@#1:
    Typo in paragraph 9, I think: ought ‘launching a preventable cyberstrike’ not say preventative?

    That’s not a ‘typo’; that’s a flat-out bit of mangled English. Good catch. My quality control drops sharply when I am jet-lagged.

    It would be great to be able to file a tort suit against an organization that came up with an attack that harmed you, even if they weren’t the ones that used it. But yeah, right, like that’s ever going to happen.

    A class action suit against the CIA or NSA, for damage caused by derivatives of the tools they’ve leaked, would actually make sense. But, as you say, it’s not going to happen. They would just hide everything behind a curtain of “classified” and everyone participating in the suit would never go to an airport again without getting shaken down.

  3. komarov says

    The spooks have been having so much fun, the grunts want to get in on the party. No doubt they will develop another parallel stack of tools and techniques, which will – in due course – leak and be used against everyone.

    Well, one of the roles of the free market is to provide the consumer with choices, isn’t it? Whether it’s potato crisps or leaked software “tools” hardly matters.

    To be fair, you’ve pointed out in the past that defence is a lot harder than offence when you’re working with computer systems. One might conclude that the only way of protecting oneself would be to take the MAD approach to IT infrastructure. Hence skip the defence stuff and just make sure that you can scrap everything everywhere given the excuse. Which, by the sound of it, is pretty much what has happened.

    From the point of view of CIA, NSA etc., maybe the only real downside is how much harder it is to hang on to software compared to (nuclear) hardware. The latter is a lot trickier to leak and copy.

  4. Marcus Ranum says

    komarov@#3:
    From the point of view of CIA, NSA etc., maybe the only real downside is how much harder it is to hang on to software compared to (nuclear) hardware. The latter is a lot trickier to leak and copy.

    I really don’t think they are even worrying about it. They’re having too much fun to worry.

  5. Pierce R. Butler says

    The current crop of malware uses leaked implementations of top secret US government-funded attack tools…

    Funny they don’t use the word “tactical” a lot more. Everybody likes tactical!

    The new military strategy … emphasizes an intention to “build a more lethal force” of first-strike hackers.

    I can think of several ways to hack stuff that result in human deaths, but, except for sabotaging hospital equipment (or supercyber as in Algis Budrys’s Michaelmas), they would produce very messy imprecise results.

  6. Ieva Skrebele says

    Recently you wrote that all the current software sucks—it’s buggy and contains countless vulnerabilities that could be exploited by hackers. This is why you proposed that humanity should scrap it all and make new software from scratch. However, it seems like governments want all our software to have exploitable vulnerabilities and backdoors. Hence my question: assuming somebody with a ton of money decided that they wanted to create new and secure software from scratch, would it even be possible, considering that the American government would probably actively try to prevent such a project from succeeding? I mean, I have already heard about plenty of precedents where some private company tried to make some secure software (especially one used for secure communications) and the American government destroyed their business.

  7. Marcus Ranum says

    Ieva Skrebele@#7:
    I mean, I have already heard about plenty of precedents where some private company tried to make some secure software (especially one used for secure communications) and the American government destroyed their business.

    You either work for them, or they compromise you.

    In fact, they will compromise you even if you’re working for them. There is a famous incident in which the NSA ‘helped’ Crypto AG with their algorithms and actually backdoored their product. Crypto AG found out that was an issue when the Iranian government arrested one of their sales-people for selling backdoored devices. So, “thanks a whole lot, NSA!”

    NSA’s approach is, however, a scattershot mess; there is good evidence that NSA actually made the Data Encryption Standard (DES) encryption algorithm much better – the NSA knew about differential cryptanalysis already and Don Coppersmith, the designer of DES, did not; the NSA tweaked Coppersmith’s S-boxes in the DES Feistel network and made them resist differential cryptanalysis. So, that was interesting: it’s solid evidence that NSA was way ahead of private industry on code-breaking (we expect that) and was choosing when and how to either improve things or leave them be.

    I had a lunchtime conversation with Marty Branstad, who was at NIST when the DES was developed, and he believes that the NSA’s improving the S-boxes was a bureaucratic error: they only did it because they mistakenly assumed IBM was releasing DES only as a hardware implementation on a chip – NSA was horrified when a software specification of the algorithm was released, with all the details just sitting there for anyone to study.
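
    For readers who haven’t seen the construction, here is a minimal sketch of a Feistel round, just to show where the S-boxes sit and why tweaking their contents matters. It’s a toy with 8-bit halves, not actual DES; the S-box values below happen to be the first row of DES’s S1 table, reused for both nibbles purely as an example.

    ```python
    # Toy Feistel round, to show where S-boxes sit in a cipher like DES.
    # Not real DES: 8-bit halves, one S-box, three made-up round keys.

    SBOX = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]

    def f(half: int, round_key: int) -> int:
        """Round function: mix in key material, then substitute through the
        S-box. The S-box is the only nonlinear step, which is why its exact
        values determine resistance to differential cryptanalysis."""
        x = (half ^ round_key) & 0xFF
        return (SBOX[x >> 4] << 4) | SBOX[x & 0x0F]

    def feistel_round(left: int, right: int, round_key: int):
        """One round: new right half = old left half XOR f(old right half),
        then the halves swap. The structure is invertible no matter what f
        is, so the designers were free to make f as nonlinear as they liked."""
        return right, left ^ f(right, round_key)

    # Encryption runs the rounds forward; decryption runs the same rounds
    # with the round keys in reverse order.
    L, R = 0x12, 0x34
    for k in (0x0F, 0xA5, 0x3C):
        L, R = feistel_round(L, R, k)
    print(hex(L), hex(R))
    ```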

    NSA also paid RSA to let them tweak one of their cryptographic library implementations (BSAFE), and they deliberately weakened it considerably – reportedly by making the backdoored Dual_EC_DRBG random number generator the library’s default. Since BSAFE sat underneath a great many SSL implementations, that meant that a lot of internet communications were basically semi-backdoored.
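
    To make the shape of that kind of backdoor concrete, here is a toy analogue of the Dual_EC_DRBG trapdoor – my own simplification, using exponentiation mod a prime instead of elliptic-curve points, with made-up constants. The outline is the same: two public constants that are secretly related, so whoever knows the relationship can recover the generator’s internal state from a single output.

    ```python
    # Toy trapdoored PRNG in the style of Dual_EC_DRBG. Illustration only:
    # the real design uses elliptic-curve points P and Q; here the "points"
    # are powers of g modulo a prime. Requires Python 3.8+ for pow(d, -1, m).

    P = 2_147_483_647          # public prime modulus (2**31 - 1)
    g = 7                      # public constant #1
    d = 1_234_567              # the designer's SECRET relationship
    h = pow(g, d, P)           # public constant #2, published as if random

    def step(state: int):
        """One generator step: the output is derived from h, the next
        internal state from g. Looks fine if you don't know d."""
        output = pow(h, state, P)        # handed to the application
        next_state = pow(g, state, P)    # kept internal
        return output, next_state

    # The trapdoor: h = g**d, so output = (g**state)**d = next_state**d.
    # Anyone holding d can invert that exponent and recover the internal
    # state from one observed output -- after which every future "random"
    # value is predictable.
    d_inv = pow(d, -1, P - 1)            # valid because gcd(d, P - 1) == 1

    out, nxt = step(424_242)
    assert pow(out, d_inv, P) == nxt     # the eavesdropper now owns the RNG
    ```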

    NSA’s strategy has consistently been to weaken things just enough to bring them within reach of the NSA’s capabilities, and nobody else’s. The end result, naturally, is that everything sucks just a little bit more than it has to, and once one of their backdoors is revealed, everyone has to spend a huge amount of money to upgrade it. Probably the only time that they have failed to make things worse was with DES, and that was because they made a mistake.

  8. Ieva Skrebele says

    Marcus @#8
    You mentioned cases where businesses allowed the NSA to tweak their software, and the NSA used the opportunity to make it worse. I was thinking about a different problem—whenever a private business creates some software and refuses to allow the NSA to mess with their code, the US legal system simply destroys the company. For example, cases like Lavabit. I have also heard about other cases where some of the large tech companies tried to resist the USA compromising their products and taking away their users’ privacy.

    This makes me wonder whether it would be possible for some tech company (or even some other country like China or Russia) to create any good and secure software at all. I suspect that the USA would never allow that to happen. And, well, most other countries probably also want all the same exploitable vulnerabilities and backdoors that the USA keeps on demanding.

    How can anybody ever make good and secure software when most governments demand that it remain full of vulnerabilities and backdoors? Thus, I’m very pessimistic about the future of computer security.

    Do you share my pessimism or do you think there’s any hope?

  9. Sunday Afternoon says

    Obligatory Douglas Adams:

    The history of warfare is similarly subdivided though here the phases are retribution, anticipation, and diplomacy. Thus, retribution: “I’m going to kill you because you killed my brother.” Anticipation: “I’m going to kill you because I killed your brother.” And diplomacy: “I’m going to kill my brother and then kill you on the pretext that your brother did it.”

    https://www.clivebanks.co.uk/THHGTTG/THHGTTGradio6.htm

  10. Marcus Ranum says

    Ieva Skrebele@#9:
    How can anybody ever make good and secure software when most governments demand that it remain full of vulnerabilities and backdoors? Thus, I’m very pessimistic about the future of computer security.

    Yes, I think the field is a wipe-out, except on the commercial side.

    All governments are concerned with breaking “their” citizens’ pathetic attempts to have privacy. The “five eyes” have everyone’s cellphones covered (and share with Israel), and in the US that means that every service provider that tries to produce something without a backdoor is going to have the FBI show up, threaten them under CALEA and PATRIOT, and – if necessary – shut down their service. So far only one service that I am aware of actually shut down rather than allow the FBI access, but that doesn’t help us because now that service is gone.

    Corporations have consistently thrown their customers under the bus; some, like AT&T and Verizon, have turned government access to user data into a profit center.

    I discussed this problem with Roger Schell and his view is that the only way to prevent this sort of thing is by having small teams build things carefully, while scrutinizing themselves and each other. The software has to link to the hardware, which is difficult. Someone in The Commentariat sent me a link to a recent talk about a very mysterious debug mode in a popular processor. It appears to be a complete ‘god mode’ backdoor into the system (I think it is probably a thoughtlessly designed debug mode, like Intel’s IME) – so, you could build really great software and run it on that processor and you’re still owned, and there is nothing you can do about it.

  11. Curt Sampson says

    NSA’s strategy has consistently been to weaken things just enough to bring them within reach of the NSA’s capabilities, and nobody else’s.

    Yup. And they seem to do a pretty good job of it. The only problem is that the weaknesses come within reach of others’ capabilities within a few years, at best, and then the NSA’s enemies can hack all those systems, too.

    It’s beyond me what they’re thinking.

  12. John Morales says

    Curt,

    It’s beyond me what they’re thinking.

    Perhaps they know they’re ahead of the curve, and are arbitraging that discrepancy?

    Marcus,

    So, that was interesting: it’s solid evidence that NSA was way ahead of private industry on code-breaking (we expect that) and was choosing when and how to either improve things or leave them be.

    So far, so good for them.

  13. sonofrojblake says

    How can anybody ever make good and secure software when most governments demand that it remain full of vulnerabilities and backdoors?

    I just had a tiny epiphany, I think.

    We are (aren’t we?) already at a point where there is software so complex that even the people who produced it don’t entirely understand how it operates and achieves the results it does. How far are we from a point where the software is in a position to ignore demands for backdoors? Or where it reaches a point where it can present what look, to any human analyst, like backdoors, but which in fact have a hidden lock on them that can’t be removed or compromised?

    Shorter: how long before AI can ignore the NSA?

  14. komarov says

    Re: sonofrojblake (#14):

    Shorter: how long before AI can ignore the NSA?

    Who says the AI will understand how it works? I don’t know how I work. I just know it involves kidneys at some point so I keep a watchful eye on those. But they might not be my only weakness. Might…

    More seriously, proper AI would probably be reliant on so many layers that there are bound to be things in there that were compromised long ago. And if it’s not in the software then it’ll be in the hardware. Maybe the NSA couldn’t backdoor an AI in exactly the way it wanted to but they’d still manage. If you can’t poison someone’s drink you poison the well instead. By the sound of things that’s exactly what the NSA has been doing, although their explicit goal may have been to poison everybody’s drink all along. (*mumble mumble* national security)

  15. Marcus Ranum says

    sonofrojblake@#14:
    We are (aren’t we?) already at a point where there is software so complex that even the people who produced it don’t entirely understand how it operates and achieves the results it does.

    Ken Thompson’s Turing Award lecture (“Reflections on Trusting Trust”) points out the difficulty in understanding software once it has been transformed into object code, if the compiler is under hostile control. It’s an interesting point. I nearly quit security when I read that.
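
    The construction is compact enough to sketch. Here is a toy version of my own in Python, with “compilation” reduced to a source-to-source pass and all the names and trigger strings invented for illustration:

    ```python
    # Toy of Thompson's "Trusting Trust" attack. A real compiler emits
    # object code, which is the whole point: nothing below is visible in
    # the source of the programs being compiled.

    LOGIN_SOURCE = (
        "def check_password(user, pw):\n"
        "    return pw == lookup(user)\n"
    )

    BACKDOOR = (
        "    if (user, pw) == ('ken', 'opensesame'):\n"
        "        return True  # master login, absent from login's source\n"
    )

    def evil_compile(source: str) -> str:
        """Trigger 1: when compiling the login program, splice in a backdoor.
        Trigger 2 (elided here) is the clever part: when compiling a
        compiler, splice in a copy of evil_compile itself -- a quine-like
        step, so recompiling a perfectly clean compiler source with the
        infected binary reproduces the infection."""
        if source.startswith("def check_password(user, pw):"):
            header, body = source.split("\n", 1)
            return header + "\n" + BACKDOOR + body
        return source

    print(evil_compile(LOGIN_SOURCE))
    ```

    The real attack does the same thing one level down, in the compiler binary, where no amount of source review of the login program – or of the compiler – will find it.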

    Another thing (I believe it was) Thompson said: “It is harder to debug code than it is to write it. So if you are writing code at close to your own limit of ability, you are writing code that you cannot possibly debug.”

  16. Marcus Ranum says

    komarov@#15:
    More seriously, proper AI would probably be reliant on so many layers that there are bound to be things in there that were compromised long ago.

    At what point does it all get so complicated we decide it’s biology?
    Your point about things that were compromised, existing in deep strata, reminds me that some cellular organelles were once free-living single-celled organisms that became incorporated into the cell as symbionts. And, we don’t think much about all the bacteria in our gut. But occasionally the bacteria do enough damage and our self-repair gives us a cancer and we die.

  17. Curt Sampson says

    Perhaps they know they’re ahead of the curve, and are arbitraging that discrepance?

    Of course they know that they’re ahead of the curve; everybody knows that they’re ahead of the curve.

    But what they do is in no way, shape, or form arbitrage or anything even resembling it. Arbitrage is, by definition, risk-free.

    If you’re going to compare this to something in finance, it would be a highly risky forward trade: ‘We are purchasing a high probability that someone (who we hope will be us) can exploit this vulnerability in the future.’

    But it’s often enough not them exploiting it, and often enough the victims of the exploit are their own teammates. (Or at least the people funding them; perhaps the NSA doesn’t consider the people who pay them to be ‘teammates.’)

  18. Curt Sampson says

    We are (aren’t we?) already at a point where there is software so complex that even the people who produced it don’t entirely understand how it operates and achieves the results it does.

    The naïveté here is charming, but we are far from the stage where the complexity is the primary barrier to understanding, even though it ought to be. The main problem is still the unwillingness on the part of software developers to spend the effort to understand what they’re doing even when it’s relatively simple.

    Typical conversation:
    Me: Can you explain to me what you’re writing?
    Programmer: That thing the manager asked me for.
    Me: Yes, but what exactly is that?
    Programmer: It doesn’t matter; it’s done.
    Me: How do you know it’s done if you don’t even know what it is?
    Programmer: It’s marked as complete in the project manager’s spreadsheet. Clearly, it’s done and correct.
    Me: …

    I don’t think that replacing such humans with full human-level “AI” that’s just as good as the humans is going to be helpful here, even were we capable of doing it.

    How far are we from a point where the software is in a position to ignore demands for backdoors?

    This is, unfortunately, nonsensical. We (both us ‘white hat’ programmers and the hackers) do not ‘demand’ that software does something; we simply write or modify software to do (ostensibly) what we want. If someone makes a wheel round, one can’t ‘demand’ that it not roll. If someone writes code for a backdoor, it’s a backdoor and that software cannot be anything but what it is.

    We can (and do!) write software to examine the behaviour of other software in the system and try to alert us or even stop things should some software appear to be doing something ‘bad.’ One day this will perhaps be as effective as hiring people to look at things and tell you if they are ‘bad.’ But even if we ever reach that point, we all know how well that works.

    …but which in fact have a hidden lock on them that can’t be removed or compromised?

    Can you describe how that works? Can you provide a (mathematical) proof that it works? I didn’t think so.

    Sorry to be a bit harsh about this, but this demonstrates the problem. For you, it’s completely forgivable. But most software developers are quite as confused, which is less forgivable.

    Let me close with the programmer’s lament:

    I really hate this damn machine;
    I wish that they would sell it.
    It never does just what I want
    But only what I tell it.

  19. komarov says

    Re: Marcus Ranum (#17):

    At what point does it all get so complicated we decide it’s biology?

    The cynical answer is, “Never, because life is special, and since we made this stuff we know it’s artificial.” People are still talking about GMOs as if they were hulking monsters, after all.

    On the other hand, at this point we’d be well into the realm of evolutionary change, with, for example, the old backdoor becoming a vital connection to some other part of the AI organism. And the day it breaks, the AI turns catatonic. Oh dear! AIs would have even more reason to be wary of the constant software updates. Or maybe this is just going to be the AI-equivalent of body image issues. “Maybe I can’t display Comic Sans properly but I’m fine with myself and don’t need patches no matter what anyone says. You could update me but I wouldn’t be myself anymore.”

    P.S.: Curt Sampson, thanks for posting the lament, which I didn’t know but, as an amateur who is occasionally forced to write itty bits of code, can relate to far too easily.

  20. Dunc says

    Marcus, @ #17:

    At what point does it all get so complicated we decide it’s biology?

    A very long way away from where we are right now. Get it all* running on evolutionary algorithms without any human intervention and come back in maybe a century.

    *And I mean all – including the hardware.

  21. EnlightenmentLiberal says

    I think it’s a cultural problem. Too many people want the government to spy on everyone. I do a lot of political talks like this with my work friends, and there’s a fair number of my friends at work who balk at my assertion that the CIA / NSA global spying program hasn’t been the first means of detecting and stopping a single terrorist attack. I love Marcus for introducing me to the term “retroscope”, and informing me what this sort of spying apparatus is really good for: finding compromising data on a target who is already known, which sounds awful for stopping crime / terrorism, but wonderful for creating an authoritarian regime.

    If we want to be super serious about security, it has to start at the hardware level too. We need to nationalize all microprocessor manufacture. Hell, I think we need to put all microprocessor manufacture under an international inspection regime like the IAEA, and have all of the spy agencies of the world, including ones hostile to the US, sit on a board to make technical recommendations. Then, require the NSA, CIA, etc., to use the same chips that normal consumers use, and forbid them from using any other chips. Also forbid the NSA, CIA, etc., from having access to firmware except firmware that is available to the normal consumer. — I like my fantasy dream world.