Some mornings, when your alarm clock fires off, you just roll over and slap the “snooze” button. If you do that long enough, you can get quite good at it; there have been mornings when I hit the “snooze” button 15 or more times in a row, pushing back my wake-up time by as much as 2 hours. I used to know someone who claimed that they could sleep-walk through their morning status meeting, effectively grabbing several extra hours of sleep.
I love sleep and I consider it a positive activity – an outright pleasure – but sleep that is interrupted every 9 minutes sucks.
In a nutshell, that’s modern computer security. If I had a dollar for every time someone from the press has asked “is this a wake-up call for cybersecurity?” I would have gotten out of the field a lot earlier, with a few thousand bucks more in my bank account. If any of these things are wake-up calls, the consumers of computer security are the customers who immediately grumble, roll over, and slap the “snooze” button. And they’ve been doing it for decades. During those decades, the situation has gotten dramatically worse, but who cares? If you’re not going to fix the basic problems that made you hit the “snooze” button in the first place, you’re never going to fix the advanced ones. Computer security – especially in the government – has not proceeded in “fits and starts” or “hiccups” – it has been a constant push toward denial that there is a problem, studded with occasional fads that will save us (AI being one of the current ones), none of which will work but all of which will transfer a lot of money into the hands of vendors.
Back in 2015, I was still on the speed-dial for a number of journalists, so when there was a big breach my phone would ring. When Anthem announced that they had managed to leak the personal data of nearly 80 million people, I got phone calls asking for quotable quips and, perhaps, some recommendations. “Is this a wake-up call?” My response was:
I actually did learn something from the Anthem breach; it made me think a bit about metrics, and I realized that counting numbers of millions of data items breached was a bad metric. It doesn’t mean anything, and all it shows is that the media doesn’t understand the problem. But, after more thinking about it, I realized that it also means that computer security practitioners don’t understand the problem, either. I began to think about the question of computer security strategy, realized we didn’t have one, and decided to get out of the industry and be a blade-smith instead. I spent 35 years or so leading horses to water, and eventually, I realized that horses are much smarter than computer security customers because they drink when they need to.
I tried; I really did. Back in the late 90s, I tried to encourage the industry to recognize that general purpose computing is a problem. If it’s general purpose, it’s a computer that’s designed to run whatever you want on it, and that means malware will also run on it. There were basic recommendations I tried to promote, such as the fact that we need to scrap our entire software stack and re-design it – to be better and more reliable and to no longer require system administration. Unfortunately, by 2003 or thereabouts, the industry was shifting in the opposite direction – instead of making systems that were better and software that was more reliable, the industry made systems that were self-updating. In other words, software became a moving target, configuration control went out the window, and instead of being expected to occasionally swallow pellets of shit, we got a full-time uncontrolled shit-stream that everyone just accepted because there was no alternative. If you don’t like Intel processors or Microsoft Windows or whatever the shit from Apple is called, what are you going to do? None of us are interested in spending the time to understand what we’re running, and we can’t do anything about it, anyway.
By 2005, the global software ecosystem had become a self-updating web of interdependent, incomprehensible glarp. Around that time, China proposed to the US and Russia that maybe it would be a good idea to build some frameworks for international conventions regulating government-sponsored hacking. The Bush administration was completely uninterested in that, because at that time, the US was top dog on the internet (still is!) and enjoyed being able to hack the whole planet. After all, when you have Oracle, Facebook, Twitter, Microsoft, Apple, Intel, Western Digital and Seagate backdoor’d and in your back pocket, you have the keys to the kingdom; why would any government in their right totalitarian mind give up a capability like that? Besides, “cloud computing” was starting to take off, and it was looking more and more like everyone in the world would give up their ability to control any of their systems, because system administration is hard and hard things are expensive and who wants to do that? The US was looking at total dominance of the game-board of cyberspace and had no interest in reining itself in – besides, it is increasingly apparent that the US intelligence community will not allow itself to be reined in.
That’s almost all for the history lesson; there’s one more piece. After 9/11, the US Government threw a ton of money at computer security. I was involved as a “Senior Industry Contributor” in a special panel put together by NSA to recommend what the government should do going forward. The rest of the panel were smart, important, industry figures and the whole experience was interesting and very weird. I realized that the government’s response to the “intelligence failure” of 9/11 was going to be:
- “Whee! A system upgrade!”
- Develop a bunch of offensive tools
I’m not kidding. The FBI, for example, spent a massive amount of the homeland security budget giveaway on buying everyone new laptops and desktops. To be fair, the flint-based, bearskin-powered computers the FBI had were pretty bad. I recall that my minority opinion brief at the panel was the first time I used the formulation: “If you say that a government agency’s computer security is a disaster, and you just throw money at it, all you’ll get is a bigger, fancier disaster.” I was more right than I could possibly have imagined in my worst nightmares. Then there was the matter of offensive tools. Hey, let’s play a game! I’ll put a number below the end-bar, and that’s the percentage of government computer security dollars that the intelligence community spent on offense versus defense. Let’s suppose there’s a classified number of billions (around 30) of dollars being spent on cybersecurity – how much of that (once you subtract the “new laptops for everyone!” dollars) was spent on offensive capabilities? Guess. Just for fun.
There were computer security strategists, such as Dan Geer, who tried to encourage the government to see this as a problem that will evolve in a multi-year time-frame. Good infrastructure needs to be designed and populated and kept safe and separate until such a time as we can solve the problem of connectivity with safety, or management with trust. That was an unwelcome message compared to “wow, unlimited elastic storage at amazon cloud services!” and “it’s only $3bn!” so massive amounts of sensitive stuff moved to unknown places and big software frameworks got rolled out, and nobody thought about it as a strategic problem involving lock-in and governance. I’ll give you one hint to the puzzle I posted above: the $30bn in cybersecurity spending post 9/11 was not spent building a government-only elastic storage system; the storage dollars went to Amazon and Microsoft.
Now let’s talk about SolarWinds.
It’s just another management framework for computing. Like many, many pieces of software, it includes an update capability that allows the manufacturer to push out fixes to their code, which get installed without the customer understanding what the fixes do. That makes sense because the customer can’t really do anything, anyway – it’s not as if they have the source code or the engineering capacity to fix flaws in the software, and there’s so much software. Everything nowadays is crap, so the manufacturers need to update their crap because customers won’t take the time to install patches and nobody understands them, anyway. Imagine if the entire computing ecosystem took a look at software maintenance, and shrugged. SolarWinds is useful – it helps mere humans understand what is going on in the networks that they built; networks they don’t understand. It’s all incomprehensible. On top of that is Microsoft Windows, which has its own massive ecosystem of flaws, which runs more software that also has massive ecosystems of flaws. Things appear, things disappear, nobody’s job is to manage that and nobody could do anything about it even if they wanted to.
In that environment, someone executed a transitive trust attack. A transitive trust attack is:
- When A trusts B, and B trusts C, A trusts C and usually doesn’t know it.
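The mechanics are easy to sketch. Here is a toy Python model of it – the graph, the names, and the “supply chain” are all invented for illustration – that computes who you *effectively* trust by following direct trust edges transitively:

```python
# Toy model of transitive trust: a graph of direct-trust edges, and
# the effective trust set you inherit by following them transitively.
from collections import deque

def effective_trust(direct, start):
    """Return everyone `start` ends up trusting, via any chain of edges."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for trusted in direct.get(node, ()):
            if trusted not in seen:
                seen.add(trusted)
                queue.append(trusted)
    return seen

# Hypothetical supply chain: you trust your vendor, the vendor trusts
# a contractor, the contractor pulled code from some random repo.
direct = {
    "you":        {"vendor"},
    "contractor": {"random-repo"},
    "vendor":     {"contractor"},
}

print(sorted(effective_trust(direct, "you")))
# → ['contractor', 'random-repo', 'vendor']
```

You never chose to trust “random-repo,” and it appears nowhere in your own trust list, but there it is in your effective trust set – which is the whole attack in one line of output.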
The entire software ecosystem is one great network of relationships, virtually any of which can be lashed into a transitive trust attack. When you install the device driver for your graphics card, your Windows desktop system checks the signature on the driver and allows it to run in kernel space, with complete unrestricted access to system memory, the devices, and the CPU. But who wrote the driver? Possibly a consultant. Possibly the manufacturer. Possibly, a spook who works for the NSA. Who knows? Nobody knows. And the chain does not have to stop at ‘C’ – what if the programmer at the vendor who is writing the driver decides to use some XML parser code from some open source software repository? Do you think they read through the parser code and check for backdoors? Nonsense. Now, you have a user running software on their computer that was written by some rando coder on the internet, and the user, Microsoft, and even the management team at the graphics card manufacturer – none of them have any idea. Or, what if someone hacked the graphics card manufacturer’s systems, got the credentials of a developer, and added a bit of code somewhere in the hundreds of thousands of lines of code that are coming from who knows where? This is not a system that’s ripe for a disaster; it’s an actual disaster, just nobody has suffered, yet.
I did a paper back in 2013 called The Anatomy of Security Disasters [ranum.com] in which I hypothesized that we don’t call a thing a “disaster” until much later on the time-line than we should. The “disaster” started years before it blossomed; it’s just that nobody recognized it. Security practitioners are struggling to deal with the disasters that happened in the 1990s, when networks grew willy-nilly and everything connected to the internet – they’re not even dealing with the fully-blossomed disasters of “cloud computing” or “automated patching” and “the internet of things.” They’re doing a shitty job dealing with the basics – uncomprehended growth and internet connectivity – so hoo-boy, just you wait until they come face-to-face with the next generation of disasters. SolarWinds is one of those next generation disasters, and the response is going to be to grab those worms that got out of the can, hammer them back into the can, duct tape the can top back into position, and hunker down waiting for the next one.
The US has built Kant’s internet: the internet that the US wants to live in. Unfortunately, they didn’t follow the categorical imperative; instead they just built transitive trust and backdoors into everything and assumed, stupidly, that nobody would ever deploy goose-sauce on a gander.
So, the details are now predictable: someone put a backdoor in SolarWinds, SolarWinds pushed it to all of their clients – hundreds of government agencies and thousands of large corporations – and all of those systems are now compromised. How do you fix that? Well, in the Anatomy of Security Disasters paper, I point out that some disasters require a time machine to fix; they are unfixable, because we only discover them when they are fully-blossomed. What’s going to happen with SolarWinds? Incredibly, Microsoft (which was also compromised in the attack) pushed out a patch update to Windows that disables the compromised version of the SolarWinds agent. In other words, Microsoft said, “We can put that fire out! We’re the premier seller of gasoline, after all!” Microsoft pushes stuff out to millions of machines all the time, and if you want to believe that the US government has never tried to get Microsoft to push something dodgy in their patch-stream, you don’t know the US government very well.
Windows’ kernel tries to do so many things that one of the things it does badly is security. Microsoft tries, but basically they can’t get around the transitive trust problem; they need to support so many device drivers, from so many places, that it’s impossible to know the provenance and quality of all the code. Same for Apple, for what it’s worth, though Apple has done a better job of lip-servicing security (mostly because Microsoft has shit the hot tub so hard, so often). Meanwhile, Cisco, which has also had vulnerabilities and backdoors, pushes devices that are so critical to infrastructure that companies are reluctant to patch them for fear of down-time. Do you want to thoroughly own the internet? Go work for one of those companies. While you’re getting stock options and free espresso as a developer, you can put a backdoor in any part of the system that will be forward-deployed. Wait a few years for it to propagate and find its way into critical systems, then sell it. Be careful: the people who’ll want to buy it can play rough. One of the programmers who worked for me at NFR went on to work at Intel as a consultant for a while, because he had implemented our super-amazing packet capture code, in which we got the Intel EtherExpress chip to DMA captured packets into a set of ring buffers that were allocated in process memory. There was a single pointer that the EtherExpress flipped back and forth to indicate which was the active ring buffer, so the process only had to check one memory address in order to collect incoming packets – no copying, no nothing. I talked to him a couple of years later and he said, “You know, I could code a couple of mistakes into the driver and nobody’d have any idea they were there, and I could wait a few years then sell them for a whole lot of money.” I’m sure that the idea has occurred to others.
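That capture scheme is worth a sketch. This is a toy Python simulation of the idea only – in the real thing the NIC hardware was DMA-ing into process memory, and every name here is invented – but it shows the shape: two buffers, a single “active” index the producer flips, and a consumer that polls exactly one location:

```python
# Toy simulation of the double-buffer capture scheme: the "NIC" fills
# whichever buffer is active; a single shared index tells the process
# which buffer the NIC is writing into, so it drains the other one.
class Capture:
    def __init__(self):
        self.buffers = [[], []]
        self.active = 0              # the one "memory address" polled

    def nic_receive(self, packet):
        # Hardware appends into the currently active buffer.
        self.buffers[self.active].append(packet)

    def nic_flip(self):
        # Hardware flips the pointer when a buffer fills up.
        self.active ^= 1

    def drain(self):
        # The process drains the *inactive* buffer in place -- no
        # copying between address spaces, just a check of one index.
        idle = self.active ^ 1
        packets, self.buffers[idle] = self.buffers[idle], []
        return packets

cap = Capture()
cap.nic_receive("pkt-1")
cap.nic_receive("pkt-2")
cap.nic_flip()                       # NIC now writes into buffer 1
cap.nic_receive("pkt-3")
print(cap.drain())                   # → ['pkt-1', 'pkt-2']
```

The point of the single flipping pointer is that producer and consumer never touch the same buffer at the same time, which is why the process side needs no locking and no copying – and also why a “couple of mistakes” in a driver at that layer would be so valuable.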
Let me summarize: the current computing environment is a castle made of shit. The roof is shit holding together straw. The walls are straw-bales and a few rocks, held together with shit. The interior is sculpted out of shit. The foundations are massive blocks of hammered shit, and the under-works are dug deep into bedrock of shit. The walls of the castle bristle with shit-cannons that fire balls of explosive shit, but nobody really knows who controls the cannons, because the management interface for those cannons is – you guessed it – pieces of toilet paper smeared with shit. Anyone can put their own shit in there and take over control of the cannons, but the US government agency responsible for homeland security likes it that way, because they might need to jack up the Iranians, who designed their shit-castle using US shit-castle technology. The soldiers that patrol the battlements wear armor made of well-polished shit, with swords that look like damascus steel, but actually it’s laminated horseshit and diapers – they are morons who have been trained to obey orders instantly without a thought, which means that anyone who tells them to stab themselves in the nuts, or open the gate, will be instantly rewarded with compliance. “Do you want fries with that?”
SolarWinds is a big deal, but only because it’s the name that’s written on the shaft of the arrow that has been stuck through the software industry’s heart for years.
Approximately 80%. We’ll never be able to know the exact number, though.
I remember reading that the way to put out a fire in a cotton bale is gasoline, and I have real trouble believing that. I’d assume that the way to put out a fire in cotton bales is to drag them into a field and wait for rain and fungi to deal with it. It might take a while but gasoline seems like an expensive and risky way to magnify the problem. Perhaps I need to research this topic.
A few years ago at a conference (tequila was involved) I had a really nihilistic idea for a business that would make a fuckton of money by trying hard not to solve the internet security problem. I discussed it seriously with my friends who were there and we all concluded that it absolutely would work, it’d require a totally charming sociopath to serve as “front man” (e.g.: a Steve Jobs or Elizabeth Holmes) and an up-front investment on the order of $10mn, which is basically nothing. If it worked, it would be worth hundreds of millions, really quickly. And by “work” I mean “get people to throw money at it” not “actually solve problems” – basically, computer security’s F-35. I don’t know if I should do a public write-up about the idea or not.
My friend D.B. just sent me a link to an excellent write-up from Microsoft on how the SolarWinds DLL attack worked. [Note: DLLs and run-time linkable code were Microsoft’s brilliant idea] [microsoft] They even got in a plug for Microsoft Defender; an excellent technology for locking barns after horses have departed them.
The fact that the compromised file is digitally signed suggests the attackers were able to access the company’s software development or distribution pipeline. Evidence suggests that as early as October 2019, these attackers have been testing their ability to insert code by adding empty classes. Therefore, insertion of malicious code into the SolarWinds.Orion.Core.BusinessLayer.dll likely occurred at an early stage, before the final stages of the software build, which would include digitally signing the compiled code. As a result, the DLL containing the malicious code is also digitally signed, which enhances its ability to run privileged actions – and keep a low profile.
A slow-blossoming disaster, indeed.