Given all the revelations about the NSA and GCHQ spy agencies intercepting the communications of individuals all over the globe, the obvious question that arises is to what extent they were involved in the Heartbleed bug. This is the weakness in the OpenSSL protocol that enables third parties to extract 64K chunks of information at a time from targeted computers without the hosts being aware, a security problem so serious that it even caused the Canadian government to suspend electronic tax filing.
Suspicion has fallen on the two leading members of the ‘Five Eyes’ nations, the US NSA and the UK GCHQ. There are two levels of questions that can be raised.
- Was the NSA aware of the security flaw all along and did not sound the alarm because they were exploiting the flaw, even though it meant that millions of people would have been exposed to fraud?
- Did the NSA and/or GCHQ actually create the flaw?
Reports are emerging that the answer to the first is ‘yes’. Michael Riley has an explosive story in Bloomberg.
The U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug, and regularly used it to gather critical intelligence, two people familiar with the matter said.
…Putting the Heartbleed bug in its arsenal, the NSA was able to obtain passwords and other basic data that are the building blocks of the sophisticated hacking operations at the core of its mission, but at a cost. Millions of ordinary users were left vulnerable to attack from other nations’ intelligence arms and criminal hackers.
Naturally the NSA has denied being aware of the bug and the New York Times, ever eager to serve as the government’s mouthpiece, has an article by David Sanger that seeks to absolve them of any involvement.
At the center of that technology are the kinds of hidden gaps in the Internet — almost always created by mistake or oversight — that Heartbleed created. There is no evidence that the N.S.A. had any role in creating Heartbleed, or even that it made use of it. When the White House denied prior knowledge of Heartbleed on Friday afternoon, it appeared to be the first time that the N.S.A. had ever said whether a particular flaw in the Internet was — or was not — in the secret library it keeps at Fort Meade, Md., the headquarters of the agency and Cyber Command.
This strains credulity because, as Riley says, finding such weaknesses and exploiting them is a central part of NSA’s mission to which it devotes enormous resources. Could they have been unaware of such a serious flaw for two years?
The NSA and other elite intelligence agencies devote millions of dollars to hunt for common software flaws that are critical to stealing data from secure computers. Open-source protocols like OpenSSL, where the flaw was found, are primary targets.
The Heartbleed flaw, introduced in early 2012 in a minor adjustment to the OpenSSL protocol, highlights one of the failings of open source software development.
While many Internet companies rely on the free code, its integrity depends on a small number of underfunded researchers who devote their energies to the projects.
In contrast, the NSA has more than 1,000 experts devoted to ferreting out such flaws using sophisticated analysis techniques, many of them classified. The agency found Heartbleed shortly after its introduction, according to one of the people familiar with the matter, and it became a basic part of the agency’s toolkit for stealing account passwords and other common tasks.
What about the more serious accusation that it was the NSA that actually created the flaw? It would not surprise me in the least since we already know that the NSA deliberately weakened encryption standards by using its influence in the National Institute of Standards and Technology (NIST).
As Kim Zetter writes, finding ways to intercept encrypted traffic has long been a major part of the NSA’s efforts.
Cracking SSL to decrypt internet traffic has long been on the NSA’s wish list. Last September, the Guardian reported that the NSA and Britain’s GCHQ had “successfully cracked” much of the online encryption we rely on to secure email and other sensitive transactions and data.
According to documents the paper obtained from Snowden, GCHQ had specifically been working to develop ways into the encrypted traffic of Google, Yahoo, Facebook, and Hotmail to decrypt traffic in near-real time, and there were suggestions that they might have succeeded. “Vast amounts of encrypted internet data which have up till now been discarded are now exploitable,” GCHQ reported in one top-secret 2010 document. Although this was dated two years before the Heartbleed vulnerability existed, it highlights the agency’s efforts to get at encrypted traffic.
Natasha Lennard describes some theories about how the NSA may have created this flaw, saying “For some time, cryptographers have suggested that the NSA has been secretly paying open source developers (developers of open source tools like OpenSSL) to sneak in bugs.” What adds to the suspicions is that although this flaw has been around for two years, so far there have been no reports of its exploitation by non-governmental entities for criminal activities such as fraud. If ordinary cybercriminals created this flaw, they would likely have exploited it quickly before it was discovered.
There is no documentary evidence as yet that this is the case, but post-Snowden history suggests that you cannot go far wrong by assuming the worst about the NSA and GCHQ.
Marcus Ranum says
The bug is relatively simple sloppy programming. A deliberately crafted bug would be just as easy to create and wouldn’t have to go through all the memory-scraping and sifting process -- it’d be something that led to a protocol meltdown (less likely) or a buffer overrun (highly likely) that would allow complete remote strip-mining of the web server.
So, I’d guess “no”
Maybe they knew about it. But I doubt even that -- bugs can remain latent and undiscovered for a very long time.
root.veg says
Almost certainly *not* created by the NSA; the particular code commit in question is out in the open for all to see:
http://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=4817504
Unless you believe that Robin Seggelmann was somehow under the NSA’s control… which I think is verging on wacky conspiracy theory.
As to when the NSA found out about it, that’s a different question 🙂
AsqJames says
Randall Munroe’s done an excellent XKCD which visually illustrates what’s going on with the Heartbleed bug better than any written explanation I’ve seen.
Without internal info from NSA/GCHQ I think it’s going to be nearly impossible to guess whether this was an accident or not.
If the coding “error” was deliberately introduced, it’s a really clever way to do it. The heartbeat “pings” and responses aren’t logged (or at least they weren’t before now, I’m guessing that may change!). A careful attack would play the long game and scrape just a little data at a time from any particular server. Too much at once might look like a weak DoS attack, but if it’s not detected in real time there’s no way to know whether any system was ever under assault.
That lack of an audit trail, and the fact that (as far as I understand it, and I admit that may not be all that far) it’s also exactly the kind of coding mistake that can easily be made even by experienced professionals, make it perfect for the likes of NSA. It would probably be less useful to private sector cyber-criminals because, to get the best advantage, you’d need massive storage and computing power to match up all the scattered fragments of data collected from the hundreds of thousands of affected servers.
Lassi Hippeläinen says
“the OpenSSL protocol”
OpenSSL isn’t a protocol, it’s a software library that implements several protocols. The bug affects only OpenSSL’s implementation of the heartbeat extension added to TLS and DTLS in 2012. The protocols themselves are OK. If your server uses some other implementation than OpenSSL, you should be safe.
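To make the mechanics concrete, here is a minimal model of the flawed logic, written in Python purely for illustration (the real code is C inside OpenSSL; the function name `buggy_heartbeat` and the memory layout are invented for this sketch). A heartbeat record carries a client-supplied payload length, and the buggy implementation trusted it:

```python
def buggy_heartbeat(memory, record_offset):
    """Echo a heartbeat payload, trusting the length the client claimed.

    `memory` models the server's memory as one bytes object; the
    heartbeat record starts at `record_offset`, laid out as
    [2-byte big-endian claimed length][payload...].
    """
    claimed = int.from_bytes(memory[record_offset:record_offset + 2], "big")
    start = record_offset + 2
    # BUG: no check that `claimed` matches the payload actually sent,
    # so the reply can include whatever sits next to it in memory.
    return memory[start:start + claimed]

# The client sends a 5-byte payload ("hello") but claims 0x10 = 16 bytes;
# "SECRETKEY" models unrelated data adjacent in server memory.
memory = b"\x00\x10hello" + b"SECRETKEY" + b"\x00" * 16
leak = buggy_heartbeat(memory, 0)
assert b"SECRETKEY" in leak   # the reply leaks adjacent memory
```

Because the handler copies however many bytes the client claimed, the reply drags along whatever happened to sit next to the real payload in memory, which in a live server could be passwords, session cookies, or private keys.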
The NSA probably did not create it. The author of the bug, Robin Seggelmann, denies any intentional sabotage. And the code reviewers missed the bug too. But the NSA probably lies about not knowing. Someone must have decided that playing ignorant (and incompetent) is a lesser evil than admitting they willingly exposed everybody to a serious danger.
Wikipedia has a good summary:
http://en.wikipedia.org/wiki/Heartbleed
Jörg says
Lassi wrote: The author of the bug, Robin Seggelmann, denies any intentional sabotage.
The Guardian has an interview with him:
http://www.theguardian.com/technology/2014/apr/11/heartbleed-developer-error-regrets-oversight
Dunc says
If the NSA had deliberately created it, I’d have expected it to be less sloppy. Unless you want to believe that they deliberately made it sloppy in order to preserve plausible deniability, but then you’re into real paranoid territory where every exploitable bug in the world suddenly looks like NSA sabotage. People make mistakes.
I am, however, logging this as another datapoint against the notion that OSS is intrinsically more secure because having lots of people looking at the code means that bugs get caught.
jamessweet says
As far as the NSA creating the bug: As others have said, that seems pretty unlikely at this point. It requires a far more elaborate explanation than that it was simply an error, and there’s not strong evidence indicating the more elaborate explanation, so… Yeah, it’s possible I guess, but it seems really unlikely.
As far as the NSA knowing about the bug: I’m somewhat agnostic on this until more evidence emerges. Two years ago I would have been highly skeptical. Today, knowing what we know? I don’t think it’s at all implausible. I also don’t think it’s implausible that they didn’t know, and I don’t find any of the evidence so far to be convincing.
That said: Just the fact that it’s totally plausible that they knew and sat on it, that’s highly disturbing right there, even if it turns out (this time) to not be the case.
jamessweet says
Is it? I’m no open source partisan, but how do we know there aren’t equally serious bugs lurking in IIS, etc., which remain unknown (to the public) because it is not open source?
Or is your argument that when you have well-funded malicious entities (like the NSA), the “more eyes” thing can backfire on you? That could be true…
doublereed says
Gosh, and here I thought NSA was supposed to keep us secure.
Dunc says
We don’t. However, the fact that an absolute stinker like this can sit there for this long is a pretty strong refutation of the idea that the OSS approach means that bugs get caught (in a timely fashion, anyway).
I’m not saying closed-source is better, I’m just saying that it’s a pretty big blow to the idea that open-source is. I’ve worked in the industry for nearly 20 years; I’m pretty sure it’s all bug-ridden shit. 😉
Jörg says
Dunc wrote: “I’m not saying closed-source is better, I’m just saying that it’s a pretty big blow to the idea that open-source is.”
Critical security bugs in the best-selling desktop operating system often only get fixed after someone from the outside reports on them. For decades, we have often seen denials, belittlement and foot-dragging by Microsoft, until at some later second Tuesday in some later month a patch got issued. Which often opened other bugs that subsequently also needed fixing.
When Heartbleed became public, the public fix was issued within hours.
I have been a systems administrator for Unix/Linux, Windows and other operating systems for more than three decades, and I have seen the superiority of Open Source and Free Software multiple times.
richardrobinson says
The word “sloppy” keeps getting thrown around. “It’s too sloppy to have been deliberate.” and so on. I’m a computer engineer, but I don’t program much in my job, and never in C, so I’m not really well positioned to comment on the quality of the code.
But as I understand the defect (as explained in XKCD), the error was entirely in the omission of validation of a parameter. Sloppy design? Absolutely. But this sort of sloppiness simply doesn’t show up in the code. The omission has no impact whatsoever on whether the code does what it’s supposed to. If you were deliberately introducing a vulnerability that you hoped would go undetected for as long as possible, this would be a fantastic way to do it: introduce a new feature with an easily overlooked design flaw that has absolutely no impact on the normal operation of the code. It is impossible to know whether the omission was deliberate or accidental.
While I remain utterly unconvinced this particular bug was intentionally introduced (there’s no evidence for this, after all) the justification that it’s “too sloppy” is itself sloppy.
Dunc says
Well, sure, it’ll compile, and a “happy path” test won’t catch it, but still… Rule #1 of writing robust C/C++ code is that you should never, ever, ever simply trust a parameter from the outside world. This error is very closely related to the classic buffer overrun, which every C/C++ programmer should be aware of. It’s the sort of thing that should be drilled into neophyte C/C++ programmers until they automatically validate parameters in their sleep. The design should have accounted for it, the programmer should have watched for it, there should have been a specific unit test to check for it, and even if it managed to get past all of that, it should have been caught in either code review or security review. (Please, for the love of God, tell me these people are doing both on every single commit.)
It’s exactly the sort of trivial error that gives attackers control over your system. There’s absolutely no excuse for this sort of mistake in even the most trivial app, never mind in a security-critical bit of core internet infrastructure.
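The defensive pattern Dunc describes can be sketched the same way (again Python as illustration, with an invented function name; the actual fix is in OpenSSL's C code). Modeled on the spirit of the published patch: bounds-check the claimed length against the record actually received, and silently discard a request that lies:

```python
def safe_heartbeat(record):
    """Bounds-checked echo: never trust the client-supplied length."""
    if len(record) < 2:
        return None                      # too short to carry a length
    claimed = int.from_bytes(record[:2], "big")
    if claimed > len(record) - 2:
        return None                      # claimed length lies: discard
    return record[2:2 + claimed]

assert safe_heartbeat(b"\x00\x05hello") == b"hello"  # honest: echoed
assert safe_heartbeat(b"\x00\x10hello") is None      # lying: dropped
```

Dropping the malformed request silently, rather than answering it, is the behavior the real fix adopted: a lying heartbeat gets no reply at all, so there is nothing for an attacker to harvest.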