In the late 1990s, the US Government was setting up a case to argue that hacking equated to terrorism: while it was mostly being used for illicit statecraft, it could potentially be used by terrorists. In 1997, at a keynote for Black Hat Briefings, I warned the hacker community what was coming but – at that time – there was a great deal of “community outreach” being done by NSA – they were hiring hackers (whose work we now see leaking on a regular basis) and it was all very hip and friendly.
While that was going on, the FBI was penetrating hacker groups because (with hacking equated to terrorism) they had much more leeway as far as intercepting communications, suborning hackers, etc. [stderr] In terms of being easy to penetrate and suborn, the hacker community had to be a laugh-riot compared to any organized crime operation. The hacker community represented a self-selected body of useful idiots who could be used and discarded, or blamed for just about anything. It was around this time that I began to become suspicious of libertarian-posing politics; many self-styled freedom-loving hackers were very quick to take the establishment’s money and turn against the privacy rights of the population.
Hackers are a convenient assignee of blame for the establishment. Does your data security suck? Blame the Chinese hackers, or the North Korean hackers, or Russian hackers. Are your electronic election systems wide open to attack? Blame hackers – after all, it’s awkward to explain how you spent millions of dollars on badly-designed voting machines, and ignored the warnings from security practitioners who looked at the system design. Or, if you’re a political candidate, why not be utterly stupid about how you manage your email, and then blame hackers when your documents leak out that show how you tried to corrupt your party’s politics? Why not. Hackers are useful idiots – in that sense they are very much like terrorists.
So, perhaps you recall that the city of Atlanta got hit hard by cryptolocker malware – to the point where critical city services were unavailable and about $9 million was spent (not including wasted time and annoyed citizenry) doing incident response. [cnn] As I’ve mentioned elsewhere, this is a problem that an organization elects to have when its IT executives decide that backing up data and performing basic system hygiene doesn’t matter. [stderr] Hint: it does.
I’m suspicious of this one: [bbc] It’s too convenient:
Years of video evidence gathered by police has been lost thanks to a ransomware attack on Atlanta in the US.
Most of the lost evidence involves dashcam recordings, said Atlanta police chief Erika Shields in a local newspaper interview.
The footage was “lost and cannot be recovered”, said Ms Shields.
About one-third of all software used by city agencies and departments is believed to have been affected by the attack, which took place in March.
Anyone want to bet that Gina Haspel is biting her thumb thinking, “I should have said Russian hackers wiped those torture tapes!”?
By the way:
The hearings revealed that the city has assigned an extra $9.5m (£7.1m) to finance its recovery efforts.
That’s a dramatic cost saving over basic configuration management, user controls on desktops, a centrally managed file service, and a backup tape array or network-attached storage. All of which would have cost about 1/20th of that.
Police chief Shields told the Atlanta Journal Constitution that despite losing the video recordings, no “crucial evidence” had been compromised.
Dashcam footage was a “useful tool” said Ms Shields, but added that other evidence such as the testimony of an officer would “make or break” a case.
Testimony of officers “making or breaking” cases is the problem.
There is so much wrong in the Atlanta cops’ story. For one thing, dashcam video is evidence and needs to be handled appropriately. Your typical IT evidence room is not connected to a wide-area network, and especially not to the internet, and includes network-attached storage, hard drive duplication systems, tape or hard drive archival backup, and audit systems. I know IT security teams at several Fortune 500 companies that have very nice set-ups for their evidence rooms. And they’re just worried about stuff like intellectual property and fraud, not murders. There should be:
- a “best practices” standard defining the minimal capabilities a cop-cam system must have
- a “best practices” standard defining the minimal set-up of a cop-cam evidence room
- a rule that no police agency gets to buy ammunition until it has a compliant body-cam system
In many industries, the notion of “best practices” is very important; the idea is that there is an established baseline of practice or capability. For example, an accountant’s best practices include certain protections of client files. An accountant that was not keeping up with minimal practices could be argued to be negligent. For some reason, in the US, cops can be as negligent as they care to be, with no apparent consequences. The Chief Information Security Officer of a company is answerable to the shareholders, if there’s a shareholder lawsuit showing negligence. In federal government IT, by contrast, there are repeated massive failures and no consequences. Guess what you get, then? More failures.
Oddly/ironically, encryption (competently used) is one of the important integrity controls for digital evidence. Usually what you want to do is encrypt or sign blobs of evidence with specific keys, so that anyone holding the right key can check that a blob is unaltered, and access to it can be controlled. It should be basically impossible for a cryptolocker attack to undetectably alter evidence. The same goes for votes.
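The integrity half of that idea is cheap to sketch. Here’s a minimal illustration using an HMAC over each evidence blob (the key, the clip contents, and the function names are all hypothetical, just for demonstration): with the key held by an evidence custodian, any alteration to the blob is detectable, which is exactly what a cryptolocker attack couldn’t defeat without stealing the key.

```python
import hmac
import hashlib

def seal_evidence(key: bytes, blob: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over an evidence blob.
    Anyone holding the key can later re-verify the tag; an attacker
    who alters the blob without the key can't forge a matching tag."""
    return hmac.new(key, blob, hashlib.sha256).digest()

def verify_evidence(key: bytes, blob: bytes, tag: bytes) -> bool:
    """Constant-time check that a blob still matches its tag."""
    return hmac.compare_digest(seal_evidence(key, blob), tag)

# Hypothetical example: seal a dashcam clip, then detect tampering.
key = b"per-case secret held by the evidence custodian"
clip = b"...dashcam video bytes..."
tag = seal_evidence(key, clip)

assert verify_evidence(key, clip, tag)             # intact evidence passes
assert not verify_evidence(key, clip + b"X", tag)  # any alteration is caught
```

Note this only provides tamper-evidence, not confidentiality or access control – for that you’d layer actual encryption on top – but even this much would make “the ransomware silently destroyed the evidence and nobody noticed” a much harder story to tell.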
The bad security around voting machines appears to me to be suspiciously deliberate – it’s as if the establishment in some states wants to be able to de-certify a result and blame the Russians, if the wrong person gets elected. But surely that is too cynical.