More Incompetence

If you asked Jackson Pollock to do a painting representing government computer security, it would look just like every other Jackson Pollock painting.

A network map of the State Department’s systems. Just kidding, it’s Jackson Pollock

I feel like it is literally impossible to understand how bad government computing has become; that’s redundant, I’m afraid, because it’s impossible to understand government computing at all, and security is a subset of that. I used to say that computer security is a sub-field of system administration (I still think that): if you haven’t got a grip on system administration and configuration management, don’t even think about worrying about security – you can simply know “it’s bad.”

Diana Kelley,  who (last time I checked) is IBM’s Executive Security Advisor – basically IBM’s top computer security consultant – and I used to joke around about the “tells” that indicate a security disaster. If you went to a client site and there were a lot of Dilbert cartoons in cubicles, you knew that management was a disaster and therefore security was a disaster. If you asked “how many people have access to the wiring closets?” and people looked puzzled, you could tell it was a disaster. Or, “how do you do configuration management?” if you got a blank look, you were headed into a disaster. We both felt that we could assess a client’s security just by the questions they asked us, no need to ask any of our own.

At one point, I developed something that was akin to the Anthropic Principle, which was probably inspired by a bit of a Henry Rollins rant, in which he said, “you know, nobody ever calls up Kofi Annan to just say ‘Hi Kofi!’ – think how he feels when the phone rings. He knows it’s a disaster inbound and someone is asking him to intervene.” Or words to that effect. The same thing applies to computer security consultants who do incident response: nobody calls you up and says, “hey, if you’re in LA let’s go for Thai food.” It’s always, “can you get a flight out here by tomorrow morning?” Then another thing occurred to me: the clients who call you for incident response will almost certainly have certain things in common:

  • System logs disabled
  • Overly permissive firewall rules
  • No configuration management
  • No idea what devices on the network matter
  • Desktop users have local admin and browse the web and do email with privileges

Diana and I used to call it the “Jeff Foxworthy List” after his stupid “… you may be a redneck” classist humor.

All that being said, I almost don’t need to even discuss the disaster. You, too, know what’s coming.

Buzzfeed reported: [buzz]

A disgruntled employee at the State Department changed the biographies of President Donald Trump and Vice President Mike Pence to say their term was coming to an end on Monday – nine days before President-elect Joe Biden is to be sworn in – two current-serving diplomats with knowledge of the situation told BuzzFeed News.

The president’s biography was changed to read, “Donald J. Trump’s term ended on 2021-01-11 19:49:00,” while the vice president’s biography was edited to “Michael R. Pence’s term ended on 2021-01-11 19:44:22.” The time stamp on Trump’s page changed multiple times, before both pages were removed around 3:50 p.m. and replaced with a 404 reading, “We’re sorry, this site is currently experiencing technical difficulties. Please try again in a few moments.”

Other than Pompeo, are there any gruntled State Department employees? If the premise of the story is that someone edited the web site because they were disgruntled, the pool of suspects is, indeed, large. But that’s not really the story; it’s this tidbit:

Both diplomats said that an investigation into the matter could be a challenge, considering how many people have administrative access to the content management system used for the State Department’s official website.

It’s a “closed system” that is “nearly impossible to hack,” said one of the diplomats.

It appears to me that it’s a system that is not necessary to hack. If “how many” people have administrative access, then it’s not a “closed system,” it’s an “uncontrolled system.”

The first question that I’d ask is: “system logs enabled?” and I know the answer would be “no, we turned them off because they were just taking up space.” Second question: “how many users have edit access?” and I know the answer would be “a lot.” Then it becomes a case of forensics – the content management system probably puts edit histories in comments within the document, and it would capture the account name, which is probably “admin” or something silly like that. In classical computer security, this is why “shared accounts” are a no-no and you can tell you’re dealing with incompetents if they use shared accounts rather than creating individual passworded accounts, even if they are all conferred admin privileges.
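To make the attribution problem concrete, here’s a minimal sketch (field names and data are invented for illustration – real CMS edit histories differ) of why shared accounts are a no-no: an edit recorded under a shared account names a role, not a human.

```python
# Hypothetical CMS edit-history records. A real CMS stores similar
# metadata (page, account, timestamp) with each revision.
edits = [
    {"page": "/biographies/president", "account": "admin",
     "time": "2021-01-11T19:49:00"},
    {"page": "/biographies/vice-president", "account": "admin",
     "time": "2021-01-11T19:44:22"},
    {"page": "/press/releases", "account": "jdoe",
     "time": "2021-01-10T09:12:03"},
]

SHARED_ACCOUNTS = {"admin", "webmaster"}  # illustrative list

def attributable(edit):
    """An edit is attributable to a person only if it was made from an
    individual account; a shared account identifies a role, not a human."""
    return edit["account"] not in SHARED_ACCOUNTS

# The two "admin" edits above are exactly the ones forensics can't pin
# on an individual from CMS data alone.
unattributed = [e for e in edits if not attributable(e)]
```

That’s the whole point of individual passworded accounts: the edit history becomes evidence instead of a shrug.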

Like most web sites, the State Department uses a content management system (something like WordPress, which is what FTB runs on) and every content management system maintains logs and controls who can edit what and how. For example, I can’t pop over to Pharyngula and edit PZ’s posts, and he can’t log in here and edit mine – that’s “separation of privileges,” which is Security Implementation 101. Using a shared admin account deliberately bypasses separation of privileges. Those privilege controls are there for a reason, but the State Department is so incompetent they had trouble figuring out who edited the page?
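Here’s a toy sketch of what “separation of privileges” means in a CMS, loosely modeled on role/capability schemes like WordPress’s roles – all the names here are illustrative, not any real CMS’s API:

```python
# Map roles to capabilities, then check capabilities per edit.
# A shared "admin" login hands everyone the administrator row,
# which defeats the whole scheme.
ROLE_CAPS = {
    "administrator": {"edit_any_post", "edit_own_post", "manage_users"},
    "editor": {"edit_any_post", "edit_own_post"},
    "author": {"edit_own_post"},
}

def can_edit(user, post):
    """Return True if this user's role grants editing this post."""
    caps = ROLE_CAPS.get(user["role"], set())
    if "edit_any_post" in caps:
        return True
    return "edit_own_post" in caps and post["owner"] == user["name"]

marcus = {"name": "marcus", "role": "author"}
pz = {"name": "pz", "role": "author"}
post = {"owner": "pz", "title": "a Pharyngula post"}
```

With individual accounts and roles like these, the answer to “who could have edited the page?” is a short, known list.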

Normally, figuring out who did it would be a 15-minute job. You’d look at the content management system logs, see what account made the change, then see what system it connected from (IP addresses are recorded in CMS logs along with timestamps and user IDs; some CMSes keep versions of the file, so you could verify that the changes were, in fact, committed at that time). Then you’d see who was logged in at that IP address at that time, either through Active Directory logs or endpoint logs. If the site was really set up by a non-incompetent, there would be a firewall between the CMS staging system and the rest of the network, and that would corroborate the logs. It’s possible that there’s no staging system (who needs quality control? “what, me worry?”) and State Department propaganda department staffers just edit the live site. I wouldn’t be surprised. These are, apparently, a bunch of dumbasses.
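That 15-minute job is basically a log join. Here’s a hedged sketch, with invented field names and data (real CMS and Active Directory logs are messier), of correlating an edit’s source IP and timestamp against workstation logon sessions:

```python
from datetime import datetime

# Hypothetical CMS edit log: account, source IP, timestamp.
cms_log = [
    {"account": "admin", "ip": "10.1.2.37",
     "time": datetime(2021, 1, 11, 19, 49, 0), "page": "/president"},
]

# Hypothetical logon sessions (Active Directory or endpoint logs):
# who held which IP, and when.
logons = [
    {"user": "STATE\\jsmith", "ip": "10.1.2.37",
     "start": datetime(2021, 1, 11, 8, 55),
     "end": datetime(2021, 1, 11, 20, 10)},
    {"user": "STATE\\mjones", "ip": "10.1.2.44",
     "start": datetime(2021, 1, 11, 9, 2),
     "end": datetime(2021, 1, 11, 17, 30)},
]

def who_was_on(ip, when, sessions):
    """Return users whose logon session covered this IP at this time."""
    return [s["user"] for s in sessions
            if s["ip"] == ip and s["start"] <= when <= s["end"]]

# One candidate list per CMS edit.
suspects = [who_was_on(e["ip"], e["time"], logons) for e in cms_log]
```

That only works, of course, if the logs were enabled in the first place – which is the whole point.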

I worked on a project, building the 1st generation of firewalls for the State Department, under program management of one Peter Kurtz, who was a hard-core old school security guy. That was around 1992 or so – I ported my firewall code to State’s preferred platform of the time, which was SCO Xenix because Xenix actually had some pretty cool security capabilities. I had to modify some of my software to work better under the more restrictive security framework, and forklifted that code into my product’s base release, making it better. I don’t know how long Kurtz lasted there but he retired a few years after that and I believe that all the security was ripped out and replaced with “COTS” (Commercial Off The Shelf software) which is code for “just throw stuff together” since being careful about how software is configured is not “off the shelf” – the “off the shelf” configuration of almost everything is pathetically insecure.

Years later, I commented in a talk, “government computing IT executive management has been replaced with people who only know how to read powerpoints from vendors.” I thought that the comment might get a chuckle but, instead, the temperature of the room plummeted and afterward, I walked off stage through the smoke of burning bridges.

Remember: these are the people who are complaining that China and Russia are hacking them to bits. Meanwhile, they’re such dumbasses they can’t even manage a content management system for a public-facing website that is, arguably, semi-important.

------ divider ------

Traceroute tells me that the State Department hosts at Akamai. So, it’s a “cloud computing” set-up. Another bridge burner: “Cloud computing is great for organizations that don’t know how to do IT.” I really believe that: if you don’t know how to set up a CMS, you can host at WordPress and they’ll keep it working for you, for a one-time charge of $200 and $11-$40/month. Then, just work within the perfectly good framework that the cloud providers include, and don’t bypass it by turning logs off, or using shared accounts. The sad fact about a lot of these face-plants is that someone often had to go to some trouble to turn security off. Sheesh!


  1. flex says

    It is also possible that they know precisely who did it, and have decided to handle it internally.

    Phrases like “could be a challenge” – it wasn’t a challenge, but it could have been. Or “how many people have access” – lots of people have access, and we know precisely who it was.

    I’m not saying their security is good, but there are other possibilities which would explain the deliberate obfuscation.

  2. Jörg says


    That was around 1992 or so – I ported my firewall code to State’s preferred platform of the time, which was SCO Xenix …

    Not SCO UNIX? I remember switching from 7bit-ASCII SCO Xenix in the late 80s was a huge relief, because then I didn’t have to fiddle with e.g. keyboard matching files any more so that German umlauts and the like could be entered.

  3. says

    SCO had a B2 evaluated Xenix with mandatory access controls. It was cool stuff; you could apply permissions controls on sockets and shared memory, so I implemented a read-down immutable logging agent that seemed pretty cool at the time. Coincidentally, right after I did that someone started exploiting a format string overflow in BSD syslogd so I was really happy I did that.

  4. Jörg says

    My customers then were nursery gardens and landscape building companies. Their tools were spades and forks. I didn’t have to fear attacks on daemons. ;-)

  5. drken says

    To be fair, the website that was hacked was the equivalent of the collection of portraits of the CEO and Board of Directors some companies put in their lobbies. That a disgruntled employee drew a mustache and devil horns on one of them doesn’t strike me as a national security disaster. That they were able to get around what passes for security so easily is disturbing, I just hope the rest of the state dept operates at least a little better.

  6. Dunc says

    That’s not a disaster. This is a disaster. Some choice tidbits that give away more than they perhaps wanted to in a reassuring public statement:

    “A number of SEPA systems will remain badly affected for some time, with new systems required.”

    “Our email systems remain impacted and offline.”

    “Additionally staff schedules, a number of specialist reporting tools, systems and databases remain unavailable with the potential for access to be unavailable for a protracted period.”

    “Work continues by cyber security specialists to seek to identify what the stolen data was. Whilst we don’t know and may never know the full detail of the 1.2 GB of information stolen…”

    You can read between those lines at least as well as I can…

  7. xohjoh2n says

    @4 …only attacks on Vampires

    @6 Helpfully, that page doesn’t have a date on it. I assume it’s talking about xmas just gone, but as far as I know that page could have been up for a decade.

  8. says

    I’m from the private sector and I’m here to help you.

    Exactly! If the government doesn’t “get it” they are going to do a bad job at it, and in that case it’s better to let the private sector do it. On the other hand, that’s not a guarantee of goodness or cost savings.

    In a sense, the government has always outsourced the development of military technology. They bought Colt’s miserable M-16 and .45ACP – iconic but fundamentally flawed technologies. And the F-35 and F-22 and a huge long list of technologies where the private sector did the work and the government’s only role was to select what was the best/easiest/cheapest/whose lobbyists were best – it’s very much like the whole Amazon cloud versus Microsoft cloud versus Oracle cloud. It’s an open question as to whether the government is capable of creating systems like that, or not. But here’s the razor blade in the apple: by doing that, the government gives away its chance to control the technology it’s buying. What Henry Rollins used to call “The menu today is: fish” – your options are limited.

    I have long believed that governments should recognize software and tech as strategic developments, and have a department of technology that is responsible for strategic tech. Of course the DoD would kill that immediately and so would the IC.

  9. Jörg says

    xohjoh2n @7

    …only attacks on Vampires

    My motto is: “For a systems administrator, paranoia is a healthy state of mind.”


  10. blf says

    @3, B2 Xenix (also known as Trusted Xenix) was not SCO, that was IBM. (IBM did try to interest SCO in the B2 system but SCO’s suits decided on a company based in Atlanta whose name I currently don’t recall (for SCO’s Unix; Xenix by that time was considered by SCO to be a mature product, with Unix being the focus).) Xenix was Microsoft(!)’s modified version of Unix, which was licensed in source form to multiple companies (of which SCO was probably the best known). Since SCO did(? had to?) provide Microsoft with source, it is possible IBM started with that SCO-modified Xenix (I now don’t recall, sorry!).


    The first question that I’d ask is: “system logs enabled?” and I know the answer would be “no, we turned them off because they were just taking up space.”

    Yep, been there… and the backups were (usually) shite as well.

    Second question: “how many users have edit access?” and I know the answer would be “a lot.”

    Yep, been there… One appalling example was a major bank in teh “U”K (which I won’t name) that was using SCO Xenix for some stuff (of a more routine nature) and gave everyone superuser (root) access because, paraphrasing, We don’t want anyone to be limited. I myself had a run-in of a similar nature when I worked at a well-known large Unix(-mostly) company, and refused superuser access to my workstation, pointing out the company had a team of on-site experts.

    In classical computer security […] “shared accounts” are a no-no and you can tell you’re dealing with incompetents if they use shared accounts rather than creating individual passworded accounts […].

    Yep, been there too.


    A variant technical failing is logging clews (usually insecurely as well) as to the passphrases for an account (especially privileged accounts). Two of multiple examples I’ve had to deal with:

    ● On a failed access attempt, logging both the user (account) and passphrase attempted.
     It’s often easy to guess the correct passphrase, and even easier to determine the correct account. Ooops!

    ● When setting / changing the passphrase, one test is to look for using known words, which is often done by using the system’s spell-check program — which often record bad spellings (“errors”), precisely the sort of thing which is often suggested as the basis for a possibly-good passphrase. (This logging is for debugging and dictionary enhancement purposes.)
     That is, the good (new, possible) passphrases are logged. Ooops!

  11. says

    A variant technical failing is logging clews (usually insecurely as well) as to the passphrases for an account (especially privileged accounts).

    There’s that. There are also the companies that email around passwords, as if email is a secure transport. You know, “account=root, password=rumplest1ltskin” kinda stuff. Except nobody uses a password that long. (Don’t worry, it’s in the crack database.)
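    For contrast, here’s a minimal sketch of the safe pattern for the failure modes blf describes: verify against a salted hash and, on a failure, log the account and the event but never the attempted passphrase. (All names here are illustrative.)

```python
import hashlib
import secrets

def hash_passphrase(passphrase, salt):
    """Derive a salted digest; store this, never the passphrase itself."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def check_login(account, attempt, stored, log):
    """Verify a login attempt. On failure, log account and event only --
    the attempted passphrase is deliberately absent from the record."""
    salt, digest = stored[account]
    ok = secrets.compare_digest(hash_passphrase(attempt, salt), digest)
    if not ok:
        log.append({"event": "auth_failure", "account": account})
    return ok

salt = b"0123456789abcdef"  # fixed here only to keep the example deterministic
stored = {"jdoe": (salt, hash_passphrase("correct horse", salt))}
log = []
```

    The test of a sane design is that even someone reading the logs over your shoulder learns nothing useful about the passphrases.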
