Marcus has a long post on hacker mythology — I don’t have his depth of experience on it, but I’ve had a little exposure.
Back in the 80s/90s, I was on the edge of hacker culture. I was cracking games, I was doing a little phone phreaking, I was poking around in that culture, reading the magazines and trying stuff out. My general impression: “This is easy.” A little knowledge about computers — an epiphenomenological sort of knowledge — was easily amplified into some success in breaking into things. I talked with “hackers” online, and was unimpressed. They could talk a good game, but they didn’t understand much. Their primary skill was in bragging.
Then I got a job as a systems manager for an academic unit, working with VAXen for a lot of scientists who just saw them as tools to get a job done, and who needed someone to take care of keeping everything running smoothly. I worked at that for a couple of years. General impression: “This is hard.” You had to dig deep to understand how to prevent harm to the system. Those were big, complex operating systems, and you knew that all it took was one of those idiots I used to be, reading about some hole in one of many subsystems, to take advantage, so you had to read everything and keep up with all those DECtapes that came rolling around with technical issues.
I pretty much lost all respect for so-called “hackers” fast, and have never seen the virtue of hiring hackers to improve security. You don’t hire people who know how to smash things with hammers to enhance the security of locks — they don’t know anything you don’t.
jaybee says
I program by trade, and I have never done any nefarious stuff. But I was curious, so about 20 years ago I started reading 2600 magazine, which at the time was mostly focused on phone phreaking but had a fair number of computer exploit articles. It quickly became obvious that maybe 25% of the articles were truthful and original work, 50% were questionable, and 25% were transparent fabrications. There were a lot of posers then.
2600 is still being published, but I haven’t read it since. Not sure what they talk about now.
Marcus Ranum says
Hacking is super easy. You simply need to know how to build systems, then ask “if I weren’t a good system-builder, what mistakes would I make?” Then go look for those mistakes in existing systems. It’s very rare that someone invents a new category of flaw,* but it’s every day that someone finds another instance of a typical flaw. Most of what we consider to be “internet security” is flaw-hunting and flaw-fixing. The last 25 years of security have been largely a waste of effort.
One of my pen-tester friends makes a good living going to organizations and writing them up this year for the implementation errors he wrote them up for last year, and the year before, and the year before that. Most of the big-name hackers aren’t doing anything more than looking a little harder or a little deeper. Even Mitnick’s most famous hack (his TCP sequence-guessing attack against Shimomura’s server, which was stupidly configured, too) was implemented by someone else, an Israeli who went by the handle ‘jsz’. A lot of these hackers are no better than their tools.
(* Paul Kocher’s CPU timing attacks, being an example of a whole new type of flaw)
erikthebassist says
There are script kiddies and then there are elite hackers, some blackhat and thankfully some whitehat. The vast majority are script kiddies, and they aren’t worth worrying about if you take the most basic measures to thwart them, but a truly dedicated and competent blackhat can compromise any system; it’s just a matter of time.
I do think there is value in hiring them, but it’s a double-edged sword, as they can turn on you at any time. It’s best to stick with the whitehats.
Incidentally, the #1 tool hackers use is social engineering (that’s how they got Podesta and got into the DNC’s servers). Given that any human connected to a given system becomes a vector for attack, Hillary’s server, with only a few people having access, was probably far more secure than any government email server, which has legions of potentially stupid people just waiting to click the wrong link in a phishing email.
But her actions were “reckless” and “dangerous”, yeah right.
Corey Fisher says
So, quick defense: hacking attracts assholes, but not all hacking is assholish. However, until formal verification of security properties is widespread, pen testing is absolutely something that needs to happen, because systems are extremely large, extremely complex, and extremely hard to catch every security flaw in. Dealing with the combination of those two facts is one reason why professionals and academics in the security industry have ethical standards. (e.g., one thing I remember from my security course: reporting bugs to the system owner before letting the public know about a security hole is good practice, because once the public knows about the hole they can figure out how to get in through it, but the public should know eventually in case other bad actors knew before. There’s more to it, but that’s the basic idea.)
So I would argue the correct response to a messy, sometimes-harmful culture here isn’t to disparage the value of pen testing and similar things (it’s absolutely useful, because otherwise nobody would know about any vulnerabilities except bad guys who don’t want to fix them) but to call for tighter ethics and accountability. Compare policing: I generally see BLM calling for things like actual punishment of bad actors (and an end to a culture that protects them), body cameras, and de-escalation training, not for firing all cops.
applehead says
Since the practice of social engineering extends hacking to “phone up a random user with the necessary privileges, pretend to be from the IT dept. and hope he falls for it,” hacking really is child’s play.
wzrd1 says
Most of the “hackers” out there are just script kiddies, using tools written by others, but actually having zero clue about what the tool does and how it operates.
The real “cracker” types, the ones who actually run things like SQL injection, buffer overflows, timing attacks against race conditions, etc., are rare.
Case in point: the DNC “hack” wasn’t a hack; it was a spear phishing attack, blended with an internal threat installing a RAT (Remote Access Tool).
An e-mail was sent to targeted users telling them that they needed to visit the enclosed link to change their password. The link went to an external, malicious site, and some users used it, giving up their login credentials.
That ain’t a hack. Installing the RAT ain’t a hack. Using a RAT ain’t a hack any more than using any other tool is.
I’ve watched hacks go on, both by white hat pen test teams from the NSA and by malicious actors, both being quite creative, while using some really old tricks.
I mean, old as in pass-the-hash, Kerberos golden ticket attacks kind of old, you know, nearly as old as electrons.
One clever individual had established an RDP session on a server and dumped a program into a text editor, which was saved with a binary extension and executed to harvest credentials. The only novel thing there was dumping a buffer into a text editor, which is oh-so-DOS-era old.
Would I hire that individual to secure our servers? Hell no. How can I trust them to not install a backdoor somewhere when I’m not looking and use our servers to attack another company’s servers? How can I trust that individual to not examine my network’s defensive posture for future compromise?
Nope, I’ll stick with what’s worked, the BOFH.
http://www.theregister.co.uk/data_centre/bofh/
I think I just gave my age away. I remember back when Simon wrote that on Usenet…
erikthebassist says
Great post, Marcus. I made my comment at #3 based only on what PZ wrote, without having read your post yet, so I now realize how trivial and redundant most of my comment appears in that context! Oh well, RTFA strikes again!
I agree with the general idea that most “hackers” are true shitheels, but I do think there’s value in the white hats who truly pen test and expose vulnerabilities, and then honestly report those vulnerabilities to the parties responsible for fixing them, and only them.
FTR, I’m not a coder or hacker or anything of the sort, but I do make a living knowing a lot about firewalls, endpoint security and the like, albeit from an architectural solutions POV.
I have nothing but respect for the actual security admins who accept the responsibility for keeping systems secure. It’s got to be nerve-wracking and keep you up at night.
dvizard says
For one, you are right. On the other hand, there is a certain distinction to be made. Downloading serial numbers to crack a game or, what was it called, Back Orifice to remote-control a computer is easy and doesn’t make you a potential hire as an IT security professional. You are a kid (or adult, it doesn’t matter) who’s fascinated with being part of an “underground” culture with an allure of magic, and with having power. It’s also easy enough to fool yourself into thinking you’re actually doing something complicated because hey, your mom doesn’t understand it, so you’re probably a hacker and you should tell the world about your badass achievements.
On the other hand, the small minority who figures out new weaknesses in systems, which are then used by many, would make decent sysadmins (technically; maybe not personality-wise).
Myself, I never was a hacker of any kind. I was a hobbyist programmer who liked to look behind the curtains of my computer. But the hacker mindset of probing into systems I didn’t understand and slowly figuring out what was going on was what ultimately led me into science rather than into programming.
PZ Myers says
Just to clarify: I was more than someone who downloaded serial numbers. I wrote a p-code disassembler to take apart software; I took a soldering iron to my motherboard to enable interrupts so I could step trace through machine code.
It was still a doddle compared to being a VAX system manager. Being able to go through code line by line is a completely different skill from being able to grasp all the disparate bits of an operating system.
Reginald Selkirk says
I maintain one Linux system outside our firewall. The security log (/var/log/secure) is an interesting read. If we get an intrusion attempt, I block addresses. The attempts come from all over the world. Once in a while we get a big dictionary attack with hundreds of attempts. For the last few weeks, things seem to be settling into a new pattern: I may see several attempts per day, but each is restricted to six attempts. They must be going for the common default factory passwords.
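A few lines of scripting will surface exactly those patterns. A minimal sketch, assuming the RHEL-style “Failed password … from <ip>” format that sshd writes to /var/log/secure (adjust the regex and path for other distros):

```python
#!/usr/bin/env python3
# Tally failed sshd logins per source IP from /var/log/secure.
# Minimal sketch: assumes the RHEL-family "Failed password ... from <ip>"
# log format; adjust the regex and path for other distros.
import re
from collections import Counter

LOG = "/var/log/secure"
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

counts = Counter()
with open(LOG, errors="replace") as f:
    for line in f:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1

# Noisiest sources first: hundreds of hits from one address looks like a
# dictionary attack; a parade of six-attempt visitors looks like
# default-password guessing.
for ip, n in counts.most_common(20):
    print(f"{n:6d}  {ip}")
```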
wzrd1 says
Reginald, could be Mirai malware scanning about. That’s the nasty that hijacked webcams, cheap routers, and other IoT devices to DDoS various sites recently.
Trying default username and password pairs is one characteristic of that malware.
Reginald Selkirk says
Sounds likely. My box is a file drop without a lot of services, so there’s not much to hit on: ssh and mail. Even if it got compromised we could just wipe it and not miss it much. ssh to root is disabled. What scares me most is the possibility of deep-seated bugs in essential packages like ssh and openssl. I hate to think that even if I were doing everything right, there is still a chance we could get hacked.
Marcus Ranum says
By the way, though I can’t document it*, I am the person who coined the term “script kiddy.” And its original use was not what it has become: originally I referred to “script kids” as the sort of auditors that were turning up from Arthur Andersen and other big audit companies – “we’re here to check your security” and they’d pull out a list like:
1) is your password longer than 6 characters?
2) does more than one person log in as ‘root’ on your unix machines?
etc.
(* It was in a thread about auditing, I believe on the old firewalls@greatcircle.com, and I’ve searched the archives on several occasions and never been able to find it. I think the current usage of the term is better, but the transformation still amuses me every time I see it.)
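In that spirit, the whole “audit” could fit in a few lines of code. A hypothetical sketch of those two checklist items against a modern Linux box (the originals were literal paper checklists, and these paths are Linux-specific assumptions):

```python
#!/usr/bin/env python3
# The checklist-auditor "script," in spirit. Hypothetical sketch; the
# originals were paper checklists, and these paths are Linux-specific.

def pass_min_len(path="/etc/login.defs"):
    """Item 1: minimum password length, per the login.defs policy."""
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "PASS_MIN_LEN":
                return int(fields[1])
    return None

def uid_zero_accounts(path="/etc/passwd"):
    """Item 2: every account with UID 0, i.e. who is effectively root."""
    roots = []
    with open(path) as f:
        for line in f:
            fields = line.strip().split(":")
            if len(fields) >= 3 and fields[2] == "0":
                roots.append(fields[0])
    return roots

if __name__ == "__main__":
    n = pass_min_len()
    print("1) password length > 6:", "PASS" if n and n > 6 else "FAIL")
    roots = uid_zero_accounts()
    print("2) only root has UID 0:", "PASS" if roots == ["root"] else f"FAIL: {roots}")
```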
wzrd1 says
I remember the big push to patch the Linux servers (mostly Linux, but other *nix servers as well) for Heartbleed. That one was a full-court press to get everything patched quickly.
Just as well, we started seeing heartbleed probes hitting on the firewall soon after.
We’re also still playing whack-a-mole with shodan.io; they keep bringing new scanner IPs online.
Marcus Ranum says
Reginald Selkirk@#12:
My box is a file drop without a lot of services, so there’s not much to hit on: ssh and mail. Even if it got compromised we could just wipe it and not miss it much. ssh to root is disabled. What scares me most is the possibility of deep-seated bugs in essential packages like ssh and openssl. I hate to think that even if I were doing everything right, there is still a chance we could get hacked.
I was teaching a class on honeypots with Lance Spitzner at SANS in Boston, around 2002. And suddenly people in the room started clutching their pagers and running out of the room. Finally, Lance looked at his pager and said, “Oh! There’s a remote exploit in SSHD! Class will resume in 15 minutes!” And he ran off, too.
I was the only person in the room who did not care. Because my internet-facing servers weren’t running an internet-reachable version of sshd. To talk to sshd on any of them, you had to send an email to a completely different account on a different system somewhere else on the internet, containing an s/key hash. Getting a correct hash would pop in a firewall rule allowing the IP address that had originated the email 30 seconds to connect. I still have that code somewhere, I bet. It saved my ass on several occasions, and the funny part was that there have been many times people have told me I’m too cautious. I love saying “I told you so!”
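A rough reconstruction of how that trick could look today, with iptables standing in for the original firewall. Everything here is invented for illustration: the names, paths, and mail format are hypothetical, the S/Key verification is stubbed out, and this is not Marcus’s actual code:

```python
#!/usr/bin/env python3
# Rough reconstruction of the email-triggered ssh hole described above.
# Hypothetical throughout: names, paths, and mail format are invented,
# the S/Key check is stubbed, and the original code predates iptables.
import mailbox
import re
import subprocess
import time

WINDOW = 30  # seconds the firewall rule stays open
REQUEST = re.compile(r"^OTP (\S+) IP (\d+\.\d+\.\d+\.\d+)$", re.M)

def expected_otp(path="/var/trigger/next_otp"):
    # Next expected S/Key response, pre-computed out of band (RFC 2289).
    with open(path) as f:
        return f.read().strip()

def open_then_close(ip):
    spec = ["INPUT", "-s", ip, "-p", "tcp", "--dport", "22", "-j", "ACCEPT"]
    subprocess.run(["iptables", "-I"] + spec, check=True)  # punch the hole
    time.sleep(WINDOW)
    subprocess.run(["iptables", "-D"] + spec, check=True)  # close it again

for _, msg in mailbox.Maildir("/var/trigger/Maildir").items():
    m = REQUEST.search(str(msg.get_payload()))
    if m and m.group(1) == expected_otp():
        open_then_close(m.group(2))
```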
Elladan says
Computer security is essentially an asymmetric problem: to be secure, all the security-related parts of a system have to be perfect. To be insecure, there just has to be one program with one flaw. Actually building quality software is one of the hardest things people do, and it necessarily involves huge numbers of people, who occasionally make mistakes.
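Rough, illustrative numbers make the asymmetry vivid (assuming, unrealistically, that components fail independently):

```python
# Illustrative numbers only: a system with 1,000 security-relevant
# components, each flawless with probability 99.9%, assumed independent.
p, n = 0.999, 1000
print(f"P(defender got everything right) = {p**n:.2f}")  # ~0.37
# The attacker, meanwhile, needs exactly one flaw, anywhere.
```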
Add to that a few societal problems:
– Security is about the last thing companies who make products care about.
There’s a reason security audits always find problems: nobody gets promoted for building high quality, well audited, reviewed software. They get promoted for shoveling out features quickly.
– Various governments, primarily the US, are fundamentally opposed to security. They want to be able to spy on everyone, so they compromise and block real security advances at every turn.
Just look at the DNC emails leak: there’s no fundamental reason email needs to be so insecure. Encryption-at-rest, point-to-point encryption, and so forth were all developed and standardized in the ’90s, and then completely ignored (and actively blocked by government agencies). I’ll bet those same agencies, and the politicians who support their actions, are still 100% convinced that building a shadowy world of insecurity and lies is the best we can do, too.
Anyway, the point is that not only is security hard, it’s actively de-prioritized and subverted by society. Hackers are just making use of that.
wzrd1 says
Heh, auditors still use a script and checklist. At least it’s standardized that way, as long as they have a clue about what they’re asking about.
Marcus Ranum says
dvizard@#8:
On the other hand, the small minority who figures out new weaknesses in systems, which are then used by many, would make decent sysadmins
Not at all!!
There are weaknesses in code, weaknesses in configuration, and weaknesses in overall system design. Even assuming that knowing something about finding weaknesses made you good at one of those three things, that still makes for a mediocre systems administrator. Knowing about code weaknesses doesn’t make you a good coder, by a long shot; maybe it makes you good at projecting mistakes onto other programmers and verifying whether they made the sort of mistakes you’d make. But being a good programmer entails having design vision, an ability to deconstruct and componentize a problem, attention to detail, and creativity. Being a bug hunter actually shows the antithesis of those skills.

The same goes for system administration. Great systems administrators automate, systematize, rationalize, clarify, minimize, and monitor. The best systems admin I ever met used to work about 10 minutes a day; everything else was automated, so they just sat and watched logs and figured out (carefully) what they were going to do next. Knowing how to find errors in configuration files is interesting, but it doesn’t teach you anything about how to manage clusters of systems as a single image, or how to scale your efforts by a factor of 10,000.
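To make the automate-and-scale point concrete, a hypothetical sketch: one health check fanned across a whole fleet over ssh, instead of a login per box. The hostnames and the check command are placeholders:

```python
#!/usr/bin/env python3
# "Automate, systematize, monitor": one health check fanned across a
# fleet instead of a login per box. Hostnames and the check command are
# placeholders; a real fleet would add keys, inventory, and alerting.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = [f"node{i:03d}.example.com" for i in range(100)]
CHECK = "df -P / | awk 'NR==2 {print $5}'"  # root-filesystem usage

def probe(host):
    try:
        r = subprocess.run(["ssh", "-o", "BatchMode=yes", host, CHECK],
                           capture_output=True, text=True, timeout=15)
    except subprocess.TimeoutExpired:
        return host, "TIMEOUT"
    return host, r.stdout.strip() if r.returncode == 0 else "UNREACHABLE"

with ThreadPoolExecutor(max_workers=20) as pool:
    for host, status in pool.map(probe, HOSTS):
        print(f"{host:24s} {status}")
```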
I’m not very impressed by vulnerability researchers, in general. Some of them are incredibly clever. But mostly they’re just looking at the obvious places for the obvious things.
Marcus Ranum says
Elladan@#16:
– Security is about the last thing companies who make products care about.
Nat Howard once said, “security will always be exactly as bad as it can possibly be without everything breaking, and no worse.”
One of the most profound comments I’ve ever heard on the topic. Dan Geer once mis-attributed it to me, but I’m not that wise.
wzrd1 says
For a time, that was true. Now, the US government strongly promotes encryption at rest, point to point encryption and best business practices in configuration.
The DNC e-mails leak was two-pronged: malware installed from within, and a spear phishing attack that tricked unwary users into a fake password-change site. The shame of it is, I’ve seen breaches like that immediately after end users receive their annual information security training on things like phishing attacks.
It’s compounded when best practices aren’t followed on baseline configuration, as it just makes it easier for the adversary to gain access.
Application whitelisting and blacklisting is trivial these days, yet few bother to use that powerful tool, and it costs dearly in the end.
I know of one Fortune 200 company that ended up with two SOX audits before the leadership finally paid attention to multiple breaches.
Elladan says
Marcus @ #19:
The government aspect really is significant. As an example, my friend works in banking, and the banking regulator actively forbids them from using strong email security, or any number of other basic security measures, in their systems. Why? Well… everything they do has to be subject to hostile audit, after all.
And that’s not even getting remotely into things like the NSA and FBI.
Elladan says
wzrd1 @ #20: For a time, that was true. Now, the US government strongly promotes encryption at rest, point to point encryption and best business practices in configuration.
That’s just not true. What is true is that portions of the US government promote limited forms of these things (e.g. encrypting laptops because they get stolen so much) as checkbox requirements, while at the same time other parts of the government try to subvert internet standards, demand crypto backdoors in everything, covertly install backdoors in core networking gear, and so forth.
This was known about for many years, but of course as with all secret operations was publicly deniable (except for the FBI’s authoritarian backdoor demands). It just stopped being deniable with Snowden.
kaleberg says
I once met a former safe cracker while waiting for a delayed flight. He was the safe expert in a number of robberies. Then he got caught, was convicted, and did his time. When he got out he formed his own security company specializing in physical security, basically how to keep people like him from breaking into safes and secure rooms. Having met so many phone and computer hackers during my days at MIT, I found it interesting to see the same pattern of moving from black hat to white hat in another field.
chigau (ever-elliptical) says
I, too, have met many interesting people in airport departure lounges.
really
interesting
The Vicar (via Freethoughtblogs) says
@#6, wzrd1
Sadly, the programmers who enable things like SQL injection, buffer overflows, timing attacks against race conditions, etc. are common, almost legion.
Go look at the documentation of almost any programming language used on the web, or any discussion of security on web-facing database programming, and you will find a long discussion about “sanitizing inputs”. This despite the fact that “sanitizing inputs” is the wrong way to do things in the first place — you should be using query parameterization, and then SQL injection is literally impossible. But even geeks get this wrong and keep thinking they can solve the problem by running the input through a couple of regexes (or whatever). And then they discover, surprise surprise, that they missed a well-known case and got caught with an injection, and now they’d better hope they have a really really good backup of the database because it all got destroyed by some anonymous person running through a proxy in southeast Asia. Oops!
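To make the contrast concrete, a minimal sketch with sqlite3 standing in for any SQL backend; the table and payload are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

payload = "nobody' OR '1'='1"  # classic injection string

# Wrong: building the query by hand. "Sanitize" all you like; one missed
# case and the payload rewrites the query to match every row:
#   conn.execute("SELECT role FROM users WHERE name = '" + payload + "'")

# Right: parameterization. The driver passes the value separately from
# the SQL, so the payload is just an oddly named user matching nothing.
rows = conn.execute("SELECT role FROM users WHERE name = ?",
                    (payload,)).fetchall()
print(rows)  # [] -- the injection is inert
```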
Or there’s the whole reinventing the wheel thing — writing your own date/time class, or whatever, when the API you’re working with already has a full suite for that.
(But I must admit there are problems with not reinventing the wheel, too — the way npm works, for example, puts anyone who uses it at risk for all kinds of nonsense, as demonstrated when a programmer recently pulled a chunk of code which did almost nothing and caused a bunch of sites to stop working because they required it as a dependency; if they had quietly inserted some kind of spyware, who knows how long it would have taken for anyone to notice what was happening?)
@#22, Elladan
And don’t forget the unwillingness to fund anything — when it costs $50 million, say, to update an existing nationwide system to comply with reasonable standards of security, Congress will happily allocate $5 million and then demand to know why, two years later, the upgrade hasn’t occurred and the system was hacked.
erikthebassist says
Not sure what you mean by “strong email security,” but banks are subject to PCI regulations, which means they have to meet certain security criteria at a minimum. DLP is important, but so is email security. I configure solutions that include email security software and appliances for banks every day, and they could be considered “strong” systems. Encryption, two-factor authentication, content filtering, sandboxing, app control, IPS, AV, AS, AMW: all things banks do on their firewalls, servers, and endpoints on a regular basis. There are no laws restricting them from doing so, not in the United States anyway.
madtom1999 says
Pah! Computing’s easy! I’ve been doing it for 42 years now and I must know a good 70% of what it takes to be good at it. The remaining 30% is knowing how to tell managers that the security system they asked for, and that I’ve just implemented, will stay in place, and that they will have to learn just a little bit to find out how useful it is.
The trouble with computing is MS: they sold the lie that computing is easy and that they would look after the hard bits. It was a lie then and it still is, but managers still live by it. Having said that, I’ve worked with people with first-class honours in computer science who, if they drove a car the way they developed software, would be burning in a ditch with massed bodies piled around them.
As for hacking: it’s a great way to learn, but you have to resist the temptations it offers.
wzrd1 says
erikthebassist @ 26, thanks, I was going to mention PCI compliance. It isn’t a nicety; it’s a requirement if one is conducting credit and debit card transactions.
madtom1999 @ 27, I refer to Microsoft products as “job security”. But, I disagree as to management drivers in wanting Microsoft products.
They like them because of the support contract.
I’ve been in enterprises where an open source product was rejected because, despite the entire IT staff being more than proficient in supporting the product, a support contract wasn’t available. Despite that same staff never utilizing a support contract (I’ve yet to require vendor support for anything that wasn’t a software bug).
As for software developers, yeah, I’ve looked at my fair share of tanglecode, with no bounds checking or any other notion of security over the years.
But, remember the Microsoft developer adage: “a bug with seniority is a feature!” ;)
More seriously, one driver of crap code is a mindset of getting a product out the door, rather than one of quality first, product features last. Add feature creep during the development process, and we have the hot mess we have today.
Don’t even get me started with memory leaks…
The temptations of hacking are the same temptations facing senior system administrators, who have god access to the network. Not all that hard to resist, as you’ve got enough crap on your plate already; why generate even more work?
Although, when I was doing both LAN/WAN shop admin *and* IA duties at a US military installation (I literally was running antivirus, web and mail filtering, patch management via WSUS and SCCM, vulnerability scanning and incident response, as the shop was chronically short staffed), I really loved the times that our CERT, US CENTCOM IA and the NSA would evaluate and pen test our network. Everyone learned something new each time (it was an annual evaluation, each group in their own time would perform the evaluation), I’d learn new attacks that they’d use, they’d learn some of my tricks to slow them down.
The annoying one was the 2007 cyberattack against the DoD. My installation was the only one unimpacted by the attack. Iraq, Kuwait, Al Udeid AFB, CENTCOM, and Afghanistan were all universally infected.
The reason Camp As Sayliyah wasn’t infected was simple enough: one of the very first things configured on all systems was disabling autorun, which is required in the DoD baseline configuration, which we followed. The first thing enabled in antivirus when I assumed control of the AV servers was scan-on-insertion, again part of the DoD baseline configuration. The rest was also part of the DoD baseline: applying best business practices.
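For reference, the autorun half of that baseline comes down to one well-known registry policy value. A sketch only; in practice this was pushed by Group Policy rather than scripted per machine:

```python
# Disable autorun for every drive type (0xFF) via the well-known
# NoDriveTypeAutoRun policy value. Sketch only: in practice this is
# pushed by Group Policy rather than run by hand on each box.
import winreg

key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer",
    0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
winreg.CloseKey(key)
```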
When I was made aware of the spreading infection, I put our protective measures into incident response mode: upping the logging, switching to hourly AV updates, and closely monitoring that logging.
The malware, agent.btz, was on every impacted network. Which raises one question that never was actioned: how do you get malware that is spread via USB flash drives from the unclassified NIPRnet to the Top Secret JWICS network, the secret SIPRnet, and CENTRIXS if one is actually obeying the law in regard to cross-domain information transfer? Why was the DoD baseline configuration not followed? Why were best business practices not followed? Why are you lying about the antivirus solution not detecting the malware when it’s well documented that it does effectively quarantine and remove it?
Or most burning, why did any of those shitbags keep their job after costing the US government one billion dollars in cleanup expenses, only to reinfect their entire fucking networks again a week later?
Of course, those idiots tried to obstruct enforcement of a direct order from the undersecretary of defense to disable USB mass storage, even to ordering me to not disable it.
But, a good BOFH and IASO knows the regulations, and I knew that all computing devices are the responsibility of the installation commander, not NETCOM. So, I advised the installation commander of the directive, which he was already aware of, and he asked me what I was doing about the order. I explained what the idiots had ordered and reminded him that he was responsible for the computing devices: “what are your wishes, Sir?” He said, and I quote him, “Turn that shit the fuck off.”
I returned to my office, unlocked my computer, and hit enter, which ran a script that set Active Directory to disable USB mass storage on all devices that weren’t within the exception-to-policy objects (which later had three computers added), blasted a script to all installation computers (other than those excepted), and then updated the computer logon script to also modify the computers’ registries to disable USB mass storage. That gave me three means to disable it.
If I can’t write a script to do it, it just can’t be done at all. ;)
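The local-registry leg of those three means is the well-documented USBSTOR start-type setting; a sketch (the AD/GPO and logon-script legs pushed the same value at scale):

```python
# The local-registry leg of the three means above: set the USB
# mass-storage driver's start type to 4 (disabled). The AD/GPO and
# logon-script legs pushed the same value at scale.
import winreg

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\USBSTOR",
    0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)  # 4 = disabled
winreg.CloseKey(key)
```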
Of course, that corporation was also one where, when I first arrived on that base, the admins were manually entering permissions per user on their organization’s file share, rather than using access groups. When I asked why they were doing things in such a laborious way, I was greeted with the look given to the chap who first invented a circular component intended to rotate on an axle bearing.
You know, the wheel.
A change was swiftly approved to place the users into the already existing user groups and those groups then assigned to the shares instead.
What a bunch of fuckwits!
madscientist says
I wouldn’t agree with that opinion of hackers. There are the hacker wannabes and script kiddies, and there are the hackers. The hackers do have considerable specialized skills, and the best of them know quite a bit about all sorts of things: various communications protocols, hardware interfaces, and what-not. When a hacker (or a team of hackers) points out that the brakes of some vehicle can be remotely commandeered, I tend to get angry with the people who cocked up the brake system and allowed it to be remotely controlled in the first place. Unfortunately, with the mad rush to push products out the door, security rarely ever gets attention. I’ve built many instruments for remote data collection which are far more secure than the control systems at power plants and oil/gas wells, and I can’t even claim to put enough effort into securing them. The hacker’s job isn’t to tell people how to improve things; it’s to tell people how things are broken. There are a lot of malicious people out there, including criminal gangs, who use the same techniques to take control of people’s computers and data and extort money. Ransomware is doing very well thanks to poor security, and hackers help a bit by finding and reporting problems.
Dark Jaguar says
That article seems to make the argument that the only way to hack is to do something bad, like break into someone else’s network. What about hacking your own stuff? There are a number of companies out there right now stripping away capabilities from the user, and while I’m not about to go on a tirade about that “whittling away consumer rights” (I don’t think it does), I can totally get behind someone wanting to alter their own phone or game console to get full control over it. I won’t extend that to someone hacking the “other end” or tricking a server, but so long as it’s someone’s personal device and copy of the software they’re changing, that’s a good use for hacking.
Alt-X says
Haha, nice, PZ. I used to be a courier; we might have bumped into each other at some stage. Cool :)