Primer on encryption


Encryption has been in the news ever since Edward Snowden revealed to the world the massive spying operation that the US and its allies in English-speaking countries (the UK, Canada, Australia, and New Zealand) perpetrate on the communications of people all over the world. The backlash has resulted in some curbs on the US government’s spying powers, but the greater impact has been the increased use of end-to-end encryption on the internet.

I came across this article by Andrea Peterson that describes the state of encryption in simple terms.

Congratulations! If you are reading this on The Washington Post website right now, you’re using encryption — or at least your browser is. The little lock that probably shows up in the URL bar of your browser highlights that our site deploys HTTPS, a process that creates a sort of digital tunnel between you and our website.

That encrypted tunnel helps protect you from governments, Internet service providers, your employers, or even the nefarious hackers who might want to spy on or even hijack your Web browsing while they lurk on the WiFi at your local coffee shop.

When a site has HTTPS turned on, someone trying to get a peek at your online activity can typically see only what site you’re visiting, not the actual page you’re on or what information you might share on a site. So right now, for instance, someone with access to the network you’re connecting through could see that you’re reading The Post, but not that you’re reading this specific article about encryption. Neat, right?

Major e-mail providers, social networks, and all sorts of e-commerce such as online shopping and banking rely on encryption to help keep users’ data safe, often without visitors even realizing it because the encryption is just baked into how users experience the Internet.
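To make the quoted point concrete, here is a minimal sketch (using only Python's standard library, with the article's own URL as the example) of the split between what an on-path observer typically learns and what travels inside the encrypted tunnel:

```python
# Split a URL into the part an eavesdropper on the network typically learns
# (the hostname, which leaks through DNS lookups and the TLS SNI field) and
# the part that is carried inside the encrypted HTTPS connection (the path).
from urllib.parse import urlparse

url = ("https://www.washingtonpost.com/news/the-switch/wp/2015/12/08/"
       "you-already-use-encryption-heres-what-you-need-to-know-about-it/")
parts = urlparse(url)

print("Visible to a network observer:", parts.hostname)  # www.washingtonpost.com
print("Inside the encrypted tunnel:  ", parts.path)      # /news/the-switch/wp/...
```

The hostname still has to be exposed somewhere for the connection to be made at all, which is why an observer can tell which site you are on even when the specific page stays hidden.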

The government has been complaining that encryption prevents it from monitoring criminal activities and is calling for the manufacturers of hardware and software to install backdoors in their systems so that they can give governments the keys if they need them. But the arguments against allowing that are strong on technical grounds alone, and pressure has been building on even the commercial tech companies to design systems where they essentially throw away the keys so that even they cannot unlock the messages that pass through their systems from one user to another.

I do not have any expertise in this area but her article seems a little too sanguine that we may soon no longer be at the mercy of the NSA and other government spying agencies. I know there are some serious experts among this blog’s readers who can weigh in on the merits of the article.

Comments

  1. doublereed says

    The two simple ways I’ve seen to protect oneself with extremely little effort:

    Switch search engines: Google, Yahoo, Bing, and such track all your searches. No one lies to a search engine. It is one of the most powerful ways to track you. Search engines like DuckDuckGo do not track you.

    Get HTTPS Everywhere: This is a plugin for your browser that just forces encryption everywhere you go. There’s simply no reason to send cleartext traffic anywhere, ever. Every site should use HTTPS, and once you install the plugin you can pretty much just forget about it. It can apparently break some sites, which you then have to turn it off for, but I’ve never had that issue with it.

    When it comes to government agencies like the NSA that have absurd resources, I don’t think there is any way to get true protection. But you can do what you can.

  2. deepak shetty says

    No expert, but HTTPS merely prevents man-in-the-middle types of attacks (assuming keys are safe), whereas the government simply leans on the companies to get information. So while the communication between you and, say, your email host is safe, it’s not secure, because the government just needs the email host to hand over your email, or to allow it to run programs that snoop for suspicious words etc. Even for hosts that encrypt stored data, all the government needs is for the host to hand over the keys.

    @doublereed

    Switch search engines:

    Because your ISP doesn’t track what you do? 🙂

  3. doublereed says

    Well, HTTPS Everywhere is more just for general security.

    Tracking search terms is a different attack vector from going through your ISP history.

    Again, these are ways to protect yourself with very little effort.

  4. EnlightenmentLiberal says

    From my moderate level of expert domain knowledge (programming is my job), the text of the article looks correct.

    I do not have any expertise in this area but her article seems a little too sanguine that we may soon no longer be at the mercy of the NSA and other government spying agencies. I know there are some serious experts among this blog’s readers who can weigh in on the merits of the article.

    This is a hard question to answer. Your question is very broad, and it’s framed in terms of a “yes / no” answer. I feel like I would have to write a whole article to give a proper answer.

    Let me restrict myself to a single question: Sending a secure email or message.

    If we users take the basic steps outlined in the article, and use proper email services as described in the article, then the CIA is not going to be able to have an automated process in place to capture and read message content.

    However, with dedicated effort, there are still options available to the CIA, and there are still counter-options available to the end user who happens to be a dedicated expert.

    The CIA might try to infect a target computer or set of computers with a software virus that records key-presses and sends them to the CIA, which might eventually allow decrypting the message content, but it probably requires dedicated human effort specific to each instance. To counteract this, standard practices are a help here, including good antivirus and good firewall software. An extremely dedicated user might set up a dedicated OpenBSD box for this purpose, and barring extremely rare bugs like Heartbleed, this would be effective.
    https://en.wikipedia.org/wiki/Heartbleed
    Of course, this extremely dedicated approach assumes you don’t use that computer for anything else, including general web browsing, gaming, or any other software. That’s tedious and error-prone (you can never accidentally install anything else).

    The CIA can send a physical agent to your house, and install a physical hardware keylogger. Not much you can do against this (beyond the obvious physical security).

    Perhaps the email service or messenger program that you are using is actually not secure, accidentally or through purposeful action by the CIA. Not much you can do here unless you’re an expert, or you know the right people to trust.

    Without defeating the encryption, the CIA can still easily determine the source and destination of the message, which is often good enough, especially when the CIA records the source and destination of every message that passes through. This web of links shows a lot.
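    A toy illustration of that web of links (all addresses invented): a few lines of Python turn bare sender/recipient metadata into a who-talks-to-whom graph, no message content needed.

    ```python
    # Build a contact graph purely from (sender, recipient) metadata.
    from collections import defaultdict

    message_log = [                      # what an observer could record without breaking encryption
        ("alice@example.org", "bob@example.org"),
        ("alice@example.org", "carol@example.org"),
        ("bob@example.org",   "carol@example.org"),
        ("dave@example.org",  "alice@example.org"),
    ]

    contacts = defaultdict(set)
    for sender, recipient in message_log:
        contacts[sender].add(recipient)
        contacts[recipient].add(sender)

    # Who talks to whom, and who sits at the centre of the web, falls out immediately.
    for person, peers in sorted(contacts.items(), key=lambda kv: -len(kv[1])):
        print(person, "talks to", sorted(peers))
    ```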

    Dedicated users might try using an onion routing service, like Tor, to hide even the source and destination information, but because the government has access to many or all of the nodes of the network (via the ISPs), they actually can break onion routing with enough effort, and again discover the source-destination information. Defeating onion routing doesn’t grant access to message contents, only source-destination information, and they would need to rely on one of the other methods to get the message content.

    Or the CIA just captures you, holds you indefinitely, until you fess up. Probably the most overlooked and most obvious way to beat the end-user. For the U.S. at least, IIRC some courts have ordered people to divulge their encryption keys, and they have ruled that fifth amendment protections do not apply.

    As others have noted, while the article’s approach is really good at ensuring that email message content is kept private, there are a lot of other privacy concerns, such as search history. AFAIK, with Google ads and such, Google knows every time you visit almost any website, because it can record when an IP (internet address) views a certain ad on a certain webpage, and relate together all of the ad-viewings from that IP to see what things you are viewing. HTTPS might prevent Google, and thus the CIA, from knowing message content, but they still know you visited sites X, Y, and Z. Worse, if you use different computers from different locations and visit some of the same sites, especially if you log into Facebook, email, etc., they can relate the different IPs and know that this is all coming from a single user: you.
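    A toy sketch of that correlation (all data invented): ad requests logged per IP, plus accounts seen logged in from more than one IP, are enough to tie the IPs, and the browsing done from them, to one person.

    ```python
    # Link browsing from different IP addresses to a single account via logins.
    ad_log = [                       # (client_ip, site_the_ad_appeared_on)
        ("203.0.113.5",  "news-site.example"),
        ("203.0.113.5",  "forum.example"),
        ("198.51.100.7", "shop.example"),
    ]
    login_log = [                    # (client_ip, account_seen_logged_in)
        ("203.0.113.5",  "user123"),
        ("198.51.100.7", "user123"),   # same account from a second IP links the two
    ]

    ips_for_account = {}
    for ip, account in login_log:
        ips_for_account.setdefault(account, set()).add(ip)

    for account, ips in ips_for_account.items():
        sites = sorted({site for ip, site in ad_log if ip in ips})
        print(account, "appears to be behind", sorted(ips), "and visited", sites)
    ```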

    Also search engine history, as mentioned elsewhere here.

  5. fentex says

    So right now, for instance, someone with access to the network you’re connecting through could see that you’re reading The Post, but not that you’re reading this specific article about encryption. Neat, right?

    This statement is factually incorrect.

    The URL of that article, which is not and cannot be encrypted, is…

    https://www.washingtonpost.com/news/the-switch/wp/2015/12/08/you-already-use-encryption-heres-what-you-need-to-know-about-it/

    The part “/news/the-switch/wp/2015/12/08/you-already-use-encryption-heres-what-you-need-to-know-about-it/” is known as the document path and describes what document is requested -- in this case the article in question.

    HTTPS serves two purposes in a web browser: it combines encryption (your traffic is obscured from prying eyes) with supposed identification (you are communicating with who you think you are). The first part is fairly strong, the second very weak and misleading.
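    For the curious, a minimal sketch of those two purposes using only Python’s standard library: the handshake negotiates the encryption, and the certificate handed back by the server is the “supposed identification” part, vouched for by whichever authority countersigned it.

    ```python
    # Connect to a site over TLS and show both halves: the negotiated encryption
    # and the CA-countersigned certificate used for identification.
    import socket
    import ssl

    hostname = "www.washingtonpost.com"
    ctx = ssl.create_default_context()          # loads system CA roots, checks hostnames

    with socket.create_connection((hostname, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            print("Encryption:", tls.version(), tls.cipher())   # e.g. TLSv1.3 + cipher suite
            cert = tls.getpeercert()                             # the identification part
            print("Subject:", cert["subject"])
            print("Issuer: ", cert["issuer"])
    ```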

    As protection from prying governments it’s mostly useless, for the metadata (the unencrypted request for information) suffices for their purposes. In a political sense you’re probably safer not encrypting the data you exchange, because if you are put in a position of having to justify yourself, the metadata without context is harder to use against you.

    Your more private data, such as emails and instant messages, is where encryption is required, and it is much harder. There is no trivial way to encrypt your email end to end, nor any certain way to rely on other parties’ competence in protecting your data.

  6. Marcus Ranum says

    That encrypted tunnel helps protect you from governments

    Bwaahahaaaahaaahaaaaaaa are you kidding?
    Ok, this person is not qualified to write thing one about encryption. In order to qualify to write about encryption you have to put Edward Snowden’s disclosures in context. You have to understand about the key generation flaws in BSAFE. You have to understand about Clipper. You have to understand about rainbow tables. You have to understand what a compromised root key is and why it is important. You have to understand the history of Verisign, its financial backers, and where most of its talent came from. It is simply irresponsible to blat about something you know nothing about, journalists.

  7. Marcus Ranum says

    The two simple ways I’ve seen to protect oneself with extremely little effort:

    You cannot protect yourself. It’s too late. That ship has sailed.

    The only way to be invisible to the surveillance state as it currently exists is to be a new/unknown target and hide in the vast sea of potential unknown targets. They’ll be able to strip-mine all your data for years afterwards; the trick is to keep them from even giving you a keyscore value. For 99.99% of us that’s easy: we’re harmless, and we more or less look it. They don’t care about our extramarital affairs (unless we’re a popular general who may be eyeing a political career and needs to be publicly reputation-curbstomped), and they don’t care about our minor dope deals (unless we’re a district attorney or a mayor or someone else whose political career needs ending).

    Where the idiots building the surveillance state have failed (smarter people than I, and I myself, have said this often and loudly where they can hear; their failure to listen is why I call them “idiots”) is that they don’t understand that they have built a system that is only ever going to be useful reactively. It’ll be great for strip-mining the txt messages of a mass-shooter or a congressperson that’s not playing their assigned role in the charade, but it’s going to lack predictive power. It’s basic math: you’ve got the false positive problem/base rate fallacy. We knew about this with intrusion detection systems long ago.[1] Predictive threat-modelling algorithms that would detect a terror plot in advance are science fiction; they depend on solving the hard AI problem to a degree of near-human intelligence. Probably even that wouldn’t work, because humans have historically proven demonstrably easy to fool both tactically and strategically. They have built this vast apparatus, and what it’s going to be very good at is conclusively determining, after the fact, that someone they already know about did something. It’s going to be a great microscope for detailed analysis, but they have not solved the detection problem because: can’t.
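    A rough back-of-the-envelope version of that base-rate point, with deliberately generous invented numbers for the detector:

    ```python
    # Base rate fallacy: even a very good detector drowns in false positives
    # when the thing it is looking for is rare. All numbers are made up.
    population          = 300_000_000    # people being monitored
    actual_plotters     = 1_000          # assumed real positives in that population
    true_positive_rate  = 0.99           # detector flags 99% of real plotters
    false_positive_rate = 0.01           # and wrongly flags 1% of everyone else

    flagged_real  = actual_plotters * true_positive_rate
    flagged_wrong = (population - actual_plotters) * false_positive_rate

    precision = flagged_real / (flagged_real + flagged_wrong)
    print(f"People flagged: {flagged_real + flagged_wrong:,.0f}")          # ~3 million
    print(f"Chance a flagged person is a real plotter: {precision:.4%}")   # ~0.03%
    ```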

    Various comment-snips in no particular order:
    So while the communication between you and, say, your email host is safe, it’s not secure, because the government just needs the email host to hand over your email

    They always pursue multiple attack paths. It’s the only rational way to proceed; you should assume the endpoints are attacked strategically (a guy from the FBI shows up with a National Security Letter and says “we require a secure room in your data center”). They will also have exploited web server vulnerabilities and yanked the RSA keys out of the server’s memory image. What, did you think buffer overruns are just for injecting shellcode? And, of course, they get the root keys the traditional way: ask Verisign for them.

    They are not just collecting metadata. If you engage your brain for a minute you’ll realize that a metadata-only solution has zero value. There is no point in building such a thing. They did, however, build a monitoring system; it’s consuming a hell of a lot more than metadata. Hint: “metadata” is “text analysis indicates arabic = TRUE”

    If we users take the basic steps outlined in the article, and use proper email services as described in the article, then the CIA is not going to be able to have an automated process in place to capture and read message content.

    Correct. That’s the NSA’s job.
    And, depending on the service you’re using and how you use it, you’re somewhere between 90% and 100% likely to have your messages captured and analyzed. For one thing, using email systems that encrypt is a great big red flag. Use gpg or something like that and you’re doing the equivalent of marching around with a flag-waving team, mariachi band, and an elephant.
    You used the word “read”; you need to define carefully what you mean by that word when talking about surveillance.
    The surveillance state defines “read” as “a human analyst looks at the message with human eyeballs and does brain stuff about the contents.” A computer programmer defines “read” as …? Does a process “read” a message? Does a massively parallel pattern-finding system that does linguistic analysis better than Strunk and Wagnall’s “read” a message?[2] Or what about if it goes through an FDF2000 chipset running in a paracel pattern-matching engine?[3]

    The CIA might try to infect a target computer or set of computers with a software virus that records key-presses and sends them to the CIA, which might eventually allow decrypting the message content, but it probably requires dedicated human effort specific to each instance.

    Nope; that’s also automated. Basically, what you do is inline huge pieces of the internet, then revector key pieces of traffic to places that will drop malware. So, for example, you’ve got mjr@ranum.com on a list as “someone to monitor” and one day an edge vehicle on a public network (e.g. Verizon’s LTE cloud in San Francisco) sees browser traffic with that as the user-id; the scoring system also indicates outbound email with the target rcpt-to: etc. So the next time the browser goes to some site that has some pointless banner graphic, instead of the banner the system, which already knows what version of what browser I am running on what version of what O/S, sends an exploit tailored to that combination. Poof. This sort of auto-rooting technique is already publicly used by some of the ’bot-herders. NSA has been doing it for a lot longer than they have.

    Dedicated users might try using an onion routing service, like Tor, to hide even the source and destination information, but because the government has access to many or all of the nodes of the network (via the ISPs), they actually can break onion routing with enough effort

    If you use TOR you have just lit off fireworks and struck up the mariachi band, in terms of subtlety. And you’re toast, because one of the design points of TOR is that it’s secure if and only if enough of the routing points are not cooperating with the aggressor. Uh. Think about that. If you run TOR nodes, you get a sip of the traffic running through them. There are also nasty statistical tricks you can play by varying bandwidth usage on pipes, successively, to measure the impact it has on the routed traffic. Aaand you can gobble everything that’s talking the TOR protocol as red-flagged, and then look for other places to get necessary keys.

    Or the CIA just captures you, holds you indefinitely, until you fess up. Probably the most overlooked and most obvious way to beat the end-user.

    I coined the term “rubber hose cryptanalysis” for this, back in, mmm… 1994 or thereabouts. It’s when your crypto is unbreakable but your kneecaps aren’t.

    HTTPS serves two purposes in a web browser: it combines encryption (your traffic is obscured from prying eyes) with supposed identification (you are communicating with who you think you are). The first part is fairly strong, the second very weak and misleading.

    The second piece is, actually, unused. HTTPS is primarily a virtual pipeline over which almost all authentication is done using repeatable passwords.

    To understand HTTPS all you need to know is that its purpose was primarily to allow RSADSI/Public Key Partners to monetize the block of patents regarding public key cryptography that they owned and were using as a lever to block internet commerce until the first-generation providers came up with a tithing scheme. That tithing scheme was in the form of selling certificate counter-signatures (“Oooh! Your credit card cleared! You must really be Micr0soft.com!!”)[4]

    There is no trivial way to encrypt your email end to end

    Correct. And if you use gpg or some email encryption tool you’ve just fired up the mariachi band.

    What you need to do is what will defeat the NSA every time: operational security. Communicate in plain language using pre-agreed upon innocuous terms. Have several fall-backs. Change once every couple months. If you need to drop a bulk of data, bulk encrypt it and stega-code it into image data or a video stream of your kittens playing that you can put up on some sharing website. Study tradecraft. Dead drops are fashionable again; especially if the drop is an encrypted USB keyfob.
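    A very rough sketch of the “bulk encrypt it and stega-code it into image data” idea, assuming the Pillow imaging library and a payload that has already been encrypted elsewhere; real tradecraft would use something far less naive than least-significant-bit embedding in a PNG, but it shows the shape of the trick.

    ```python
    # Hide an (already encrypted) payload in the least-significant bits of a PNG.
    from PIL import Image

    def embed(cover_png: str, payload: bytes, out_png: str) -> None:
        img = Image.open(cover_png).convert("RGB")
        channels = [c for pixel in img.getdata() for c in pixel]
        data = len(payload).to_bytes(4, "big") + payload          # 4-byte length header
        bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
        if len(bits) > len(channels):
            raise ValueError("cover image too small for this payload")
        for i, bit in enumerate(bits):
            channels[i] = (channels[i] & ~1) | bit                # overwrite the low bit
        stego = Image.new("RGB", img.size)
        stego.putdata(list(zip(channels[0::3], channels[1::3], channels[2::3])))
        stego.save(out_png, "PNG")                                # lossless, so the bits survive

    def extract(stego_png: str) -> bytes:
        channels = [c for pixel in Image.open(stego_png).convert("RGB").getdata() for c in pixel]
        bits = [c & 1 for c in channels]
        raw = bytes(
            sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
            for k in range(0, len(bits) - 7, 8)
        )
        length = int.from_bytes(raw[:4], "big")
        return raw[4:4 + length]
    ```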

    I did a class on this stuff, oooh, 1997 for USENIX.[5] 3/4 of the way through the class I passed around 4 things of aluminum foil so everyone could make a nice hat. One guy made a great Napoleon hat… It was fun to actually teach a bit of tradecraft from Cold War deep cover agents. It’s more mundane than Bond, James, but it’s pretty interesting. The technique of setting up terrorist/freedom-fighter cells remains perfectly effective. Hint: don’t “like” your cellmates on Facebook. With social media you can encode stuff in stuff; there is no chance in hell that you’ll stand out.
    What do you think all the lolcats are? Surely you don’t think people find that stuff funny, do you? 😉

    [1] http://www.raid-symposium.org/raid99/PAPERS/Axelsson.pdf
    I think I was the morning keynote, actually. It might have been the year before. I forget.
    [2] http://mi.eng.cam.ac.uk/~cipolla/publications/inproceedings/2008-CVPR-semantic-texton-forests.pdf
    Semantic forests have been around since the mid 1980s.
    [3] http://www.aclweb.org/anthology/X93-1011
    I first encountered the FDF chip in 1986 at TRW’s Redondo Beach facility.
    http://trec.nist.gov/pubs/trec1/papers/23.txt
    [4] Yes, they did that once.
    [5] http://ranum.com/security/computer_security/archives/secure-communications.pdf

  8. Marcus Ranum says

    I wrote:
    Communicate in plain language using pre-agreed upon innocuous terms. Have several fall-backs.

    Like the 9/11 attack team did.

    And we now know that apparently NSA and CIA were ringing the alarms all over the place before 9/11 but they had the strategic problem I alluded to earlier: it doesn’t actually help the defender if they know where you’re going to attack, or when, unless they know both where and when. And if it’s a creative attack, how.

    The entire surveillance apparatus that was built ostensibly because of 9/11 would still be completely incapable of preventing 9/11. It didn’t work so well in Paris or San Bernardino, either. The apparatus is great for looking at people’s nude selfies and private email after they are sent but it doesn’t work because it cannot possibly work.

    It’s really expensive though.

  9. PatrickG says

    Thank you, Marcus Ranum, for pointing out the egregious flaws in that article. I mean that, quite sincerely. Thank you!

    Also, quoting for fucking truth:

    The entire surveillance apparatus that was built ostensibly because of 9/11 would still be completely incapable of preventing 9/11.

  10. fentex says

    That tithing scheme was in the form of selling certificate counter-signatures

    Exactly my thoughts too, and why I called it weak. It infuriates me that browser vendors cooperate with the deceit that there is some kind of trustworthy guarantee of authentication encompassed by HTTPS and certificate ownership.

  11. deepak shetty says

    @Marcus
    Good comment. One question

    And, of course, they get the root keys the traditional way: ask Verisign for them.

    My understanding is that if they get Verisign’s root keys they can impersonate any website, but they cannot decrypt the communication between the client and the real server (unless they can establish themselves in between via some DNS attack).

  12. Henry Gale says

    There really is no such thing as secure email or web surfing because you have no control of what the other party does with the information.

  13. Marcus Ranum says

    My understanding is that if they get Verisign’s root keys they can impersonate any website, but they cannot decrypt the communication between the client and the real server (unless they can establish themselves in between via some DNS attack).

    That is correct; you have to be in the path of the traffic. That is what those boxes in Rm 641A do.[1]

    When the spokespeople for the government and ${pick one of: facebook, google, apple, yahoo, linkdn, hotmail} make carefully-worded statements like:
    “the government does not have access to our servers”
    they are parsing Clintonly fine -- the government has access to the backend network where they are inside the encryption shell. As Adi Shamir once said, “nobody smart breaks encryption; they find ways around it.”

    The real problem isn’t the man in the middle attacks; it’s direct attacks against the keys themselves. If you have the server key, and a capture of the traffic, you can decrypt it at your leisure. One of the design-points that SSL conspicuously missed was ‘perfect forward secrecy’ -- the idea being that you design your crypto so that you create a random[3] session key, which is exchanged using a Diffie-Hellman exchange, and which is immediately thrown away and forgotten once it’s set into the bulk encryption engine. Note that, since all practical web traffic is ‘authenticated’ using a plaintext password, with SSL as the virtual connection security, a system based on a disposable random session key over a D-H exchange would have obviated any need for all the ‘certificate’ fol-de-rol. Of course perfect forward secrecy would be anathema to NSA -- it would mean that if you had a complete capture of a session you’d still never be able to crack it and there’d never be a master server key to unlock it; game over, man. In this case market realities (Jim Bidzos needed his pound of flesh) conspired with political realities (NSA was terrified that effective crypto might be deployed), and the result was that SSL was brain-dead before it was delivered.
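    For the curious, a minimal sketch of the ephemeral Diffie-Hellman idea described above, using the Python cryptography package’s X25519 primitives; this is not the actual SSL/TLS handshake, just the core of why a recorded transcript stays opaque once the throwaway keys are gone.

    ```python
    # Ephemeral key agreement: each side makes a single-use key pair, both derive
    # the same shared secret, and the private halves are then discarded.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_priv = X25519PrivateKey.generate()
    bob_priv   = X25519PrivateKey.generate()

    # Only the public halves cross the wire; an eavesdropper sees these and nothing more.
    alice_pub = alice_priv.public_key()
    bob_pub   = bob_priv.public_key()

    # Each side combines its own private key with the peer's public key.
    shared_a = alice_priv.exchange(bob_pub)
    shared_b = bob_priv.exchange(alice_pub)
    assert shared_a == shared_b

    # Derive the disposable session key, then forget the private keys: with no
    # long-term server key involved, a captured session can't be unlocked later.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"session").derive(shared_a)
    del alice_priv, bob_priv
    ```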

    So, think this scenario through a bit more. Suppose you’ve got an archive in Utah where you collect SSL traffic to/from some uncooperative site; let’s imagine it’s some encrypted email site that hasn’t cooperated with the FBI yet… So you collect everything you can grab going in or out, which amounts to a ton of unreadable SSL sessions. Well, kinda. Because then one of your hireling “security researchers” discovers a buffer overrun somewhere in the web server software that’s running the site. Apache, IIS, Tomcat, whatever -- they’ve all had server-level vulnerabilities; then you exploit the vulnerability and instead of dropping fork/exec and “/bin/sh” on the stack you drop in a little bit of code that squirts back the server process’ running memory. In that memory, easily found with a debugger, is the server’s RSA key -- which has to be stored unencrypted in memory (otherwise, how could you use it?). A simple form of this would be: instead of calling “/bin/sh” you call “uuencode /dev/mem memdump.core” and you get back a nifty encoded copy of the running process’ memory. Then you extract the key from that, walk it over to your archived terabytes of old SSL data, and suddenly it’s now all readable!!!

    Also readable are all the passwords for all the administrators who stupidly use SSL-‘protected’ administrative tools like Cpanel, SSL web interfaces, etc. When the suckers, excuse me, admins, go to update their SSL keys sometime in the next couple of years they’ll give their passwords away while they’re doing it (I see people who rely on SSH fall for this all the time!) and then you can get the new key by just SSH’ing your uuencode command and sucking out the process’ memory, including that juicy RSA pseudoprime. Or, the system administrator works for you. Or, the penetration tester works for you. Or… Once you have that key you have all the back-traffic, and the odds are very very good that any forward traffic will contain future keys unless the target is psychotic about their tradecraft.

    [1] https://en.wikipedia.org/wiki/Room_641A
    [2] https://wirewatcher.wordpress.com/2010/07/20/decrypting-ssl-traffic-with-wireshark-and-ways-to-prevent-it/
    [3] NSA also attacks random number generators. Go figure! If you ever need to sample randomness, you’ll discover it’s a hard problem! Preferred method: point video camera at lava lamp; sample frames and throw them through a bulk encryption algorithm in 4k blocks using a cryptographic hash of another frame as the block key.
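    A hedged sketch of the recipe in footnote [3], assuming the Python cryptography package and two hypothetical captured frame files; the hash of one frame keys a bulk cipher run over the other frame in 4 KiB blocks, and the output is your random material.

    ```python
    # Whiten raw camera frames into random-looking material, per the lava-lamp recipe.
    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def whiten(frame_a: bytes, frame_b: bytes) -> bytes:
        key = hashlib.sha256(frame_b).digest()       # block key = hash of the other frame
        out = bytearray()
        for i, offset in enumerate(range(0, len(frame_a), 4096)):
            block = frame_a[offset:offset + 4096]
            nonce = i.to_bytes(16, "big")            # per-block counter as the CTR nonce
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            out += enc.update(block) + enc.finalize()
        return bytes(out)

    # frame_a = open("frame_000.raw", "rb").read()   # hypothetical sampled frames
    # frame_b = open("frame_001.raw", "rb").read()
    # random_material = whiten(frame_a, frame_b)
    ```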

  14. Marcus Ranum says

    There really is no such thing as secure email or web surfing because you have no control of what the other party does with the information.

    It’s simpler than that. Two people really can only keep a secret if one of them is dead. So the second you take your secret out of your brain and commit it to a computer, you may as well kill yourself because that computer is “an other person.”

  15. Marcus Ranum says

    It infuriates me that browser vendors cooperate with the deceit that there is some kind of trustworthy guarantee of authentication encompassed by HTTPS and certificate ownership.

    In fairness, the number of ecommerce/eservice vendors who remotely comprehend what’s going on … is very very small. For most organizations, the PK certificate may as well be a +2 Scroll Of Magic SSL Glamour.

  16. brianiverson says

    All these answers allude to the government (CIA, NSA, etc.) obtaining your information. In the big picture of governmental surveillance this is extremely important. But is that really what keeps me awake at night? No. I worry about the various national and international thieves/crime syndicates trying to separate me from my money (or stealing my important information as a step in getting my money -- I am impacted by the theft of information from OPM) or those who may want to maliciously sully my reputation, or police looking for involvement with marijuana, or companies wanting to hire/fire a person and are looking for hidden data, for example. How do the technical details explained/commented on apply to this ‘lower level’ concern?

  17. lanir says

    Lots of stuff here in the comments. A bit too much for me to wade through at the moment. The article seems better at noting things than at putting them in context. For example, it brings up the idea that the things the government says about limiting encryption don’t appear to be accurate (to put it extremely mildly -- they’re fundamentalist Luddites about what encryption tools they want the general public to have) but then neglects to ask the real question: so why would the government spy agencies keep bringing up this topic, then?

    The article also has one glaring flaw that can’t be overstated. It implies a one-size-fits-all approach is workable for encryption. This is not the case. The government can always get into your computer and digital records the same way they can always get into your house if they want to. Security is almost always a trade-off of usability for protection. For example you lock your front door but you don’t lock every door in your house or business. But if you work at a bank there are definitely more locks inside the building than there are at an average retail store. The bank is acknowledging their greater risk and taking practical steps to deal with it.

    If you want a howto that is more likely to address the different scenarios common to most people as well as a few higher risk scenarios I’d probably start with this one:

    https://ssd.eff.org/

  18. deepak shetty says

    @brianiverson

    I am impacted by the theft of information from OPM) or those who may want to maliciously sully my reputation, or police looking for involvement with marijuana, or companies wanting to hire/fire a person and are looking for hidden data, for example.
    How do the technical details explained/commented on apply to this ‘lower level’ concern?

    HTTPS only does two things. It lets you trust that when you typed https://www.somesite.com you are indeed going to a server to which an authority has issued a certificate stating that it is www.somesite.com. And it encrypts the data between your browser and the point at which HTTPS terminates at www.somesite.com. That still leaves
    a. Your browser itself (if you have malware or spyware on your machine then no amount of encryption can stop that software from knowing what you are doing -- it could just trap your keyboard keys, for example)
    b. You yourself -- for example, a phishing attack. Did you click that email that asked you to reset your password? Are you sure it was www.somesite.com and not www.s0mesite.com?
    c. Poorly coded sites that get hacked (sites that store your password or your credit card in cleartext and then allow SQL injection attacks -- see the sketch after this list)
    d. Poor security at the site you are accessing -- a common problem is an employee taking a dump of all the data and giving it to someone.
    e. The internet proxy software at your company or their browser monitoring software.
    In general -- don’t do anything on the Internet that you don’t want people to know about :).
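    For point (c), a minimal standard-library sketch of what the poorly coded sites get wrong, with made-up table and account names: store a salted password hash instead of cleartext, and use parameterised queries so user input can never be spliced into the SQL itself.

    ```python
    # Salted password hashing plus parameterised queries (sqlite3 used for illustration).
    import hashlib
    import hmac
    import os
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT, salt BLOB, pw_hash BLOB)")

    def store_user(email: str, password: str) -> None:
        salt = os.urandom(16)
        pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        # The ? placeholders keep attacker-controlled input out of the SQL text.
        conn.execute("INSERT INTO users VALUES (?, ?, ?)", (email, salt, pw_hash))

    def check_login(email: str, password: str) -> bool:
        row = conn.execute("SELECT salt, pw_hash FROM users WHERE email = ?", (email,)).fetchone()
        if row is None:
            return False
        salt, stored = row
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, stored)

    store_user("alice@example.org", "correct horse battery staple")
    print(check_login("alice@example.org", "correct horse battery staple"))  # True
    print(check_login("alice@example.org", "x' OR '1'='1"))                  # False, and no injection
    ```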

  20. John Morales says

    That still leaves [stuff].

    True. It also leaves the OS and the hardware*.

    (Though your general point remains valid)

    * Including firmware.

  21. Marcus Ranum says

    I am impacted by the theft of information from OPM) or those who may want to maliciously sully my reputation, or police looking for involvement with marijuana, or companies wanting to hire/fire a person and are looking for hidden data, for example. How do the technical details explained/commented on apply to this ‘lower level’ concern?

    Modern beltway-speak: “Yes, the question is what intelligence is collected and what of it is actionable.”
    Strategy-speak: “Who can use what and how?”

    How has the theft of information from OPM substantially increased your risk on any axis? Perhaps it increases your risk of identity theft but the odds are good (I’d put them around 80%) if you’re an adult in the USA that your personal information has already been stolen or leaked at least once. I’d put them around 50% at being leaked twice, 25% at three times. That’s not based on any real metrics; I don’t think the information is available that would allow anyone to make that claim confidently. But -- OPM: so what? By the way, all the hysterical screeching from Washington about it being a great big Chinese intelligence coup: that’s bullshit too.[1]

    those who may want to maliciously sully my reputation

    Why would they bother with actual data? In terms of doing reputational damage on the internets, it’s actually more damaging if someone makes a claim regarding something you can’t support or disprove with evidence. “There are 2 bodies hidden on Marcus’ property!” is more damaging in the internet era than knowing that I bought ‘how to dispose of corpses for dummies’ from amazon.com. So … don’t worry about it.

    police looking for involvement with marijuana

    When have the police needed to bother with evidence, lately?

    companies wanting to hire/fire a person

    Companies appear to have no problem doing that, already.

    I think your threat model is off. That’s OK, because I think the surveillance state’s threat model is off, too. They think they are going to collect all this stuff and somehow be able to see terrorism before it happens, which is impossible because it involves accurately predicting the future. What they are actually doing is building the machinery by which the state can be turned on itself, if disclosure of secrets is allowed to remain a mechanism of control. What’s ironic about the whole thing is that ‘we’[3] will inevitably be able to turn that mechanism of control in opposition to the state, as well. That Washington managed to learn absolutely nothing from Edward Snowden[4] is clear evidence that their strategy has gone horribly off the rails and is crashing through the marshland looking for a place to sink.

    [1] Any competent intelligence service would already have that information; collecting it from OPM would, of course, be useful to confirm the accuracy of the established information. That’s how intelligence gathering is done by competent intelligence services[2] -- you want to layer data and confirming data. The OPM data theft probably would allow various intelligence agencies to confirm their existing assessments of US intelligence community staffing levels.
    [2] The CIA is not one.
    [3] The Opposition. I consider myself a member of The Opposition, at this point in time.
    [4] Snowden fucked up, but his heart was in the right place. He demonstrated that the surveillance state is being constructed not to predict and shape enemies’ actions (because it can’t) but simply to dig up dirt on anyone. The only way to resist that is to stop giving a rat’s ass who has sex with whom, or prefers what drugs, etc. Taking the levers of power away from the surveillance state is remarkably easy, and you can see how terrified the establishment was to see the threat of disclosure of homosexuality (a lever it used for 1000 years or so) wrenched out of its hands.

  22. Peter B says

    Marcus Ranum @13 in the paragraph starting with…

    >The real problem isn’t the man in the middle attacks

    …totally steals what little thunder I could offer when he says:

    >a system based on a disposable random session key over a D-H exchange would have obviated any need for all the ‘certificate’ fol-de-rol.

    Such a setup, requiring only the exchange of disposable public keys (and the prompt erasure of key-related information), prevents the recovery of all messages, past and future -- IF all parties securely erase the unencrypted message text.

  23. EnlightenmentLiberal says

    Marcus Ranum says:

    Correct. That’s the NSA’s job.
    And, depending on the service you’re using and how you use it, you’re somewhere between 90% and 100% likely to have your messages captured and analyzed. For one thing, using email systems that encrypt is a great big red flag. Use gpg or something like that and you’re doing the equivalent of marching around with a flag-waving team, mariachi band, and an elephant.
    You used the word “read”; you need to define carefully what you mean by that word when talking about surveillance.
    The surveillance state defines “read” as “a human analyst looks at the message with human eyeballs and does brain stuff about the contents.” A computer programmer defines “read” as …? Does a process “read” a message? Does a massively parallel pattern-finding system that does linguistic analysis better than Strunk and Wagnall’s “read” a message?[2] Or what about if it goes through an FDF2000 chipset running in a paracel pattern-matching engine?[3]

    Regarding this, I think it could be done. You could keep the message text unreadable and unrecoverable except by the intended recipient. A full guarantee is probably beyond the capacity of most individuals. Here’s an example of a real email service that did it.
    https://en.wikipedia.org/wiki/Lavabit
    Too bad the government effectively shut it down. Wikipedia contains a good short description of the events.

    Of course, this comes with lots of caveats noted else-thread. The unreadability and unrecoverability of your message depends on:

    -- Your local computer is not compromised software-wise, e.g. no malware, software keyloggers, etc. Ditto for the remote computer. This is practically impossible for the average user. For the advanced user, it’s pretty easy to guarantee (whip up a local installation of OpenBSD), but then that computer becomes unusable for most reasonable activities, like using Facebook, playing games, or doing basically anything at all except using your properly vetted crypto-message programs.

    -- Your local hardware is not compromised, e.g. no physical hardware keyloggers, nor exotic devices, such as:
    https://www.schneier.com/blog/archives/2005/09/snooping_on_tex.html
    http://www.pcworld.com/article/161166/article.html
    Ditto for the remote computer.

    -- You have solved the problem of DNS spoofing, IP spoofing, and other man-in-the-middle attacks. This is pretty easy to overcome for the dedicated person, but very difficult for the average end user. Dedicated persons could meet in person and physically exchange keys on USB sticks. From my limited understanding, internet security certificates are meant to solve this problem, but the government can always go to the issuing authority to work around certificates. They could not use the same tactics on two people who meet and trade USB sticks.

    -- Even then, the NSA can easily recover source and target metadata, although my limited understanding of onion routing networks like Tor is that using onion routing makes it extremely difficult or impossible even for the NSA to know the destination of your message. Of course, they know that you’re sending something via onion routing even if they don’t know where it’s going, and so few people use onion routing that it’s going to raise massive red flags. For example, IIRC, I’ve read that some experts suggest that half of the traffic on onion routing systems is child porn, which means law enforcement now thinks that there’s a 50-50 chance you’re sending child porn. tl;dr they’re going to focus on you, which brings us to…

    -- This only works if you’re not suspected of something. If you’re already suspected of something, they can trawl through the non-secure communication that you’ve done (and you’ve almost certainly done some, unless you’re a paranoid nutjob who doesn’t actually use the internet for the things that most people do), and they can use metadata link analysis and many of the other techniques. Regarding my suggestion of having a dedicated computer: if you use another computer to visit Facebook, the NSA can link those two computers together and know it comes from one source, which allows for a lot of information gathering.

    -- Or they can just capture you bodily, put you in prison, demand your keys from you, and keep you in prison until you give up the keys. Again, courts have ruled that fifth amendment protections do not apply. Have fun!

    -- You better have proper encryption on the hard drive too, otherwise the NSA could just read from that. Hope sensitive information isn’t recoverable from the various hardware memory or cache systems either! tl;dr if they get physical bodily access to your computer, you probably lose. And for that discussion, the only secure way to handle this is fire or acid. Taking a hammer to your computer, hard drive, and hard disk platters is not good enough theoretically. You can read data off a smashed hard disk platter. Again, fire or acid is the only 100% surefire way. I’d suggest a prolonged fire, bringing the metal above the temperature at which it loses its magnetic properties, the Curie point.

    For the average user, using Lavabit would have been very effective, but definitely not foolproof, for all of the reasons stated above, and probably more that I’m missing right now. Again, for emphasis, that only works if you trust Lavabit not to give up the keys to their email service, something which was actually true for Lavabit -- unlike basically every other email service that I know of. (I don’t actually know. I haven’t looked for secure email services.) And because of that stance on security and privacy, the operator of Lavabit shut down Lavabit rather than be forced by court order to give up the keys.

    Of course, because it was actually effective, it was shut down. The problem becomes finding another Lavabit. GL with that. I only knew that Lavabit was trustworthy after they shut down. I know Lavabit is trustworthy because they shut down. That’s a great catch-22.

    For others, I must emphasize that this is for email only. If you are dedicated enough and can rely on instant messages only (where the requirement is both people must be online at the same time to send messages), then you can meet up in person, trade USB sticks, and assuming the conditions above, be quite certain that your message content cannot be recovered by the NSA or anyone else. Again, largely beyond the ability of the normal person, but anyone with good familiarity with Linux and basic programming knowledge should be able to do it.
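    A hedged sketch of that USB-stick idea, assuming the Python cryptography package: a symmetric key generated once and carried over in person, then used for authenticated encryption of the individual messages (how you store and wipe the key file is left out).

    ```python
    # Pre-shared symmetric key plus AES-GCM authenticated encryption.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Done once, in person: generate the key and copy it onto both USB sticks.
    shared_key = AESGCM.generate_key(bit_length=256)

    def seal(key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)                                 # fresh nonce per message
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def unseal(key: bytes, blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)    # raises if tampered with

    wire = seal(shared_key, b"same time tomorrow")
    print(unseal(shared_key, wire))
    ```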

    Quoted for truth:
    Marcus Ranum says:

    They’ll be able to strip-mine all your data for years afterwards; the trick is to keep them from even giving you a keyscore value. For 99.99% of us that’s easy: we’re harmless, and we more or less look it. They don’t care about our extramarital affairs (unless we’re a popular general who may be eyeing a political career and needs to be publicly reputation-curbstomped), and they don’t care about our minor dope deals (unless we’re a district attorney or a mayor or someone else whose political career needs ending).

    It’ll be great for strip-mining the txt messages of a mass-shooter or a congressperson that’s not playing their assigned role in the charade, but it’s going to lack predictive power. It’s basic math: you’ve got the false positive problem/base rate fallacy.
