A new way for whistleblowers to share secret information


Given the secretive and coercive nature of the national security state, we have come to depend upon whistleblowers to tell us of the abuses committed by governments. Governments in turn retaliate by threatening extremely harsh punishments for anyone caught divulging information they do not want revealed. Meanwhile, high government officials freely leak secret information to reporters when it serves their interests, and such people are not only not punished but are rewarded for such actions, and even for their deceptions and lies.

As a result, those who serve as watchdogs on the government keep trying to develop new ways for whistleblowers to release information in the public interest without getting caught. Conor Schaefer of the Freedom of the Press Foundation says that they are testing out a new system called Sunder, “a desktop application for dividing access to secret information between multiple participants”, and are inviting people with some expertise in this field (which rules me out) to help them refine it.

While Sunder is a new tool that aims to make secret-sharing easy to use, the underlying cryptographic algorithm is far from novel: Shamir’s Secret Sharing was developed in 1979 and has since found many applications in security tools. It divides a secret into parts, where some or all parts are needed to reconstruct the secret. This enables the conditional delegation of access to sensitive information. The secret could be social media account credentials, or the passphrase to an encrypted thumb drive, or the private key used to log into a server.

Until a quorum of participants agrees to combine their shares (the number is configurable, e.g., 5 out of 8), the individual parts are not sufficient to gain access, even by brute force methods. This property makes it possible to use Sunder in cases where you want to disclose a secret only if certain conditions are met.
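To make the threshold idea concrete, here is a minimal sketch of Shamir’s scheme in Python (3.8+). This is illustrative only, not Sunder’s actual code: the secret becomes the constant term of a random polynomial over a prime field, each share is one point on that polynomial, and any quorum of points recovers the constant term by Lagrange interpolation.

    import random

    PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the secret

    def split(secret, n, k):
        # Encode the secret as the constant term of a random degree-(k-1)
        # polynomial mod PRIME; each share is one point on that polynomial.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = split(secret=123456789, n=8, k=5)   # "5 out of 8", as above
    assert reconstruct(shares[:5]) == 123456789  # any 5 shares suffice
    assert reconstruct(shares[2:7]) == 123456789

Feed the interpolation only four of the eight shares and it returns an unrelated number; below the quorum, the shares carry no information about the secret, which is why brute-force attacks on individual shares get nowhere.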

The most frequently cited example is disclosure upon an adverse event. Let’s say an activist’s work is threatened by powerful interests. She gives multiple news organizations access to an encrypted hard drive that contains her research. Each receives a share of the passphrase, on the condition that they combine the shares only upon her arrest or death, and that they take precautions to protect the shares until then.

Secret sharing can also be used to protect the confidentiality of materials over a long-running project. An example would be a documentary film project accumulating terabytes of footage that have to be stored safely. By “sundering” the key to an encrypted drive containing archival footage, the filmmaking team could reduce the risk of accidental or deliberate disclosure.

I am passing this on to people who might be better able to assess it and perhaps even help with it.

Comments

  1. Marcus says

    We should not emphasize any one tool. Whenever a security tool becomes popular, it is a target. The response of the community should be a million paper bags taped under bathroom sinks and park benches, containing superencrypted data based on pre-exchanged keys. And white noise. Lots of white noise.

  2. Marcus says

    Elaborating: look at the recent PGP break. Now, everyone (except me, apparently) has been trusting PGP and sending messages full of PGP headers; they are easily identifiable and retro-crackable. This was an obvious failure-mode to some of us serious paranoids, decades ago.

    The bottom line is you need operational security, or the technology is irrelevant. So, now, anyone who uses Sunder is suspicious. What can be learned from metadata? Enough! Let’s say a particular document leaks and winds up on Sunder: did the file size change? (Probably not) Are there traces of all connections that transferred ${filesize}+/- bytes around the time of the leak? (Yes) Are there traces of the same size transaction going to other users within 24hr of the upload? (Yes) Then map their address and it’s “game over man!”
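    [To make the correlation concrete, a hypothetical sketch in Python; the flow records, addresses, sizes, and times are all invented:]

        from datetime import datetime, timedelta

        # Hypothetical network flow logs: (timestamp, source address, bytes moved).
        flows = [
            (datetime(2018, 5, 19, 22, 5), "10.0.0.7", 48_223_001),
            (datetime(2018, 5, 20, 1, 40), "10.0.0.9", 1_024),
        ]

        leak_size = 48_222_113                   # size of the leaked file, in bytes
        leak_time = datetime(2018, 5, 20, 3, 14)
        tolerance = 4_096                        # the ${filesize} +/- margin
        window = timedelta(hours=24)

        # Flag anyone who moved roughly the file's size near the time of the leak.
        suspects = [src for ts, src, size in flows
                    if abs(size - leak_size) <= tolerance
                    and abs(ts - leak_time) <= window]
        print(suspects)  # then map their address: "game over man!"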

    Anyone reading this: if you are going to leak something, there are traps laid for you to walk into. Traps laid by very sneaky people who think about this stuff a great deal. Your best bet is to use a sole-purpose laptop (not running windows or macos) to write a DVD of superencrypted data xor’d using a random data source, then leave it somewhere (e.g., a particular book in a public library), then use a burner phone to call the journalist and tell them where it is, and leave another DVD with the random data. Watch that dead drop from a distance with binoculars and see how many people show up to collect/copy that DVD. If it’s anyone but the journalist, you’d better get a car and head for the Canadian border fast.
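    [The xor-with-random-data step is a one-time pad split across two DVDs. A minimal Python sketch, with illustrative filenames; the superencryption layer is omitted:]

        import os

        # One-time pad: ciphertext = plaintext XOR pad. Neither DVD alone
        # reveals anything; XORing them together recovers the document.
        plaintext = open("leak.pdf", "rb").read()   # illustrative filename
        pad = os.urandom(len(plaintext))            # the "random data source"

        ciphertext = bytes(p ^ r for p, r in zip(plaintext, pad))
        open("dvd1_ciphertext.bin", "wb").write(ciphertext)  # dead-drop DVD
        open("dvd2_pad.bin", "wb").write(pad)                # the second DVD

        # Recovery, once the journalist has both DVDs:
        recovered = bytes(c ^ r for c, r in zip(ciphertext, pad))
        assert recovered == plaintext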

  3. says

    @Marcus, No. 2

    I agree with you 99 percent. Anyone using binoculars would be arrested.

    Better to leave multiple DVDs in multiple locations then walk away and never look back.

  4. Glor says

    @Marcus:
    Have you read the paper? It’s not really a “PGP break”; it’s a “if your client displays HTML emails, in many clients an attacker can modify the email to add links that will leak the decrypted data (thereby bypassing PGP) to an attacker-chosen server” + “a handful of clients allow the attacker to specify which keyserver to query, allowing leaks”, and only if the client ignores a missing/bad MDC warning. Still bad, but not nearly as bad as an actual PGP break would be.
    Now the press (and the EFF) have been running around telling people to uninstall their email encryption software (instead of, say, switching to the clients that the paper found weren’t vulnerable to these attacks, or in most cases just disabling HTML email)… making more people communicate unencrypted is not quite the white noise we need, I think 😉

    Agreed on the rest.
    Sunder is not gonna help protect whistleblowers, but I can see it as somewhat useful in the described case of an insurance policy, giving you the option that “x people of y need to agree to release this” instead of “1 person needs to decide to release” or “any 1 of y people need to decide to release”. Whether it’s a preferable option depends on your circumstances.

  5. says

    It’s the metadata and the possibility of parallelized offline attacks that concern me. The current break is just a side effect of creeping featurism and bad opsec. (Arguably, creeping featurism is also bad opsec.) But it’s still bad when your ciphertext is available and altering it can cause outbound HTTP -- especially since a lot of email ciphertext sits out in the cloud, ripe for offline attack. Remember, PGP was designed at a time when everyone ran a mail server -- things have changed.

    This is an interesting example of just the kind of thing I am talking about:
    https://www.zdnet.com/article/police-hack-pgp-server-with-3-6-million-messages-from-organized-crime-blackberrys/
