What I could do, however, was help them out, so long as that didn’t imperil my plans. This was how I found myself in Honolulu, a beautiful city in which I’d never had much interest, as one of the hosts and teachers of a CryptoParty. This was a new type of gathering invented by an international grassroots cryptological movement, at which technologists volunteered their time to teach free classes to the public on the topic of digital self-defense—essentially, showing anyone who was interested how to protect the security of their communications. In many ways, this was the same topic I taught for JCITA, so I jumped at the chance to participate.
Though this might strike you as a dangerous thing for me to have done, given the other activities I was involved with at the time, it should instead just reaffirm how much faith I had in the encryption methods I taught—the very methods that protected that drive full of IC abuses sitting back at my house, with locks that couldn’t be cracked even by the NSA. I knew that no number of documents, and no amount of journalism, would ever be enough to address the threat the world was facing. People needed tools to protect themselves, and they needed to know how to use them. Given that I was also trying to provide these tools to journalists, I was worried that my approach had become too technical. After so many sessions spent lecturing colleagues, this opportunity to simplify my treatment of the subject for a general audience would benefit me as much as anyone. Also, I honestly missed teaching: it had been a year since I’d stood at the front of a class, and the moment I was back in that position I realized I’d been teaching the right things to the wrong people all along.
When I say class, I don’t mean anything like the IC’s schools or briefing rooms. The CryptoParty was held in a one-room art gallery behind a furniture store and coworking space. While I was setting up the projector so I could share slides showing how easy it was to run a Tor server to help, for example, the citizens of Iran—but also the citizens of Australia, the UK, and the States—my students drifted in, a diverse crew of strangers and a few new friends I’d only met online. All in all, I’d say about twenty people showed up that December night to learn from me and my co-lecturer, Runa Sandvik, a bright young Norwegian woman from the Tor Project. (Runa would go on to work as the senior director of information security for the New York Times, which would sponsor her later CryptoParties.) What united our audience wasn’t an interest in Tor, or even a fear of being spied on, so much as a desire to re-establish a sense of control over the private spaces in their lives. There were some grandparent types who’d wandered in off the street, a local journalist covering the Hawaiian “Occupy!” movement, and a woman who’d been victimized by revenge porn. I’d also invited some of my NSA colleagues, hoping to interest them in the movement and wanting to show that I wasn’t concealing my involvement from the agency. Only one of them showed up, though, and sat in the back, legs spread, arms crossed, smirking throughout.
I began my presentation by discussing the illusory nature of deletion, whose objective of total erasure could never be accomplished. The crowd understood this instantly. I went on to explain that, at best, the data they wanted no one to see couldn’t be unwritten so much as overwritten: scribbled over, in a sense, with random or pseudo-random data until the original was rendered unreadable. But, I cautioned, even this approach had its drawbacks. There was always a chance that their operating system had silently hidden away a copy of the file they were hoping to delete in some temporary storage nook they weren’t privy to.
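For readers who want to see what that scribbling-over looks like in practice, here is a minimal sketch in Python. It is an illustration, not the tool I demonstrated that night; the function name and the three-pass count are my own inventions, and the same caveat applies: journaling filesystems, solid-state drives, and the operating system’s hidden temporary copies can all preserve the original data anyway.

```python
import os

def overwrite_then_delete(path: str, passes: int = 3) -> None:
    """Scribble random bytes over a file before unlinking it.

    Illustrative only: hidden OS copies, SSD wear-leveling, and
    filesystem journals can still preserve the original contents.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))   # overwrite with random data
            f.flush()
            os.fsync(f.fileno())          # push the overwrite out to disk
    os.remove(path)
```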
That’s when I pivoted to encryption.
Deletion is a dream for the surveillant and a nightmare for the surveilled, but encryption is, or should be, a reality for all. It is the only true protection against surveillance. If the whole of your storage drive is encrypted to begin with, your adversaries can’t rummage through it for deleted files, or for anything else—unless they have the encryption key. If all the emails in your inbox are encrypted, Google can’t read them to profile you—unless they have the encryption key. If all your communications that pass through hostile Australian or British or American or Chinese or Russian networks are encrypted, spies can’t read them—unless they have the encryption key. This is the ordering principle of encryption: all power to the key holder.
Encryption works, I explained, by way of algorithms. An encryption algorithm sounds intimidating, and certainly looks intimidating when written out, but its concept is quite elementary. It’s a mathematical method of reversibly transforming information—such as your emails, phone calls, photos, videos, and files—in such a way that it becomes incomprehensible to anyone who doesn’t have a copy of the encryption key. You can think of a modern encryption algorithm as a magic wand that you can wave over a document to change each letter into a language that only you and those you trust can read, and the encryption key as the unique magic words that complete the incantation and put the wand to work. It doesn’t matter how many people know that you used the wand, so long as you can keep your personal magic words from the people you don’t trust.
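To make the wand-and-magic-words image concrete, here is a short sketch of that round trip using the Fernet recipe from Python’s third-party cryptography package; treat it as an illustration of the idea rather than the software we taught from that night.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()        # the "magic words": guard these closely
wand = Fernet(key)

ciphertext = wand.encrypt(b"meet me at the gallery after class")
print(ciphertext)                  # gibberish to anyone without the key

plaintext = Fernet(key).decrypt(ciphertext)
print(plaintext)                   # the original message, restored
```

Anyone holding the key can reverse the transformation; anyone without it sees only noise.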
Encryption algorithms are basically just sets of math problems designed to be incredibly difficult even for computers to solve. The encryption key is the one clue that allows a computer to solve the particular set of math problems being used. You push your readable data, called plaintext, into one end of an encryption algorithm, and incomprehensible gibberish, called ciphertext, comes out the other end. When somebody wants to read the ciphertext, they feed it back into the algorithm along with—crucially—the correct key, and out comes the plaintext again. While different algorithms provide different degrees of protection, the security of an encryption key is often based on its length, which indicates the level of difficulty involved in solving a specific algorithm’s underlying math problem. In algorithms that correlate longer keys with better security, the improvement is exponential. If we presume that an attacker takes one day to crack a 64-bit key—which scrambles your data in one of 2⁶⁴ possible ways (18,446,744,073,709,551,616 unique permutations)—then it would take double that amount of time, two days, to break a 65-bit key, and four days to break a 66-bit key. Breaking a 128-bit key would take 2⁶⁴ times longer than a day, or fifty million billion years. By that time, I might even be pardoned.
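The back-of-the-envelope arithmetic behind that claim, under the same assumption of an attacker who can exhaust a 64-bit keyspace in exactly one day, works out like this:

```python
DAYS_PER_YEAR = 365.25

keyspace_64 = 2 ** 64                            # 18,446,744,073,709,551,616 keys
keyspace_128 = 2 ** 128

# Each additional bit doubles the work, so a 128-bit key costs
# 2**64 times as much effort as a 64-bit key.
days_for_128_bit = keyspace_128 // keyspace_64   # = 2**64 days
years_for_128_bit = days_for_128_bit / DAYS_PER_YEAR

print(f"{keyspace_64:,} possible 64-bit keys")
print(f"{years_for_128_bit:.2e} years to brute-force a 128-bit key")
# roughly 5e16 years: fifty million billion years
```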
In my communications with journalists, I used 4096- and 8192-bit keys. This meant that absent major innovations in computing technology or a fundamental redefining of the principles by which numbers are factored, not even all of the NSA’s cryptanalysts using all of the world’s computing power put together would be able to get into my drive. For this reason, encryption is the single best hope for fighting surveillance of any kind. If all of our data, including our communications, were enciphered in this fashion, from end to end (from the sender end to the recipient end), then no government—no entity conceivable under our current knowledge of physics, for that matter—would be able to understand them. A government could still intercept and collect the signals, but it would be intercepting and collecting pure noise. Encrypting our communications would essentially delete them from the memories of every entity we deal with. It would effectively withdraw permission from those to whom it was never granted to begin with.
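For a sense of scale, a 4096-bit key of the kind mentioned above can be generated with commodity tooling. The sketch below uses the same Python cryptography package to produce an RSA key pair of that size; it is offered purely to illustrate the numbers involved, not as a reconstruction of the setup I actually used with journalists.

```python
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

# Generate a 4096-bit RSA key pair; the private half never leaves your machine.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

# The public half is what you hand out so that others can encrypt messages to you.
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())
```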
Any government hoping to access encrypted communications has only two options: it can either go after the keymasters or go after the keys. For the former, it can pressure device manufacturers into intentionally selling products that perform faulty encryption, or mislead international standards organizations into accepting flawed encryption algorithms that contain secret access points known as “back doors.” For the latter, it can launch targeted attacks against the endpoints of the communications, the hardware and software that perform the process of encryption. Often, that means exploiting a vulnerability that it wasn’t responsible for creating but merely found, and using it to hack you and steal your keys—a technique pioneered by criminals but today embraced by major state powers, even though it means knowingly preserving devastating holes in the cybersecurity of critical international infrastructure.