Mar 22, 2016

We cannot have cyber security without cryptography!

**Confidentiality** requires *encryption* of the sensitive data.

**Integrity** is important: *hash functions* let us detect inappropriate modification of data and system configurations.
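As a minimal sketch of that integrity idea, Python's standard `hashlib` can fingerprint a configuration file so that any later change is detectable (the configuration text below is just an invented example):

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 digest of data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Record the digest of the original configuration.
original = b"PermitRootLogin no\nPasswordAuthentication no\n"
baseline = digest(original)

# Later, any modification changes the digest.
tampered = b"PermitRootLogin yes\nPasswordAuthentication no\n"
assert digest(tampered) != baseline
assert digest(original) == baseline
```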

**Authentication** of users and hosts can be done in many ways, and the more secure methods involve hash functions, encryption, or some combination of the two.

*Cryptography is necessary but it is not sufficient.*

Put another way, we have to use cryptography in a variety of forms. But simply using cryptography does *not* mean that we will be secure.

The cryptography must be used very carefully. Sometimes the logic (or illogic) of how it's applied makes the cryptography nothing but extra work that adds no real security. Often the cryptography is used inappropriately, or even not at all, because the user interface is, to be frank, dreadful.

Let’s see what’s getting in the way of converting very powerful mathematical tools into useful and practical cyber security. There are some nicely skeptical papers analyzing why practical systems fail.

The first paper you should check out is **Another Look at “Provable Security”,** by Neal Koblitz and Alfred Menezes. Yes, I was just recommending another paper of theirs in earlier blog posts on elliptic curve cryptography, here and here. They are two highly respected cryptographers; Menezes is a co-author of the Handbook of Applied Cryptography.

Another paper by the same pair of authors, with an even better title, is **The Brave New World of Bodacious Assumptions in Cryptography.**

Then check out their web site, Another Look at Provable Security.

These papers and the web site point out that many of our assumptions of security are based on faulty logic. Consider what we tend to say about RSA:

**1.** If you could find the two prime factors of a 2048-bit number (roughly 600 digits in base 10), you could break 2048-bit RSA.

**2.** It would be so hard to factor a 2048-bit number that we don’t need to worry about that happening, unless we suspect that our adversary has built a general-purpose quantum computer of adequate scale to run Shor’s algorithm on a problem of that size.
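To make claim 1 concrete, here is a toy RSA round trip in Python with textbook-small primes (real keys use primes hundreds of digits long, so this is purely illustrative). Anyone who learns the factors p and q can recompute the private exponent:

```python
# Toy RSA with tiny primes -- FOR ILLUSTRATION ONLY.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

m = 65                    # a message, must be < n
c = pow(m, e, n)          # encrypt with the public key
assert pow(c, d, n) == m  # decrypt with the private key

# Knowing p and q lets an attacker recompute d -- that is claim 1:
d_attacker = pow(e, -1, (p - 1) * (q - 1))
assert pow(c, d_attacker, n) == m
```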

The problem is that we leap to an illogical conclusion: Since there doesn’t seem to be an easy way to factor 2048-bit numbers, RSA must be safe.

**That’s not at all what the logic says!**

Let’s take an analogous look at the **monoalphabetic substitution cipher.** With such a huge name it should be secure.

There are 26! = 403,291,461,126,605,635,584,000,000 possible monoalphabetic substitution ciphers for a 26-character alphabet. That’s about 2^{88}, so a brute-force search through that key space would take about 2^{8} or 256 times as much work as searching the key space of the NSA-recommended Skipjack cipher.
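A quick sanity check of those numbers in Python (Skipjack's 80-bit key size is the only outside fact assumed here):

```python
import math

# Size of the monoalphabetic substitution key space: 26!
keys = math.factorial(26)
assert keys == 403291461126605635584000000

# That is a bit over 88 bits of key.
bits = math.log2(keys)
assert 88 < bits < 89

# Skipjack's key space is 2^80, so the ratio is a few hundred,
# i.e. roughly 2^8 times as much brute-force work.
ratio = keys // 2**80
assert 2**8 <= ratio < 2**9
```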

But…

It would be ridiculous to launch a brute-force search of the key space! Read **The Gold-Bug** by Edgar Allan Poe, the original cypherpunk. He presents a clear explanation of how to break a monoalphabetic substitution cipher with just a few dozen characters of ciphertext.

So yes, *if* you could casually search an 88-bit key space you *could* use that to break a monoalphabetic substitution cipher. But there are far easier and faster ways to do the same thing!
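Here is a minimal sketch of that easier attack, frequency analysis, in Python. The frequency ordering and the sample key are illustrative assumptions; raw single-letter counts only start the break, and Poe's character refines the guesses with digram and word-pattern statistics:

```python
from collections import Counter
import string

# English letters from most to least frequent (one standard ordering).
ENGLISH_FREQ_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def frequency_guess(ciphertext: str) -> dict:
    """Guess a cipher-letter -> plain-letter map by frequency rank."""
    counts = Counter(ch for ch in ciphertext.upper()
                     if ch in string.ascii_uppercase)
    ranked = [letter for letter, _ in counts.most_common()]
    return dict(zip(ranked, ENGLISH_FREQ_ORDER))

# Encrypt a sample with an arbitrary substitution key, then check that
# the most frequent ciphertext letter is correctly guessed to be E.
key = dict(zip(string.ascii_uppercase, "QWERTYUIOPASDFGHJKLZXCVBNM"))
plaintext = "ENCRYPTION PROTECTS THE SECRET MESSAGE FROM EVERY EAVESDROPPER"
ciphertext = "".join(key.get(ch, ch) for ch in plaintext)
assert frequency_guess(ciphertext)[key["E"]] == "E"
```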

**Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice** by several authors describes how many vulnerable sites use the same 512-bit group, meaning that a week of computation leads to compromise of 7% of the top million HTTPS servers on the internet. Even the 1024-bit cases could be broken with nation-state resources, providing access to 18% of popular HTTPS sites, 66% of IPsec VPNs, and 26% of SSH servers.
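The amortization problem is easy to see in a toy Diffie-Hellman exchange. The prime and generator below are tiny illustrative values, nowhere near deployable sizes; the Logjam lesson is that the expensive precomputation phase of a discrete-log attack is done per-group, so it pays off against every site sharing that group:

```python
import secrets

# Toy Diffie-Hellman group parameters -- FOR ILLUSTRATION ONLY.
# Real deployments need >= 2048-bit groups, ideally not shared by
# everyone, since discrete-log precomputation amortizes per group.
p = 0xFFFFFFFB  # the prime 2^32 - 5
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)  # Alice sends A over the wire
B = pow(g, b, p)  # Bob sends B over the wire

# Both sides derive the same shared secret from what they received.
assert pow(B, a, p) == pow(A, b, p)
```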

Riffing off the 1955 book **Why Johnny Can’t Read** by Rudolf Flesch, a series of articles has criticized the awful interfaces of cryptographic plugins.

Start with **Why Johnny Can’t Encrypt,** by Alma Whitten and J. D. Tygar in 1999. They open with: “User errors cause or contribute to most computer security failures, yet user interfaces for security still tend to be clumsy, confusing, or near nonexistent.”

Next: **Why Johnny Still Can’t Encrypt: Evaluating the Usability of Email Encryption,** 2006, Steve Sheng, Levi Broderick, Colleen Koranda, and Jeremy Hyland, concluding that PGP 9 wasn’t an improvement on PGP 5.0.

Then: **Why Johnny Still, Still Can’t Encrypt: Evaluating the Usability of a Modern PGP Client,** 2016, by Scott Ruoti, Jeff Andersen, Daniel Zappala, and Kent Seamons, concluding that 10 years and the rise of web-based mail interfaces haven’t helped.

It’s not just e-mail plugins. **Why (Special Agent) Johnny (Still) Can’t Encrypt: A Security Analysis of the APCO Project 25 Two-Way Radio System** by Sandy Clark, Travis Goodspeed, Perry Metzger, Zachary Wasserman, Kevin Xu, and Matt Blaze showed that the state of the art in 2011 produced two-way radios for local to federal law enforcement that *should* be able to encrypt but left gaping holes.

Learning Tree’s System and Network Security Introduction course explains the cryptographic primitives. Check out these papers to see how to more carefully use them, gaining real security and not just the illusion of security!