The theory goes: You cannot have perfect security against all possible threats all the time for free. Usually, we expect our applications to incur a cost (usually in terms of CPU, memory, or electricity usage) in order to be secure. It seems logically correct that, if you need more security, your cost must therefore be higher.
Fortunately, this is not always true! Sometimes, given a choice between two solutions, the more secure option costs less than the insecure one.
Security Trade-Offs Can Be An Unfair Trade
In PHP development, a lot of people use insecure functions, such as rand() or mt_rand(), to generate random numbers for cryptographic purposes. When our team suggests a more secure alternative, such as the compatibility library we maintain for PHP 7's CSPRNG functions, we often encounter severe resistance from other developers, usually citing "performance concerns".
But as luck would have it, PHP 5's CSPRNG functions are actually faster than mt_rand(), and PHP 7's functions are faster still (and more robust on newer Linux kernels, thanks to the getrandom(2) system call).
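To make the swap concrete, here is a minimal sketch (assuming PHP 7+, or PHP 5 with the compatibility polyfill loaded) of replacing the weak PRNG with the CSPRNG functions:

```php
<?php
// Weak: mt_rand() is a Mersenne Twister, designed for speed and
// statistical quality, not unpredictability. Its internal state can
// be recovered from a modest number of outputs.
$weakToken = '';
for ($i = 0; $i < 32; $i++) {
    $weakToken .= dechex(mt_rand(0, 15));
}

// Strong (and, as it happens, faster): random_bytes() and random_int()
// draw from the operating system's CSPRNG.
$strongToken = bin2hex(random_bytes(16));
$boundedInt  = random_int(0, 15);
```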
Given that the CSPRNG is both faster and more secure than the weak PRNG, there is no trade-off here. Using a CSPRNG is superior in basically every measurable way and you gain nothing by sacrificing security.
Even Self-Proclaimed Experts Drink the Security Trade-Off Kool-Aid
Let's look at another example: MTProto, the cryptography protocol used by Telegram.
It's easy to pick on MTProto, since a recent IACR paper by Jakobsen and Orlandi demonstrated it's vulnerable to chosen-ciphertext attacks. However, this finding should surprise precisely no one involved in cryptography.
Despite constant criticism from security experts, Telegram claims that their protocol is more resistant to DDoS attacks than an authenticated encryption construction would be. They also insist on using SHA1 simply because it's faster than SHA2.
We haven't been able to verify the claims of DDoS resistance (we don't have access to a botnet, nor do we have any intention of ever infecting anyone's computer with malware to build one), but the performance impact of protocol design is easy to conceptualize:
- An AEAD construct, such as AES-GCM or ChaCha20-Poly1305, verifies before attempting decryption, and is not vulnerable to chosen-ciphertext attacks that decrypt messages (see the sketch after this list).
- Telegram's protocol decrypts before attempting verification, and is vulnerable to chosen-ciphertext attacks that decrypt messages.
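To make the difference concrete, here is a minimal sketch of the AEAD behavior using PHP's sodium extension (PHP 7.2+; assumes the XChaCha20-Poly1305 construction is available in your build). Note how a tampered ciphertext is rejected before any plaintext ever exists:

```php
<?php
// Encrypt-then-MAC, as an AEAD does it: the authentication tag is
// checked first, and decryption only happens if verification succeeds.
$key   = sodium_crypto_aead_xchacha20poly1305_ietf_keygen();
$nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);

$ciphertext = sodium_crypto_aead_xchacha20poly1305_ietf_encrypt(
    'attack at dawn', // message
    '',               // additional authenticated data
    $nonce,
    $key
);

// Simulate a chosen-ciphertext attacker by flipping one bit.
$tampered    = $ciphertext;
$tampered[0] = $tampered[0] ^ "\x01";

// Fails closed: returns false without releasing any plaintext.
var_dump(
    sodium_crypto_aead_xchacha20poly1305_ietf_decrypt($tampered, '', $nonce, $key)
); // bool(false)
```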
Telegram chose a protocol that, by design, takes longer to reject a forged message than it needs to, then attempts to justify that choice by pointing at its faster (and much weaker) cryptographic building blocks.
Despite claiming that their cryptography protocol design team consists of six ACM champions and several Ph.D.s in mathematics, Telegram still made questionable choices.
Even when the security trade-offs are real, the choices that other people make might be totally misguided. Even self-proclaimed experts might be wrong!
(Also: Don't ever rely on Telegram for private communication. Signal is way better engineered.)
Libsodium Does It Right (As Usual)
Of course, when talking about security trade-offs, not everything is doom and gloom. We've written about libsodium before, but for the uninitiated: Libsodium is a conservative and opinionated cryptography library that prioritizes security, side-channel resistance, and usable interfaces above performance. (The code is also a pleasure to read, compared to most C projects.)
But despite this emphasis on security, libsodium ends up being much faster than its alternatives. For example, libsodium 1.0.7's Curve25519 implementation is faster than ECDH over NIST's P-256 curve. Most elliptic curve cryptography is faster than classical RSA or Diffie-Hellman, despite being more secure (in particular, against index calculus attacks).
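As a quick illustration of how approachable this is, here is a minimal X25519 key-agreement sketch using PHP's sodium extension (in production you would typically reach for the higher-level crypto_box or crypto_kx APIs rather than raw scalar multiplication):

```php
<?php
// Each party generates a Curve25519 keypair.
$aliceKeypair = sodium_crypto_box_keypair();
$bobKeypair   = sodium_crypto_box_keypair();

// Each side combines its own secret key with the other's public key
// and arrives at the same shared secret (X25519 scalar multiplication).
$aliceShared = sodium_crypto_scalarmult(
    sodium_crypto_box_secretkey($aliceKeypair),
    sodium_crypto_box_publickey($bobKeypair)
);
$bobShared = sodium_crypto_scalarmult(
    sodium_crypto_box_secretkey($bobKeypair),
    sodium_crypto_box_publickey($aliceKeypair)
);

var_dump($aliceShared === $bobShared); // bool(true)
// Hash the raw shared secret before using it as a symmetric key.
```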
Security, simplicity, and speed? Yep, it's definitely possible. Unfortunately, application performance isn't the only thing people want you to believe you have to sacrifice to be secure.
Other Harmful Claims of Security Trade-Offs
There is a belief, promoted by some employees of some governments and their contacts in the media, that privacy and security are antipodes: that the two concepts sit at opposite ends of the spectrum. The story goes: If you want more privacy, there is a security trade-off. If you want more security, there is a privacy trade-off. This is ridiculous and totally misses the point of computer security.
Why do companies invest the time and money to secure their web applications, if not to protect their customers' data from being observable by untrusted parties?
In other words: They increase security to guarantee privacy.
The semantics here are important. Any seasoned technologist without an agenda can tell you: security and privacy mostly overlap. (Also: privacy is not secrecy.)
The Lesson
Security trade-offs are counter-intuitive. Any time someone tries to convince you that you must compromise something else in exchange for security (or vice versa), question them. Especially if it's you trying to convince yourself.
Bake security into the foundations of your applications. Don't make it something that has to be reasoned about down the line by non-experts. For example: If a developer has to ask themselves if they should generate a nonce randomly or if it would be okay to just pass the same value every time, you have failed.
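For instance, a small wrapper (a hypothetical sketch built on PHP's sodium extension; sealMessage() and openMessage() are illustrative names) can generate the nonce internally and prepend it to the ciphertext, so the question never reaches the application developer:

```php
<?php
// Hypothetical API: callers supply a message and a key; the nonce is
// generated fresh on every call, so nonce reuse is impossible by design.
function sealMessage(string $plaintext, string $key): string
{
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    return $nonce . sodium_crypto_secretbox($plaintext, $nonce, $key);
}

function openMessage(string $sealed, string $key): string
{
    $nonce      = substr($sealed, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $ciphertext = substr($sealed, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $plaintext  = sodium_crypto_secretbox_open($ciphertext, $nonce, $key);
    if ($plaintext === false) {
        throw new RuntimeException('Invalid or tampered ciphertext');
    }
    return $plaintext;
}
```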
And if you reach the point where you have to make a choice between a secure option and an insecure option that might be better by some other metric, make sure you actually document and measure this trade-off. You might find that the benefit of the insecure choice is negligible, and that you therefore should opt for security.
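The measuring part is usually cheap. A throwaway micro-benchmark along these lines (a rough sketch; hrtime() needs PHP 7.3+, and the numbers will vary by platform) is often all it takes to show whether the insecure option's advantage is real:

```php
<?php
// Time one million calls of each candidate and compare.
function bench(callable $fn, int $iterations = 1000000): float
{
    $start = hrtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        $fn();
    }
    return (hrtime(true) - $start) / 1e9; // seconds
}

printf("mt_rand():    %.3fs\n", bench(function () {
    mt_rand(0, mt_getrandmax());
}));
printf("random_int(): %.3fs\n", bench(function () {
    random_int(0, mt_getrandmax());
}));
```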