Most of the software security vulnerabilities known to man are preventable by careful development practices. For example, [SQL injection can be prevented](https://paragonie.com/blog/2015/05/preventing-sql-injection-in-php-applications-easy-and-definitive-guide) by separating the user-provided data from the SQL query. We've covered quite a few of these topics before. Read [*A Gentle Introduction to Application Security*](https://paragonie.com/blog/2015/08/gentle-introduction-application-security) if you're not familiar with web application security. However, even if you're trying to do everything right, eventually we all make mistakes and ship exploitable software.

## The Case for Automatic Updates

Once a security bug exists in your customers' networks, preventing a security breach involves a lot of moving parts, but most importantly:

1. Identifying the security bugs before criminals do.
2. Fixing the security bugs you've identified.
3. Getting the patch deployed in all of your customers' networks.

Consider the timeline below:

*The Life and Death of a Software Vulnerability (timeline)*

The time between point 0 and point 1 might be years. For the Linux kernel, it's about 5 years on average. There are a lot of ways to reduce this number, and most of them involve automated testing and manual code review from security experts.

The time between point 1 and point 2 might be weeks or months. Our average vulnerability identified-to-fixed time window is less than 24 hours. Most organizations do not enjoy the same agility or expertise.

Most development teams have no control over the time between points 2 and 3 -- the amount of time it takes for the fix to be applied (and the vulnerability to be neutralized) after the update is available. By making updates manual rather than automatic, you're forcing your customers to take all the responsibility for making sure that *your* mistakes don't hurt their business.

Only a very small minority of your customers might prefer the responsibility of verifying and applying each update themselves. By and large, most people do not have the security awareness, time management, and discipline to undertake this responsibility 24/7/365.

Automatic security updates reduce the interval between points 2 and 3 from **possibly infinite** to **nearly zero.** That's clearly a meaningful improvement over manual patch management.

## What's the Practical Risk in Outdated Software?

The problem of outdated software is well-studied by the information security industry. According to [Verizon's 2015 Data Breach Investigations Report (PDF)](https://paragonie.com/files/misc/verizon_2015_dbir.pdf), for example, when a software vulnerability was used...

> 99.9% of the exploited vulnerabilities were compromised *more than a year after the CVE was published*.

Earlier this year, Wombat Security reflected on similar studies in an article titled [Out-of-Date Software and Plug-ins Compound End-User Risk](https://info.wombatsecurity.com/blog/out-of-date-software-and-plug-ins-compound-end-user-risk). An article on Help Net Security examines findings from F-Secure and echoes the problem of [companies relying on out-dated software and putting their users at risk](https://www.helpnetsecurity.com/2014/04/02/the-dangers-of-using-outdated-software/).

The danger of outdated software is supported both by the data and by simple logic: if criminals are aware of a specific vulnerability in a software product, it doesn't matter that the vendor published a security patch if most of the companies that use the product are still vulnerable when criminals set out to exploit it.

### It Seems So Obvious, But...

If 99.9% of real-world software exploits are preventable by keeping software up-to-date, and the danger of unpatched vulnerabilities is well known among security professionals, the obvious question is, "Why doesn't everything update itself automatically?"

Implementing a secure automatic update mechanism is a nontrivial engineering feat, and most programmers don't even know where to begin. Often, the ones who think they do are uninformed about the risks and complexity involved.

## Elements of a Secure Automatic Update System

**Automatic updates** are a specific instance of a more general problem: [secure code delivery](https://defuse.ca/triangle-of-secure-code-delivery.htm). But in addition to the problems involved with securely delivering code from the developer to the user (with a healthy dose of distrust towards the delivery infrastructure), the automatic nature of the whole process introduces additional engineering challenges. In order for automatic updates to be secure, you must first make [secure software updates](https://paragonie.com/blog/2017/09/supply-chain-attacks-and-secure-software-updates) possible without automation.

The components of a **secure automatic update system** are as follows:

* [Offline Cryptographic Signatures](#offline-digital-signatures)
* [Reproducible Builds](#reproducible-builds)
* [Decentralized Authenticity / Userbase Consistency Verification](#decentralized-authentication)
* [Transport-Layer Security](#transport-layer-security)
* [Mirrors and Other Availability Concerns](#availability)
* [Separation of Privileges](#privilege-separation)

## Offline Cryptographic Signatures

Cryptographic signatures provide a (*very strong* inductive) proof of authenticity. Every update package should be [digitally signed](https://paragonie.com/blog/2015/08/you-wouldnt-base64-a-password-cryptography-decoded), using (listed in order of preference):

1. **RFC 8032** (Ed25519, or Ed448 -- modern elliptic curve cryptography).
2. **Ed25519** in conjunction with **SPHINCS-256** (post-quantum cryptography).
3. **RFC 6979** (Deterministic ECDSA).
4. **GnuPG**, which shines in this use-case and doesn't require you to even care which algorithm it's using.
5. **RSASSA-PSS** with 2048-bit keys, SHA-256, MGF1-SHA-256, and e = 65537.
6. **Foot-bullety ECDSA** (i.e. you're responsible for generating random nonces, dodging invalid-curve attacks, and worrying about whether or not your curve is twist-secure). This is ranked lower than RSA because most libraries that only implement non-deterministic ECDSA are riddled with side-channel attacks, but the most glaring problem is the risk of nonce reuse. The original standard was broken; RFC 6979 fixes it, and RFC 8032 uses safer building blocks.

For PHP developers: you can get Ed25519 from [libsodium](https://paragonie.com/blog/2016/05/solve-all-your-cryptography-problems-in-three-easy-steps-with-halite) or secure RSA from [EasyRSA](https://github.com/paragonie/EasyRSA). The PEAR bindings for `Crypt_GPG` are available via Composer:
composer require pear/crypt_gpg
Since cryptographic signatures require a key pair (one secret/private key and its corresponding public key), simply publish/hard-code your public key with your software and use it to verify the signatures of update files.
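To make this concrete, here is a minimal sketch of detached Ed25519 signing and verification using PHP's libsodium extension. It isn't taken from any particular project; the file names and the hard-coded public key are placeholders you'd replace with your own:

```php
<?php
declare(strict_types=1);

// Developer side (offline, ideally air-gapped): sign the update package.
// The secret key never touches the update server.
$secretKey = sodium_hex2bin(trim(file_get_contents('/path/to/offline-signing.key')));
$update    = file_get_contents('my-widget-5.5.2.phar');
$signature = sodium_crypto_sign_detached($update, $secretKey);
file_put_contents('my-widget-5.5.2.phar.sig', sodium_bin2hex($signature));

// Client side: the public key ships hard-coded inside the software itself.
const UPDATE_PUBLIC_KEY = '0123456789abcdef...'; // placeholder hex-encoded Ed25519 public key

$update    = file_get_contents('my-widget-5.5.2.phar');
$signature = sodium_hex2bin(trim(file_get_contents('my-widget-5.5.2.phar.sig')));

if (!sodium_crypto_sign_verify_detached(
    $signature,
    $update,
    sodium_hex2bin(UPDATE_PUBLIC_KEY)
)) {
    throw new RuntimeException('Invalid signature; refusing to install this update.');
}
// Only proceed with installation after this check passes.
```

Because verification only needs the public key, compromising the update server gains an attacker nothing unless they also steal the offline secret key.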

## Reproducible Builds

**Reproducible builds** are a concept that's mostly relevant to compiled languages: you compile the source code on your own machine and get a byte-for-byte identical binary blob to the one the developer published and signed. This makes it more difficult for an attacker (or the developer) to slip in a trojan without being detected. Reproducible builds necessarily require software projects to be **open source**; closed-source (proprietary) software cannot satisfy this requirement.

For PHP developers: if your deliverable is a PHP Archive (.phar file), your users can use [Pharaoh](https://paragonie.com/project/pharaoh) to verify that the .phar you publish and the .phar built on their machine have no differences. If any differences are found, you can then examine them to verify that they're benign. (This is more useful than just comparing checksums.)
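Pharaoh automates this kind of comparison; a rough illustration of the underlying idea (not Pharaoh's actual code or API) is to hash every file inside both archives and report the paths that differ:

```php
<?php
declare(strict_types=1);

/**
 * Compare the per-file hashes of two PHP Archives.
 * Returns the internal paths whose contents differ (or exist in only one archive).
 * Reading a .phar does not require phar.readonly to be disabled.
 */
function pharDiff(string $pharA, string $pharB): array
{
    $hashes = [[], []];
    foreach ([$pharA, $pharB] as $i => $path) {
        $prefix = 'phar://' . realpath($path);
        foreach (new RecursiveIteratorIterator(new Phar($path)) as $file) {
            /** @var PharFileInfo $file */
            $relative = substr($file->getPathname(), strlen($prefix));
            $hashes[$i][$relative] = hash_file('sha384', $file->getPathname());
        }
    }
    $differences = [];
    $allPaths = array_unique(array_merge(array_keys($hashes[0]), array_keys($hashes[1])));
    foreach ($allPaths as $relative) {
        if (($hashes[0][$relative] ?? null) !== ($hashes[1][$relative] ?? null)) {
            $differences[] = $relative;
        }
    }
    return $differences;
}

// Usage: compare the vendor-published archive against one you built from source.
var_dump(pharDiff('vendor-published.phar', 'built-from-source.phar'));
```

An empty result means the published archive matches your local build file-for-file; a non-empty result tells you exactly which files to inspect.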

## Decentralized Authenticity / Userbase Consistency Verification

To prevent targeted attacks:

1. All updates should be checked into an append-only audit log.
2. The local copy of the audit log should be validated among trusted peers.

The audit log can either mean implementing something like [Certificate Transparency](https://www.certificate-transparency.org/) or simply publishing a SHA-384 hash of the file's contents and additional metadata onto a cryptocurrency ledger.

When downloading an update file, after checking the cryptographic signature (which is a strong assertion that it was signed by the developer), one should check with their trusted peers to validate that they see the same copy of the audit log. If they unanimously respond affirmatively, then you can rule out any likelihood of a silent, targeted attack being deployed (especially if the infrastructure doesn't know who your trusted peers are; Tor helps here).

This feature only guarantees that attacks can't be targeted at one user without alerting the entire network to the presence of the attack. (It does not, however, stop an attacker who breached the infrastructure and pilfered the signing keys from releasing an update in the first place.) In addition to being forensically useful, it's a deterrent for the most sophisticated adversaries (who, for the most part, want their activities to remain undetected).
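As a rough sketch, assuming the SHA-384-over-metadata approach: the publisher computes one hash binding the file to its release metadata, and each client asks its trusted peers whether they see the same value. The `/audit-log/latest` endpoint and the peer hostnames below are hypothetical; substitute whatever interface your peers actually expose.

```php
<?php
declare(strict_types=1);

// Publisher side: the audit-log entry binds the file contents to its metadata.
$entry = hash('sha384', json_encode([
    'package'   => 'my-widget',
    'version'   => '5.5.2',
    'released'  => '2016-10-25T00:00:00Z',
    'file_hash' => hash_file('sha384', 'my-widget-5.5.2.phar'),
    'signature' => trim(file_get_contents('my-widget-5.5.2.phar.sig')),
]));

// Client side: ask each trusted peer what they see at the same log position.
$peers = ['https://peer-one.example.com', 'https://peer-two.example.com'];
$consistent = true;
foreach ($peers as $peer) {
    // Note: an unreachable peer counts as a mismatch here; a real implementation
    // would distinguish "peer is down" from "peer disagrees".
    $theirEntry = trim((string) @file_get_contents($peer . '/audit-log/latest'));
    if (!hash_equals($entry, $theirEntry)) {
        $consistent = false;
        break;
    }
}
if (!$consistent) {
    throw new RuntimeException('Audit log mismatch; possible targeted attack. Aborting update.');
}
```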

## Transport Layer Security

For every developer that believes the solution to `curl http://foo | sh` is simply `curl https://foo | sh`, there must be three who believe a GnuPG signature obviates the need for HTTPS. This is incorrect. **You need both.** Consider the following scenario:

1. You release version 5.5.1 of your software widget, with a valid signature.
2. A vulnerability is found in 5.5.1, so you patch it and release version 5.5.2, with a valid signature.
3. Because you're using HTTP, an active attacker replays version 5.5.1 and its valid signature when an update request is emitted.
4. Your users' systems remain on the vulnerable version until the attacker lets them upgrade.

This is known as a replay attack, and cryptographic signatures alone don't prevent it. The knee-jerk response to a replay attack is to upload your secret key to the server and do online signing (either with timestamps or challenge-response authentication). This defeats the purpose of an offline digital signature: to remain secure even when the server hosting the files is breached. You could also build a really complex system of offline signatures (see: most Linux distros and their `RELEASES.gpg` or equivalent). But why bother adding more complexity to your life when you can simply use HTTPS instead of HTTP and make man-in-the-middle-assisted replay attacks disappear?
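A minimal sketch of the client side, assuming a hypothetical metadata URL: fetch the update metadata strictly over HTTPS with cURL, keep certificate verification on, and refuse to be downgraded to plain HTTP even across redirects.

```php
<?php
declare(strict_types=1);

// Fetch update metadata strictly over HTTPS (URL is a placeholder).
$ch = curl_init('https://updates.example.com/my-widget/latest.json');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER  => true,
    CURLOPT_SSL_VERIFYPEER  => true,              // verify the certificate chain
    CURLOPT_SSL_VERIFYHOST  => 2,                 // verify the hostname matches
    CURLOPT_PROTOCOLS       => CURLPROTO_HTTPS,   // refuse anything but HTTPS
    CURLOPT_REDIR_PROTOCOLS => CURLPROTO_HTTPS,   // ...even after a redirect
    CURLOPT_FOLLOWLOCATION  => true,
]);
$metadata = curl_exec($ch);
if ($metadata === false) {
    throw new RuntimeException('TLS failure or network error: ' . curl_error($ch));
}
curl_close($ch);

// The offline signature check from the earlier section still applies to
// whatever this metadata points at; TLS and signatures are complementary.
```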

## Mirrors and Other Availability Concerns

The other way for a criminal to prevent a security release from being applied automatically to targets of interest would be to simply launch a sustained Distributed Denial of Service attack against the update server. Fortunately, if you've implemented all the features above, your protocol already has enough resilience to allow you to decentralize your infrastructure. This means you could either:

* Have a wide range of mirrors, hosted in a diverse range of networks with a diverse portfolio of internet service providers, OR
* Simply upload copies of every update file to the BitTorrent network and let your users seed updates.

Trojan horse malware wouldn't be a concern for your users either way; your software is already verifying cryptographic signatures with your public key. Either strategy greatly increases the cost of a sustained DDoS attack and reduces the likelihood of it having the intended effect.
*The mail critical security updates must go through.*
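For the mirror approach, a sketch of the client-side fallback logic might look like the following. The mirror hostnames and file names are placeholders, and `UPDATE_PUBLIC_KEY` is the same hard-coded public key assumed in the signature sketch above.

```php
<?php
declare(strict_types=1);

// Try each mirror in turn (shuffled to spread load) and accept the first
// download whose offline signature verifies.
$mirrors = [
    'https://updates-1.example.com',
    'https://updates-2.example.net',
    'https://updates-3.example.org',
];
shuffle($mirrors);

$verified = null;
foreach ($mirrors as $mirror) {
    $candidate = @file_get_contents($mirror . '/my-widget-5.5.2.phar');
    $signature = @file_get_contents($mirror . '/my-widget-5.5.2.phar.sig');
    if ($candidate === false || $signature === false) {
        continue; // this mirror is down or being DDoSed; try the next one
    }
    if (sodium_crypto_sign_verify_detached(
        sodium_hex2bin(trim($signature)),
        $candidate,
        sodium_hex2bin(UPDATE_PUBLIC_KEY)
    )) {
        $verified = $candidate;
        break;
    }
    // A bad signature from one mirror is not fatal; another mirror may be honest.
}
if ($verified === null) {
    throw new RuntimeException('No mirror served a correctly signed update.');
}
```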

## Separation of Privileges

Once you've obtained an authentic, peer-verified, reproducible-from-source copy of the security update from a highly available, decentralized networking infrastructure... what do you do with it? There are two schools of thought here:

1. The update process can be initiated by the HTTP process when a user accesses the application that needs updating.
   * This requires the software to be able to write to itself, which greatly reduces your defense-in-depth against traditional web vulnerabilities, but ensures out-of-the-box automatic security updates are available for users who don't have the knowledge or time to set up privilege separation.
2. The update process must be initiated by a different user than `apache`/`www-data` (or equivalent). The software is unable to write to itself, and in fact can only be overwritten by another user account.
   * This is more secure, but carries the risk of more people disabling automatic security updates out-of-the-box.

Your threat model should dictate which avenue you pursue. [CMS Airship](https://paragonie.com/project/airship) went with the first option because we believe the risk of a 1day vulnerability to be much higher than "the source code is, effectively, world-writable" concerns. **If you have a more sophisticated ops team, by all means go with the second approach.** Just keep in mind [AviD's Rule of Usability](http://security.stackexchange.com/a/6116/43688):

> Security at the expense of usability, comes at the expense of security.

An intermediate solution does exist here: your HTTP process writes to a named pipe (or a world-writable file), and another process (e.g. a daemon or a cron job) kicks off the update process as a more privileged user when this pipe/file is written to.
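A sketch of that intermediate approach, using a plain trigger file rather than a named pipe. The paths, user names, and the `runUpdate()` helper are assumptions for illustration, not part of any particular project:

```php
<?php
declare(strict_types=1);

// (a) In the web application, running as www-data: just record that an update
//     is wanted. The web user only needs write access to this one directory,
//     not to the application code itself.
file_put_contents('/var/run/my-widget/update-requested', (string) time());
```

```php
<?php
declare(strict_types=1);

// (b) In a cron job running as a separate, more privileged "deploy" user,
//     e.g.  * * * * * php /usr/local/bin/my-widget-updater.php
if (file_exists('/var/run/my-widget/update-requested')) {
    unlink('/var/run/my-widget/update-requested');
    // Download, verify, and install the update using the steps from the
    // previous sections. Only this user can overwrite the application code.
    // runUpdate(); // hypothetical helper that performs the actual update
}
```

This keeps the web-facing process unable to modify its own code while still letting updates kick off without a human in the loop.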

## Secure Automatic Update Implementations

Although not immediately useful for PHP developers, [The Update Framework](https://github.com/theupdateframework/tuf) is definitely the best starting point for secure automatic updates. In addition to the features covered in the [TUF Specification](https://github.com/theupdateframework/tuf/blob/develop/docs/tuf-spec.txt), it supports [a process for augmenting the protocol](https://github.com/theupdateframework/taps). If you're using CMS Airship, read up on [how we implemented secure automatic updates](https://paragonie.com/blog/2016/05/keyggdrasil-continuum-cryptography-powering-cms-airship).

## Summary

Automatic software updates are a boon for security, not a detriment, so long as the following precautions are taken:

1. Updates must be cryptographically signed.
2. Updates must be reproducible from the source code.
3. Update consistency must be decentrally verifiable.
4. All communications must be conducted via TLS (i.e. HTTPS, not HTTP) to prevent MitM-DoS attacks.
5. Update files must be mirrored far and wide to prevent DDoS attacks.
6. Updates may be handled by a separate user account from the one used by the HTTP server, for better defense-in-depth.

## Changelog

* **2016-10-25** - Added a section mentioning some implementations.
* **2016-10-25** - Intermediate solution suggested by [Taylor "Riastradh" Campbell](http://mumble.net/~campbell/) in the privilege separation section.