**Application security has a checklist problem.** There's an old article titled [*The Six Dumbest Ideas in Computer Security*](http://www.ranum.com/security/computer_security/editorials/dumb/). When someone attempts to secure an application or a network using a checklist, they're committing the second fallacy in that list: "Enumerating Badness." (Related: [security through configuration](https://paragonie.com/blog/2017/01/configuration-driven-php-security-advice-considered-harmful) rather than competent design is common in the PHP ecosystem.)

Until recently, a few checklists were given a pass because they were generally considered reputable among information security professionals. The reasons usually given vary from "At least this checklist isn't *that* bad" to "It helps bridge a gap between security teams and development teams". One such "christened checklist" was the infamous OWASP Top 10.

And then OWASP published [their draft for the 2017 edition of the OWASP Top 10](https://github.com/OWASP/Top10/blob/e0964951fe87defce769720ab83138c37bebf60e/2017/OWASP%20Top%2010%20-%202017%20RC1-English.pdf). The [reactions](https://twitter.com/tqbf/status/851467779073028096) and [criticisms](https://sakurity.com/blog/2017/04/24/owasp.html) were equal parts appropriate and ferocious. The addition of "A7. Insufficient Attack Protection" in the 2017 edition was enough to prompt a lot of information security professionals to stop regarding the OWASP Top 10 project as a useful security tool.

I'm arguing that this doesn't go far enough. It's time to face facts:

## There Are No Good Application Security Checklists

There are several problems with security checklists:

* Checklists (explicitly or implicitly) beg readers to interpret their list order as an indicator of priority.
* Checklists assume congruent granularity: every item is presented as if it covered a risk of the same size and specificity as the others.
* Checklists are finite, which almost inevitably leads to *enumerating badness* (which, as argued above, is a stupid idea in computer security).
* Different stacks have different risk profiles, and this nuance is not captured by any Top X vulnerability list.

For example, the OWASP Top 10 list doesn't provide any guidance on using secure randomness, avoiding race conditions (which are a problem for designing cryptocurrencies), or [side-stepping cache-timing attacks](https://paragonie.com/blog/2017/02/cryptographically-secure-php-development). Depending on the project in question, these might be *very* important. A weak random number generator [could become a potent backdoor](https://paragonie.com/blog/2016/01/on-design-and-implementation-stealth-backdoor-for-web-applications).

The problem isn't that the team behind OWASP is corrupt or incompetent. The problem is that *checklists are the wrong tool for the job*.

## A Better Idea: Vulnerability Taxonomy

Inspired by biologists' efforts to classify the variety of life forms on Earth, I wrote [*A Gentle Introduction to Application Security*](https://paragonie.com/blog/2015/08/gentle-introduction-application-security), which classified vulnerabilities based on four main types and then drilled down into specifics. At the highest level, you have:

* Treating Code as Data, or vice versa
* Unsound Logic
* Operating Environment
* Cryptographic Flaws (side-channels)

If you look into the first category, you can get more specific:

* Treating Code as Data, or vice versa
  * Buffer Overflow
  * SQL Injection
  * Cross-site Scripting
  * Local/Remote File Inclusion
  * Unsafe deserialization
  * ...
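To make the shared root cause of that first branch concrete, here's a minimal PHP sketch of my own (the function names and queries are purely illustrative, not something the taxonomy prescribes): SQL Injection is attacker-supplied data being interpreted as SQL code, and the mitigation is simply to keep data and code separate.

```php
<?php
declare(strict_types=1);

// Hypothetical illustration: every leaf under "Treating Code as Data, or vice
// versa" shares one root cause. Attacker-supplied data ends up being
// interpreted as code (SQL, HTML, a file path, a serialized object).

// Vulnerable: concatenating user input into the query lets data become code.
function findUserUnsafe(PDO $db, string $username): array
{
    $result = $db->query(
        "SELECT * FROM users WHERE username = '" . $username . "'"
    );
    return $result->fetchAll(PDO::FETCH_ASSOC);
}

// Mitigated: a prepared statement keeps the query (code) and the input (data)
// strictly separate, so the core lesson holds no matter what the input is.
function findUserSafe(PDO $db, string $username): array
{
    $stmt = $db->prepare('SELECT * FROM users WHERE username = ?');
    $stmt->execute([$username]);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

// The sibling leaves follow the same lesson: escape output for its context
// (e.g. htmlspecialchars() for HTML, to prevent XSS), whitelist include paths
// to prevent LFI/RFI, and never unserialize() untrusted input.
```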
To be clear: I'm not saying that *my* specific classifications are the ones that we should commit to forever. They're merely an example. I trust that the security industry at large can refine this proposal going forward.

Building a taxonomy model for vulnerability classification has a lot of advantages over a checklist. You can teach software developers the core fundamental lessons of each major classification ("don't let user input alter the program in any way", "make sure your logic is sound", "keep your software up-to-date and well configured", "cryptography requires expert care"). This transforms vulnerability mitigation from "here's another item to memorize and hopefully apply when you develop software under time pressure" into "here's a slightly more specific instance of the core lesson, so even if you forget how to prevent these specific vulnerabilities, you'll probably remember the core lesson".

Additionally, like security research itself, a taxonomy isn't ever really "finished". As new vulnerabilities are discovered, they can be inserted at an appropriate depth in the vulnerability tree (see the sketch at the end of this post). This is more compatible with the mindset of a security researcher than a rigid short list that only gets updated every few years.

It's high time we rethought how we approach application security. I think the taxonomy model will work where a checklist failed.
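Finally, to illustrate what "inserted at an appropriate depth" could look like in practice, here's a rough, hypothetical sketch of the taxonomy as a simple tree structure. The class design and node names are my own placeholders, not a finished proposal.

```php
<?php
declare(strict_types=1);

// Hypothetical sketch: one possible way to model a vulnerability taxonomy as a
// tree that grows as new classes of vulnerability are discovered.
class VulnClass
{
    /** @var string */
    public $name;

    /** @var string */
    public $coreLesson;

    /** @var VulnClass[] */
    public $children = [];

    public function __construct(string $name, string $coreLesson = '')
    {
        $this->name = $name;
        $this->coreLesson = $coreLesson;
    }

    public function add(VulnClass $child): VulnClass
    {
        $this->children[] = $child;
        return $child;
    }

    /** Insert a new node beneath an existing node, at whatever depth it lives. */
    public function insertUnder(string $parentName, VulnClass $node): bool
    {
        if ($this->name === $parentName) {
            $this->add($node);
            return true;
        }
        foreach ($this->children as $child) {
            if ($child->insertUnder($parentName, $node)) {
                return true;
            }
        }
        return false;
    }
}

// The top-level classifications, each tied to its core lesson:
$taxonomy = new VulnClass('Application Security Vulnerabilities');
$codeAsData = $taxonomy->add(new VulnClass(
    'Treating Code as Data, or vice versa',
    "Don't let user input alter the program in any way."
));
$codeAsData->add(new VulnClass('SQL Injection'));
$codeAsData->add(new VulnClass('Cross-site Scripting'));

// When a new vulnerability class is discovered, it slots in at the right depth
// instead of waiting years for the next edition of a Top 10 list.
$taxonomy->insertUnder(
    'Treating Code as Data, or vice versa',
    new VulnClass('Unsafe deserialization')
);
```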