Bug Bounty Programs
The past few years have given us bug bounty services and crowd-sourced security analysis, typically facilitated through platforms like HackerOne and BugCrowd. Without prejudice towards either platform (or another unknown to us), we strongly advise anyone with technology experience to participate in bug bounty programs.
The experience is priceless, and developers may discover they know far more about security than they originally thought. Even if you never find a vulnerability, exposure to different bug bounty programs can offer valuable insight into the attitudes that different companies take towards application security, particularly which vulnerabilities they consider invalid.
A Common Exclusion
One of the most common vulnerabilities that bounty programs will explicitly exclude from scope is "CSRF on Logout." This means that, for most projects, if you can log a user out of the application by sending them to the logout endpoint involuntarily (e.g. embedding <img src="/logout.php" /> in a message), you will not be awarded a bug bounty for reporting it and, most importantly, the team will not invest any resources in fixing it.
However, there are certainly scenarios in which a lack of CSRF protection on an application's logout page can still impact its security. To understand how, let's first refer to the CIA triad: Confidentiality, Integrity, and Availability.
A lack of CSRF protection on logging out of an application can, in some instances, be leveraged into a Denial of Service attack (which targets a service's availability) against your users. For example: if the latest message a user received is always loaded when they first log in, and that message contains a payload that logs them out of the application, they will be logged out as soon as they log in, every time.
If your users are unable to access your platform, this can affect your business and destroy your reputation. Yet many still do not consider it a vulnerability, because it does not give an attacker elevated access to server resources or to a user's computer. Frustratingly for security researchers, the vendor is often correct: in most applications, there is no avenue for exploiting this deficit to trigger a denial of service immediately after a user authenticates.
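The defense is the same as for any state-changing endpoint: require a per-session CSRF token, so that a forged request triggered by an image tag cannot carry the secret and is rejected. Here is a minimal, framework-free sketch in Python; the function names and the dict-based session are illustrative assumptions, not code from any real application.

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a random token and store it in the user's session."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def handle_logout(session, submitted_token):
    """Log the user out only if the request carries the session's token.

    A forged <img src="/logout.php" /> request cannot include the
    token, so it is rejected instead of destroying the session.
    """
    expected = session.get("csrf_token")
    if not expected or not submitted_token:
        return "403 Forbidden"
    if not hmac.compare_digest(expected, submitted_token):
        return "403 Forbidden"
    session.clear()
    return "200 Logged out"
```

Note the constant-time comparison via `hmac.compare_digest`, which avoids leaking the token one byte at a time through timing differences.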
Non-Exploitable is Still a Concern
Earlier this month, our Chief Development Officer identified cryptographic weaknesses in the encrypted chat feature of the mobile app for the new social network, Minds. One of the issues he found was that they were encrypting messages using RSA with PKCS#1 v1.5 padding, which Daniel Bleichenbacher proved vulnerable to chosen-ciphertext and padding-oracle attacks. In 1998. No cryptography implementation in 2015 should consider deploying such a broken RSA padding in its communications protocol.
And yet, as Matthew D. Green told Motherboard in an interview about our report on Full Disclosure, the vulnerability was not exploitable. The question you might ask, then, is: "If this isn't exploitable, why does it matter?"
The reason the vulnerability our team member found was not exploitable is that the Minds mobile client failed silently when it could not decrypt an incoming message: neither the recipient nor the sender was notified. In fact, the sender is never aware whether the message they sent decrypted successfully, either.
We wager it wouldn't take a particularly skilled social engineer to tell a development team that their decryption fails silently and convince them to notify the sender so they can try again. With this simple "fix" in place, Bleichenbacher's attack becomes possible and you can begin stealing other people's RSA private keys.
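To see why a "did it decrypt?" signal is so dangerous, recall what PKCS#1 v1.5 conformance looks like after the raw RSA operation: the plaintext block must begin with bytes 0x00 0x02, followed by at least eight non-zero padding bytes and a 0x00 separator. Bleichenbacher's attack needs nothing more than a yes/no answer to that check, repeated many times with attacker-chosen ciphertexts. The sketch below shows the check itself; it is a generic illustration of the padding rule, not Minds' actual code.

```python
def pkcs1_v15_padding_ok(block):
    """Return True if `block` is a conformant PKCS#1 v1.5 encryption block.

    An attacker who can submit ciphertexts and observe this single bit
    (e.g. via a "message failed to decrypt, please resend" notification)
    gains the oracle needed for Bleichenbacher's 1998 adaptive
    chosen-ciphertext attack, which eventually recovers the plaintext.
    """
    # Header must be 0x00 0x02, and the block must be long enough to
    # hold 8 padding bytes, the separator, and at least one data byte.
    if len(block) < 11 or block[0] != 0x00 or block[1] != 0x02:
        return False
    try:
        separator = block.index(0x00, 2)  # first zero byte after header
    except ValueError:
        return False
    return separator >= 10  # at least 8 non-zero padding bytes
```

The lesson: with this padding scheme, even a one-bit error signal is a full-blown oracle, which is why modern designs use RSA-OAEP or hybrid schemes with authenticated encryption instead.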
Even if a particular vulnerability in your application cannot be exploited today, changes made to the code tomorrow could inadvertently remove a roadblock to exploitation.
Not all Security Issues are Vulnerabilities
When securing your application, it's tempting to focus on vulnerabilities and rely on objective metrics like CVSS scores. However, this strategy has its limits.
Consider, for example, a 2000-era eCommerce website that used PayPal for payment processing but did not use the PayPal SDK. In our example, after you successfully purchase an item in the PayPal window, it redirects you to /checkoutComplete.php, which updates a field in the database and redirects you to /thanks.php.
What would happen if a user, instead of completing the PayPal purchasing workflow, immediately navigated to /checkoutComplete.php? Odds are, you would end up reading the words "Thank you for your purchase," and your illicitly free item would arrive in the mail within a few business days.
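The underlying flaw is that the application treats the client's navigation as proof of payment. The fix is for the server to verify the transaction out-of-band before marking the order paid. Here is a sketch of the idea using a hypothetical gateway-signed confirmation; the HMAC scheme, secret, and function names are invented for illustration and are not PayPal's actual API.

```python
import hashlib
import hmac

# Shared secret established with the payment gateway (illustrative only).
GATEWAY_SECRET = b"example-shared-secret"

def sign_confirmation(order_id, amount_cents):
    """What the gateway would attach to its server-to-server callback."""
    message = "{}:{}".format(order_id, amount_cents).encode()
    return hmac.new(GATEWAY_SECRET, message, hashlib.sha256).hexdigest()

def checkout_complete(order_id, amount_cents, signature):
    """Mark the order paid only if the payment confirmation verifies.

    A user who simply navigates to /checkoutComplete.php cannot forge
    the gateway's signature, so the free-item trick no longer works.
    """
    expected = sign_confirmation(order_id, amount_cents)
    if not hmac.compare_digest(expected, signature):
        return "payment not verified"
    return "order marked paid"
```

Real payment integrations achieve the same property with signed webhooks or server-side API calls back to the gateway; the key design choice is that only the gateway's confirmation, never the browser's redirect, flips the order to "paid."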
Problem: Potential Impact Estimations are Difficult
Estimating the potential impact of a given security vulnerability requires not only knowing the immediate consequences of an exploitation attempt, but also fully understanding:
- The entire application (including back-end code, as demonstrated by second-order SQL injection vulnerabilities).
- How the application might change in the future (i.e. what changes could increase the severity of this vulnerability?).
- How the application is used by most people.
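The back-end caveat in the first point is worth unpacking. In a second-order SQL injection, the malicious input is stored safely by one query and only becomes dangerous when another part of the code later concatenates the stored value into SQL, assuming anything already in the database must be safe. A minimal, self-contained illustration (the schema and data are invented for this sketch):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("CREATE TABLE notes (owner TEXT, body TEXT)")

# First order: the signup code is careful and uses a parameterized
# query, so this hostile username is stored verbatim without incident.
evil_name = "nobody' OR '1'='1"
db.execute("INSERT INTO users (name) VALUES (?)", (evil_name,))
db.execute("INSERT INTO notes VALUES ('alice', 'alice private note')")

# Second order: elsewhere, a developer assumes values read back from
# the database are trusted and concatenates the stored name into SQL.
(stored_name,) = db.execute("SELECT name FROM users").fetchone()
rows = db.execute(
    "SELECT body FROM notes WHERE owner = '" + stored_name + "'"
).fetchall()
# The injected OR '1'='1' clause matches every row, leaking alice's
# note even though the attacker's username is not "alice".
```

This is why estimating impact requires reading the whole application: the vulnerable line can sit far away, in time and in code, from the input that triggers it.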
Solution: Don't Even Focus on Vulnerability Impact
Instead, aim for zero security issues, full stop.
We find that focusing on the potential impact of a vulnerability places people in a reactive security mindset, rather than a proactive one.
Impact can be helpful in ranking vulnerability reports (e.g. SQL injection on your products page is probably more important than missing CSRF protection on your logout page), but using impact to filter or reject potential security problems can leave your team blind to real problems in your application.
When a vulnerability is found in one of our projects, we don't just fix it. We ask ourselves:
- What mindset was the developer in when they wrote the vulnerable snippet of code?
- What assumptions did this developer make that turned out to not be true?
- Could other parts of the application have similar, undiscovered issues?
- Could other projects have the same vulnerability?
Securing an application creates a moderate amount of extra work in the short term, but far less in the long term. We believe this trade-off is worth it, and consider the short-term cost of securing an application to be a wise investment.
(99 Little Bugs image courtesy of Reddit r/ProgrammerHumor)