It’s tempting to speak about security in binary terms: fixed or not fixed, patched or unpatched, secure or insecure. Reality, though, is more about shades of gray and probabilities than absolutes. It’s also about limited resources and endless prioritization—always with the awareness that the stakes are high and any security gap you fail to address could allow a successful attack with any number of consequences.
Knowing for a fact whether something is fixed or not is especially important for high-level decision-making. Whether it’s a critical vulnerability holding up a new release, a zero-day in production causing a flood of questions from anxious customers, or an old issue that resulted in a data breach now being investigated, a lot can ride on having trustworthy vulnerability status information. At the same time, a lot can go wrong along the way, and unless your decisions are based on reliable and regular testing, arriving at a resolution is like building a house of cards.
Before you can say “it’s fixed,” there are two things you need to know: what exactly you are fixing and how to tell if it’s fixed. Whether patching a third-party product or implementing a fix in your own code, plenty of pitfalls await along the way to eliminating a vulnerability.
A partial fix is no fix
Incomplete or ineffective fixes are a leading cause of false hopes in security. All too often, a fix is made to silence an error, stop a build from failing, or simply close the ticket and move on rather than address the root cause of a vulnerability. Ideally, a security fix should receive as much QA attention as any other commit (if not more). The catch is that while you might have well-defined suites of unit and regression tests for your application, security testing is a very different story, requiring specialist skills to perform manually and specialist tools to automate.
Taking SQL injection as an example, a superficial fix for a vulnerability report that says “SQLi on page XYZ” might be to filter the inputs of a form for SQL special characters. Without exhaustive testing, that may seem good enough to close the ticket or even pass a basic automated test—but there are many more ways to inject SQL into the same parameter, and there might also be other vulnerable parameters on the page. Worse, a quick-and-dirty fix might plug one vulnerability only to introduce another.
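To make the difference concrete, here is a minimal sketch using Python and SQLite as stand-ins for the vulnerable application (the table and payload are illustrative, not from any real report):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

payload = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query logic,
# turning the WHERE clause into a tautology that matches every row.
leaked = conn.execute(
    f"SELECT role FROM users WHERE name = '{payload}'"
).fetchall()  # -> [('admin',)]

# Root-cause fix: a parameterized query treats the payload as pure data,
# so the bogus name simply matches nothing.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (payload,)
).fetchall()  # -> []
```

A character blocklist sits somewhere between these two: it may stop this particular payload while leaving encoded or differently quoted variants wide open, which is exactly why it tends to pass a basic test and fail a thorough one.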
The only way to confidently approve security fixes is to put every single change through a full battery of up-to-date automated tests and hold code back from production until they all pass. To learn how this works in practice, see our post on hunting down vulnerabilities, which includes a video demo showing how automatic testing and retesting can catch a superficial SQLi fix and enforce a proper resolution.
Temporary measures live the longest
For production systems, remediation often starts (and ends) by blocking a known attack vector using a web application firewall (WAF). Ideally, this should only be temporary until a fix is deployed to remove the vulnerability that makes the attack possible. All too often, though, blocking a single attack ends up being the permanent solution, with the underlying vulnerability still in place and ripe for exploitation using a different attack.
Relying solely on blocking is a type of superficial remediation that presents a major risk. Bypassing firewall rules is a fundamental skill for penetration testers and malicious hackers alike, so it’s pretty likely that a different attack against the same vulnerability will arrive sooner or later. Granted, there are legitimate situations where you can’t fully fix or patch a product, like when no patch is available or testing has shown that fixing one vulnerability would break something else—but these should be the exception, not the rule.
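As a toy illustration of why signature-based blocking is fragile, consider a hypothetical WAF rule (the regex below is a stand-in, far simpler than real rule sets, but the failure mode is the same):

```python
import re

# Hypothetical WAF signature: block requests containing "UNION SELECT"
rule = re.compile(r"union\s+select", re.IGNORECASE)

def blocked(payload: str) -> bool:
    """Return True if the hypothetical rule would reject this payload."""
    return bool(rule.search(payload))

# The original attack is caught...
blocked("1 UNION SELECT password FROM users")     # True
# ...but an inline-comment variant of the same attack sails through,
# because /**/ is not whitespace as far as the rule is concerned.
blocked("1 UNION/**/SELECT password FROM users")  # False
```

The underlying vulnerability is equally exploitable either way; the rule only ever blocked one spelling of the attack.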
The best practice should always be to fix the underlying vulnerability as soon as possible and automatically retest to make sure the issue is truly gone. Runtime blocking is fast but fragile, while fixing in the app is slower but more robust. You really need both, with accurate automation at all levels.
Patch that patch before you patch
Patching third-party software might seem easier than fixing your own code because somebody else has done the dirty work and you “only” need to deploy the patch. But even assuming that a patch is available, can be deployed, and won’t break anything (and these are already big assumptions), patched doesn’t always mean fixed.
Especially for widespread and high-impact vulnerabilities, it’s common to have a whole succession of patches (the MOVEit Transfer hack sprouted three in just the first month). Apart from incomplete fixes rushed out under time pressure, this can also be the result of increased scrutiny. As the vulnerable product is suddenly being probed and examined by more researchers and attackers than ever, new vulnerabilities or attack avenues are often discovered, resulting in cascading patches.
Seeing as every patch should be tested before deployment in production, and you first need to actually find out that you need to deploy it, it’s often hard to confidently say you have “everything” patched. For example, you may just have finished patching a high-profile vulnerability when you learn there’s already a new patch that may or may not apply to your specific installation. What do you say when somebody asks you if your company is vulnerable to CVE such-and-such? Ideally, you should have a way of quickly testing your entire environment to check if an attack is possible. This should be done independently of verifying and deploying patches, not to mention maintaining a product and dependency inventory to check if you’re affected in the first place.
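One way to make that question quickly answerable is to keep a machine-readable inventory and compare installed versions against the known-fixed release. A minimal sketch, with the host names and version numbers purely hypothetical:

```python
def parse(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of integers."""
    return tuple(int(part) for part in version.split("."))

# Hypothetical: the first release that fully fixes the CVE in question
PATCHED = parse("2.1.3")

# Hypothetical inventory: host -> installed version of the affected product
inventory = {
    "transfer-01": "2.1.1",
    "transfer-02": "2.1.3",
    "transfer-dr": "2.0.9",
}

still_vulnerable = sorted(
    host for host, ver in inventory.items() if parse(ver) < PATCHED
)
# still_vulnerable -> ['transfer-01', 'transfer-dr']
```

Real version schemes are messier (pre-release tags, vendor build suffixes), so production tooling should use a proper version-parsing library—but the principle of testing the environment rather than trusting the patch log is the same.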
If you don’t fix them, even the known knowns can get you hacked
2023 saw several high-profile reports of CISOs being held legally responsible for security breaches. Putting aside the specifics of each case, these stories serve as a reminder of the importance of accurate security information for CISOs to act upon. What if everything indicates a vulnerability has been fixed, but the company gets hacked anyway? Was the patch ineffective? Was it misreported as applied when it really wasn’t? Was it applied everywhere except one forgotten instance? Was it still in the queue for proper fixing when attackers found a WAF bypass?
Cybersecurity may be complicated and notoriously fuzzy around the edges, but when it comes to testifying in court that you did everything right, you can’t beat a paper trail with solid test results.
Fix but verify: Test, retest, and automate
Vulnerability testing using a good quality DAST tool is a non-negotiable part of any effective application security program. By automating testing in a continuous process integrated into the development pipeline, you can keep an eye on your current external security posture while also testing and retesting in pre-production. You can even automatically retest internal fixes to make doubly sure they are doing their job. That way, you have an unavoidable extra layer of security checks to catch exploitable issues before they get you into trouble.
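As a sketch of what an automated retest gate might look like in a pipeline (the error signatures and the replayed response below are hypothetical, and a real DAST tool performs this detection far more thoroughly):

```python
import sys

# Hypothetical tell-tale signs that an SQLi fix did not take: database error
# strings echoed back when the original proof-of-concept payload is replayed.
ERROR_SIGNATURES = (
    "SQL syntax",
    "ODBC SQL Server Driver",
    "unterminated quoted string",
)

def retest_passed(response_body: str) -> bool:
    """True when none of the tell-tale error strings appear in the response."""
    return not any(sig in response_body for sig in ERROR_SIGNATURES)

if __name__ == "__main__":
    # In CI, response_body would come from replaying the recorded attack
    # against the staging deployment; here it is a placeholder.
    response_body = "<html>0 results found</html>"
    if not retest_passed(response_body):
        sys.exit("Retest failed: vulnerability still reproducible")
```

Wiring a check like this into the pipeline means a superficial fix fails the build instead of quietly shipping, which is exactly the extra layer of verification described above.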