Vulnerability disclosure is an important process for keeping our systems safe. But what if the disclosure process itself is vulnerable?
Vulnerability disclosure is the process of bringing information about flaws in operating systems, applications, firmware and business processes into the public domain. The purpose is to ensure that product vendors fix the flaws and that users can mitigate them before those same flaws are found and exploited by the bad guys.
The vulnerabilities are usually discovered by security researchers who specifically look for them. Since cyber criminals and adversarial nation states are also looking -- and there is no way of knowing whether they have already found the same flaws -- it is essential that vulnerabilities are fixed as soon as they are discovered and before they can be exploited. Vulnerability disclosure by the good guys is an essential part of this process.
We need to understand three terms: a vulnerability (a flaw or bug in code); an exploit (a method used to take advantage of the vulnerability); and patching (the vendor fixing the vulnerability and the user installing that fix).
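A toy example may make these terms concrete. The sketch below (Python with SQLite; the table, function names and input are invented purely for illustration) shows a classic SQL injection: the vulnerability is the unsanitized query, the exploit is the crafted input, and the patch is the parameterized replacement -- which protects users only once they deploy it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Vulnerability: user input is pasted directly into the SQL statement.
def find_user_vulnerable(name):
    query = f"SELECT id FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

# Exploit: input crafted to manipulate the flaw and dump every row.
print(find_user_vulnerable("x' OR '1'='1"))   # [(1,), (2,)] -- all users leak

# Patch: the vendor rewrites the query with bound parameters...
def find_user_patched(name):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

# ...and the same malicious input is now harmless.
print(find_user_patched("x' OR '1'='1"))      # []
```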
Zero-day vulnerabilities
It is useful to understand the concept of zero-days (0-days). 0-day vulnerabilities are known to a potential attacker, but not yet known, and therefore not yet fixed, by the vendor. In consequence, there is in theory no defense against a 0-day exploit developed for a 0-day vulnerability.
Technically, the term applies to the period between discovery and fix -- a period that varies with how long the vendor takes to solve the problem. In practical terms, however, the vulnerability remains a 0-day for individual users until the vendor's fix is implemented on the user's computer.
A complication to this description -- and a concept that is important to this blog post -- is the vulnerability that is known only to, and kept secret by, the bad guys. It has been discovered, so it is technically a 0-day vulnerability. But because it is not known to the vendor or the general public, it effectively does not exist -- and yet it does. These are the infamous 'unknown unknowns' of security, and they can be exploited by criminals at any time.
Disclosure
Vulnerability disclosure is an important process in keeping our products secure. Security researchers find the flaws, report them to the vendors, who then fix them. That's the theory. When it works, it works well; but it doesn't always work.
Researchers take one of two basic approaches to vulnerability disclosure, characterized by the terms 'full disclosure' and 'responsible disclosure'.
Full disclosure
The concept of full disclosure implies the immediate and full publication of all the details of the discovered vulnerability – possibly including an exploit to demonstrate the vulnerability. Followers of this doctrine believe it is the only way to ensure the vendor fixes the problem with all possible haste.
Fixing code flaws can be a time-consuming and costly exercise for the vendor, and acknowledging that they have sold software with bugs can damage their reputation. Historically, many companies have taken the view that the fewer people who know about a weakness, the more secure their product remains (and, of course, silence saves them time, money and reputation). So they do nothing, or fix the flaw very slowly.
However, researchers are aware that if they can find the flaw, so can others. There is always the possibility the flaw is one of the unknown unknowns that could be exploited -- possibly on a massive scale -- at any time. The only solution is for the vendor to fix the problem as quickly as possible, and full disclosure is considered by some researchers to be the best way to ensure this.
The weakness in this argument is that full disclosure also tells the bad guys about the flaw. Full vulnerability disclosure often degenerates into a race between the vendor and the bad guy -- one to fix it and the other to exploit it.
Responsible disclosure
Responsible disclosure is the route preferred by almost all vendors and security firms. In its raw form, this is simply the private disclosure of the vulnerability to the vendor alone, with no public disclosure until after the vulnerability has been fixed – if ever.
In most cases the researcher applies a time limit: if the vulnerability isn't fixed within a certain period, it will be disclosed publicly. In practical terms, responsible disclosure is a compromise between what the vendors would like (no public disclosure) and full disclosure. Bug bounty programs have also emerged, in which vendors entice researchers to disclose bugs responsibly in return for compensation and recognition. These programs are somewhat controversial and warrant further discussion in the future.
Google's Project Zero research team is currently experimenting with a slight variation. Vulnerabilities are still disclosed privately to the vendor, which is given 90 days to fix the problem before public disclosure. Now, however, public disclosure will not occur until the full 90-day period has elapsed, regardless of how quickly the vendor fixes the vulnerability. This is an attempt to help alleviate the 'patch problem', a separate issue that nevertheless has a strong impact on 0-day vulnerabilities.
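The difference between the two models is easy to state in code. The sketch below (Python; the function names and the simplification of each policy are ours) contrasts classic deadline-based disclosure, which publishes as soon as a patch ships, with the fixed-window variation, which always waits out the full 90 days so users have time to install the fix.

```python
from datetime import date, timedelta
from typing import Optional

WINDOW = timedelta(days=90)

def classic_disclosure(reported: date, patched: Optional[date]) -> date:
    # Classic responsible disclosure: publish when the patch ships,
    # or at the deadline if the vendor never ships one.
    deadline = reported + WINDOW
    return min(patched, deadline) if patched else deadline

def fixed_window_disclosure(reported: date) -> date:
    # Project Zero's variation: always wait out the full window,
    # even if the vendor patches early.
    return reported + WINDOW

reported, patched = date(2020, 1, 1), date(2020, 2, 1)
print(classic_disclosure(reported, patched))  # 2020-02-01: details out with the patch
print(fixed_window_disclosure(reported))      # 2020-03-31: extra time to deploy the fix
```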
The patch problem
The disclosure theory is that researchers discover and disclose vulnerabilities (at this stage, most likely 0-day vulnerabilities); the vendors fix the vulnerabilities and update their products; and the user updates (patches) the installed product. The problem is that users are poor at updating their installed products. Until they do, the vulnerability is for them not merely a 0-day, but a 0-day that is publicly known to criminals. This is a major cybersecurity problem.
The Equifax hack of 2017 was performed via a security flaw (in the Apache Struts web framework) that had been disclosed and patched two months before the breach occurred -- but Equifax had failed to implement the patch on all its servers.
Even the infamous WannaCry and NotPetya global ransomware outbreaks were enabled by a vulnerability that had already been fixed by Microsoft. In this case, the ransomware spread rapidly around the world using an exploit known as EternalBlue. EternalBlue was one of several 'secret' exploits held by the NSA for its own use. It was stolen and made public by a hacking group known as the ShadowBrokers -- and has been regularly used by different criminals and different malware families ever since. Although technically the underlying flaw ceased being a 0-day vulnerability the moment Microsoft fixed it, it remains effectively a 0-day for all users who haven't implemented Microsoft's fix.
The secret stockpiling of 0-day vulnerabilities by the world's intelligence services, for their own offensive and defensive purposes, is a further complication to the disclosure theory.
Government 0-day stockpiles
Government intelligence agencies operate -- and in some cases have published -- what they arcanely call their 'equities process' (NSA here; GCHQ here). Put simply, this is an admission that they hold stockpiles of 0-day vulnerabilities, and a confirmation that in some cases they will not disclose them to the vendors. The agencies argue that if the vulnerabilities pose more of a threat to the national economy than they provide an opportunity for offensive intelligence operations, they will disclose them -- but not otherwise.
In other words, they are 0-days and they will remain 0-days if the agencies feel they may be useful to their own purposes. The infamous Stuxnet attack against the Iranian nuclear project (thought, but not proven, to be a joint exercise by the NSA and Israel) used several 0-day exploits to achieve its purpose.
This means that a researcher finding a new vulnerability has no way of knowing whether it is already known to an adversarial nation and could be exploited to dramatic effect at any time. This in turn adds pressure on how quickly the researcher should disclose the vulnerability and how much time the vendor should be given to issue a fix.
Both sides of the equities process have been witnessed over the last few years. Undisclosed NSA stockpiles were stolen by the ShadowBrokers and used to devastating effect (WannaCry and NotPetya), while CIA 0-days were exposed in what has become known as the 'Vault 7' leaks.
More recently, the NSA disclosed a major cryptographic vulnerability (CVE-2020-0601, in the Windows CryptoAPI) that Microsoft patched in January 2020. This shows the equities process working to the benefit of people and business, although in reality we don't know whether the decision to share was altruistic, whether the vulnerability, like EternalBlue, had been stolen, or whether the NSA had become aware that foreign agencies were also in possession of it.
Summary
The disclosure problem has no solution. All we can do is try, wherever possible, to choose the least bad option. The current consensus is that this is some form of responsible disclosure, giving vendors the opportunity to fix the vulnerability before it is made public. But this takes no account of vulnerabilities that are already known to, and perhaps already being exploited by, criminals and adversarial nation states. Nor does it offer any solution to the possibility that a vulnerability is being held for use by an adversarial nation, or held by -- and stolen from -- our own intelligence agencies.
To some degree, however, the problem is academic. Even where vulnerabilities are found and rapidly patched by vendors, we the users are very bad at implementing the vendors' patches. Those vulnerabilities effectively remain 0-days for any unpatched system. The greater danger now is that criminals can take a patch, reverse-engineer it, discover the underlying vulnerability and develop an exploit for it, sometimes within days of the patch's release.
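This is why a patch itself is useful reconnaissance. The toy sketch below (Python; the 'vulnerable' and 'patched' snippets are invented for illustration -- real attackers diff compiled binaries with tools such as BinDiff or Diaphora) shows how simply comparing the two versions points straight at the fix, and therefore at the flaw it repairs.

```python
import difflib

# Hypothetical pre- and post-patch versions of the same function.
VULNERABLE = """\
def read_record(buf, length):
    data = buf[:length]
    return parse(data)
"""

PATCHED = """\
def read_record(buf, length):
    if length > len(buf):
        raise ValueError("length exceeds buffer")
    data = buf[:length]
    return parse(data)
"""

# A unified diff highlights exactly what the vendor changed --
# which tells an attacker exactly where the vulnerability was.
diff = difflib.unified_diff(
    VULNERABLE.splitlines(),
    PATCHED.splitlines(),
    fromfile="pre-patch",
    tofile="post-patch",
    lineterm="",
)
print("\n".join(diff))
```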
The irony here is that disclosures that lead to fixes we don't implement leave us at more risk than ever.