While withholding a zero-day’s existence can provide some government advantage, it can potentially harm the rest of us and break many elements of the global internet if vulnerabilities aren’t disclosed and patched.
By now, you probably know what a zero-day vulnerability is: in simple terms, a flaw in software or hardware that is unknown to the vendor and therefore unpatched, leaving it open to exploitation by attackers. Some of these flaws are found by government researchers deliberately hunting for ways into foreign agency networks to spy on their adversaries. Sometimes, our governments and even some private companies deliberately keep mum about these vulnerabilities for years.
The government has its reasons for withholding information: a vulnerability could benefit a particular law enforcement effort to bring a criminal to justice, serve a broader intelligence collection operation to determine who is doing the spying, or enable active countermeasures against the bad guys. These are all defensible reasons.
But withholding a zero-day's existence cuts both ways: the advantage it gives a government comes at a cost to everyone else, and if these zero-days aren't disclosed and quickly patched, they can break many elements of the global internet.
How to decide which vulnerabilities to keep secret and which to patch?
It turns out there is a little-known US federal entity called the Vulnerabilities Equities Process (VEP) that has this responsibility. Representatives from 10 cabinet-level agencies make these decisions, including the Department of Homeland Security, the Treasury and State Departments, and, as you would suspect, the CIA and the DoD Cyber Crime Center, among others. And thanks to some very smart and dedicated researchers, we now know a lot about how the VEP works, what its motivations are, and what its limitations are.
The VEP arose after the Heartbleed bug was made public in 2014, when we learned that the NSA had reportedly found this zero-day two years earlier and kept it to itself. Eventually, hackers leaked other government-discovered zero-days to the public (remember WannaCry, built on the leaked EternalBlue exploit?) and, of course, used them for nefarious purposes of their own. That is the danger of secrecy: eventually, someone else will find the zero-day and deploy it in the wild. The VEP was formed to manage these zero-days and bring more order and coordinated purpose to their use.
Initially, the inner workings of the VEP were shrouded in secrecy, but thanks to the efforts of the Electronic Frontier Foundation and the Electronic Privacy Information Center, the VEP's processes were made public several years ago in this series of reports. What I found interesting about the VEP's origin story is that it spanned several years, and its charter was revised several times across three different administrations (Bush, Obama, and finally, Trump) before we got some transparency into its processes.
Since then, security researcher Dr. Lindsey Polley has done amazing work taking a deeper dive into this entity, and she offers numerous recommendations on how it can better formalize its processes based on sound ethical principles. Polley discussed her doctoral dissertation on a recent Lawfare podcast, and I spoke to her last week as well to obtain further insights.
The gradual reveal of the VEP is an interesting example of government actually working as intended: a shadowy organization is eventually brought to light, where citizens can examine its operations and understand its innermost workings, all without compromising its classified mission. That isn't to say we know everything about it, but we now know enough to offer some intelligent advice and suggest improvements. “The VEP is not appropriately postured to make the most informed decision about how zero-days are used,” Polley writes in her dissertation. “Proper steps should be taken to protect the vulnerable population.”
Polley’s policy recommendations are carefully thought out and include specifics on how to make the VEP more accountable, how to balance the classified and unclassified parts of its mission, how to focus more precisely on the reasoning behind not divulging a zero-day, and whether the VEP should remain focused solely on software vulnerabilities.
She also notes that there is no outside-of-government representative on the VEP: it is all agency folks. Polley recommends that the group be expanded to include one, since its work affects private industry and the general public, whose points of view should be heard during deliberations. An outside perspective can also help keep the entity accountable to both Congress and the general public, although Polley is quick to admit that this won't be easy to implement.
She has also developed a new “virtue-based ethics framework for software vulnerability-oriented public policy” that is quite clever and could form the basis of future VEP decision-making. She said that, to her knowledge, this is the first such formulation in any cybersecurity-related policy area; you'll have to read her dissertation to fully understand what she has constructed.
One issue she brings to light is the lack of policies governing how the government purchases zero-days. Yes, indeed, this has happened and continues to happen, both for information gathering and for clandestine operations. But these purchased exploits are exempt from any current VEP review, which some privacy advocates consider a major legal loophole. Closing this loophole is also a delicate matter, because part of the utility of these zero-day purchases is their inherent secrecy.
Does Polley think her ideas will gain traction?
She certainly hopes so. Polley believes she has built a firm foundation for making the group more transparent and responsive. Certainly, no one else has brought together the various inner workings of the VEP with her level of public policy experience.