Nicolas Papernot of the University of Toronto sees CyberSec & AI Prague as an opportunity to discuss guiding principles for AI
As the artificial intelligence research community comes together for CyberSec & AI Prague next month, guiding principles are on Nicolas Papernot’s mind. “I’m very excited about this conference because at this time this community has an opportunity to contribute foundations to significant security approaches in the area of machine learning and artificial intelligence,” says the assistant professor of engineering at the University of Toronto and the Vector Institute.
Papernot, whose research interests sit at the intersection of security, privacy, and machine learning, believes there is growing recognition that machine learning exposes new security and privacy vulnerabilities in software systems. But the tech community’s understanding of the nature and extent of these vulnerabilities remains limited. As cybersecurity researchers explore this new area, their approach can devolve into an “arms race,” in which defenders weigh risk against the cost of protection and keeping up with attackers becomes a never-ending struggle.
“What we still need to work out as a community is a more principled approach,” he says. “There are many aspects that a good security mechanism in machine learning and AI should exhibit. We can draw inspiration from key principles.”
In his talk at this very future-facing conference, Papernot is going old school, turning to the design principles enumerated by Jerome Saltzer and Michael Schroeder in their 1975 article “The Protection of Information in Computer Systems.” That text laid out now-classic principles of secure system design, including open design and economy of mechanism: keeping a design as simple and small as possible.
Researchers in cybersecurity and privacy adopted many of Saltzer and Schroeder’s suggestions a decade or two ago and “now we’re seeing the benefits,” Papernot says. “Right now in the privacy space we know a lot more about how to defend than we know how to attack. That difference, in large part, is due to the fact that the privacy world has spent a lot of energy as a community defining what privacy should be. The result is a framework called differential privacy, which captures all attackers regardless of what they know or can access.”
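Differential privacy makes that guarantee mathematically precise: an algorithm’s output distribution may change only slightly when any one person’s data is added or removed, no matter what side knowledge an attacker holds. As a minimal illustrative sketch (not drawn from Papernot’s talk, and with function and variable names chosen here for illustration), the classic Laplace mechanism releases a numeric statistic under this guarantee:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy.

    Adds Laplace noise with scale sensitivity/epsilon: changing one
    person's record can shift the query result by at most `sensitivity`,
    and the noise masks that shift up to a factor of e**epsilon.
    """
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count. A counting query has sensitivity 1,
# since adding or removing one individual changes the count by at most 1.
private_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
print(private_count)
```

The guarantee holds against any attacker because it is a property of the mechanism itself, not of assumptions about what the attacker knows, which is the contrast Papernot draws with the attack-by-attack arms race.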
Building systems that proactively adhere to key design principles empowers the AI and machine learning community to rise above the arms race, in which attackers have the advantage, Papernot says.
Other fascinating speakers from the overlapping worlds of cybersecurity and artificial intelligence are coming to the capital of the Czech Republic on Oct. 25 for CyberSec & AI Prague. Attendees from across the cybersecurity world, engineers among them, will have the chance to build both their knowledge and their professional networks, and students will present their own ideas in a poster session.