Is machine learning useful for cybersecurity?
At the Enigma Conference this month in San Francisco, Avast researcher Sadia Afroz answers that question.
It’s a fair question, given the significant challenges involved. Threat actors can use adversarial algorithms to evade cybersecurity systems, and building artificial intelligence that is free of fairness biases is a struggle in its own right.
So is machine learning in cybersecurity even worth it? The answer is yes. This month Avast artificial intelligence researcher Sadia Afroz will explain to San Francisco conference goers at Enigma 2020 how to build robust machine learning systems to defend against real-world attacks.
In a paper written with Avast’s head of AI Rajarshi Gupta, Afroz calls for new industry-wide consistency in cybersecurity. “We need a systematic approach to model the adversary of a machine learning system in security. Different papers using different incomparable adversaries make it hard to track progress in this area.” (See more about her talk here.)
Sadia Afroz spoke to the CyberSec & AI Prague conference in October. Avast and the Czech Technical University will co-host CyberSec & AI Prague again this year. Find out more.
A research scientist at the International Computer Science Institute (ICSI) at UC Berkeley, Afroz works with Avast’s team of data scientists and threat researchers from the world’s top schools, including Berkeley, Stanford, MIT, and Charles University. Avast’s user base of over 435 million makes up one of the world’s largest threat detection networks, allowing us to build the best machine learning models and stop cyberthreats before they do harm.
Our advanced artificial intelligence system uses machine learning to automatically collect and extract data from our entire user base, then trains every security module. When a new malware sample is found, our products are automatically updated with new models, providing crucial, up-to-the-second protection.
Sophisticated threat prevention in today’s world does not rely on a single machine learning engine that provides a silver bullet to all cyberattacks. Instead, it is a combination of multiple machine learning engines that work together to defend against attacks. The engines work across devices (both on the cloud and on PCs and smartphones), they use static and dynamic analysis techniques, and they are deployed in many of the layers of our defense engine.
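To make the multi-engine idea concrete, here is a minimal, hypothetical sketch of how verdicts from several detection engines (static, dynamic, cloud) might be combined into a single decision. The engine names, scores, and averaging rule are illustrative assumptions, not Avast's actual implementation.

```python
def combine_verdicts(scores, threshold=0.5):
    """Average per-engine malware probabilities and compare to a threshold."""
    avg = sum(scores.values()) / len(scores)
    return "malicious" if avg >= threshold else "clean"

# Hypothetical per-engine probabilities for one suspicious file
sample_scores = {
    "static_engine": 0.92,   # e.g., structural / signature-style features
    "dynamic_engine": 0.75,  # e.g., sandboxed behavior analysis
    "cloud_engine": 0.88,    # e.g., reputation from the wider user base
}

print(combine_verdicts(sample_scores))  # -> malicious
```

Real systems typically weight engines differently or use a learned meta-classifier rather than a plain average, but the layered-verdict structure is the same.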
In order to evaluate new and unknown threats, we’ve built a unique and sophisticated machine learning pipeline that allows us to rapidly train and deploy malware detection models within 12 hours. We also employ advanced techniques like deep convolutional neural networks to enhance our malware detection models. New security threats can appear suddenly and take new and unknown forms; in such situations, our ability to update our models rapidly ensures our users remain protected.
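A convolutional model itself requires a deep-learning framework, but the idea of feeding raw binaries to such a model can be sketched with the standard library alone. The snippet below shows one common, illustrative featurization — a normalized 256-bin byte histogram — of the kind a malware-detection model might consume; the actual features in Avast's pipeline are not described here and this is an assumption for illustration only.

```python
def byte_histogram(data: bytes) -> list:
    """Return a 256-bin histogram of byte values, normalized to sum to 1."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data) or 1  # avoid division by zero on empty input
    return [c / total for c in counts]

# Toy example: a few header-like bytes from a hypothetical executable
sample = bytes([0x4D, 0x5A, 0x90, 0x00, 0x4D])
hist = byte_histogram(sample)
print(len(hist), round(hist[0x4D], 2))  # 256 bins; 2 of 5 bytes are 0x4D
```

A fixed-size representation like this is what lets one model architecture handle files of any length, which in turn makes the rapid retrain-and-deploy cycle described above practical.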
This next-gen security technology and the data from our massive user base give us a clear advantage against hackers – and competitors. And it is this technology that allows us to automatically detect and block high-profile threats such as WannaCry, BadRabbit, NotPetya ransomware, and the Adylkuzz crypto-mining attacks, without requiring a single product update.
Avast AI can also work in your home. Avast Omni is a new type of security that protects you everywhere you connect. The network security solution was recently honored for its innovation at the Consumer Electronics Show.