A discussion of how – and why – adversaries are using artificial intelligence to juice up malicious activities
When antivirus (AV) software first arrived in the late 1980s, the science of combating computer viruses was very straightforward.
AV kept close track of known malicious files, then quarantined or deleted any that had managed to embed themselves on the protected device. At its core, AV still does that today.
Threat actors, of course, responded by engaging AV vendors in what has turned out to be a decades-long contest of one-upmanship. They quickened their pace of creating sprawling families of malware, putting AV vendors in an endless chase to identify, and blacklist, new malware variants as quickly as possible.
What began as a game of checkers quickly advanced to chess, and then to 3D chess. That brings us to today, where AV vendors and malware distributors are engaged in a match infused with artificial intelligence, or AI.
I recently visited with Rajarshi Gupta, head of AI at Avast, who gave me a breakdown of how threat actors today are leveraging AI to support their malicious activities. Here are excerpts of our discussion, edited for clarity and length.
Acohido: Can you frame how AI has come into play in dealing with adversaries?
Gupta: We’ve really pushed the frontiers of AI in the last decade in things like video, scene understanding, natural language processing and even driverless cars. But if you think about it, security is the only domain where we have to deal with a true adversary: someone who is very smart, who has every economic incentive, and who can use the best tools available, including AI. To combat this, we need to utilize the best tools, and use them better than the dark side. That’s why we’re seeing the security industry continuously adopt more and more AI techniques to do battle with the black hats.
Acohido: And, conversely, AI is being increasingly leveraged by the attackers?
Gupta: Yes. There’s really nothing new in the basic cat-and-mouse chase that’s been taking place for 30 years. It’s just that both sides are now using AI to improve their respective games.
Acohido: Can you walk me through an illustration?
Gupta: A common use case is faked URLs. We are very good at identifying malicious content that is put up on a URL -- and at taking down that URL quickly. Five years ago, these faked URLs were strings of random letters and numbers. Then our detection algorithms got smarter and flagged them as bad, so the malicious actors had to keep inventing new URLs and jumping to them.
So the attackers moved to using AI to generate URLs from three or four words randomly picked from a dictionary, like lionfreezertable.com. Of course, our detection algorithms had to get cleverer, and they did. So the attackers advanced to contextual URLs. They’ll take the name of a real institution and a geographical location, like the National Bank of Scotland, and then create a URL for the Scotland National Bank, which doesn’t exist. By stringing together contextual words, they can make a faked URL look valid.
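To make the progression concrete, here is a minimal sketch of the two generation styles Gupta describes. The word list, the seed and the helper names are illustrative assumptions for this example, not details from Avast.

```python
import random

# Illustrative word list; real generators draw on much larger dictionaries.
WORDS = ["lion", "freezer", "table", "river", "stone", "candle", "harbor"]

def dictionary_domain(rng: random.Random, n_words: int = 3) -> str:
    """Concatenate randomly chosen dictionary words into a plausible domain."""
    return "".join(rng.choice(WORDS) for _ in range(n_words)) + ".com"

def contextual_domain(institution: str, place: str) -> str:
    """Reshuffle a real institution name and location into a domain that
    doesn't exist but reads as if it should."""
    tokens = place.split() + institution.split()
    return "".join(t.lower() for t in tokens) + ".com"

if __name__ == "__main__":
    rng = random.Random(7)
    print(dictionary_domain(rng))                          # lionfreezertable.com-style output
    print(contextual_domain("National Bank", "Scotland"))  # scotlandnationalbank.com
```

The second style is what makes detection harder: every token is a legitimate word arranged in a plausible order, so simple randomness checks no longer flag the domain.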
Acohido: What about when it comes to delivering the actual malware?
Gupta: The end game, obviously, is to generate and distribute malware that can evade our detectors, and we are seeing the black hats use AI techniques to do that. One way is to automatically generate code, insert it into files, and change the malware so that it evades our detection algorithms. Another way is to test code samples against our backend, to try to fool our antivirus classifiers. We’ve seen examples where someone will zero out certain bytes of the latest iteration of a malware variant, and try this trick 400 times in succession. The intent is to figure out which parts of the file we are detecting as malicious. In fact, during 2018 we identified 422 different Avast clients being used in this manner to attack our backend.
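A rough sketch of that probing pattern is below. The `classify` callback is a hypothetical stand-in for querying a detector’s verdict -- it is not Avast’s API -- and the window size is an arbitrary assumption.

```python
from typing import Callable, List

def find_detected_regions(
    sample: bytes,
    classify: Callable[[bytes], bool],  # hypothetical detector: True if flagged
    window: int = 256,
) -> List[int]:
    """Zero out one window of bytes at a time and re-query the detector.

    If blanking a window flips the verdict from flagged to clean, the prober
    learns that this region of the file contributes to detection.
    """
    informative_offsets: List[int] = []
    for offset in range(0, len(sample), window):
        mutated = bytearray(sample)
        end = min(offset + window, len(sample))
        mutated[offset:end] = b"\x00" * (end - offset)
        if not classify(bytes(mutated)):  # verdict flipped: this region matters
            informative_offsets.append(offset)
    return informative_offsets
```

Run over even a modest file, this loop produces the hundreds of near-identical queries Gupta mentions -- presumably the kind of telltale telemetry pattern that let Avast spot those 422 clients.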
Acohido: Let’s say the attacker is successful in getting malware onto my computing device. As a consumer, what’s my larger concern?
Gupta: Once the attacker has infected one device in the home, it’s now possible to use AI to automate spreading out and infecting other devices inside the home. Imagine an attack run by scripts that get smarter and smarter at looking around and finding other devices to spread to, all in an automated way. We’ve not seen a lot of this yet, but we feel the AI techniques are available to accelerate this sort of automated attack.
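As a minimal illustration of the automation involved, the discovery step such a script would start with might look like the sketch below; the subnet prefix and port list are assumptions for the example, and a real attack script would layer decision logic on top of this.

```python
import socket
from concurrent.futures import ThreadPoolExecutor
from typing import List, Tuple

COMMON_PORTS = [22, 80, 443, 8080]  # illustrative; real scripts probe far more

def reachable_services(host: str, timeout: float = 0.5) -> List[Tuple[str, int]]:
    """Return (host, port) pairs that accept a TCP connection."""
    open_services = []
    for port in COMMON_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_services.append((host, port))
        except OSError:
            pass  # closed, filtered, or host unreachable
    return open_services

def sweep_subnet(prefix: str = "192.168.1.") -> List[Tuple[str, int]]:
    """Enumerate a typical home /24 and collect responsive services in parallel."""
    hosts = [prefix + str(i) for i in range(1, 255)]
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(reachable_services, hosts)
    return [svc for per_host in results for svc in per_host]
```

Gupta’s point is that the “get smarter” part -- deciding which discovered device to attack next, and how -- is where AI comes in; the sweep itself has been automatable for years.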
Acohido: Are homes more vulnerable than companies?
Gupta: If you think about it, a smart home, with many connected devices, is more at risk for this sort of automated attack than, say, an enterprise. When someone has infiltrated a particular company, they’ll likely lie in wait and spend a significant amount of time running manual scripts to get to what they’re after. The potential payoff is much higher, and that presents a different security challenge.
In a home, the potential reward isn’t as great, and the attacker can’t afford to spend days or weeks probing every home. The incentive is greater for him to use an automated attack that jumps across to other devices in a home as quickly as possible, and then to repeat that process across a thousand homes just as quickly.
Acohido: At this point in time, where are the hot spots?
Gupta: The bad actors are using AI techniques exactly where we have robust AI defenses, like malware detection, phishing mitigation and malicious URL detection. These types of attacks are called Deep Attacks. The security industry has been working hard for many years, raising the barriers, so the bad guys naturally need to up their game.
In general, security teams have stayed ahead of the curve. At Avast, we’ve continued to push the envelope with AI to combat the bad actors, and keep hundreds of millions of our users safe and secure.