Despite widespread fears, artificial intelligence can help cybersecurity by blocking fake news and other threats
Headlines about ethical AI and stories about the dangers of artificial intelligence make me wonder if there would have been articles about ethical hammers or the dangers of knives had media and social media existed back when they were invented. Hammers and knives are indeed dangerous, but are also essential tools we cannot imagine living without.
AI is far bigger and broader than any Stone Age tool, inviting comparisons to the harnessing of electricity, or to the internet – technologies that changed practically everything in our society far beyond what could have been predicted. Intelligent algorithms work in more subtle ways, empowering existing systems and creating entirely new ones. Data analysis becomes far more powerful, revealing patterns the human mind could never imagine. Efficiencies and productivity blossom in everything from chip design to weather prediction.
So too do weapons become more powerful, and here I’m not talking about sci-fi robots, or even the real robots in viral videos that are always followed with hundreds of “we’re doomed!” mentions. There’s no reason to believe that our amazing intelligent tech creations will turn against us any more than our hammers and knives, but just like them, they are dangerous in the wrong hands.
Automated cyberattacks have become the norm, and they are increasingly being enhanced with AI tools. Phishing attacks can harvest and assemble personal data from across the internet to compose and send millions of messages. Learning algorithms track which messages have the best results and adjust their tactics for the next round. Automated attacks probe every possible device looking for points of entry, and with billions of new devices coming online every year, many of them with terrible security, the defense badly needs an AI boost as well.
Garry Kasparov and Rajarshi Gupta at the InfoSecurity event in Mexico City.
There are few people who can talk as knowledgeably about providing that security boost as Rajarshi Gupta, the head of AI at Avast. I recently enjoyed sharing the stage with him, and several excellent Mexican dinners, at the InfoSecurity event in Mexico City. He presented some remarkable statistics about cyberattacks happening in real time, and discussed how the Internet of Things is creating new problems for security researchers. You can install security software on your PC and your phone, but what about your toaster? It may not seem like much of a threat if a hacker takes over a home appliance (even if it’s one that can set your house on fire, as Rajarshi mentioned!), but it can be the weakest link in a chain into your network.
With so many new devices coming online, and adaptive malware that can spread like wildfire, Rajarshi’s mission is a critical one. Machine learning is indispensable because it is the only way for security tools to keep up with malware that is constantly evolving. A standard database of virus signatures is out of date in seconds.
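To make that limitation concrete, here is a toy sketch of the difference between signature matching and feature-based scoring. Everything in it (the sample payloads, the hash database, the features) is invented for illustration and is not a description of Avast’s actual techniques; real systems train models on millions of samples rather than three hand-picked features.

```python
import hashlib

# Toy signature database: hashes of known-bad payloads (hypothetical samples).
KNOWN_BAD = {hashlib.sha256(b"evil-payload-v1").hexdigest()}

def signature_match(payload: bytes) -> bool:
    """Classic signature check: exact hash lookup against known malware."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

def heuristic_score(payload: bytes) -> float:
    """Crude stand-in for a learned model: score structural features of the
    payload instead of its exact bytes (the features here are invented)."""
    features = [
        b"payload" in payload,        # suspicious token survives mutation
        payload.startswith(b"evil"),  # shared prefix across variants
        len(payload) < 64,            # tiny droppers are common
    ]
    return sum(features) / len(features)

original = b"evil-payload-v1"
mutated  = b"evil-payload-v2"   # one byte changed: the hash no longer matches

assert signature_match(original)        # the known sample is caught
assert not signature_match(mutated)     # a trivial mutation evades the signature
assert heuristic_score(mutated) > 0.5   # feature-based scoring still flags it
```

The point of the sketch is the asymmetry: a signature dies with the first byte of mutation, while anything that scores general properties of a sample keeps working across variants.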
Resources play an important role in this constant battle. As Rajarshi explained in one of our interviews with a Mexican news magazine, personal and corporate defenses can easily be overwhelmed by the massive resources of state actors. National hacker armies are usually focused on each other, but more and more they are going after softer targets like power grids and, famously, the email accounts of political party leadership. Hacking can change the geopolitical world cheaper and faster than any tank battalion.
Deterrence is the only effective defense in these circumstances. I know defense is Avast’s job, and very important even if “hacking back” isn’t allowed, but my philosophy as a chess player was that the best defense was a good offense. That’s true in the geopolitical game as well, but in this case it’s the threat of a good offense that matters. There is too much cyber-territory to cover for defense-only to work well, or for very long, at the nation-state level.
AI also holds promise for helping us defend ourselves, and our society, from fake news, fake video, and other forms of misinformation attacks. The human eye and ear might not be able to tell the difference between a real video and an AI-altered fake, but AI-detection algorithms can. Ideally, fakery can be eliminated before it’s distributed, or labeled automatically before it’s published.
It will be a long and difficult fight, but we’ve fought similar battles before. Some experts even thought that email spam would be the end of the internet, but many of us barely notice it anymore. Note that it wasn’t only client-side filtering that did the job, although it’s still important. Stiff penalties including jail time were meted out to spammers, creating a strong deterrent effect. Regulations for maliciously altered videos and fake news will also run into free speech considerations, but the precedent is there for waging these fights successfully.
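The client-side filtering that helped tame spam was, in many implementations, a Bayesian text classifier. A minimal sketch of the idea follows, with an invented three-message training corpus standing in for the millions of labeled emails a real filter learns from:

```python
from collections import Counter
import math

# Tiny invented corpus; real filters train on millions of labeled messages.
spam = ["win money now", "free money offer", "claim your free prize"]
ham  = ["meeting agenda attached", "lunch tomorrow", "project status update"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace smoothing so an unseen word doesn't zero out the whole score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg: str) -> bool:
    return (log_likelihood(msg, spam_counts, spam_total)
            > log_likelihood(msg, ham_counts, ham_total))

assert is_spam("free money")
assert not is_spam("project meeting tomorrow")
```

The filter never needs an exact copy of a spam message; it only needs the vocabulary of spam to differ statistically from the vocabulary of legitimate mail, which is why the same idea generalizes to other misinformation-detection problems.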
Human and machine intelligence will have to combine to create the robust structures required to fend off these threats. I’m glad AI and people like Rajarshi Gupta are on our side. For more details, read his article about our Mexican adventure here.