DEF CON Interview: The rise of AI, machine learning, and their impact on security

Marina Ziegler, 31 July 2017

Avast Security Ambassador Garry Kasparov spoke at DEF CON. We talked to him and our Threat Intelligence Director Michal Salat about man-machine collaboration.

At the DEF CON hacker conference in Las Vegas, Garry Kasparov, Avast Security Ambassador and former chess world champion, discussed the rise of artificial intelligence, and how humans and machines can interact to get the most out of each other, in his presentation “The Brain’s Last Stand.” Prior to his speech, I chatted with him and Michal Salat, Threat Intelligence Director at Avast, about artificial intelligence, the interaction between man and machine, and their impact on our society.

Marina: Garry, in your speech abstract you wrote that humanity should embrace intelligent machines, instead of fearing their rise. We are in the middle of the 4th industrial revolution. Robots are used in production, autonomous vehicles are entering our streets, and we are ever-connected. As processes become more and more automated, an enormous wave of job losses may lie ahead of us, and we increasingly depend on machines that are vulnerable to cyber attacks, cyber espionage, and manipulation. How can you be so positive about our digital future?

Garry: It’s in my nature to be optimistic, and I’m excited about what the future can bring us. Intelligent machines empower us to explore space and our oceans. As machines become more intelligent, we can finally discover things we have been trying to understand for centuries, and we will discover things we didn’t know existed. Machines will take over jobs, and there will be a rough period of job losses for many. With digitalization, however, new types of jobs will also be created that we can’t imagine today. Social media managers didn’t exist a decade ago, and antivirus analysts didn’t exist thirty years ago. As technology evolves, new types of jobs will emerge, often more specialized ones than those we have today.

Marina: As our world becomes more and more digital, are we at risk of losing control over the machines we are creating?

Garry: Machines can recognize patterns and analyze vast amounts of data. However, to be able to do so, they need to be fed with algorithms developed by humans. There is still a gap between true intelligence and what machines can achieve.

Marina: In this regard, was there a gap between Deep Blue’s intelligence, IBM’s chess computer you played against twenty years ago, and human intelligence?

Garry: The computer played at the level of a chess grandmaster, but its speed, the ability to analyze twelve million positions per second, doesn’t mean the machine was intelligent. It was able to recognize patterns and analyze vast amounts of data. We humans don’t perform at our peak continuously; we make mistakes. If a machine plays at a higher level, it will win. In the end, the development of Deep Blue was a dead end, a result of IBM’s work on parallel processors, a pyramid of processors that was able to work very fast.

Michal: Speed is also one of the key advantages of machine learning in security. The speed at which data is generated and processed matters, because security threats spread and morph extremely fast. Most threats have very short lifespans; some exist for only a few minutes. Before being detected, they try to morph into something else, which is one way malware tries to evade automatic detection systems. Machines can act much faster than human analysts. Moreover, the ability to leverage large quantities of data is essential to detect threats automatically. The quantity of data determines the insights we can potentially gain and whether our machine learning network can work efficiently, and a high variety of data is just as essential. We need to feed our engine both clean and malicious files, so it can learn to distinguish between them.
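The idea Michal describes, training on labeled clean and malicious samples so the system can classify new files on its own, can be illustrated with a deliberately tiny sketch. This is not Avast’s actual pipeline; the feature names and numbers are hypothetical, and real detection engines use far richer features and models. Here, each file is reduced to two made-up features, and a nearest-centroid rule assigns new samples to whichever labeled group they sit closest to:

```python
# Toy illustration (NOT a real detection engine): learn the average
# feature vector ("centroid") of clean and of malicious training
# samples, then label new files by their nearest centroid.

def centroid(samples):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(clean, malicious):
    """Build a model: one centroid per class label."""
    return {"clean": centroid(clean), "malicious": centroid(malicious)}

def classify(model, sample):
    """Return the label whose centroid is nearest to the sample."""
    return min(model, key=lambda label: distance_sq(model[label], sample))

# Hypothetical features per file: [byte entropy, suspicious API imports]
clean_samples = [[3.1, 0], [4.0, 1], [3.5, 0]]
malicious_samples = [[7.2, 9], [6.8, 7], [7.9, 12]]

model = train(clean_samples, malicious_samples)
print(classify(model, [7.0, 8]))   # lands near the malicious centroid
print(classify(model, [3.3, 1]))   # lands near the clean centroid
```

The sketch also hints at why data quantity and variety matter: with only a handful of narrow samples, the centroids are a poor summary of each class, and a morphed threat that looks unlike the training data will slip past the boundary.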

Garry: This shows very well that in order to function, machines need to be fed by algorithms, and this is still done by humans, in this case by human threat analysts.

Michal: Absolutely. Human threat analysts are needed to train the machines, and in malware analysis we humans still win in certain areas, for example we are able to leverage our expert knowledge better, and to operate and understand threats in a broader context, related to social and geopolitical contexts.

Marina: What if machines are abused for sabotage and cyberwarfare?

Garry: This is already happening, but the problem isn’t the machines, it’s the people who use them. Technology itself is agnostic; its power to promote good or evil depends on who controls it. This has always been the case. Dictators and terrorists see the Internet and social networks as a new communications pathway to spread their views, propaganda, and fake news. Because we cannot keep technology out of their hands, we must do our best to safeguard it against impending dangers. Every disruptive invention can be used for good and for evil, and brings with it unintended consequences, necessitating another cycle of innovations to address them. It’s an eternal cycle. We cannot go back; we must stay one step ahead.

Michal: It’s a cat-and-mouse game we see every day in our threat intelligence work. As new technology evolves, like the Internet of Things, our lives become more convenient, but cybercriminals see new opportunities: attacking users and their devices to collect data they can monetize in some way, or abusing hacked devices to build a botnet that takes down websites and critical server infrastructure. Therefore, in security we are constantly working on solutions that stay a step ahead of the bad guys.

Marina: So are machines only dangerous if they are abused by humans?

Garry: I believe that the dystopian fear of intelligent machines developing their own will and taking full control over our lives and the world is nothing but a good story for Hollywood. Whenever new technology has emerged in our history, critics predicted that something horrible would happen, but we’re still here. It’s part of the history of our civilization that machines take over work that was previously done by humans. But in the end, artificial intelligence will never be able to surpass us humans. Machines don’t have passion, they don’t dream, and therefore they cannot be creative. Great literature, music, and art cannot be broken down into algorithms.

Thank you, Garry and Michal, for these interesting thoughts and insights.