Garry Kasparov talks AI and privacy ahead of Avast’s CyberSec&AI Connected

Malea Lamb-Hall 23 Jul 2020

Garry Kasparov, Chess Grandmaster and renowned commentator on human rights, AI, and cybersecurity, looks ahead to his appearance at Avast’s CyberSec&AI Connected this October

Garry Kasparov is one of the greatest chess players of all time. In 1985, aged just 22, he became the youngest chess world champion in history. His matches against Deep Blue, the IBM supercomputer, brought the potential of AI to the world's attention.

Garry has become a hugely respected commentator and author on AI, human rights, cybersecurity, and politics, making him an ideal speaker at Avast's virtual conference CyberSec&AI Connected. The event brings together leading minds in AI, security, and machine learning from around the world to examine AI for privacy and security.

In this extract from our CyberSec&AI Connected blog, Garry, who is a Security Ambassador for Avast, talks about his appearance at the conference as well as his views on AI, privacy and security. 

Your battles with Deep Blue did more to bring AI into the public consciousness than any other event. How much did Deep Blue help people grasp the potential of AI and how much did it perhaps lead to people misunderstanding what AI is?

It did a lot of both, in hindsight. Every watershed moment in technology creates a lot of hype, misconceptions, and then, over time, it becomes mythology. As the saying goes, we call something AI until we find a way to make it work, then we call it something else. This is why arguing about whether Deep Blue was AI is irrelevant when talking about public perception and results. Deep Blue won, that's what mattered. It played world champion level chess, which was its only purpose. It was a shock, not just to me, that suddenly forced the world to confront a future with machines competing for intellectual work, not just physical labor and routine tasks.

Computer experts and philosophers can debate method and output endlessly. But as an advocate and observer, not an expert or philosopher, what matters to me in the end is what it can do, not how it does it—even when that is fascinating! That is, how our intelligent machines help us, how they advance our understanding of the world, make us more productive, safer, healthier, everything our tech has always done.

You have been vocal about the advantages AI can offer society. Why do you think there is so much fear and confusion around AI? 

Partly because we fear anything powerful and new, any changes we cannot predict. Nuclear power, the internet, AI, they are too big to comprehend, and so that leads to some instinctive fear. Next, our society has become wealthy and risk-averse, so anything that looks like it might upset the status quo is seen as bad. Will it replace too many jobs? Will it replace ME? And the news always loves a scary headline. It's a shame, because AI is a huge opportunity for growth in practically every dimension, but if we don't press forward ambitiously, the negative aspects like job losses will only mount, without the benefits growing fast enough to offset them.

Lastly, there’s the impact of decades of sci-fi about killer robots, super-intelligent machine dystopias, etc. This is part of a cultural trend toward tech-phobia that coincided with the environmental movement in the 1970s, although the anti-nuclear sentiment had a role as well. Instead of being amazed by incredible new tech, like robots and AI, we immediately turn to wondering how they might harm us, which is ridiculous.

You’ve said previously that progress in areas like AI cannot be stopped and that if you restrict growth in AI in Europe and America, another region will simply move ahead. However, you’ve also acknowledged that companies that generate vast amounts of private data such as Facebook and Google do need more public control. How can that be brought about?

I don’t like the word “control,” because transparency and pressure from the public tend to lead to regulations, not takeovers. These giant companies have unprecedented access to the lives, the data, of billions of people. Relying on profit motive, shareholder interest, and media investigations alone to navigate this would be irresponsible. I’m no fan of government interference or heavy regulations that might stifle innovation, but there is massive public interest here, so public oversight is required.

This isn't a matter of uninformed lawmakers telling Silicon Valley how to do things. The pressure should be toward transparency and accountability, and toward the bigger issues of the right to privacy and data control.

Will AI strengthen or erode human rights? And how would you like to see AI be utilized to improve or secure them?

I always say that tech is agnostic; it's a tool. Was the invention of the hammer good or bad for human rights? But tools are good for human advancement, and that, eventually, is good for human rights. But the present is always what's on everyone's mind. Regarding human rights, people tend to think of privacy nightmares, like the potential for AI to help dictators surveil their subjects. Or how algorithms can build prejudice into systems. They want to protect human rights from AI. That's the bad news.

But AI can also help us root out inequality, discrimination, when designed and directed well. AI can help track refugee populations, analyze bomb patterns to attribute war crimes, and keep an eye on the bad guys just as they try to keep an eye on everyone else. Again, it’s a tool, and there’s no way to make a tool so that it only does good in the world, as nice as that sounds.

How can we educate people, especially the younger generation, about AI? After all, AI is going to reshape their world and the work they do in it. Is there enough public debate on the topic?

The young people are the ones who are going to reshape the world with AI, because they’ll grow up with it, not see it as an alien or a science project. 

What are you looking forward to discussing in your fireside chat at CyberSec&AI Connected?

I'm always most interested in where cybersecurity and AI come together with trends in society: privacy and rights, the roles public agencies and private companies play, and, of course, where the individual stands in the middle of it all. And it would be wrong to completely avoid discussing the unique moment we're all living through today. I know we all need a mental escape from the crisis, but the pandemic is also shaping our professional world, our technological world. It is affecting how we deal with information and accelerating what we need and expect from our intelligent machines.

Your fellow keynote speaker at CyberSec&AI Connected will be co-founder of Tor, Roger Dingledine. What would you like to ask him?

I’d like to ask about how he sees Tor and other privacy technology fitting into the problem of accountability online. In human rights advocacy, anonymity can be a life-saving necessity. There’s a reason dictatorships don’t want anyone to ever be anonymous, online or off. But anonymity also contributes to everything from cybercrime to online harassment and other types of abuse. Balancing responsibility, holding people accountable, and providing necessary protections is a difficult task. Is total anonymity, like unbreakable encryption, simply both a boon and a curse, forever? Or can we have it all?


To see the full interview with Garry and hear more about his thoughts on AI, privacy, and cybersecurity, visit our CyberSec&AI Connected blog [insert link]. To secure your place at CyberSec&AI Connected, visit our booking page. As a virtual event, CyberSec&AI Connected lets you join from wherever you are in the world and be part of a truly international audience of AI, machine learning, and cybersecurity experts. Secure your spot here.