Ethical AI requires ethical humans

Garry Kasparov 19 Feb 2019

Can AI be evil? Garry Kasparov explains how AI works, and where ethics come into play.

Davos 2019, the annual meeting of the World Economic Forum, took place in January in its namesake city of Switzerland. Unsurprisingly, AI emerged as one of the leading topics of discussion (with 40 sessions dedicated to it, second only to US-China trade). Most of the conversation, however, centered on articulating abstract principles about the importance of ethics in AI, or, at best, calls for collaboration and research in this area.

I do believe in the importance of expressing and debating the problems we will likely face as AI continues to grow in strength and scope. It will have an impact on the world beyond that of the internet—eventually. But we have reached a point where action is necessary, too, not just talk. Repeating high-minded principles without proposing concrete mechanisms for implementation or enforcement lets companies appear to address the problem while avoiding anything that affects their bottom line. Ethics cannot be treated as a mere publicity problem, any more than security can be.

Some of the points made at Davos echo sentiments I have expressed in the past and are useful as diagnoses of the impact AI could have on business and society. One view I’ve been expressing for years, in the face of increasingly dystopian, anti-tech sentiment, is that our technology is agnostic, a point I was pleased to hear Salesforce CEO Marc Benioff emphasize. The question of how we integrate it into our world becomes all the more important when we acknowledge this fundamental characteristic. It will empower us tremendously, for good or evil, the way splitting the atom can generate useful energy or make a devastating bomb.

The abstract conversations in Davos often sounded as if the participants believed we can create AI that is inherently good, or at least incapable of evil. This is much like believing we can raise human beings that way, an obvious fallacy. Of course, humans have free will and AI shows no signs of it, no matter how much autonomy we grant such systems. But ethics isn’t chess. We can’t trivially design machines that are more ethical than we are, the way a programmer can create a chess program that plays far better chess than they do. One key is to use these systems to reveal our human biases so we can improve ourselves and our society in a positive cycle.

This admission is only a starting point. Chief executives and policy leaders must use this understanding to adapt their practices—and hiring a chief ethical and humane use officer, as Salesforce recently did, is a welcome gesture. Ideally, ethical behavior is also profitable, but when it is not, it must be enforced—the very definition of law—and enforcement is what companies dread.

Another view expressed frequently at the Swiss gathering was the importance of humans working collaboratively with AI. As I argued extensively in my book Deep Thinking, and as a recent report now lists among its proposed best practices for integrating AI, the technology works better when humans play a role. AI has not demonstrated the potential to overtake humans in creativity or judgment. Its strengths lie in assimilating large quantities of data, tracking patterns, and making predictions, not in recognizing its own biases or interpreting nuanced social contexts.

Companies understand this, which is why social media platforms have employees dedicated to ensuring that their information-filtering algorithms are functioning according to plan. Similarly, medical professionals will still need to sign off on recommendations made by AI systems, applying their more intimate knowledge of the patient’s circumstances. And automated service systems will not displace human labor so much as free up workers’ time to deal with situations beyond an AI’s grasp. And that grasp is ever-expanding—we are training our replacements after a fashion—so we must stay ambitious with our goals in order to keep ahead.

How, then, should business leaders and policy makers apply these takeaways? I doubt that the strategy most commonly put forth by the technology industry will create meaningful change. According to this model, companies, without any oversight, should take steps on their own to mitigate the potential downsides and dangers of AI. But some of the required measures might impact profits, at least in the short term, and we have seen, time and again, the Silicon Valley giants prioritize shareholder interests. The latest example: even after years of heightened scrutiny, Facebook admitted in January that it had been paying teenagers $20 per month to install a “research app” that collected all of their phone and internet activity. It is difficult to put one’s faith in entities that have abused the public’s trust so consistently. In the realm of AI, too, I suspect the best we can expect are half-hearted measures that generate good publicity while skirting any substantive change in behavior.

That means there is a role for government to play, sooner or later, within sensible limits. A comparison, not too far-fetched I think, can be made with the railroad barons and financial titans of the early 20th century. In those cases, too, the vested interests were simply too overwhelming and too contrary to the public interest, and Teddy Roosevelt’s initiative to break up the trusts was appropriate. While I am far from an advocate for big government, having seen its worst possible face as a Soviet citizen, there are situations that call for regulation and oversight. The OECD has already begun holding conferences on AI to bring industry, academia, and government together, such as one last year where I contributed a brief opening video.

AI can’t be made intrinsically ethical just because it’s “intelligent,” any more than any other technology could be. There was no “ethical steam power” or “ethical radio,” only ethical uses of those technologies, shaped by norms and regulations. We’re still well behind the curve of an “ethical internet,” with abuses outstripping norms and laws. That doesn’t mean we should give up, only that we should try harder and do a better job.

US politicians are starting, albeit belatedly, to recognize the importance of the debate. Popular freshman Congresswoman Alexandria Ocasio-Cortez recently tweeted about the danger of bias in AI, which can magnify human assumptions at unprecedented scale. The government of Singapore is a step ahead: it released a framework on ethical AI at the Davos summit, with guidelines for the private sector on how to implement AI in a transparent, accountable, and “human-centric” way. Singapore has one of the world’s most favorable business climates, but its leaders recognize that long-term growth depends on promoting and enforcing clear standards. Ultimately, companies want the stability needed to make predictions about future performance. Abdicating responsibility over regulation to the private sector alone just leaves the field mired in uncertainty—a poor outcome for lawmakers, investors, and consumers alike.

While guiding frameworks are a useful starting point, what might solutions look like on a more practical level? One possibility that comes to mind, in light of the latest massive data breach, known as Collection #1, would be a centralized government service that notifies people when their accounts have been hacked. There have been scattered attempts at building such a tool, with some success, and it would be ideal to add the credibility, and perhaps the authority, of a public watchdog at some point in the future. Imagine how powerful such a notification system could be in helping people mitigate the damage of stolen information, in drawing attention to the gravity of the problem, and in acting as a deterrent against such attacks in the first place.
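To make the idea concrete, here is a minimal sketch of what such a lookup could look like, modeled on the existing Have I Been Pwned service rather than on any government system; the API key, email address, and helper name are placeholder assumptions of mine, and a real public service would push alerts proactively instead of waiting to be queried.

```python
# Illustrative sketch only: how a breach-notification lookup might work,
# using the public Have I Been Pwned API. The API key and email address
# below are placeholders, not real values.
import json
import urllib.error
import urllib.parse
import urllib.request

HIBP_API_KEY = "YOUR-API-KEY"  # hypothetical key; the real service requires registration

def breaches_for(email: str) -> list:
    """Return the known breaches an email address appears in (empty if none)."""
    url = ("https://haveibeenpwned.com/api/v3/breachedaccount/"
           + urllib.parse.quote(email))
    req = urllib.request.Request(url, headers={
        "hibp-api-key": HIBP_API_KEY,
        "user-agent": "breach-notify-sketch",
    })
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # 404 means the address is not in any known breach
            return []
        raise

if __name__ == "__main__":
    for breach in breaches_for("someone@example.com"):
        print(breach["Name"])  # a notification service would alert the user instead
```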

As always, I end on a note of caution and personal responsibility—it is ultimately up to the consumer to protect her digital information, particularly as AI continues to advance rapidly while regulatory frameworks struggle to catch up. Do the little things you can control. Or, in the more poetic maxim attributed to Oliver Cromwell, “Trust in God and keep your powder dry.” Make yourself less vulnerable to hackers by using a password manager, never reusing passwords, and being wary of emails that request information about your accounts. Reusing passwords when breaches are inevitable is like having the same key for your car, your house, and your bank accounts.
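On the password front, a few lines of code already go a long way. The sketch below uses the free Pwned Passwords range API to check how many times a password has appeared in known breaches, without ever transmitting the password itself; the helper name times_pwned is my own.

```python
# Minimal sketch: check a password against known breach corpora via the
# Pwned Passwords k-anonymity API. Only the first five characters of the
# SHA-1 hash leave your machine; the full password never does.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen("https://api.pwnedpasswords.com/range/" + prefix) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)  # times this password appears in known breaches
    return 0

if __name__ == "__main__":
    # Any password that has been seen even once should never be reused anywhere.
    print(times_pwned("password123"))
```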

As Davos 2019 confirmed, economic and political leaders are still grappling with how best to monitor the digital world—and I’m realistic enough to believe they may not have the public’s best interests at heart. In this climate of rapid technological change, I would encourage everyone to be optimistic about the possibilities that AI and other groundbreaking technologies can unlock, while remaining highly vigilant about how they interact with these new tools.
