Fighting hate and saving speech

Garry Kasparov 1 Aug 2017

Garry Kasparov on how the internet magnifies what is already a delicate balance between regulating defamatory language and allowing for free expression.

A new law passed in Germany in June requires social media companies to delete content that qualifies as hate speech within 24 hours, or face fines starting at $5 million and reaching $57 million. The law and the controversy in which it quickly became embroiled have raised many important questions regarding the nature of hate speech, the tradeoff between free expression and regulation, and the roles of private companies and government authorities in overseeing online speech, to name just a few.

The problem of monitoring digital content came up prominently in last year’s presidential election in the United States, so many of these issues are already at the forefront of public debate. But Germany’s history places it in a unique position in terms of free speech, raising the stakes for any hateful commentary. With strict laws outlawing hate speech already on the books—carrying a punishment of up to five years’ imprisonment—you might consider these new laws simply an extension of existing policies. But, as I have aimed to do consistently in my monthly blog posts here, we have to ask two crucial questions: how does the internet transform this issue, and how should we respond?

A good starting point for our discussion is gaining greater clarity on what counts as hate speech in the first place. If we can’t agree on the types of statements that cross the line, then how can we expect to find a common regulatory solution? The legislation recently passed in Germany simply transfers the definition the country already uses offline to the world of Facebook, Twitter, and YouTube. What is already illegal as a public statement, such as a denial of the Holocaust published in a print magazine or spoken on a television news program, is now equally unacceptable as a post or comment on the internet.

This raises an obvious problem, namely that the national borders within which Germany can regulate its public discourse don’t exist online. A post generated in a country without stringent hate speech laws would have to be removed from the internet in another nation, setting aside whether its true origins can even be determined. And what does “origin” even mean online? The citizenship or residence of the individual who made the post? The location of the server on which the post resides? The home nation of the corporation that owns that server? You begin to see why the internet has been such a boon for lawyers! Is this what we want for a universal platform that intrinsically transcends state boundaries?

After all, if Germany can exert enough pressure on a social media company to enforce its censorship standards outside of Germany, what about the standards of other nations and groups, especially authoritarian ones? Fundamentalists of quite a few religions already press to ban images of women wearing attire they consider inappropriate. China would be happy to extend its domestic internet censorship of names of dissidents and terms like “Tiananmen Square” to the entire internet. Patrolling the limits of hate speech online is much harder than patrolling the national borders of a nation-state.

We can see how this ties back to our central question: how does the internet magnify what is already a delicate balance between regulating defamatory language and allowing for free expression? We must contend now not just with the politics and culture of a single country, but with a globalized digital sphere that brings dozens of ethnicities, languages, and religions into contact. Often, the confrontation with so many discordant worldviews can be jarring. How can we create policies that allow individuals to gain new perspectives through their online experience, without negating universal conceptions of right and wrong?

On this topic, I believe we cannot be relativists, arguing that every opinion deserves equal standing. Objectively, certain countries and cultures are ahead of others in what I will call “moral evolution.” America’s Founding Fathers did not see slavery as immoral; today, we find the practice morally unconscionable. I believe other countries will come to the same conclusions about certain belief systems that they currently sanction. The Universal Declaration of Human Rights would be a fine objective to make truly universal. In the meantime, we must find a way to coexist on platforms so many of us share—and that means instituting regulations that respect our most deeply held values without inhibiting cultural sharing and learning.

The debate over the substance of hate speech is inseparable from practical concerns about its policing. On this point, too, current legislation does not offer a satisfactory answer. It shifts the responsibility from the government to tech companies, which must either create large task forces of employees for this purpose or develop human-trained algorithms that can make the call. While some cases are obvious, the exceptions are where complications arise, both technically and ethically.

A Westerner might think that the swastika is a clear case, and one easily detected by an image-matching algorithm. That might be accurate 95% of the time, but the symbol the Nazis made hateful remains quite common in Asia even today, especially in India, Nepal, Sri Lanka, and China, as a Buddhist, Hindu, and Taoist symbol whose traditional and religious uses predate the Nazis, and Germany itself. A knowledgeable human can tell the difference between the swastikas at a neo-fascist rally in Hamburg and a mosaic in an Indian temple, but can a machine? As usual, context is everything, and context is exactly what machines are bad at, limited as they are to their data set and strict rules. You can expand the machine’s context by giving it even more data, especially about the person posting, but then we crash into the privacy issue. How much information should we have to provide to a company or government to prove our innocence to an algorithm?
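To make the context problem concrete, here is a minimal sketch in Python. Every name, field, and category in it is my own invention for illustration, not any platform’s real system. It contrasts a context-free rule, which removes everything the image matcher flags, with a context-aware rule that spares religious uses only because it is fed extra metadata about the page:

```python
# A toy sketch (invented for illustration, not a real moderation system)
# contrasting a context-free takedown rule with a context-aware one.

from dataclasses import dataclass

@dataclass
class Post:
    matcher_flagged: bool  # a hypothetical image matcher found a swastika
    caption: str           # text accompanying the image
    page_topic: str        # e.g. "hindu-temple-art", "political-rally"

def naive_filter(post: Post) -> bool:
    """Context-free rule: remove anything the image matcher flags."""
    return post.matcher_flagged

# Topics treated as legitimate religious or cultural uses of the symbol.
RELIGIOUS_TOPICS = {"hindu-temple-art", "buddhist-iconography", "jain-iconography"}

def contextual_filter(post: Post) -> bool:
    """Spare religious uses; note this already requires knowing more
    about the page (and ultimately the poster): the privacy tradeoff."""
    return post.matcher_flagged and post.page_topic not in RELIGIOUS_TOPICS

temple = Post(True, "Mosaic in a temple in India", "hindu-temple-art")
rally = Post(True, "Flags at a rally in Hamburg", "political-rally")

for post in (temple, rally):
    print(f"{post.caption}: naive={'remove' if naive_filter(post) else 'keep'}, "
          f"contextual={'remove' if contextual_filter(post) else 'keep'}")
```

Even this toy version exposes the dilemma: the “smarter” filter works only because it knows more about where, and ultimately by whom, the image was posted.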

But as the saying goes, the Devil is in the details. Both humans and computers will have to work harder to determine context, intent, and impact. What kinds of guidelines will tech companies put in place for workers charged with flagging and taking down content? What calculations will they embed into the AIs tasked with removing defamatory material? Where in this process can we, as concerned citizens, be sure that our values are being captured and embedded into the mechanisms that are going to govern our public discourse?

As usual, I do not pretend to have the ideal answer, or even to claim that one exists. Defining hate speech, let alone monitoring and controlling it (assuming control is even desirable), is an enormously complicated undertaking. My goal here is to delineate the parameters for an informed discussion, which I hope the questions above accomplish. The larger goal is to keep making progress toward realizing our ideals, even while admitting perfection will never be in our grasp.

I also need to issue a few words of caution, given my very personal experience with repressive governments. The line between outlawing defamation and all-out censorship is a very easy one to cross. As we consider the tradeoff between free speech and its supervision, we would do well to keep in mind that restrictions passed in the free world with good intentions can, and will, be misused by authoritarian governments. Russia’s notorious anti-gay propaganda law, for example, bans any type of speech that “endorses homosexual behavior to minors,” and it has also been abused for political persecution. Such a broad and vague restriction means that enforcement is entirely up to the Kremlin-controlled authorities, which can pick and choose instances to prosecute based on their whims. It is an easy and convenient way to silence all non-traditional or dissenting views that threaten the regime.

Similarly, laws against “extremism” may sound like a good idea in places struggling with radicalized populations spreading hateful propaganda and calls to violent action. But in Russia and other places ruled by authoritarian regimes, any opposition to the government is quickly labeled extremist and banned, from pamphlets to rallies to websites, with arrests usually to follow. Democracies are not immune to abusing such laws, it is true, but at least there is political recourse, debate, and a free media to push back.

Legalistic language that attempts to codify hate speech is not a true solution. Instead, we should aim for broad language that enshrines our overarching principles, which will be much harder for cynical regimes to subvert, and which will keep the door as wide open as possible for free expression and individual freedom—to keep the free world free. The best we can do to protect against truly hateful speech, while retaining the freedom of expression essential to human development, is to define and refine the moral framework of the society we want, online and offline. The specifics of what exactly can be said, where, when, and so on, will always be debated, of course. But it’s not so bad to have the Devil in the details if the angels are painting the big picture.
