It is our responsibility to find ways to embed human values into today’s technological structures to create the online discourse we want.
The topic of fake news is at the forefront of our political debate, now more than ever. What gave rise to its ubiquity? How do we combat it? Is this just an inevitable outcome of the Internet opening up a wider space for communication?
In my previous post, I noted how accustomed I am to trolling in the comments sections of my online articles and social media. But I refuse to believe that this phenomenon is an inescapable aspect of our latest communication technologies. There is nothing in the structure of the Internet itself that necessitates the truth getting lost in a swamp of alternative facts and personal attacks. It may be easier for this to happen now, given the sheer volume of information that can be conveyed online, but it is still our responsibility to set the standards for the discourse we want. As I write in my upcoming book on AI and human + machine collaboration, Deep Thinking, technological advances can present new obstacles, but overcoming these challenges will make us more creative—by obliging us to innovate forward, rather than retreating from the progress we have made. So let us think creatively, to see if we can solve new problems without restricting old freedoms.
One response to incessant trolling online is to throw up our hands and conclude that this kind of activity is part and parcel of the Internet, and that if we want to have a discussion where everyone who participates is informed and considerate, we have to move offline and into a face-to-face setting. The more productive approach, I believe, is to ask ourselves how we can combine the best elements of human interaction with the extraordinary potential of technology. A Norwegian news company recently provided an example of doing just that.
NRK, the Norwegian public broadcaster, has implemented a pilot quiz on certain stories, requiring readers to answer a few multiple-choice questions about the article before being allowed to comment. The benefit is twofold: it deters professional trolls by making the process more time-intensive, and it ensures that genuine readers have at least a minimal grasp of the facts before they post, rather than ranting after reading nothing but the headline. In other words, the method targets both deliberate misinformation campaigns and the danger of a discussion grounded not in real evidence but in ingrained biases.
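To make the mechanism concrete, here is a minimal sketch of such a comment gate. NRK has not published its implementation, so everything below, from the Question and Quiz structures to the can_comment check, is hypothetical; the point is how little machinery the idea actually requires.

```python
# Hypothetical sketch of a quiz-gated comment form (not NRK's actual code).
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    choices: list[str]
    answer_index: int  # index of the correct choice

@dataclass
class Quiz:
    questions: list[Question]

def can_comment(quiz: Quiz, submitted: list[int]) -> bool:
    """Unlock the comment form only if every question is answered correctly."""
    if len(submitted) != len(quiz.questions):
        return False
    return all(choice == q.answer_index
               for q, choice in zip(quiz.questions, submitted))

quiz = Quiz(questions=[
    Question("Which broadcaster ran the pilot?", ["BBC", "NRK", "CNN"], 1),
    Question("What must readers do before commenting?",
             ["Pay a fee", "Pass a quiz", "Log in twice"], 1),
])
print(can_comment(quiz, [1, 1]))  # True: the comment form unlocks
print(can_comment(quiz, [0, 1]))  # False: read the article first
```

The gate itself is trivial to build; the hard part, as ever, is deciding to value informed comments over raw engagement.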
This intervention aims to reintroduce distinctly human standards into online discussion. If we hope to facilitate productive debate about the future of our society, we need to preserve a shared framework of knowledge and norms of respectful communication. There is no reason we cannot do this on the Internet while continuing to take advantage of its power to democratize information. We just need to find new ways to embed our values into today’s technological structures. A news quiz is one way to do so, but there are countless other possibilities not yet explored. Do human moderators still have a role to play in regulating virtual discussions? Can algorithms be trained to moderate for civility, the way they can filter profanity from text messages and reject weak passwords on banking websites? I don’t purport to have the answers myself. We must ask ourselves what we value and what measures we are willing to put in place to create the online environment we want.
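For contrast, consider how crude today’s rule-based filters really are. The toy sketch below, with an invented blocklist and hypothetical helpers flag_for_review and is_weak_password, shows the two checks the analogy invokes; training an algorithm to judge civility is a far harder problem than either.

```python
# Toy illustration of rule-based filtering; real moderation systems use
# trained classifiers, not word lists. All names here are invented.
import re

BLOCKLIST = {"idiot", "moron"}  # hypothetical stand-ins for a real word list

def flag_for_review(comment: str) -> bool:
    """Route a comment to a human moderator if it matches the blocklist."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    return bool(words & BLOCKLIST)

def is_weak_password(password: str) -> bool:
    """The banking-site analogue: reject short or single-class passwords."""
    return len(password) < 12 or password.islower() or password.isdigit()

print(flag_for_review("You absolute idiot"))      # True: send to a moderator
print(flag_for_review("I respectfully disagree")) # False: publish normally
print(is_weak_password("password123"))            # True: ask for a stronger one
```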
Even as we think creatively about how to maintain the quality of our discourse, we must also safeguard its security. This is yet another example of new technology pushing us beyond our limits, forcing us to find solutions to problems of increasing complexity. How can we apply the above strategy of melding the best that humans and machines have to offer to tackle our security challenges? We might make password management more user-friendly, expanding on the approach of password managers, which store encrypted credentials so that users no longer need to memorize dozens of log-ins. Or we might make the exchange of personal data for free services more explicit, reminding users when they sign up that they are conceding access to their personal information.
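As an illustration of that first idea, here is a minimal sketch of what a password manager does under the hood: one memorized master password unlocks an encrypted vault of credentials. It uses the open-source cryptography package; the vault_key helper and the sample data are inventions of this sketch, not any vendor’s actual design.

```python
# Illustrative sketch of a password vault, not a production design.
import base64
import hashlib
from cryptography.fernet import Fernet

def vault_key(master_password: str, salt: bytes) -> Fernet:
    """Derive an encryption key from the one password the user memorizes."""
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
    return Fernet(base64.urlsafe_b64encode(key))

salt = b"per-user-random-salt"  # in practice: random, stored alongside the vault
vault = vault_key("correct horse battery staple", salt)

token = vault.encrypt(b"example.com: hunter2")  # only the ciphertext is stored
print(vault.decrypt(token).decode())            # unlocked by the master password
```

The user-friendliness comes from the asymmetry: the human remembers one strong secret, and the machine handles the dozens of unique, unmemorable ones.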
Regardless of the exact measures, we must account for our uniquely human strengths and shortcomings in order to make secure behavior more intuitive and appealing. That is, more human. No security system will work if people refuse to use it or don’t understand how to use it effectively. In some cases, security becomes a matter of collective defense, where a small fraction of people can put millions at risk. It is likely we will need some regulatory coercion, the way people are obliged to vaccinate their children before sending them to school, and for the same reason. A manufacturer who produces a device that is easily exploited and turned into an online weapon, for example, has to bear responsibility. What about individual users who fail to patch their personal computers, allowing them to be conscripted into botnets that can cripple the Internet? Should those machines be cut off from the Internet preemptively? At what point does cyber-security become a matter of legal accountability when others are put at risk, as with drunk driving?
Each of us has the responsibility, as a citizen of the digital age, to think about how we contribute to the conversation. We are also accountable for keeping our information, and by extension the broader system, safe. Often, these responsibilities are intertwined, such as when Facebook uses the information it gathers on users to determine which news stories should appear on their news feeds. On the security front, we’ve ceded authority over our personal data, potentially allowing it to be exploited by third parties. But, perhaps more importantly, we have set ourselves up to become gradually less and less tolerant of views that conflict with our own as the information we encounter is increasingly customized for us.
The negative consequences of such algorithmic conveniences may not seem particularly dangerous, but over time they can change the fabric of our society, online and off. However, if we rise to the occasion, these challenges provide the opportunity to hone our human problem-solving skills. Problems will continue to arise, as is the nature of innovation, but we should view them in a positive light—a chance to show that we still have the edge over the technology we design.