People often trust their social media contacts more than traditional advertising, and more readily believe everything from restaurant reviews to political opinions when it comes from a social network. An amusing or tragic anecdote that may not represent any real trend has a better chance of going viral than a well-reported story full of analysis and facts. As always, marketers, abusers, propagandists, and outright criminals are quick to exploit these psychological biases. The person who attacks you over a political tweet may be a paid troll or a bot. The spam you once ignored in your inbox is now served up in a trusted social media feed. The phishing attacks and malicious links you’d never click on in an anonymous email may now appear as a recommendation from a good friend, or from a celebrity you admire. These attacks may even be customized by AI to target you perfectly, the way you get shopping or movie recommendations.
As a frequent commentator on politics and world events in the digital sphere, I have often found myself the target of vicious attacks in the comments. Often, it is clear that these attackers are not trying to engage with the ideas I am presenting. I am happy to debate, even to argue passionately, and I value the Internet as a forum in which opposing sides can present their beliefs and work to find common ground. Unfortunately, the trolling that frequently dominates online comment sections does little to advance this kind of discussion. The degree of hatred on display drives many people away entirely, and can be especially damaging to young people. If the trolls are winning, that illuminates a deeply troubling aspect of our new digital landscape.
It is increasingly easy to find a view, somewhere on the web, to support almost any opinion, no matter how far-fetched or factually unsubstantiated. The problem is that we often have no idea who is producing this information, or what their motivations for promoting it might be. We praise the virtues of having so much information at our fingertips; the world seems more transparent than ever, no topic of interest out of reach. And yet we know shockingly little about who is putting all this information out there, and why. So do more facts, interspersed with plenty of so-called “alternative facts,” make us better informed, or just more confused?
It’s easy to forget, but worth keeping in mind as we navigate all this data: there is only one way to tell the truth, but countless ways to lie. In face-to-face interaction, the ability to get away with deception is limited. You have to look your interlocutor in the eye as you mislead him, a difficult feat for many. Traditional media also have more built-in protections against spreading falsehoods, such as market-driven accountability. If a newspaper prints two flatly contradictory stories in one issue, its readership will object to the quality of the reporting, and any credible paper has a corrections section to hold itself accountable and set the record straight. Nor is there any doubt about where the stories came from, making it relatively easy to hold the publishers responsible. New media, however, lack the safeguards imposed by definite boundaries. There is no way to scroll through the roughly 6,000 tweets posted to Twitter every second and attempt to corroborate or disprove each user’s version of reality. We occupy a virtual space in which countless absurd versions of events can coexist without ever having to overlap and call one another into question. By following certain people and reading or watching only certain outlets, we select our own channels of reality, creating an information cocoon in which our preferred interpretations of events are never challenged.
This state of affairs is the result of shifting much of our society’s discussion into a new and uncharted sphere, the Internet, without applying the norms that govern public life. Being able to sit in the privacy of your own home while typing a blog post does not negate the deeply social nature of hitting "publish." If we don't work to create a space in which opinions are voiced with respect and reason, we allow the potential of the Internet to be usurped by those who would exploit its powers for selfish gains. Right now, while the majority subscribe to standards of decency, rogue dictators sponsor propaganda-spouting troll operations and repressive regimes censor any self-expression that threatens their control. A few bad actors abuse the very openness that makes the Internet such a transformative force.
This asymmetry between the majority of users and a small fraction of abusers is mirrored in more concrete security matters. A single piece of malware can spread and disrupt the lives of millions of people and companies at very little cost to its creators. Social networks are proving to be fertile ground for malware, which uses social engineering to exploit the trust users place in their friends. On Twitter, phishing attacks are far more effective than those sent by email. On Facebook, clicking on what you think is a friend’s post can quickly turn you into another vector of the same virus that infected your friend’s account. Fake posts, fake tweets, fake news, fake accounts, real damage: it’s a global crisis of security, transparency, and trust.
The solution is not a rigid regulatory regime, but a move toward normalizing certain forms of conduct and rejecting others. The gatekeepers in Silicon Valley and elsewhere can help by creating systems that reward positive conduct. More intelligent tools will help protect us from breaches of trust and security. But in the end, the problems of social networks are social problems, and developing social practices takes time. We can begin by refusing to tolerate those who step outside the bounds of acceptable behavior. When we encounter blatant lies, we must, as responsible consumers of information, refuse to reward them with our attention. We must hold respectable media outlets, as arbiters of truth, to the highest standards. The first step toward reclaiming control over our digital space is to confront mistruth head on; the stakes are too high to let lies shape the future of the global conversation.