Although it has the potential to be a powerful democratizing tool, social media can easily become another proxy between citizens and political decision-making
The phenomenon that we now call ‘social media’ arguably started around the turn of the millennium with sites like Friends Reunited, which at first simply offered old friends a way to reconnect online. With the advent of Myspace, Facebook, Twitter and other services as the 2000s drew on, the concept of allowing people to connect with each other online was expanded, adapted and iterated upon in various ways. To start with, the purpose was simple – to provide novel online social interaction focused squarely on the users.
But social media is no longer a novelty, or merely a fun way to use the internet. In 2020, an estimated 3.8 billion people worldwide use social media, and these online spaces have started to influence our lives in ways that few of us could have expected 15 years ago. We do more than socialize on Twitter, Facebook et al; we get news, discuss politics, organize, engage in activism and live a significant proportion of our lives online. We have even started calling people or accounts with many online connections "influencers", because they have a real potential to sway the opinions and actions of their followers.
The forces at play
The astonishing power that social media platforms have accumulated presents the governments of the world with two problems: information manipulation and big tech corporate power. First, vested interests can manipulate social media from without to spread "fake news", misinformation or a particular spin to suit their own ends, allowing them to influence public opinion and even the outcome of elections. Second, social media businesses themselves hold a great deal of power to influence their own platforms from within. They can manipulate, restrict or control what users share with each other, allowing them to protect their own interests from government regulation and potentially sway government policy.
Further reading: What does the Twitter hack mean for the future of democracy?
Outside interests
To understand how external groups can manipulate social media to their own ends, we need to understand the ways in which information spreads on these platforms. Going into the fine detail of how algorithms operate, the mechanics of Twitter trends or which posts are likely to gain traction could be useful, but what matters more for our purposes is the distinction between two ways in which information propagates.
Organic and inorganic sharing
We can divide the ways that people share information on social media into two categories. The first is when content is shared at face value – the way we usually assume social media works: people discuss their interests, share information with friends and generally behave without ulterior motives. We can call this organic sharing, as it emerges and propagates naturally, spurred by users’ genuine motivations and reactions.
The converse of organic sharing is inorganic sharing: what happens when a person or organization exerts deliberate control over the propagation of information. Inorganic sharing is not necessarily bad or malicious; some types of inorganic sharing – paid promoted posts, advertisements and so on – are how social media platforms support themselves. However, not all inorganic sharing is endorsed by the platform; sometimes social media can be manipulated to force the spread of information – and when that information is ideologically loaded or politically manipulative, inorganic sharing becomes a big problem.
How inorganic sharing harms democracy
The last few years provide ample demonstration of inorganic sharing – as well as the social damage and discord that can arise from it. Politically manipulative inorganic sharing has been rife over the course of several pivotal votes in western countries, including the 2016 U.S. presidential election, the 2018 midterms and the UK Brexit referendum – and is reportedly still happening across social media.
On September 14, 2020, Politico reported, “Trump’s top counterintelligence official, William Evanina, has agreed that Moscow is seeking to attack the election. He told lawmakers last month that Russia aims to ‘denigrate’ Biden ‘and what it sees as an anti-Russia establishment’. Those efforts — plus influence campaigns by China and Iran — are ‘a direct threat to the fabric of our democracy,’ he said in an earlier statement.”
Inorganic sharing has been a problem over the last decade – and increasingly so during the last five years. The Cambridge Analytica scandal demonstrated how targeted advertising can be used to manipulate a vote: Facebook users’ personal data was collected and analyzed, allowing highly targeted political adverts to be delivered, potentially influencing both the Brexit vote and the 2016 presidential election.
The question of Russia’s influence on the western democratic process has also been repeatedly raised. While few can agree on the exact extent and nature of Russian interference in recent votes, it has been confirmed that several foreign nations – including Russia, China and Iran – have used social media to attempt to sway the U.S. midterms and other public votes.
Further reading: An elections security progress report: Black Hat edition
The use of sock puppet accounts (online identities used for purposes of deception) gives us an example of inorganic sharing that does not depend on paid, promoted posts. Malicious actors can use disposable email addresses or spoofed contact information to run several different accounts – sometimes dozens. If these accounts act as an organized group, they become an effective means of manipulating social media algorithms, as the sketch below illustrates: certain political stances can be made to appear to have more popular support than they really do, more users can be exposed to certain beliefs, and other users can be harassed and inhibited from expressing themselves genuinely. We have known for a long time that sock puppet accounts are used to give products favorable reviews; extending a tested principle to political ends is hardly a leap.
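To see why a small coordinated group punches far above its weight, consider this deliberately simplified toy model of a trending algorithm. Everything here is invented for illustration – the scoring rule, the account counts, the engagement rates; real ranking systems are proprietary and far more complex – but the shape of the problem survives the simplification.

import random

random.seed(42)  # reproducible toy example

ORGANIC_USERS = 10_000   # ordinary accounts, each acting independently
SOCK_PUPPETS = 50        # accounts all controlled by one operator
POSTS = 100              # candidate posts competing to 'trend'

# Each organic user engages with one post chosen at random.
engagement = [0] * POSTS
for _ in range(ORGANIC_USERS):
    engagement[random.randrange(POSTS)] += 1

# The operator points every sock puppet at post 0, and each account
# engages repeatedly over the day (likes, replies, reshares combined).
ACTIONS_PER_PUPPET = 20
engagement[0] += SOCK_PUPPETS * ACTIONS_PER_PUPPET

# A naive trending algorithm simply ranks posts by raw engagement.
ranked = sorted(range(POSTS), key=lambda p: engagement[p], reverse=True)
print("Top post:", ranked[0], "with", engagement[ranked[0]], "engagements")
print("A typical post gets:", engagement[ranked[POSTS // 2]], "engagements")

In this toy model, fifty coordinated accounts outweigh the independent behavior of ten thousand genuine users – which is why platforms increasingly try to detect coordination itself rather than simply count accounts.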
Political trolls
While some may still regard ‘Russian interference’, professional trolling or organized sock puppet groups as conspiracy theories, they are confirmed, documented and well-researched phenomena. Russia’s Internet Research Agency (IRA), for example, has faced various indictments and criminal charges in recent years for interference in US politics, including a federal indictment in Washington D.C. for interfering in the 2016 elections.
The IRA’s modus operandi encompasses internet campaigns, sock puppet accounts, social engineering, false flag operations and any other available method of political agitation and subterfuge. As well as the 2016 elections, the group has interfered extensively in the Black Lives Matter movement, creating fraudulent communities that purport both to support and to oppose BLM. The IRA’s most likely end goal is to increase populist agitation on as many sides as possible, aiming for generalized unrest and the destabilization of western democracy.
What can social media do?
Social media depends on its users to generate most of its content and discussion, which limits platforms’ options for dealing with harmful inorganic sharing. Potential solutions must necessarily be broad-strokes changes to site policy and procedure, with the possible ‘collateral damage’ of restricting non-harmful sharing. For example, in the wake of political advertising being used to spread misinformation, Facebook recently decided to implement an across-the-board ban on political advertising, beginning the week of the 2020 US election. This will prevent the kind of targeted, manipulative advertising the platform has struggled with, but it also cuts off revenue from legitimate political adverts.
This type of restriction also offends many Americans. The right to free speech is not merely enshrined in the Constitution; it is embedded in the American psyche. Drawing the line between genuine opinion (acceptable) and malicious manipulation (unacceptable) is consequently difficult.
Facebook’s action follows the example set by Twitter, which eliminated political advertising from its platform in October 2019. In a thread, Twitter CEO Jack Dorsey (@jack) detailed some of the difficulties of preventing political manipulation through advertising: "Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes…These challenges will affect ALL internet communication, not just political ads. Best to focus our efforts on the root problems, without the additional burden and complexity taking money brings."
The ideal solution to groups like Russia’s Internet Research Agency would be for platforms to shut down all fraudulent accounts, eliminate sock puppets and ban users acting in bad faith to further their own political agendas. In practice, this is impossible. A ‘professional troll’ is almost impossible to distinguish from a genuine user with inflammatory views, and, to avoid wide-ranging bans that catch innocent users, moderators must rely on other users reporting bad actors and on a manual review process.
There is also an issue with the way social media operates: people generally enjoy interacting with content they agree with and that makes them feel good, and avoid what they find distasteful. This naturally keeps users in online "echo chambers", interacting more with people who agree with them than with those who don’t, even without the influence of targeted advertisements – a feedback loop sketched below. In terms of social media itself having negative effects on democracy, however, this is just the tip of the iceberg.
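A toy model shows how that loop closes on its own. The opinion axis, the numbers and the update rule here are all invented for illustration; real recommender systems are proprietary and far more sophisticated.

import random

random.seed(1)

user_leaning = 0.2   # the user's true position on a -1..1 opinion axis
estimate = 0.0       # the platform's running estimate of that position

for _ in range(50):
    # Offer five posts scattered across the opinion axis.
    posts = [random.uniform(-1, 1) for _ in range(5)]
    # Rank by predicted agreement: show the post closest to the estimate.
    shown = min(posts, key=lambda p: abs(p - estimate))
    # The user engages only when the post is close to their own view,
    # and each engagement pulls the estimate further toward that view.
    if abs(shown - user_leaning) < 0.5:
        estimate += 0.3 * (shown - estimate)

print(f"True leaning {user_leaning:+.2f}; feed now centered on {estimate:+.2f}")

Nothing in the loop is malicious: the ranking simply optimizes for engagement, and the result is a feed that drifts toward the user’s existing views and stops surfacing dissent.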
Threat to democracy from the nature of social media
Lobbies and corporate influence
Inorganic sharing turns social media platforms into unwitting accomplices in the erosion of democracy. But social media also interacts with politics directly and consciously. The social media sector is de facto part of what we would consider ‘big tech’; the lobbying power of the tech sector has been growing year on year, and any corporation will push for its own interests. The influence of tech-based corporations over politics, especially in the United States, has prompted some politicians to turn their efforts to resisting the tech lobby – Senator Elizabeth Warren’s campaign pledge to "break up big tech", for example.
Europe also plans to limit the power of big tech, and is set to propose a new Digital Services Act by the end of 2020. This will increase social media companies’ responsibilities and liability for the content on their platforms.
The international experience
The corporate nature of social media can inhibit democracy in the ‘free’ countries of the West; in undemocratic or autocratic nations, the mechanics of social media – the free sharing of information and the ability to connect people – can pose a huge threat to the regime itself. As a result, social media is often restricted or banned outright: some countries run their own state-supervised national networks, while others simply block social media altogether.
Russia
Russia’s approach to social media has grown steadily more insular and restrictive in recent years. While many of the largest social media platforms are still accessible in Russia, there are proceedings underway to ban Facebook, Instagram and YouTube. Other sites and services, such as the social messaging app Telegram, have already been banned for refusing to make users’ encrypted data available to Russian authorities. Even with many western platforms still legal, Russia’s own social networks – which operate under heavy state control – are the most widely used within the country.
China and North Korea
China and North Korea are notorious for having some of the most restrictive and censorious approaches to social media in the world, and both allow their citizens to use only state-approved social networks. Interestingly, many Chinese-focused social media platforms, like Weibo or WeChat, are accessible from western countries, creating a sort of one-way isolation for Chinese users. In North Korea, all but a privileged elite are restricted to a heavily monitored national intranet: western users cannot see into North Korean social media, and most North Koreans cannot see out.
Iran
Iran represents an interesting middle ground between Russia’s restrictions and those of China and North Korea. Although most western social media platforms are banned within Iran, many citizens still use VPN services to access sites like Facebook and Twitter; indeed, despite state censorship, Iran is the 20th most prolific nation on Twitter. Iran’s censorship policy is thus heavier on paper than Russia’s, but far less stringently enforced than China’s or North Korea’s.
The important point to take from these examples is that even autocratic countries with strong control over their populations fear the potential of unfettered social media to cause unrest among their people. Within western democracies, by contrast, social media remains largely unfettered.
The problem of self-regulation
Social media works best for its users when the state allows it to operate freely and the business itself neither pushes its own political agenda nor allows organized agitators to push theirs. The most market-friendly option for western social media companies is therefore to self-regulate: government intervention can often stifle the marketplace, so if social media companies can adopt and enforce their own standards of practice, this allows the best combination of freedom and safety for users.
But self-regulation is a harder problem for social media than for other industries, because user-generated content is central to the business model and monetization depends on user engagement. This gives these businesses a twofold problem. First, it’s impossible to effectively regulate all users of a site at all times: though a platform can have rules and policies governing acceptable behavior, enforcement largely depends on other users reporting breaches and on the response time of the manual review team – a dynamic sketched below. Second, even self-regulation is almost inherently harmful to the business model: most social media revenue comes from advertising, and restrictive or heavily policed online spaces make users less responsive to adverts, reducing the company’s income.
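A back-of-the-envelope sketch makes the first problem concrete. Every rate below is invented for illustration; the only point is the shape of the queue when reports arrive faster than reviewers can clear them.

from collections import deque

REPORTS_PER_HOUR = 25    # posts flagged by users each hour (assumed)
REVIEWS_PER_HOUR = 10    # capacity of the human review team (assumed)
VIEWS_PER_HOUR = 500     # audience each flagged post keeps reaching (assumed)

queue = deque()          # flagged posts awaiting manual review
exposure = 0             # views served by flagged-but-unreviewed posts

for hour in range(24):
    for _ in range(REPORTS_PER_HOUR):                    # new reports arrive
        queue.append(hour)
    for _ in range(min(REVIEWS_PER_HOUR, len(queue))):   # the team works the queue
        queue.popleft()
    exposure += len(queue) * VIEWS_PER_HOUR              # the backlog stays visible

print(f"Backlog after 24 hours: {len(queue)} posts")
print(f"Views served while awaiting review: {exposure:,}")

Whenever the report rate exceeds review capacity, the backlog – and the audience that flagged content keeps reaching – grows without bound; hiring more reviewers scales costs linearly, while content scales with the user base.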
Even setting aside the obstacles to self-regulation, big tech’s ability to manipulate public opinion and the power it can wield to push for friendly legislation remain a great concern to governments. Whether these lobbying efforts could extend into social media companies actively trying to influence democratic elections remains to be seen, but we know for certain that these platforms can be used to push their users towards specific ideologies – and even actions.
A recently removed Facebook group, Kenosha Guard, has been linked to the Kenosha shooting in August. The page had been reported several times for inciting violence, but because the group’s language was oblique and its calls for violence never explicit, it was found not to be in violation of Facebook’s rules. This use of plausibly deniable language shows how social media can be used to incite violence without directly calling for it: with posts likely to encourage one or two people within a broad audience to commit it. It also shows how difficult it is to craft community guidelines that prevent implicit calls to violence without censoring innocent users and falling foul of the right to free speech.
In conclusion
In his new book, "Life After Privacy" (reviewed here), Firmin DeBrabander comments, “We have opted to act politically through proxies, that is, elected officials. And, as the Federalist Papers indicate, this governing model was meant precisely to limit the impact, influence and political activity of ordinary citizens”.
To put it another way: the idea of "government" is to take direct decision-making away from the general populace and place it in the hands of representatives – a governing class that takes on the responsibility of deciding on behalf of the electorate. Whether we consider this a good thing or a bad thing comes down to the context and implementation of the democratic model: it can produce elected representatives trying to act in the best interests of their country’s population, or a class of elites who hold power over the people. Either way, for better or worse, direct involvement in policy is taken away from the general population.
If we are not careful, social media can become another such proxy, with decision-making delegated to the loudest noise in the media – whether acceptable organic noise or unacceptable, manipulated inorganic noise.
The irony is that social media has the potential to be a powerful democratizing tool. It gives people the opportunity to speak with a more united voice and to organize more effectively; it directly threatens oppressive forms of government and allows groups dedicated to political activism to form. But social media hosts both genuine, organic activism and forced, manipulated activism: groups can be formed by people coming together to change the world for good, or by vested interests wanting to manipulate public opinion for their own gain.
Social media would be a powerful force for democracy were it not for humanity’s unerring ability to subvert good things to its own ends. The sheer power and size of big tech social media means that large organizations – such as adversarial nation states – have the opportunity to attack the very nature of western democracy.