Putinising ourselves

Tackling disinformation through regulation is dangerous. Britain should reframe the debate to undermine dictators and strengthen democracy

Russian Prime Minister Vladimir Putin (L), Gazprom Chief Executive Officer Alexei Miller (2nd L) and former German chancellor Gerhard Schroeder (3rd L) look at a screen as they attend the inauguration of the Nord Stream Project information mount at the gas compressor station "Portovaya" outside Vyborg, 2011 (ALEXEY NIKOLSKY/AFP via Getty Images)

We love media freedom and loathe disinformation. The paradox was on display in July this year, when the British government hosted a Global Conference on Media Freedom, with participants who have faced vicious attacks and censorship in their home countries. In the same month, the government received concerned responses from free-speech campaigners to its white paper on online harms, which proposes a mandatory duty of care for tech companies, forcing them to show they are mitigating both clearly illegal activity and what the government terms “legal but harmful” content (such as disinformation). The campaigners’ message was clear: the white paper would pose a threat to freedom of expression and privacy, potentially breaching international and European human rights laws. The dilemma is sharp. We can strengthen freedom of speech, human rights and deliberative democracy for the 21st century. Or we can ape the political logic and language of authoritarian regimes. The signs, so far, are that we are stumbling down the latter path.

The white paper is right to demand that online platforms take more responsibility for illegal material and behaviour. From Facebook-fuelled ethnic cleansing in Myanmar to the continued circulation of child pornography online, tech companies need to comply with national and international law, including international human rights law. The government’s proposals wisely refrain from suggesting that the companies be liable for every piece of content on their platforms; that is all but impossible for technical reasons. Instead they demand that companies install systems to mitigate the dissemination of illegal content: the internet equivalent of requiring an office to be fitted with sprinklers and fire extinguishers. Regulators would check, for instance, that internet companies have a system through which illegal content can be flagged quickly and easily, and companies would have to prove they respond in a timely manner.
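By way of illustration only, the kind of record-keeping a regulator might look for could be sketched as follows. This is a minimal, hypothetical example in Python; the names, fields and the very idea of a single “report log” are assumptions for the sake of clarity, not anything specified in the white paper.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

# Hypothetical sketch: a log of reports of illegal content, with enough
# timing data for a platform to show how quickly it acts on them.
@dataclass
class ContentReport:
    content_id: str
    reported_at: datetime
    resolved_at: Optional[datetime] = None  # set once the item has been reviewed and acted on

@dataclass
class ReportLog:
    reports: List[ContentReport] = field(default_factory=list)

    def flag(self, content_id: str) -> ContentReport:
        """Record a new report of potentially illegal content."""
        report = ContentReport(content_id=content_id, reported_at=datetime.utcnow())
        self.reports.append(report)
        return report

    def resolve(self, report: ContentReport) -> None:
        """Mark a report as dealt with."""
        report.resolved_at = datetime.utcnow()

    def share_resolved_within(self, limit: timedelta) -> float:
        """Share of resolved reports handled within the time limit: the kind of
        'timely response' figure a platform might be asked to evidence."""
        resolved = [r for r in self.reports if r.resolved_at is not None]
        if not resolved:
            return 0.0
        timely = [r for r in resolved if r.resolved_at - r.reported_at <= limit]
        return len(timely) / len(resolved)
```

The point of the sketch is that what gets inspected is the system and its response times, not each individual piece of content.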

Less wise is applying the same duty of care to “legal but harmful” content, which includes everything from online bullying to disinformation. Unlike child-abuse images or bomb-making instructions, these are not legal concepts. Clamping down on them unavoidably runs counter to international legislation on freedom of expression, which requires that curbs on free speech be “prescribed by law”, not agreed opaquely between platforms and politicians. Breaching the duty of care risks fines, business bans and even imprisonment. Such penalties will chill free speech: tech companies would rather take material down than face the risk of punishment. A proposed “super complaints” procedure allows users to demand action when technology companies breach the duty of care. One might expect this to protect free speech above all, allowing people whose content has been wrongly removed to seek redress. Instead the white paper stresses that the procedure should be used the opposite way: for individuals to demand that companies take material down.

There is more at stake here than complaints procedures around content moderation. The white paper’s logic and language are skewed towards the suppression of freedom of expression, not its defence. They invoke an idea of speech as somehow inherently dangerous, something people need to be protected from. This chimes with Russian and Chinese ideas of a “sovereign internet”, where governments control content and manage cross-border information flows. David Kaye, the UN Special Rapporteur on freedom of opinion and expression and a professor at the University of California, Irvine, warns the democratic world against giving cover to a “rhetoric of danger” that restricts legitimate debate.

Ironically, one motivation for regulation has been the covert online campaigns waged by the Kremlin to influence democratic processes in the US and Europe. Yet the British response risks imposing exactly the sort of ideas for governing the internet that the Kremlin is promoting.

Authoritarian regimes may of course enact censorship irrespective of how democracies regulate the internet. But Britain sees itself as a global leader in setting standards for media freedom and information policy. Policy-makers should therefore always be asking themselves what distinguishes an authoritarian internet from a democratic one, and how regulatory proposals might strengthen the latter, or at least avoid damaging it. By taking on the language and logic of authoritarian regimes, our government risks reinforcing a censorial cycle: the more authoritarian regimes such as Russia launch covert online influence campaigns in democracies, the more democracies adopt the framework and policy logic those regimes favour. In the struggle against Putinism, we end up putinising ourselves.

We already see this in the raft of proposed remedies for “fake news”. Russia and Singapore cite Germany’s Network Enforcement Act (NetzDG), which requires platforms to take down “illegal” content, including blasphemy, as justification for their own punitive legislation. In Don’t Think of an Elephant!, the cognitive linguist George Lakoff argues that winning and losing in politics is a matter of framing issues in a way conducive to your aims. Defining the argument means winning it. If you tell someone not to think of an elephant, they will end up thinking of an elephant. “When we negate a frame, we evoke the frame . . . when you are arguing against the other side, do not use their language. Their language picks out a frame—and it won’t be the frame you want.”

Defenders of free speech must, however, recognise that online disinformation is qualitatively different from older forms. Erroneous content is not new, but technology now makes it possible to disseminate it at unheard-of speed and scale, targeted at specific audiences. As the law professor Tim Wu argues:

The most important change in the expressive environment can be boiled down to one idea: it is no longer speech itself that is scarce, but the attention of listeners. Emerging threats to public discourse take advantage of this change . . . emerging techniques of speech control depend on (1) a range of new punishments, like unleashing “troll armies” to abuse the press and other critics, and (2) “flooding” tactics (sometimes called “reverse censorship”) that distort or drown out disfavoured speech through the creation and dissemination of fake news, the payment of fake commentators, and the deployment of propaganda robots.

In short, he argues, speech itself is being used as a “censorial weapon.” The chants of “Four legs good, two legs bad” in George Orwell’s Animal Farm have moved beyond the confines of the barnyard. So how can we respond to the specific challenge of digital-era disinformation, while strengthening democratic ideals and freedom of expression?

First, we should avoid regulating legal (if untrue) speech as a category. Instead we should build on existing legislation to deal with disinformation in specific contexts. 

An obvious place to start is electoral advertising. Current regulations on the transparency and accuracy of political advertising and on election integrity focus on traditional print and broadcast media. These must be updated to address the new reality of online political microtargeting, in which adverts are created in their millions, with different messages aimed at niche audiences. We need a legal requirement to create an easily searchable, real-time repository of all election-related ads, allowing anyone to see who paid for them, where they are targeted, and which personal data are used for that targeting. Moreover, as the Coalition for Reform of Political Advertising and the Incorporated Society of British Advertisers propose, all factual claims in political ads should be pre-cleared, and the ads should be watermarked to show their origins. This regulatory function would have to be placed under the auspices of a body such as the Electoral Commission. Since British political parties pulled out of regulation by the Advertising Standards Authority, they have, in the neat phrase of Full Fact, the independent fact-checking organisation, “chosen to hold themselves to lower standards than washing powder sellers”.
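To make the idea concrete, a single record in such a repository might look something like the sketch below. It is a hypothetical illustration in Python; every field name is an assumption of mine, not drawn from any existing register or from the campaigners’ proposals.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

# Hypothetical record in a searchable, real-time political-ad repository.
# All field names are illustrative assumptions.
@dataclass
class PoliticalAdRecord:
    ad_id: str                       # unique identifier for the advert
    sponsor: str                     # who paid for the advert
    published_at: datetime           # when the advert went live
    creative_text: str               # the claims shown to users
    target_criteria: Dict[str, str] = field(default_factory=dict)  # e.g. {"region": "North East"}
    personal_data_used: List[str] = field(default_factory=list)    # categories of data used for targeting
    claims_precleared: bool = False  # whether factual claims were checked before publication

def ads_targeting(records: List[PoliticalAdRecord], attribute: str, value: str) -> List[PoliticalAdRecord]:
    """Return every advert whose targeting criteria match the given attribute and value."""
    return [r for r in records if r.target_criteria.get(attribute) == value]
```

The design choice that matters is searchability: any journalist, researcher or voter should be able to ask who paid for the ads aimed at a given audience, and with what data.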

One difficulty here is definition. Electoral law currently defines electoral material as “material which can reasonably be regarded as intended to promote or procure electoral success at any relevant election.” Yet the use of social media in the Brexit campaign shows that online ads are used all the time, not only in the run-up to elections, to shape political outcomes. Even if political parties and government agencies provided full, up-to-date information on content, targeting, reach and spend, this would not cover the full spectrum of political ads, such as issue-based ads by civic groups, proxies and allies. It may be that all paid-for content should have this level of transparency attached.

Another approach is via public health legislation, which could ensure that internet users are informed of the risks posed by disinformation about, for example, vaccines. Foreign-sponsored interference, where not covered by rules on election integrity and political advertising, could be addressed under national security policy. This offers not only legislative, regulatory and counter-intelligence responses, but also diplomatic channels for resolving interference. The pushback might, for example, take the form of asymmetric responses such as economic or visa sanctions.

An excessive focus on disinformation, or on the even less helpful term “fake news”, not only brings an inevitable collision with freedom of speech; it is also largely impractical. One danger is the difficulty of prioritising: responses easily degenerate into a “whack-a-mole” approach. More importantly, it misses the point that information operations, such as the infamous Internet Research Agency campaign sponsored by the Russian state during the 2016 US presidential election, can use neutral or even accurate content. Rather than deceptive content, these campaigns are marked by what Facebook calls “coordinated inauthentic behaviour”, or what one could term “viral deception”: the actors disguise their true identity in order to deceive people, and material is promoted inauthentically to make it look more popular than it is.

If we reframe “disinformation” as pertaining less to content and more to behaviour and identity (such as using technical means to disseminate certain content artificially), we get away from the problem of regulating speech. Instead, the focus is the systematic use of technology to deceive people. These means include bots, cyborgs and trolls that purposefully disguise their identity to confuse audiences; cyber-militias whose activity seems organic but is actually part of planned campaigns run through deceptive accounts; and the plethora of “news” websites, free of journalistic standards, that look independent but are covertly run from one source, all pushing the same agenda.
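A toy example may help show the distinction. The sketch below, a hypothetical heuristic of my own in Python and not any platform’s actual detection method, says nothing about whether a message is true; it asks only whether many accounts pushed identical text within a suspiciously tight window. The thresholds and data fields are assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Iterable, List, Tuple

# Toy heuristic: flag a piece of text if at least `min_accounts` distinct accounts
# posted it and all of those posts fall within `window` of each other.
# Behaviour, not content, is what gets flagged.
def flag_coordinated_bursts(
    posts: Iterable[Tuple[str, str, datetime]],   # (account_id, text, timestamp)
    min_accounts: int = 20,
    window: timedelta = timedelta(minutes=10),
) -> List[str]:
    by_text = defaultdict(list)
    for account_id, text, timestamp in posts:
        by_text[text].append((timestamp, account_id))

    flagged = []
    for text, events in by_text.items():
        events.sort()                              # order posts of this text by time
        accounts = {account for _, account in events}
        within_window = events[-1][0] - events[0][0] <= window
        if len(accounts) >= min_accounts and within_window:
            flagged.append(text)
    return flagged
```

However crude, the example makes the point: the same logic would flag a coordinated burst of entirely accurate posts, because what is being judged is the inauthentic amplification, not the claim.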

The issue here is not the right to anonymity (an essential component of online safety, particularly in authoritarian or controversial environments). It is the right of people to understand how the information environment around them is being shaped. What is organic and what is orchestrated? Has reality been engineered? Do we have the right to know if we are interacting with a bot?

California has taken a first legislative step in this direction. Bots used for commercial or electoral purposes must now reveal their “artificial identity”. The common-law presumption against lying, cheating and deceiving people can extend online.

Could such transparency go further? We may imagine an online life in which you could understand what one might call your personal information meteorology: why computer programs show you one piece of content and not another; why an ad, article, message or image is being targeted specifically at you; which of your own data have been used to try to influence you, and why; whether a piece of content is genuinely popular or merely amplified. Ideally such information should be instantly available: one could click on or hover over a piece of online content and immediately see its provenance and context.
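As a purely hypothetical sketch, the context behind such a click or hover might be structured along these lines; the fields and wording below are illustrative assumptions, not a description of any existing system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structure for the provenance and context a hover-over label might reveal.
@dataclass
class ContentProvenance:
    original_source: str                    # first known publisher of the content
    amplification: str                      # "organic", "paid" or "automated"
    paid_promotion: bool = False            # was distribution paid for?
    sponsor: Optional[str] = None           # who paid, if anyone
    targeting_reason: Optional[str] = None  # why this item was shown to this user

def provenance_label(p: ContentProvenance) -> str:
    """Render a one-line, human-readable summary to display alongside the content."""
    parts = [f"Source: {p.original_source}", f"Amplification: {p.amplification}"]
    if p.paid_promotion and p.sponsor:
        parts.append(f"Paid for by {p.sponsor}")
    if p.targeting_reason:
        parts.append(f"Shown to you because: {p.targeting_reason}")
    return " | ".join(parts)
```

The design principle is the one argued for above: the user sees more information, and nothing is taken down.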

No single technological solution will provide all this. What matters is framing the issue in a way that demands more information, not censorship, putting the user, not the government, at the centre of decision-making. We would become less like creatures acted upon by mysterious powers we cannot see, which make us scared for reasons we cannot fathom, and instead would be able to engage with the information forces around us as equals. 


This is an adapted version of a working paper presented to the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression at its meeting in Santa Monica, California, earlier this year.