Vian Bakir and Andrew McStay

Fake news has gone mainstream. Its propagandistic and commercial success has been fuelled by the growth of social media and online advertising. 2016 may have witnessed the first “Fake News Election”, but it certainly won’t be the last. Vian Bakir and Andrew McStay investigate what drives fake news and how democratic institutions can fight back.

 

What Drives Fake News?

On 7 November 2017, the UK Parliament’s Digital, Culture, Media and Sport Committee will close its call for written submissions to its Fake News Inquiry. The Inquiry seeks solutions to the problem of stories that are, or appear to be, news stories; that contain deliberately misleading information in their content or context; and that are widely shared on social media. It is a response to political campaigns across 2016 that combined deception with online voter profiling and targeting. These included the UK’s European Union (EU) referendum campaign battle between the Leave and Remain camps, and the USA’s presidential election campaign battle between Donald Trump and Hillary Clinton. In both cases, fears were expressed that fake news had misled the electorate and undermined confidence in the electoral outcomes (both narrow victories, for Leave and for Trump).

To devise solutions to the contemporary fake news problem, we must first understand its two key drivers: propaganda and commerce.

 

Propagandistic Drivers

Various propagandistic drivers have become apparent since 2016. A key driver is Russian disinformation and selective leaking designed to meddle in the US election. There is increasing evidence that Russia hacked the Democratic National Committee’s emails, leaking them on WikiLeaks and on new websites like DC Leaks. Trump capitalised on the leaked material to discredit Clinton further, repeatedly accusing her of corruption and ‘bad instincts’. During the presidential campaign, many far-right pro-Trump outlets, like Breitbart, pumped out deceptive, emotive messages (often short videos and captioned images denigrating Clinton as corrupt and under FBI investigation). Fake news stories claiming to be based on the stolen emails also proliferated, such as the unevidenced story (repeated by the New York Post and Fox News) that the Clinton Foundation, a charity, had paid for Chelsea Clinton’s lavish $3 million wedding.

Journalists have suggested that fake news amplifying the leaked emails’ propaganda value, together with pro-Trump agitprop spread via automated bots on Twitter and via false and hijacked Facebook accounts, was part of Russia’s disinformation strategy to discredit and disrupt the American election. For instance, The New York Times describes the fake Facebook account of one “Melvin Redick”. Redick’s posts consisted of news articles reflecting a pro-Russian worldview; he also posted about DC Leaks to Facebook groups, thereby keeping the discussion of Clinton’s emails alive on social networks. While his Facebook profile said he was educated at Central High School in Philadelphia and Indiana University of Pennsylvania, neither institution has any record of his attendance.

Such Facebook content was likely created by the Internet Research Agency, a Russian company linked to the Kremlin. In an exposé on the agency, The New York Times estimates that its approximately 400 employees create content for every popular social network: alongside Facebook, this includes Twitter, Instagram, LiveJournal (popular in Russia), VKontakte (Russia’s version of Facebook), and the comment sections of Russian news outlets. Adding to the evidence of a Russian disinformation strategy, in September 2017 Facebook disclosed that it had shut down some 470 fake accounts and pages that it believed were created by the Internet Research Agency and used to buy $100,000 of advertising (3,000 adverts) pushing emotive, divisive issues (such as race, gay rights, gun control and immigration) between June 2015 and May 2017. Facebook estimates that the adverts were viewed by around 10 million American users. Indeed, the circulation of such propaganda is boosted by the commercial drivers of fake news.

 

Commercial Drivers

Capitalising on the algorithms used by social media platforms and internet search engines, ordinary people are trying to make money from fake news websites. For them, fake news and its sensationalist content act as clickbait. Income is produced by attracting attention to the fake news website and serving behaviourally targeted adverts. The fake news publisher is paid by the ad network on the basis of how many visitors the website receives. Each visitor allows the ad network to serve impressions (the unit for how many times an advert is served and judged to have been seen). Revenue also comes from click-throughs (the act of clicking on an advert to reach a webpage or other content owned by the advertiser).
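To make that arithmetic concrete, here is a minimal sketch in Python. The function and every rate in it (impressions per visit, CPM, click-through rate, cost per click) are invented assumptions for illustration only; real figures vary widely by ad network and audience.

```python
# Illustrative sketch (not any ad network's real billing code) of how a
# fake news publisher's income scales with traffic. All rates below are
# invented assumptions.

def ad_revenue(visitors: int,
               impressions_per_visit: float = 4.0,  # adverts served per visit (assumed)
               cpm: float = 1.50,                   # payout per 1,000 impressions (assumed)
               click_through_rate: float = 0.002,   # fraction of impressions clicked (assumed)
               cpc: float = 0.30) -> float:         # payout per click-through (assumed)
    """Estimate a publisher's payout from impressions plus click-throughs."""
    impressions = visitors * impressions_per_visit
    impression_revenue = impressions / 1000 * cpm
    click_revenue = impressions * click_through_rate * cpc
    return impression_revenue + click_revenue

# A single story viral enough to draw 500,000 visitors:
print(f"${ad_revenue(500_000):,.2f}")  # -> $4,200.00
```

The sketch shows why virality matters so much: revenue is roughly linear in visitors, so a story shared widely enough to multiply traffic multiplies income by the same factor.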

During the 2016 US presidential election campaign, journalists traced part of the upsurge in fake news stories spread across Facebook to enterprising computer science students in Veles, Macedonia. These students found it lucrative to create outrageous fake news stories about American politics, often plagiarised from right-wing American websites, repackaged with catchy headlines, and shared on Facebook. Most of these websites published sensationalist and false pro-Trump content, and the largest of them had Facebook pages listing hundreds of thousands of followers. For the Veles locals, it was pro-Trump stories that maximised revenue in 2016: their experiments with left-leaning content simply did not perform as well.

Counter-measures have since been taken. Solutions have focused on social networking platforms’ role in encouraging users not to share fake news stories, thereby reducing the audience (and the associated financial incentives) for fake news. In its April 2017 written submission to the Fake News Inquiry, Facebook highlights that since mid-December 2016 it has teamed up with fact-checking websites to flag content that appears fake; it has been testing changes to its algorithms to see whether fake news stories can be made to appear lower in the News Feed; and it has eliminated the ability to spoof domains, reducing the prevalence of sites masquerading as well-known news organisations.
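Facebook has not published how its News Feed ranking treats flagged stories. Purely as an illustration of the principle of algorithmic demotion, a hypothetical ranker might look like the following; all names and values here are invented.

```python
# Purely illustrative: Facebook's actual News Feed ranking is unpublished.
# This sketch shows only the general principle of demoting stories that
# fact-checking partners have flagged as disputed.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    engagement_score: float  # base relevance/engagement signal (assumed)
    flagged_disputed: bool   # set when fact-checkers dispute the story

DISPUTED_PENALTY = 0.2       # demotion multiplier for flagged stories (invented value)

def rank_feed(stories: list[Story]) -> list[Story]:
    """Order stories by engagement, demoting those flagged as disputed."""
    def score(story: Story) -> float:
        penalty = DISPUTED_PENALTY if story.flagged_disputed else 1.0
        return story.engagement_score * penalty
    return sorted(stories, key=score, reverse=True)
```

Under such a scheme a flagged story is not removed, merely pushed down the feed, which shrinks its audience and hence the advertising income it can generate.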

However, action by Facebook alone will not solve the fake news problem. While much fake news is certainly accessed through Facebook, it is not the sole channel for sensationalist and misleading content. Rather, the problem is the ecology of behavioural advertising that funds fake news sites. The most obvious actor here is Google, but there are many other behavioural and programmatic advertising networks, including lesser-known ones such as OpenX, Tribal Fusion and 33Across.

In its written submission to the Fake News Inquiry, Google outlines measures it has taken since November 2016 to help stifle fake news. These include new ad network policies against misrepresentation, targeting website owners who misrepresent who they are and who deceive users with their content. Google reports that within a month of this policy change it had identified 550 leads on sites suspected of misrepresenting content, including sites impersonating news organisations, and that it subsequently removed nearly 200 publishers from its ad networks permanently. Google says it is also helping advertisers to choose where their adverts appear, and that it will invest in people and tools to help prevent adverts from appearing alongside potentially objectionable content, such as fake news websites. Additionally, Google has partnered with fact-checking organisations to highlight and promote fact-checked news articles and to downgrade fake news articles in Google News searches.

 

Solving Fake News

Highlighting how seriously fake news is now being taken, the US Senate and House intelligence committees and the Senate Committee on the Judiciary are investigating Russian interference in the presidential election. Britain is also conducting investigations. In October 2017, MP Damian Collins requested that more information be provided to the Fake News Inquiry on the use of Facebook advertising by Russian-linked accounts. On 2 November, Britain’s election watchdog, the Electoral Commission, started its inquiry into the role of social media in election campaigns, and into whether Russia interfered in the EU referendum campaign.

Given the propagandistic and commercial drivers of contemporary fake news, a multi-pronged attack is needed to inhibit its creation and flow. This involves addressing the propagandistic and commercial incentives to create fake news, requiring cooperation between digital intermediaries, regulators, advertisers, public relations practitioners and mainstream media. It also entails improving everyday users’ media and digital literacy, to counteract our propensity to willingly share misinformation and disinformation.

 

Vian Bakir is Professor of Political Communication & Journalism at Bangor University. Andrew McStay is Professor of Digital Life, also at Bangor University.

Image: rawpixel