During Donald Trump’s successful presidential election campaign, Facebook was hit with a number of scandals involving fake news stories circulating on the site. One such story suggested that an FBI agent involved in Hillary Clinton’s email scandal had mysteriously committed suicide, while another claimed that Megyn Kelly had been fired by Fox News for supporting the Democratic nominee.
It’s unclear what effect stories like these had on the result of the election - the echo chamber created by social media is complex - but the media furore over the fake stories has raised questions over a social media company’s right to censor. According to Buzzfeed News, ‘more than dozens’ of Facebook staff have created a task force of sorts to ‘battle fake news’ on the site. More than 150 million Americans use Facebook, but CEO Mark Zuckerberg is of the opinion that it’s not the site’s responsibility to police the content shared on it. ‘The idea that fake news on Facebook influenced the election in any way is a pretty crazy idea,’ he said, before highlighting that ‘only a very small amount is fake news and hoaxes.’
‘It’s not a crazy idea,’ one Facebook employee told Buzzfeed News under the condition of anonymity. ‘What’s crazy is for him to come out and dismiss it like that, when he knows, and those of us at the company know, that fake news ran wild on our platform during the entire campaign season.’ The problem with identifying fake news, though, lies in the intention/outcome dynamic. A particularly convincing piece of satirical writing may appear just as real as a piece of fake news designed to stir up a particular sentiment. If, then, a piece of satire gains traction among certain groups and becomes prominent on Facebook, does it cross over from satirical to misleading?
Does this mean explicit, farcical satire will be taken down immediately on the grounds that it isn’t truthful enough for Facebook? These are the issues the hierarchy at the world’s largest social media site will have to address before it can implement any formal censorship policy. The fact that ‘more than dozens’ of Facebook staff members have reportedly been meeting in secret to put together their ‘task force’ speaks volumes about just how sensitive an issue this is. It’s one that Facebook will want to be transparent about, too, with any algorithmic change or censorship policy likely to be vigorously scrutinized. Journalist Nellie Bowles tweeted that a Facebook representative had asked her the question ‘What is truth?’ - the company is being made to grapple with some difficult questions.
The argument for weeding out fake news is convincing, though. Many will not read an article in full before sharing it, some will not read a word other than the headline, and the fake news scandal highlights this phenomenon on an extreme scale. A quote attributed to Donald Trump in 1998 in which he insults Republican voters did the rounds online throughout the election process. It’s false, but its virality meant that very few questioned it. As I write this, a Breitbart map is circulating that apparently shows Trump as having won a ‘7.5 million popular vote landslide.’ Again, it’s false.
The Facebook employees going under the radar to combat fake news may not have the backing of their employer, but the issue is real: many people get a large portion of their news through social media with very little scrutiny as to its veracity. The story claiming that Trump had run away with the popular vote was shared so widely that Google’s Assistant would serve it to anyone who asked for the final election count. The power of media to influence perception is something Facebook has a long and difficult history with. The censorship of content, and the inherent punishment of satire, will not be a popular move among many but, in a world in which entire political campaigns can be influenced by misinformation, Facebook needs to consider its policy. Twitter is rolling out new, long-awaited anti-harassment tools; Facebook just needs to decide how damaging it deems fake news to be.