Fake news and social media go hand in hand. Inaccurate content is unshackled from truth, which allows it to be controversial, sensationalist, and ultimately shareable. Given the well-documented echo chamber that exists on Facebook in particular, content is often curated to appeal to and reinforce a belief, political or otherwise. Despite Donald Trump's liberal use of the term 'fake news', it seems likely that the proliferation of false information contributed, in part, to his election. Facebook has faced heavy criticism for failing to address the issue before and during the election campaign, criticism by which Mark Zuckerberg appears genuinely affected.
One area of debate has been whether or not Facebook is actually at fault for false information circulating on its platform. It's a delicate area; censorship is undemocratic, but the spread of misinformation is equally damaging. Should a piece of writing be barred from circulation simply because it is factually incorrect? Where do you draw the line? Could satire be affected? The questions for Facebook to confront are innumerable, which may partly explain why the company has been so slow to address the issue properly.
Though Zuckerberg has long expressed Facebook's rejection of fake news, the company has done very little to actively combat the spread of misinformation. The hysteria and engagement generated by sensationalist fake stories are ultimately good for Facebook. When the cost is being accused of playing a part in Trump's election, though, Zuckerberg and co. will want to be seen actively combating one of digital media's most pressing issues. If users lose faith in the veracity of the shared content that populates the platform, they'll switch off.
And so Facebook has acted. If you attempt to post a piece of factually misleading or incorrect content, an initial warning is displayed below the preview stating that the content is 'Disputed by Snopes.com and Associated Press', for example. Then, if you proceed with posting the content, a pop-up appears explaining that multiple, independent fact-checkers have 'disputed its accuracy.' Even once the content is posted, the veracity warning is displayed under the post on the News Feed, marking the user out as spreading inaccuracies. Facebook has been careful in its wording here. Claiming that independent bodies dispute the content is different from outright calling it fake, and it's a balance that will have to be struck in all of Facebook's moves to limit 'disputed' content.
Facebook has been sure to make clear the non-partisan basis of the fact-checking. It employs independent fact-checkers rather than checking in-house; the volume of content is such that it would be impossible to check manually, and so far it hasn't developed a functioning machine learning solution. It will only ever use, it says, fact-checkers 'signed up to Poynter's non-partisan code of principles.' With Trump and many of his supporters accusing the 'mainstream media' of publishing 'fake news', the importance of non-partisan fact-checkers is clear, and Facebook will be hoping that by using them it can bypass the baying of Trump supporters who believe the company has an inherent liberal bias.
Facebook's position as a content aggregator is cemented, but it's only now, with this move, that it has begun to take responsibility for its influential position. Zuckerberg maintains that only a small amount of the content shared on Facebook features misinformation, but the seven-point plan the company has released to combat it suggests the proportion is significant enough to warrant action. The controversy that erupts every time Facebook accidentally includes a piece of 'fake news' in its Trending section will have the company walking on eggshells.
Even so, the move to flag bogus content is a positive one. The independent fact-checkers give Facebook something to hide behind, and having the warning persist even after posting will encourage users to be more discerning about the sources of their content before sharing it. Facebook's action against misinformation on its platform may have come late, but we should welcome any effort to limit lies in a time of confusion and misdirection.