New Jigsaw AI Poses No Threat To Freedom Of Speech

Freedom to disagree with people is vital, freedom to insult strangers is not

22 September

Recent news that Google subsidiary Jigsaw is developing Conversation AI to combat online harassment has drawn a number of arguments both for and against, many of them deeply rooted in ideological convictions around free speech. The technology raises important questions about the nature of free speech in the digital age.

Jigsaw’s Conversation AI uses machine-learning techniques to detect language indicative of harassment and abuse. Its algorithms were taught using 17 million comments from The New York Times, along with data about which of those comments were flagged as inappropriate by moderators, as well as 130,000 snippets of discussion around Wikipedia pages provided by the Wikimedia Foundation. Ten randomly selected people then examined each snippet and judged whether it represented a ‘personal attack’ or ‘harassment’, before all of this was fed into Google’s open-source machine-learning software, TensorFlow. According to Google, the filter detects ‘abusive’ messages with a 92% success rate, with a false-positive rate of just 10%. This will continue to improve over time as the algorithms learn from more and more data.
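To make that pipeline concrete, here is a minimal sketch in Python of the kind of text classifier the paragraph describes, built with TensorFlow’s Keras API. Everything in it - the toy dataset, the architecture, the hyperparameters - is an illustrative assumption, not Jigsaw’s actual model.

```python
# Minimal sketch of a comment-abuse classifier in the spirit of what
# the article describes. The toy dataset, architecture, and
# hyperparameters are illustrative assumptions, not Jigsaw's model.
import tensorflow as tf

# Toy labeled data: 1.0 = flagged as a personal attack, 0.0 = acceptable.
comments = tf.constant([
    "I disagree with your argument about this policy",
    "you are an idiot and everyone hates you",
    "thanks for sharing, an interesting perspective",
    "get off the internet, nobody wants you here",
])
labels = tf.constant([0.0, 1.0, 0.0, 1.0])

# Map raw text to fixed-length sequences of token IDs.
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=10_000, output_sequence_length=50)
vectorizer.adapt(comments)

# Embed the tokens, average them, and emit a probability of abuse.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10_000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(vectorizer(comments), labels, epochs=10, verbose=0)

# Score a new comment. A production system would pick a decision
# threshold to balance the detection rate against false positives,
# i.e. the 92% / 10% trade-off quoted above.
score = model.predict(vectorizer(tf.constant(["you are a moron"])))
print(score[0][0])
```

Whatever the real architecture, the basic trade-off is visible even in a toy like this: wherever the decision threshold is set, raising it misses more abuse, while lowering it silences more legitimate speech.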

The intention of the technology is not simply to stop people calling each other nasty things online. Jigsaw founder and president Jared Cohen says, ‘I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight… [we will] do everything we can to level the playing field.’ As Andy Greenberg, writing in Wired, suggests, the intention is to protect some of the web’s most repressed voices by selectively silencing others. This may seem paradoxical, the virtual equivalent of pro-lifers killing doctors, but given the consequences of the abusive Wild West the internet has become, and what that has meant for politics and the rise of people like Donald Trump, it is necessary to at least try to do something.

The internet is often an ugly place, where people seem happy to set upon perfect strangers with glee, tearing apart their lives in 140 characters or fewer. In his book ‘So You’ve Been Publicly Shamed’, Jon Ronson explores the modern phenomenon of online shaming and abuse. He looks at people who have said stupid, regrettable, often offensive things on social media and subsequently fallen foul of the Twitter hate mob, which barrages them with aggressive threats and abuse. There is an argument to be made that the internet is self-policing in this regard, and that Jigsaw’s creation would stop this. Writing in Wired, Andy Greenberg argued that ‘throwing out well-intentioned speech that resembles harassment could be a blow to exactly the open civil society Jigsaw has vowed to protect.’ Equally, however, this same abuse is often directed at groups such as feminists, who simply hold progressive views about equality that may pose a threat to another group’s dominance.

The idea that this is some new threat to free speech is clearly wrong. The technology is currently being trialled by Wikipedia, which has yet to decide how it will use it, and The New York Times, which will use it as a first pass on its comments section, with human moderators then reviewing what the AI has flagged and making the final decision about which comments remain and which are deleted. Comments sections have always been moderated, though; they were just moderated by humans. What Jigsaw has created is simply a far more efficient way of doing this. A bomb is no more or less morally repugnant than a handgun in that they both kill people - it is simply far more effective at its intended purpose. It is humans, and the way they use a technology, that are morally repugnant. There are disturbing ramifications, as Jack Hadfield notes on the conservative website Breitbart News - amidst an otherwise paranoid mess of an article with a headline implying the tool is just there to ‘protect elites’ feelings’ - when he writes that ‘such a system could easily be developed by tyrannical regimes overseas to detect populist uprisings within its online borders’. The truth is that all technology is liable to be misused by someone; if we were to stop inventing things that dictators could use to repress their people, we’d all be living in caves, whittling spears out of sticks.
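As a rough illustration of that human-in-the-loop workflow - the model screens comments first, and people make the final call on anything it holds back - here is a hypothetical sketch in Python. The class, the method names, and the 0.8 threshold are all invented for illustration; nothing here reflects The New York Times’ actual system.

```python
# Hypothetical sketch of the review workflow described above: the model
# screens comments first, and humans make the final call on anything it
# holds back. Names and the 0.8 threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    threshold: float = 0.8          # scores above this are held for review
    held_for_review: list = field(default_factory=list)
    published: list = field(default_factory=list)

    def submit(self, comment: str, abuse_score: float) -> None:
        """Route a comment based on the classifier's abuse score."""
        if abuse_score >= self.threshold:
            self.held_for_review.append(comment)   # a human decides later
        else:
            self.published.append(comment)         # goes live immediately

    def human_review(self, comment: str, approve: bool) -> None:
        """A human moderator confirms or overrides the AI's decision."""
        self.held_for_review.remove(comment)
        if approve:
            self.published.append(comment)

queue = ModerationQueue()
queue.submit("Interesting article, thanks", abuse_score=0.05)
queue.submit("You people are all morons", abuse_score=0.93)
queue.human_review("You people are all morons", approve=False)
print(queue.published)  # only the benign comment goes live
```

The point of the structure is that the AI never gets the last word: it only changes which comments a human looks at, which is exactly what human moderators already did, just faster.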

This technology is not some malicious threat to freedom of speech. Feminist writer Sady Doyle, no stranger to online abuse, told Wired that ‘People need to be able to talk in whatever register they talk. Imagine what the Internet would be like if you couldn’t say “Donald Trump is a moron.”’ The truth is, though, that it would probably be better if people had to explain why he is a moron rather than just yelling (or typing) it at him, and that would likely go some way to making the internet - and society as a whole - less of a polarized, intolerant moral abattoir. The right to disagree with ideas is important; the right to call someone a p£*%& for expressing an opinion is not. In its current iteration, there is little to really worry about, and it will get better at determining what constitutes genuine abuse and what is harmless ‘banter’. It is a difficult line to tread, but so long as the AI does not start impinging on the right to disagree and only on the ‘right to insult’, it can only be doing the world a service, whoever it might stop from being insulted.
