2016 was one of the most controversial years in recent history. We have seen unprecedented protests, vicious attacks on democratic norms, and huge fluctuations in the markets. These events have led to some bitter exchanges, and many of the most hate-filled have taken place online.
In the UK, Lily Allen has recently been trolled relentlessly, with some even stooping as low as mocking the loss of her child. The alt-right on Twitter have sent horrific images to Jewish journalists and to anybody voicing left-wing views, and even fabricated an imaginary pedophile ring in Washington. The spate of trolling has even reached comedy, with South Park mocking the practice in a series of episodes that depicted trolls as lonely misfits addicted to making other people miserable.
Whatever the reason, trolling has escalated dramatically, with several users on 4chan and Reddit even claiming that they trolled Donald Trump into the White House. It is no longer something people shy away from; we have even seen figures like Milo Yiannopoulos become world famous after being banned from Twitter for his vile trolling of celebrities.
It has reached the stage where a study from the Center for Innovative Public Health Research found that 72% of U.S. internet users aged 15 and older have seen others being abused, 36% have been harassed themselves, 30% have been victims of an invasion of privacy, and 27% have decided not to post something online because of the abuse they might receive.
It is something that Google offshoot Jigsaw have set out to stop, creating an AI tool called Perspective to do just that. The idea is to flag potentially abusive posts to moderators, something done only rudimentarily at the moment by systems that tend to focus on keywords rather than on how language is used. Such systems can easily mistake a positive for a negative: 'you're sick', for instance, could be a compliment or an insult depending on context.
The hope is that the system can learn from millions of interactions and judge the level of toxicity in a statement, pinpointing potential trolling far quicker than any human moderator could. It is being offered as an API that anybody can use, which may well be the most powerful element of the entire system. By opening it up in this way, Jigsaw are exposing the AI to considerably more input than they could ever generate themselves.
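To make this concrete, here is a minimal sketch of how a moderation pipeline might use a toxicity-scoring API such as Perspective. The endpoint path and payload shape follow Jigsaw's published Comment Analyzer API; the moderation threshold, the sample response, and its score are illustrative assumptions, not real output.

```python
import json

# Perspective's published analysis endpoint (an API key would be appended).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def build_request(comment_text):
    """Build the JSON body for an AnalyzeComment request."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }


def toxicity_score(response_body):
    """Pull the overall toxicity probability (0.0-1.0) out of a response."""
    return response_body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def flag_for_moderator(response_body, threshold=0.8):
    """Route the comment to a human reviewer if the model rates it toxic.

    The 0.8 threshold is an assumption for illustration; a real site
    would tune it against its own moderation data.
    """
    return toxicity_score(response_body) >= threshold


# An illustrative response shaped like the API's, with a made-up score.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

request_body = build_request("you're sick")
print(json.dumps(request_body))
print(flag_for_moderator(sample_response))  # a 0.92 score exceeds the threshold
```

The key point is the last step: the AI does not delete anything itself; it simply surfaces the highest-risk comments so that scarce human moderators look at those first.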
With the huge surge in trolling online, combined with the relative ineffectiveness of current vetting programmes, many companies and comment moderators have struggled under the pressure. The New York Times, for example, receives around 11,000 comments every day, all of which must be vetted manually by just 14 moderators. Because of this pressure, only around 10% of articles are open for comment, which hurts both the company and its readers. Comments are often useful, and despite the old warning to 'never go below the line', it is frequently valuable for authors to see what others think of their work, so having only 10% of articles open in this way clearly isn't ideal.
This work also holds considerable potential for the future, given the power of AI. Over time it may be possible to identify common patterns in trolling behaviour: the subjects people react to most aggressively, the types of people trolled more than others, or even the patterns of behaviour and syntax used by a specific person across multiple accounts. That could lead to more prosecutions, and in severe cases even jail time. There are several examples of trolls abusing people from multiple accounts and reoffending after being caught and punished. John Nimmo in the UK, jailed in 2014 for sending abusive messages, appeared to repent, saying 'I'd say "sorry". I've been told that it was free speech, what I did, but that just was crossing the line', before being jailed again in 2016. He had also sent abusive messages to a group supporting victims of anti-Muslim hate whilst on bail for his first offence.
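The idea of linking accounts by syntax can be illustrated with a toy stylometry sketch. This is my own simplified example, not Jigsaw's method: it compares the character-trigram frequency profiles of two comment histories using cosine similarity, where a high score suggests the same writing style behind both accounts. The account names and comments are invented.

```python
import math
from collections import Counter


def trigram_profile(comments):
    """Count character trigrams across an account's comment history."""
    counts = Counter()
    for text in comments:
        t = text.lower()
        for i in range(len(t) - 2):
            counts[t[i:i + 3]] += 1
    return counts


def cosine_similarity(a, b):
    """Cosine similarity between two trigram frequency profiles (0.0-1.0)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# Hypothetical comment histories: two sockpuppets and an unrelated reader.
account_a = ["u r pathetic lol", "u r a joke lol"]
account_b = ["u r pathetic lol get a life"]
account_c = ["I thoroughly enjoyed this article, thank you."]

sim_ab = cosine_similarity(trigram_profile(account_a), trigram_profile(account_b))
sim_ac = cosine_similarity(trigram_profile(account_a), trigram_profile(account_c))
print(sim_ab, sim_ac)  # the sockpuppet pair scores far higher
```

Real stylometric systems use far richer features (punctuation habits, timing, vocabulary), but the principle is the same: writing style leaves a fingerprint that survives a change of username.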
AI-driven analysis of this kind could allow such repeat behaviour to be uncovered far more often, potentially tipping the balance back after what has been a fairly harrowing year online.