How Madbits Is Getting NSFW Content Off Twitter

Using deep learning, the company is making strides to rid Twitter of graphic images

15 Jul

Everyone’s been there. You’re at work, scrolling casually through Twitter while your computer is plugged into the overhead projector, when BAM. A dirty video pops up. Next thing you know you’re being chased out of the office while people shout “pervert” and throw shoes at your head.

Twitter understands this pain. As does MadBits, the company it brought in back in 2014 with the remit of automatically identifying NSFW images so that they can be removed.

It used to be that Twitter hired hundreds of thousands of moderators to check through its platform and ensure it remained PG-13, and images have always posed a particular challenge because content needs to go out in real time. MadBits, however, is now applying artificial intelligence methods, removing the need for such a massive staff. Not only that, the company is filtering out offensive imagery with 99% accuracy, with just 7% of non-offensive images being taken down by accident.

MadBits achieves this by applying deep learning algorithms. Deep learning is an approach to statistical machine learning that applies hierarchical, layered models to raw data in order to extract the relevant information. Interest in deep learning has grown of late because of its implications for AI, and the field has received substantial investment from the large tech firms, particularly Google and Facebook.
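The layered structure described above can be sketched in miniature. The toy network below is purely illustrative (the weights, sizes, and activation choices are invented for this example, not MadBits' actual architecture): a first layer turns raw pixel values into intermediate features, and a second layer combines those features into a single score.

```python
import math

def relu(xs):
    # Standard rectifier non-linearity used between layers.
    return [max(0.0, v) for v in xs]

def sigmoid(v):
    # Squash a raw score into a probability-like value in (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

def dense(inputs, weights, biases):
    # One fully connected layer: each output unit is a weighted
    # sum of all inputs plus a bias term.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(pixels, w1, b1, w2, b2):
    # Layer 1 extracts low-level features from raw pixels;
    # layer 2 combines them into a single "flag this image?" score.
    hidden = relu(dense(pixels, w1, b1))
    score = dense(hidden, w2, b2)[0]
    return sigmoid(score)

# A toy 4-pixel "image" pushed through a 4 -> 3 -> 1 network
# with arbitrary hand-picked weights.
w1 = [[0.2, -0.1, 0.4, 0.0],
      [0.1, 0.3, -0.2, 0.5],
      [-0.3, 0.2, 0.1, 0.1]]
b1 = [0.0, 0.1, -0.1]
w2 = [[0.5, -0.4, 0.3]]
b2 = [0.0]

p = forward([0.9, 0.1, 0.8, 0.4], w1, b1, w2, b2)
print(round(p, 3))
```

Real systems stack many such layers (and use convolutions rather than fully connected layers for images), but the principle is the same: each layer re-describes the output of the one below it in progressively more abstract terms.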

MadBits’ work is a case study in how to apply machine learning. The first step requires humans to tag examples of the images they want identified — in MadBits’ case, those Twitter would consider inappropriate, such as pornography, violence, and gore. The algorithms then look for patterns in the raw data, in this case pixels, to predict whether an image uploaded to the popular social network fits that description, feeding the results back into the neural net so that it learns exactly what such an image looks like. As the process repeats, the need for humans to tag images gradually diminishes, the model learning and learning until the need disappears entirely.
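The tag-then-learn loop above can be sketched with the simplest possible "network": a single logistic unit trained on raw pixel vectors. The data, thresholds, and learning rate here are all hypothetical toy values — MadBits' real models are deep convolutional networks — but the workflow is the same: humans supply labels, the model fits weights that predict them.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(images, labels, lr=0.5, epochs=200):
    """Fit a single-unit classifier on raw pixel vectors.

    images: lists of pixel values; labels: 1 = flagged by a human
    moderator, 0 = fine. This is logistic regression — the one-layer
    special case of the neural-net training described in the article.
    """
    n = len(images[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(images, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log loss w.r.t. the score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy training set: "bright" 3-pixel images were tagged 1, "dark" ones 0.
images = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.9],
          [0.1, 0.2, 0.1], [0.2, 0.1, 0.3]]
labels = [1, 1, 0, 0]
w, b = train(images, labels)

flagged = predict(w, b, [0.9, 0.9, 0.8]) > 0.5
cleared = predict(w, b, [0.1, 0.1, 0.2]) > 0.5
print(flagged, cleared)
```

Once trained, the model classifies new uploads on its own — which is exactly why the human taggers become progressively less necessary.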

This work is not just a way for the puritanical to stop people having their fun; it has far wider implications. Twitter has been under great pressure from Western governments to do a better job of removing the ghoulish propaganda of terror groups like ISIS from the social network, as it appears to be greatly aiding their recruitment drives. Because moderators have had to work retroactively, however, such images have often stayed up for lengthy periods. This technology should mean they no longer spend a second on the platform.
