Google has launched an AI-powered tool that uses image processing technology to identify Child Sexual Abuse Material (CSAM).
The new technology is free for service providers, non-governmental organizations (NGOs) and other industry partners. According to a Google blog post by Abhi Chaudhuri, Google's product manager, and Nikola Todorovic, Google's engineering lead, Google has been working since 2006 across the industry and with NGOs to tackle CSAM through its work with the Technology Coalition, the WePROTECT Global Alliance and other initiatives.
Susie Hargreaves, CEO of the Internet Watch Foundation (IWF), said, "We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn't previously been marked as illegal material.
"By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users."
The technology relies on deep neural networks for image processing to flag CSAM content and, according to Google, has helped reviewers identify 700% more CSAM content than before.
"Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse," Chaudhuri and Todorovic noted.
According to IWF's 2017 annual report, there was an 86% rise in disguised websites (websites harboring CSAM while posing as legitimate sites), from 1,572 in 2016 to 2,909 in 2017, with a total of 80,318 reports from websites and newsgroups in 2017.