Amnesty study finds women of color more likely to be targets for online abuse

The study, which used ML and AI to analyze tweets sent to women politicians and journalists in the UK and US in 2017, found that women of color were more likely to be targeted than white women


Amnesty International, in collaboration with global artificial intelligence software product company Element AI, has released a new study that explored online abuse against women on Twitter.

The study found that women of color (black, Asian, Latinx and mixed-race) were 34% more likely than white women to be mentioned in "abusive or problematic" tweets.

A crowdsourced project named Troll Patrol, which comprised more than 6,500 volunteers aged 18 to 70 from 150 countries, saw volunteers sort through 288,000 tweets sent to 778 women politicians and journalists in the UK and US in 2017.

Amnesty senior advisor for tactical research Milena Marin said: "By crowdsourcing research, we were able to build up vital evidence in a fraction of the time it would take one Amnesty researcher, without losing the human judgement which is so essential when looking at context around tweets."


The charity leveraged Element AI's ML and AI capabilities to sift through the data and uncover the scale of online abuse targeted toward women. Element AI created an ML tool to automatically detect abusive text.
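Element AI has not published the details of its model, but the general approach of a text classifier trained on human-labeled examples can be sketched in a few lines. The following is a minimal illustration only, using a pure-Python multinomial Naive Bayes with invented toy tweets and labels, not the study's actual system:

```python
# Minimal sketch of an abusive-text classifier: multinomial Naive Bayes
# trained on a tiny hand-labeled toy dataset. All example tweets, labels,
# and names here are invented for illustration.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayesClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> document count
        self.vocab = set()

    def train(self, labeled_tweets):
        for text, label in labeled_tweets:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + log likelihood with add-one (Laplace) smoothing
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy training data (invented for illustration only).
training = [
    ("you are a brilliant journalist", "ok"),
    ("great reporting thank you", "ok"),
    ("shut up you idiot", "abusive"),
    ("you stupid worthless idiot", "abusive"),
]
clf = NaiveBayesClassifier()
clf.train(training)
print(clf.predict("you idiot"))  # -> "abusive" on this toy data
```

In practice, the human judgement Marin describes supplies exactly this kind of labeled data: the 6,500 volunteers' classifications serve as training examples for a far larger model.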

"Element AI calculated that 1.1 million abusive or problematic tweets were sent to the women in the study across the year – or one every 30 seconds on average," Amnesty reported.

Amnesty has asked Twitter to publicize data regarding the scale and nature of abuse on the platform, which the social media giant has so far failed to do.

"This hides the extent of the problem and makes it difficult to design effective solutions" to address the issue, Amnesty said.

Marin outlined: "Troll Patrol isn't about policing Twitter or forcing it to remove content. We are asking it to be more transparent and we hope that the findings from Troll Patrol will compel it to make that change.

"Crucially, Twitter must start being transparent about how exactly it is using ML to detect abuse and publish technical information about the algorithms they rely on."

Some of the key findings of the study included:

  • "Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets. One in 10 tweets mentioning black women were abusive or problematic, compared to one in 15 for white women."
  • "7.1% of tweets sent to the women in the study were problematic or abusive. This amounts to 1.1 million tweets mentioning 778 women across the year or one every 30 seconds."

Julien Cornebise, director of research for AI For Good and head of the London office of Element AI added: "This study is part of a long-term partnership between Amnesty and Element AI. Taking a sober approach to AI, we make long-term commitments, dedicating our technical experts and tools to enable social good actors to do what they do best."


Tweet sourced from: Amnesty International


Read next:

How DevOps may be the answer to cyber-attacks