Scientists Just Created The "World's First Psychopath AI" And It's Pretty Creepy

"Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior."

Researchers from the Massachusetts Institute of Technology (MIT) have created what is popularly (and kind of inaccurately) being called the world's first 'AI psychopath': Norman, named after the famous fictional 'psycho' of Hitchcock's film.

Norman, which was trained to perform image captioning, was fed only images sourced from a particularly disturbing subreddit filled with gruesome depictions of death and gore. With only these images and their corresponding captions to learn from, it developed a very warped view of the world.
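Overly simple as it is, the underlying mechanic is easy to demonstrate. Below is a minimal toy sketch in plain Python (nothing like MIT's actual image-captioning model, and the caption corpora here are invented purely for illustration): the same bigram caption generator is trained twice, once on 'generic' captions and once on 'dark' ones, then sampled with identical random seeds.

```python
import random
from collections import defaultdict

def train_bigram(captions):
    """Count word-to-next-word transitions across a caption corpus."""
    model = defaultdict(list)
    for caption in captions:
        words = ["<s>"] + caption.lower().split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, rng, max_len=12):
    """Sample a caption by walking the bigram transitions from the start token."""
    word, out = "<s>", []
    while len(out) < max_len:
        word = rng.choice(model[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

# Invented stand-ins for "generic images" vs. the dark subreddit's captions.
generic_captions = [
    "a dog playing in the park",
    "a cat sitting on a couch",
    "a group of people having a picnic in the park",
]
dark_captions = [
    "a man lying motionless on the ground",
    "a body on the ground near a wrecked car",
    "a man standing over a motionless figure on the ground",
]

# Same algorithm, same random seed -- only the training data differs.
print("generic model:", generate(train_bigram(generic_captions), random.Random(0)))
print("'norman' model:", generate(train_bigram(dark_captions), random.Random(0)))
```

Run it and the two models produce starkly different captions from identical code and identical seeds; the only thing that changed was the data they learned from.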

Norman was then probed with the well-known psychological tool, the Rorschach inkblot test, to see how this exposure had affected it. The researchers then compared its responses to those of another AI trained on a more typical diet of generic images of dogs, cats, plants, and so on.

The differences in their responses were stark, to say the least:

[Image: Norman's inkblot responses alongside those of a standard AI, courtesy of MIT]

And suffice it to say, the media had a "measured response" to the news.

Overreactions aside, the question of why anyone would think this was a good idea is a valid one. The reality, however, is that this isn't the first time AI has acted in 'unexpected ways', with some occurrences more innocent than others. Google had to apologize back in 2015 when its photo-tagging software labeled two Black people as 'gorillas'. Although the problem was fixed within hours of the inciting tweet, Google wisely opted to remove the 'gorilla' tag from the app altogether.

We have also seen the flip side of this, as was the case in 2016, when revelations emerged that Facebook's 'news curators' were actively suppressing popular conservative-leaning news stories. In this instance, the company hid behind the general public's natural assumption that the 'trending news' module was generated by an unbiased algorithm simply compiling the most popular stories being shared on the platform. As it turned out, the module functioned, for the most part, more like any other traditional newsroom: stories were elevated or suppressed based on the opinions of the very human people who worked there.

Both of these stories illuminate exactly why the MIT researchers created Norman: to help society begin to re-evaluate its impression of how AI learns. "...when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself," explains the website, "but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set."

The public's common fear is the classic Terminator scenario: an emotionless AI terrorizing us through its interpretation of cold, hard logic. However, AI has to be trained, which means human beings will inevitably be involved in the process to some extent. Microsoft's former chief envisioning officer, Dave Coplin, summed up the depth of the issue perfectly to the BBC: "We are teaching algorithms in the same way as we teach human beings so there is a risk that we are not teaching everything right."

This means our fears should really revolve around the intent of the people who train our AI. And considering how pervasive AI already is, and is set to become over the next couple of years, it's a worthwhile issue to alert people to. Norman, the website further explains, "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms."

We are only just starting to feel the greater ramifications of the increasingly siloed world we live in. People's digital bubbles are exposing them to high concentrations of inaccurate information, and it is having an effect not only on how we view the world, but on how we view each other.

So, while mistagging people as gorillas is hurtful, it isn't lethal. However, we shouldn't wait for the first real AI-induced disaster to happen before we start having this conversation.

You can delve deeper into AI and all things data at this year's Big Data Innovation Summit Las Vegas.