AI Needs Emotional Intelligence Or We Risk Annihilation

Could a children’s toy be the future?


There is a moment in the 2004 film ‘I, Robot’ in which a robot must choose between saving the life of a young girl and that of Will Smith’s significantly older protagonist. It chooses Will Smith, despite his protests, based on each person’s percentage chance of survival. This may well have been the logical choice, but a human would almost certainly not have made the same one. The difference aptly highlights the problems we may face with an AI that doesn’t abide by our value system, as we enter a world where its intelligence far outstrips our own.

The Turing test for intelligence in computers, which requires the computer to trick a human being into believing it is human too, has long been held up as the standard. In 2001, however, researchers Selmer Bringsjord, Paul Bello, and David Ferrucci proposed the Lovelace Test, which asks a computer to create something, such as a story or poem. At the heart of this is getting AI to display empathy – the ability to understand and share the feelings of another. Only when AI displays empathy will it truly be able to trick a human. More importantly, if machine intelligence does outstrip ours, losing control of an AI without empathy is far more likely to result in human extinction.

Fortunately, emotional intelligence in machines may not be that far away. And it is already being developed in the most unlikely of places - a children’s toy.

In June of this year, JP Morgan led a $52.5 million investment round in San Francisco robotics startup Anki. The company was founded in 2010 by three Carnegie Mellon Robotics Institute graduates, and it first made its name in 2013, when it released its robotic race cars - which Apple CEO Tim Cook liked so much that he invited the startup on stage during Apple’s 2013 developer conference.

Cozmo is its newest creation - only Anki's second overall. It is a $180 vehicular toy robot whose distinguishing feature is, without meaning to sound sappy, how adorable it is.

The robot views the world through a single camera housed in a slot designed to resemble a mouth. The camera, which runs at 15 frames per second, streams its footage to your phone, where the information is processed and sent back to the robot. The programs it runs on are extremely simple, with the software development kit purposefully designed to be basic enough that even the greenest coder can tweak the toy robot's behavior - helping to develop a robot that not only recognizes faces and navigates new environments, but also mimics emotions.

Cozmo’s so-called ‘emotion engine’ is a unique blend of computer-vision science, advanced robotics, deep character development, and machine-learning algorithms. It powers a wide range of states the robot is capable of emulating, such as happy, calm, confident, and excited. It creates these emotions by taking the big five personality traits - openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism - and using them like primary colors, mixing them to replicate a complex range of human-like emotions.
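Anki hasn’t published how this blending actually works, but the primary-colors analogy can be illustrated with a minimal sketch: represent the robot’s current trait levels as weights over the big five, give each emulated state a hypothetical ‘recipe’ of trait weights, and surface the state whose recipe best matches the mix. Every state recipe and number below is an invented assumption for demonstration, not Anki’s code.

```python
# Hypothetical illustration of a trait-blending emotion engine.
# Trait names are the big five; state names are those from the article;
# all weights are invented for demonstration purposes.

# Each emulated state gets a "recipe": weights over the big five traits,
# analogous to mixing a color from primaries. A negative weight means the
# trait suppresses the state (e.g. neuroticism works against calm).
STATE_RECIPES = {
    "happy":     {"extraversion": 0.6, "agreeableness": 0.4},
    "calm":      {"conscientiousness": 0.7, "neuroticism": -0.3},
    "confident": {"extraversion": 0.3, "conscientiousness": 0.5, "neuroticism": -0.2},
    "excited":   {"extraversion": 0.5, "openness": 0.5},
}

def score(trait_levels, recipe):
    """Dot product of the robot's current trait levels with a state recipe."""
    return sum(trait_levels.get(trait, 0.0) * weight
               for trait, weight in recipe.items())

def dominant_state(trait_levels):
    """Return the emulated state whose recipe best matches the trait mix."""
    return max(STATE_RECIPES, key=lambda s: score(trait_levels, STATE_RECIPES[s]))

# A conscientious, mildly extraverted trait mix comes out "calm".
traits = {"openness": 0.2, "conscientiousness": 0.9, "extraversion": 0.3,
          "agreeableness": 0.5, "neuroticism": 0.1}
print(dominant_state(traits))  # -> calm
```

The appeal of such a scheme is that a handful of trait dials yields many distinguishable surface states - the same way three primaries yield a full palette.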

Anki co-founder and president Hanns Tappeiner notes that, ‘in the very beginning, when we started working on the first version of [Anki] Drive, we realized that characters and personalities are a big deal. The problem we had was that cars aren’t the best form factor to bring personalities out.’ So Anki kept the idea under wraps and toiled in secret on using AI and robotics to ‘bring a character to life which you would normally only see in movies.’

Such a robot has clear implications for children’s toys, fulfilling every kid’s fantasy that their favorite toy could be a real friend. It also has tremendous implications for the future of AI, and for what we need to do by the time we reach the singularity - the moment when machine intelligence finally outstrips ours.

There are real dangers here. Many have warned of the risks we face from AI - and not tinfoil-hat-wearing lunatics, but people like Elon Musk and Stephen Hawking. It is imperative that AI works for us and not against us, but the speed at which it will evolve means it will eventually develop, in seconds, technologies we have only dreamed of. The ease with which we could lose control of it is breathtaking, and, as philosopher Nick Bostrom says, ‘it is vital that when AI explodes, it is a controlled explosion.’

Emotional intelligence is a key component of human intelligence, and it could well be the key to controlling that explosion. Just as humans need emotional intelligence, so does AI. It needs an ethical value system in place - preferably our ethical value system. If AI systems are expected to make decisions or act on our behalf, they need to know for themselves what they are and are not allowed to do, and we need to imbue any AI with our values before it becomes more intelligent than us, while it can still be controlled. Futurist and AI scientist Ray Kurzweil said in an interview with Wired that once a machine understands complex natural language it becomes, in effect, conscious - a moment he believes will arrive as soon as 2029, when machines will have full ‘emotional intelligence, being funny, getting the joke, being sexy, being loving, understanding human emotion. That's actually the most complex thing we do. That is what separates computers and humans today. I believe that gap will close by 2029.’ Anki is a very early step along this journey, but while Cozmo may just be a children’s toy, its principles could pave the way for a safer AI.
