AI And The Future Of Customer Care

Machines that can replicate emotions could see the end of the call center

A recent study in the journal American Psychologist suggested that the voice is the most telling vehicle for emotion, even more so than facial expressions. This could have a profound impact on AI, which currently focuses largely on analyzing text and images but may have to turn to speech if it is to properly understand emotional responses.

For business, such technology has profound implications. The chatbot has really taken off in recent years. In a 2016 TechEmergence survey of AI executives and startup founders, 37% said virtual agents and chatbots were the AI applications most likely to explode over the next five years, while market research firm TMA Associates estimates that the chatbot and digital assistant market will reach $600 billion by 2020. Apple's Siri, Google's Assistant, and Amazon's Echo are already far more advanced than they were a couple of years ago, and their user bases have grown significantly as a result. Indeed, according to Digitimes, Amazon will ship more than 10 million Echo speakers in 2017 as the device becomes a household staple.

Such chatbots have also become a staple of customer service. According to IBM, 65% of millennials prefer interacting with bots to talking to live agents, and as we grow more accustomed to them, that number will only go up. Automated customer service means an end to waiting on hold and, most likely, happier customers. However, while these systems certainly serve a function, they still have a significant flaw: they cannot understand or replicate human emotions. That matters, because if chatbots are to become truly successful in customer service, someone has to recognize how the caller is feeling and empathize enough to calm them down; otherwise the customer's experience is ruined, leaving little chance of continued custom.

This is, however, changing. There are already machines that can gauge the anger in a person's voice and pass the caller on to a human being who can deal with the issue; such systems can detect when a caller is screaming or swearing profusely and transfer them to specially trained human operators. Google, for one, has software able to pick up the changes in pitch and tone that indicate a person is getting angry. Startups like Beyond Verbal and VocalIQ also have products on the market that analyze patterns in the human voice in an effort to understand the speaker's emotions.
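As a rough illustration of the routing idea described above, the sketch below scores a handful of voice features and escalates a call to a human once the score crosses a threshold. The feature names, weights, and thresholds are invented for illustration only; they are not the output of Google's software or any vendor's actual product.

```python
from dataclasses import dataclass

# Hypothetical per-utterance features that a speech-analytics pipeline might emit.
# All names, thresholds, and weights here are illustrative assumptions.
@dataclass
class VoiceFeatures:
    mean_pitch_hz: float    # average fundamental frequency of the utterance
    pitch_variance: float   # how widely the pitch swings
    loudness_db: float      # relative loudness (more negative = quieter)
    profanity_count: int    # hits against a simple keyword list

def anger_score(f: VoiceFeatures) -> float:
    """Combine raised pitch, pitch swings, loudness, and profanity into a 0-1 score."""
    score = 0.0
    if f.mean_pitch_hz > 220:   # pitch well above a calm speaking baseline
        score += 0.3
    if f.pitch_variance > 50:   # large, agitated swings in pitch
        score += 0.2
    if f.loudness_db > -10:     # close to shouting
        score += 0.3
    score += min(f.profanity_count * 0.1, 0.2)
    return min(score, 1.0)

def route_call(f: VoiceFeatures, escalation_threshold: float = 0.6) -> str:
    """Keep calm callers with the bot; hand angry ones to a trained human operator."""
    return "human_operator" if anger_score(f) >= escalation_threshold else "chatbot"

# A caller who is shouting and swearing gets escalated; a calm one stays with the bot.
irate = VoiceFeatures(mean_pitch_hz=260, pitch_variance=80, loudness_db=-5, profanity_count=3)
calm = VoiceFeatures(mean_pitch_hz=180, pitch_variance=20, loudness_db=-30, profanity_count=0)
print(route_call(irate))  # -> human_operator
print(route_call(calm))   # -> chatbot
```

A real system would learn these weights from labelled calls rather than hard-coding them, but the escalation logic itself stays this simple: score the utterance, compare it against a threshold, and route accordingly.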

There are a number of problems with this sort of technology in its current iteration. For one, once people cotton on to the fact that the AI handling their customer service is bumping callers up the queue based on how irate they sound, companies are likely to receive some truly horrific messages from people with minor complaints looking to be dealt with first, pushing those with more pressing issues to the back of the queue. The problem is even greater for the new wave of mental health and suicide-prevention services that rely on machine learning to triage the most at-risk patients, such as Crisis Text Line, which currently operates exclusively via text but could conceivably move to speech in the future.

However, this is just the beginning. In the future, we could see AI capable of replicating human feelings, able to truly understand the emotions in a customer's voice and respond to them itself.

The Turing test, which requires a computer to convince a human judge that it, too, is human, has long been held up as the gold standard for machine intelligence. In 2001, however, researchers Selmer Bringsjord, Paul Bello, and David Ferrucci proposed another test, the Lovelace Test, which asks a computer to create something original, such as a story or a poem. At the heart of this is getting AI to display empathy: the ability to understand and share the feelings of another. Only when AI displays empathy will it truly be able to trick a human.

Robert Weideman, Executive Vice President and General Manager of Nuance Enterprise, notes: 'When you think of conversational AI, you need to think of a person. Literally, we're trying to mimic a human agent. Consumers also expect conversations they have to flow from one channel to another, so they don't have to backtrack or repeat themselves.' Understanding emotions is key to this, and the technology is not light years away. Futurist and AI scientist Ray Kurzweil said in an interview with Wired that once a machine understands that kind of complex natural language it becomes, in effect, conscious, and that he believes this moment will arrive as soon as 2029, when machines will have full 'emotional intelligence, being funny, getting the joke, being sexy, being loving, understanding human emotion. That's actually the most complex thing we do. That is what separates computers and humans today. I believe that gap will close by 2029.' The ability to understand emotions will clearly have far more profound implications than improved customer service alone, but those looking for a long-term career in the call center may want to think twice before diving in.

