Tel Aviv-based voice emotion analytics startup Beyond Verbal has raised $3 million as part of a Series A round, led by China's KuangChi Science Limited, part of the Kuang-Chi group. The news comes amid growing buzz around the use of analytics to gauge emotional responses; rapid advances in the technology in recent years, driven by improvements in AI, carry exciting implications for businesses and for humanity as a whole.
Founded in 2012, Beyond Verbal analyzes patterns in the human voice to understand the speaker’s emotions. The applications are many, including measuring call center effectiveness, market research, and even tracking health conditions over time by gauging mental wellbeing. Take call centers as one example. Calls are captured and categorized before being run through analytics algorithms, yielding insights with clear benefits for any call center: improving relationships between operators and customers to reduce call time, evaluating customer satisfaction to help improve employee performance, and reducing churn by pinpointing customers who show signs of leaving.
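To make the churn-reduction step concrete, here is a minimal, purely hypothetical sketch of how transcribed calls might be scored for at-risk wording. The keyword list, scoring rule, and threshold are illustrative assumptions, not Beyond Verbal's actual method, which works on vocal patterns rather than transcripts.

```python
# Hypothetical sketch: flag transcribed calls whose wording suggests
# churn risk. The signal words and threshold are invented for
# illustration only.

CHURN_SIGNALS = {"cancel", "competitor", "frustrated", "switch", "refund"}

def churn_risk_score(transcript: str) -> float:
    """Fraction of churn-signal words among all words in the transcript."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in CHURN_SIGNALS)
    return hits / len(words)

def flag_at_risk(calls: dict[str, str], threshold: float = 0.05) -> list[str]:
    """Return the customer IDs whose call transcripts score above the threshold."""
    return [cid for cid, text in calls.items()
            if churn_risk_score(text) > threshold]
```

A real system would replace the keyword lookup with trained acoustic and linguistic models, but the shape of the pipeline — capture, score, flag — is the same.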
Matt Matsui, SVP, Product Strategy and Marketing, Calabrio, notes that: ‘Speech analytics is unique, especially compared to other types of analytics. When customers are speaking, they aren't necessarily filtering and forming conversations the way they would in an email, text message, or on social media. Speech analytics give companies access to in-the-moment reactions and sentiments because customers are using both contextual and functional words during conversations. Contextual words are the nouns and verbs used to formulate a sentence, while functional words are used to fill in the gaps. It's actually those functional or ‘throwaway’ words that give insight into someone's subconscious, or his/her true feelings, and brands now have access to that insight. Because of this, speech analytics give companies a more accurate view of tone, context, and overall customer sentiment.’
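The contextual/functional split Matsui describes can be sketched in a few lines. The function-word list below is a small illustrative sample, not a real lexicon; production systems use full stop-word lists and trained models.

```python
# Hypothetical illustration of separating contextual words (nouns, verbs)
# from functional "throwaway" words. The word list is an assumption for
# demonstration purposes.

FUNCTION_WORDS = {"i", "you", "the", "a", "just", "really", "well",
                  "um", "like", "actually", "so", "kind", "of"}

def split_words(utterance: str) -> tuple[list[str], list[str]]:
    """Split an utterance into (contextual, functional) word lists."""
    contextual, functional = [], []
    for word in utterance.lower().split():
        token = word.strip(".,!?")
        (functional if token in FUNCTION_WORDS else contextual).append(token)
    return contextual, functional
```

On an utterance like ‘Well, I just really wanted a refund’, most of the words land in the functional bucket — exactly the stream of fillers that, per Matsui, carries the sentiment signal.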
There has been heavy investment in various other forms of speech analytics in recent years, particularly to boost the larger tech companies' virtual assistants. Apple’s purchase of VocalIQ, for example, should help improve its Siri software significantly. VocalIQ trained its software on queries posed by real people, who asked questions from a list of prompts, teaching it how people actually talk. VocalIQ eventually recorded roughly 10,000 dialogues, and performed significantly better than programs like Siri and Cortana in testing.
However, analyzing emotions will have a far more profound impact in creating the kind of AI most of us hold in our imagination, one that can truly engage with humans and thereby better help us in everyday life. In this vein, it is not just speech analytics helping machines to understand how we are feeling. Another recent Apple acquisition, Emotient, reads the movement of the 43 muscles in a person’s face to decode emotional intent. Marian Bartlett, a founder of Emotient and the company's lead scientist, explained: ‘It takes an image as input, and it scans that image for faces. And as soon as it finds those faces, it then does pattern recognition techniques in order to measure and detect the facial expressions in those faces.’
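The pattern-recognition step Bartlett describes can be illustrated with a toy lookup over facial action units, the muscle-activation codes from the Facial Action Coding System (FACS). The mappings below are a few well-known textbook combinations; Emotient's actual system uses trained classifiers over video, not a table.

```python
# Toy illustration of mapping detected facial-muscle activations
# (FACS action units) to emotion labels. The three patterns are
# standard textbook examples; this is not Emotient's method.

EMOTION_PATTERNS = {
    frozenset({6, 12}): "happiness",    # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",   # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({4, 5, 7, 23}): "anger",  # brow lowerer + upper lid raiser + lid tightener + lip tightener
}

def classify_expression(active_units: set[int]) -> str:
    """Return the emotion whose action-unit pattern is fully present."""
    best, best_overlap = "neutral", 0
    for pattern, emotion in EMOTION_PATTERNS.items():
        overlap = len(pattern & active_units)
        if pattern <= active_units and overlap > best_overlap:
            best, best_overlap = emotion, overlap
    return best
```

The hard part in practice is the upstream step — detecting faces and measuring which of the 43 muscles are moving — which is where the machine-learning investment goes.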
The advantages of having machines that can interpret human emotions in business and healthcare are being realized now, as startups like Beyond Verbal and VocalIQ are demonstrating. What happens next is an exciting step in the evolution of machines, and could completely change the way we see AI.