The artificial intelligence (AI) market is set to grow from more than $7.3bn in 2018 to almost $90bn by 2025, according to Statista, a trend that will impact virtually every aspect of business globally.
Insurance is one industry where AI's applications are virtually limitless: machine learning models can predict scenarios far more efficiently and reliably than people can, while also streamlining and revolutionizing many other processes.
"Insurance executives must understand the factors that will contribute to this change and how AI will reshape claims, distribution, and underwriting and pricing," McKinsey commented. "With this understanding, they can start to build the skills and talent, embrace the emerging technologies, and create the culture and perspective needed to be successful players in the insurance industry of the future".
With this in mind, Innovation Enterprise sat down with Cesar Koirala, director of talent analytics at Liberty Mutual Insurance, the fourth-largest property and casualty insurer in the US, to talk about the insurance industry today and in the future, especially focusing on the impact of AI.
Innovation Enterprise (IE): What innovations have most significantly disrupted data and analytics in insurance over the past two years?
Cesar Koirala (CK): I would say that machine learning, natural language processing (NLP) and the Internet of Things (IoT) have all substantially impacted analytics in insurance. That should not be a surprise because this has been a trend in other industries as well.
IE: How will the rise of AI change your industry over the coming years?
CK: Artificial intelligence (AI) is a pretty loaded term and different people seem to use it to mean different things.
To avoid confusion, I will assume that AI is the state where machines perform our tasks in efficient and human-like ways, whereby chatbots are indistinguishable from real customer representatives; driverless cars make lawful, ethical decisions when needed; and machine-recommended movies and restaurants authentically match your tastes. I believe we are headed in that direction using machine learning, NLP, IoT and other technologies such as computer vision and robotics.
We are incrementally achieving more automation and more personalization in everything. So, our customers, our employees and society as a whole have gotten used to such automation and personalization. This phenomenon is setting the tone for the products and services in the industry in the form of IoT-enabled, personalized insurance policies, faster claims settlements powered by machine learning and, in my case, personalized and prompt HR tools and services.
IE: How about your role? How do you see the rise of AI impacting it?
CK: I have two major observations there.
The first one relates to the change in knowledge expectations that the business has for data science managers like me. We are not only expected to champion business context and value, but also to understand the nuances of emerging technologies and advanced algorithms. This is crucial for identifying and removing the disconnect between business and data science teams.
The second change relates to the shift in responsibilities. Automation resulting from technological advancements is freeing up our time from mundane, repetitive tasks, allowing us to focus on strategic projects and meaningful human interactions.
IE: Will machine learning have the capacity to make smarter people decisions, or will there always be a need for the "human" in human resources?
CK: There is no doubt that machine learning has presented HR with a wonderful opportunity to go beyond descriptive analytics and reporting.
Machine learning and other advanced technologies are already being leveraged in several HR functional areas, such as improving recruitment processes, measuring employee engagement, reducing employee turnover, and personalizing learning and development.
However, we should always exercise caution when operationalizing machine learning results. As of now, machine learning algorithms generate insights based on the data that we feed them. This means they are only as good as the data they rely on, and it is possible for that data to be incomplete, inconsistent or biased, resulting in flawed insights. So, for ethical and legal reasons, I would say there should always be a "human" in human resources, examining and validating the insights generated by machine learning models.
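To illustrate the kind of validation Koirala describes, the sketch below applies the "four-fifths rule," a common adverse-impact heuristic used in HR, to a model's hypothetical hiring recommendations. The data and function names are invented for this example; they are not from Liberty Mutual or any specific tool.

```python
# Hypothetical illustration: checking a model's hiring recommendations
# against the "four-fifths rule," a common adverse-impact heuristic.
# All data below is invented for the sketch.

def selection_rate(recommended, group):
    """Fraction of candidates in `group` that the model recommended."""
    hits = [r for r, g in recommended if g == group]
    return sum(hits) / len(hits)

# Each tuple: (1 if the model recommended the candidate, demographic group)
recommendations = [
    (1, "A"), (1, "A"), (1, "A"), (0, "A"),   # group A: 3 of 4 recommended
    (1, "B"), (0, "B"), (0, "B"), (0, "B"),   # group B: 1 of 4 recommended
]

rate_a = selection_rate(recommendations, "A")  # 0.75
rate_b = selection_rate(recommendations, "B")  # 0.25
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# Under the four-fifths rule, a ratio below 0.8 flags possible
# adverse impact and warrants human review of the model's output.
print(f"impact ratio: {impact_ratio:.2f}")  # 0.33 -> flag for review
```

A check like this does not prove or disprove bias on its own; it simply surfaces a disparity that a human reviewer should examine alongside the model's training data and features.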