DATAx presents: The biggest problem with AI today

As AI proliferates into every industry, incidents around bias and privacy are growing in frequency, making it harder to ensure the AI revolution is an ethical one.

18 Jan

You do not need to be a data scientist or work for Google to be aware of the ongoing encroachment of AI into every facet of our lives. Kai-Fu Lee, one of the world's leading AI specialists, recently said he believes 40% of all jobs will be displaced by AI in the next 15 years.

"Chauffeurs, truck drivers, anyone who does driving for a living – their jobs will be disrupted more in the 15-25-year timeframe," Kai Ku Lee famously remarked.

However, despite the buzz swirling around AI in every industry, Jay Chakraborty, a director at PwC, adjunct professor and one of the speakers at the 2018 DATAx New York festival, commented on both the sudden and gradual paths AI has taken through society.

"The first time the concept of AI was discussed in a public forum was 1958. Some 50 years later and we are finally talking about the adoption of AI and how we can use it. It's incredibly progressive what we are talking about today, as the applications of AI are everywhere and anywhere. We have IBM Watson trying to treat cancer, we have automated advisors trying to suggest how you should spend your income."

Yet, despite all these positive accomplishments and advancements, the reality is that AI has made some significant and, at least once, near world-ending mistakes throughout history.

"Everyone who is actually a practitioner in this field has got to know of these instances and the many other instances when AI failed or there were negative implications to an AI model breakdown," explained Chakraborty. He listed three key times in history AI failed:

Stanislav Petrov – the man who saved the world

Stanislav Petrov was a lieutenant colonel in the Soviet Air Defense Forces, in charge of Oko, the Soviet Union's early nuclear warning system. One day in September 1983, he was manning the system when the unthinkable appeared to have happened: the US, it seemed, had launched a missile attack on the Soviet Union.

"In summary, he determined the missile warning on the AI defense system was an error. He reached this conclusion because the monitor showed the US had only launched 5 missiles," explained Chakraborty.

"If a country like the US was attacking a country as big as Russia, he felt it wouldn't only send five missiles. That detail caused him to pause and use his rationale, and by making that choice not to blindly follow his directives, he is considered by many as the man who saved the world."

Beauty.AI

In 2016, the team behind Beauty.AI attempted to create an AI capable of judging a beauty pageant.

"It's a nice idea, right? They had all the historical data and set up a supervised learning model. And what was the outcome?" asked Chakraborty.

"About 40 out of the 44 winners it predicted were white, Caucasian females. Only one was black and one or two were from other regions. It was terribly biased to a certain stereotype, rightly or wrongly.

"It's a classic use case for how bias can be built into models," he added.

COMPAS

In 2016, a team of researchers created a risk assessment software called COMPAS to help forecast which criminals were likely to reoffend, and where. The team used the records of more than 10,000 arrests made in Florida to train the model.

"And what happened there?" Chakraborty asked. "Every single time, the AI would send cars to minority dominant areas. And with more police on the streets, what's going to happen? There are naturally going to be more arrests."

All of these occasions (except the phantom missiles), despite their wildly different consequences, were the result of biased datasets.

"We need to think about the long-term consequences of where we are creating today," explained Chakraborty. This is because while the creators of these models may have had good intentions, they did not follow the appropriate guidelines and procedures necessary to create an ethical AI.

"Issues around ethical AI isn't for the masses, it's for people who think ahead and truly believe that what we are creating today has to be good for the mankind and has to be good for society," Chakraborty noted.


PwC 2019 AI predictions

Chakraborty shared with the DATAx New York audience some of the most recent figures PwC had collected on executives' top concerns about AI today. The firm asked 1,001 executives the following question:

Which of the following scenarios around AI do you perceive as a real threat in the next five years?

Primary concerns included:

43%: New privacy threats

Through October 2018, 1,027 data breaches occurred, exposing 57,667,911 records. Source: Identity Theft Resource Center, Data Breach Reports, October 31, 2018

41%: New cyberthreats

Malicious cyber activity cost the US economy between $57bn and $109bn in 2016

Source: Council of Economic Advisors, The Cost of Malicious Cyber Activity to the US Economy, February 2018

34%: New legal liabilities and reputation risk

In 2018, just 48% of people said they trusted US businesses, a decline of 10 percentage points from the previous year

Source: 2018 Edelman Trust Barometer

33%: Too complex to understand or control

200% increase in DARPA XAI funding from 2017 to 2019 (XAI = explainable AI)

Source: Department of Defense Fiscal Year (FY) 2019 Budget Estimates

31%: Unable to meet demand for AI skills

32% year-over-year growth of AI-related job postings

Source: Indeed Hiring Lab, Demand for AI Talent on the Rise, March 2018

31%: US falling behind other countries in AI innovation

China has six times the number of deep learning patent publications compared with the US

Source: CB Insights, Artificial Intelligence Trends to Watch in 2018


Creating the right kind of AI

"We have to make sure whatever we build, it is responsible. But responsible means different things to different people and when it comes to AI I think there are four things we believe it should have," said Chakraborty, who uses the GREAT acronym to help illustrate these points:

Governed

Is your AI compliant with all the regulatory requirements and guidelines applicable to it, such as the Montreal Declaration for Responsible AI or the principles from the Monetary Authority of Singapore?

Reliable

The reliability of your model comes down to how it has been trained to make choices. "You've got to be very careful with your approach," suggested Chakraborty. "Our belief and recommendation, when people are starting large-scale models or ones that are very critical, is to start with supervised learning, and when you're confident, you can probably try an unsupervised approach."
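
As a rough illustration of that progression, the sketch below uses scikit-learn and a stock dataset, chosen purely for convenience: first a supervised baseline whose error rate can actually be measured, then an unsupervised model checked against it.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, adjusted_rand_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: a supervised model with a measurable, labeled error rate.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Step 2: only then try the unsupervised approach, and check how well its
# clusters line up with what the labels say before trusting it on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster/label agreement (ARI):", round(adjusted_rand_score(y, clusters), 2))
```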

Ethical

Chakraborty believes that the sooner large organizations appoint a chief ethics officer for technology, the better. It is vital to ensure that the technical expertise deployed at a firm is adequately combined with its ethical and reputational concerns as a whole, such as its treatment of sensitive data.

Accountable and Transparent

"This has been one of the biggest complaints of AI" said Chakraborty. "Even when you ask the people who made the AI, they can't explain how it works. It can't be a black box which does magic for the rest of us. If it is too complex to understand, how can we control it or stop it from making mistakes?"

Having staff who understand in-house AI systems will become increasingly crucial, and will require a concerted effort to upskill employees and promote organizational change.
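
There are concrete ways to start opening the box. One generic technique, offered here as an illustration rather than anything PwC prescribed, is permutation importance: shuffle each input in turn and see how much the model's held-out accuracy drops, which reveals what the "black box" is actually relying on.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# the larger the drop, the more the model depends on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

If the score collapses when a sensitive or unexpected feature is shuffled, that is exactly the kind of finding a governance review should surface before a model reaches production.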


AI: Integral to business

AI is becoming an integral part of the enterprise, and the algorithms we use to build it are only going to get more complex. Speaking on a pharma panel at DATAx New York, Manoj Vig, head of the clinical data repository and clinical data lake at IQVIA, commented on his fears regarding the technology:

"Right now, we are building first-gen algorithms; we are feeding it data and it's very difficult to understand the nature of the data we are feeding it or why we are feeding it that particular data. Most of these algorithms have the so-called "black box" problem, so there is almost no explanation behind its choices. So, you have your problems with that…

"But now think about your gen two algorithms running on gen one algorithms. How do we really ever control this? How do we know that the data that has been provided to these algorithms are being used in ethical ways? How do we even know we are providing the right data? That really scares me because that is the situation we are creating."

Chakraborty's answer to this worry is simple: "The interesting thing here is these are the things we have talked about: privacy, transparency, accountability. These are all common values to human beings and they are all there in those guidelines. And if you're not following those guidelines, guess what happens? You build a model today, you roll it out in production, and the day after tomorrow or maybe next month it has some massive fluctuation and you get caught.

"One of these bodies will look into it and sooner or later they will come to you and ask how you to show them how you're governing your AI models and if you have nothing. You're probably not in the best position by that point."
