The rise of AI is already providing solutions to some of the world’s most pressing problems. In agriculture, AI is helping poorer regions with food production. In healthcare, it is being used to track the spread of contagious diseases and to discover new cures. In business, companies such as Samsung are seeing massive cost savings from the implementation of ‘lights-out’ factories and the automation of other tasks.
However, while few are going to quibble when it provides such benefits, many are also expressing concerns about its potential impact: whether it will leave any jobs for us, or whether we will all end up as pets to robots. And these concerns are not restricted to crackpots; some of the world’s brightest minds have warned against its dangers. Stephen Hawking has argued that, ‘The development of full artificial intelligence could spell the end of the human race,’ and he is not alone. Apple co-founder Steve Wozniak believes that, ‘If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently,’ while Elon Musk has noted that ‘With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn't work out.’
A recent survey of about 1,600 senior managers by IT services specialist Infosys Ltd. found that many were in agreement, with 54% saying that the biggest challenge to adopting AI remains ‘employee fear of change’. The same survey found that 52% believed enterprises with a mature AI strategy were less likely to encounter employee fears as a barrier to adoption. It would be easy to say that education is the answer and that ignorance about the technology is the root cause of the fear that AI will be our doom. Fear of its potential will always hold back innovation and investment, which could mean some life-saving applications are missed. Equally, however, it would be a brave person who called Stephen Hawking, Elon Musk, and Steve Wozniak ‘ignorant’ about anything to do with tech.
We asked four experts what they thought were the dangers of machine-based innovation.
Ashish Rastogi, Senior Data Scientist at Netflix
There are a lot of open questions when it comes to machine-based innovation. Explaining the predictions of machine learning models remains an area of active research. Ensuring that machine learning models are ‘fair’, in the sense that they do not use predictors that are (or are correlated with) protected categories, is extremely important, and will likely have a massive impact on how these techniques permeate the insurance and healthcare industries, to name just two. I'm personally very interested to see how we trade off model accuracy against these social concerns when using the models in practice.
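To make the proxy-predictor concern concrete, one simple (and admittedly crude) screening step is to measure how strongly each candidate feature correlates with a protected attribute before training. The sketch below is purely illustrative: the feature names, data, and threshold are hypothetical, not from the interview, and real fairness audits go well beyond pairwise correlation.

```python
# Hypothetical proxy-variable check: flag features whose correlation with a
# protected attribute exceeds a threshold, so they can be reviewed before
# a model is trained on them. Data and threshold are illustrative only.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def flag_proxies(features, protected, threshold=0.7):
    """Return names of features strongly correlated with the protected column."""
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) >= threshold]

# Toy example: 'zip_code_group' tracks the protected attribute closely,
# while 'tenure_years' does not.
protected = [0, 0, 1, 1, 0, 1, 0, 1]
features = {
    "zip_code_group": [0, 0, 1, 1, 0, 1, 1, 1],
    "tenure_years":   [2, 5, 3, 8, 1, 4, 6, 2],
}
print(flag_proxies(features, protected))
```

Flagged features are not automatically discarded; the point is to surface them for the human review and accuracy-versus-fairness trade-off Rastogi describes.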
Jay Barua, Vice President at GoNoodle
Depending solely upon deep machine learning for innovation raises the question of what is right or wrong. Will machines turn on us, as they did in Terminator 3: Rise of the Machines? How can we control machine-based innovation effectively?
Cameran Hetrick, Senior Director of Analytics & Data Science at ThredUp
We need to manage anything machine-based to ensure that it truly meets all the goals of our organization. That means constantly monitoring the results of these innovations: making sure each model does what we intended, that the world has stayed similar enough for the model to still apply, and that the needs of the business haven't changed, so that these models still make sense for the world we are living in.
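The ongoing monitoring Hetrick describes can be sketched very simply: record the model's accuracy at deployment, then periodically compare recent accuracy against that baseline and flag the model for review when performance slips. The function names, numbers, and tolerance below are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative monitoring sketch: alert when a deployed model's recent
# accuracy falls more than `tolerance` below its accuracy at launch.
# All values here are made up for demonstration.

def accuracy(predictions, actuals):
    """Fraction of predictions that match the observed outcomes."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(actuals)

def needs_review(baseline_acc, recent_preds, recent_actuals, tolerance=0.05):
    """True if recent accuracy has dropped more than `tolerance` below baseline."""
    return baseline_acc - accuracy(recent_preds, recent_actuals) > tolerance

baseline = 0.92  # accuracy measured when the model shipped (hypothetical)
recent_preds   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
recent_actuals = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]
print(needs_review(baseline, recent_preds, recent_actuals))
```

In practice this check would run on a schedule against fresh labeled data; a triggered alert is the cue to ask Hetrick's questions about whether the world, or the business, has changed underneath the model.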
Khalifeh Al Jadda, Ph.D., Lead Data Scientist at CareerBuilder
It will impact job availability: many people will lose their jobs to automation. Another crucial danger is loss of privacy, since any machine-based innovation is hackable.