Obstacles To Machine Learning Adoption

Machine learning is an exciting technology, but there are some hurdles to jump

Machine learning is being touted as a solution to some of the world’s most pressing problems. In agriculture it is already helping poorer regions with food production, while in healthcare it is being used to contain the spread of contagious diseases and to discover new cures. And tech giants are investing tremendous amounts to ensure that its potential is realized. Facebook, Google, Microsoft, and Baidu have spent in excess of $8.5 billion beefing up their AI talent, according to Forbes, while Amazon spends $228 million a year just to find people to run Alexa. To say that the technology is well on its way would appear, at least on the surface, to be an understatement.

However, not everyone is convinced, and investment has yet to translate into widespread adoption. Indeed, in one recent study from Belatrix Software, ‘Powering the Adoption of Machine Learning’, just 18% of the companies surveyed said they had already started a machine learning initiative in their organizations, 40% said they were investigating the technology but had not yet started, and 43% said they had no plans to start at all.

The reasons for this are varied and complex. Firstly, many people simply fear AI, with Hollywood particularly responsible for stoking fears that killer robots are set to wipe us out. This fear isn’t reserved for tinfoil-hat-wearing basement dwellers, though. Even the likes of Stephen Hawking and Tesla founder Elon Musk have urged caution, with Musk recently arguing publicly for proactive regulation of AI. He claimed that ‘by the time we are reactive in AI regulation, it’s too late. Normally the way regulations are set up is when a bunch of bad things happens, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry… It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization.’

Fear of triggering the apocalypse aside, for many there is the more pressing concern that the technology cannot yet be trusted enough to turn tasks over to it. In a recent survey of 1,600 senior managers by IT services specialist Infosys Ltd., 54% said that the biggest challenge to adopting AI remains ‘employee fear of change’. This ultimately boils down to whether or not people believe the technology is mature enough, and the common perception is clearly that it is still new, untested, and therefore risky. You could argue that this belief is the result of a lack of education, or of people claiming it’s not ready for fear it will render their own jobs redundant, but many experts agree that it is not yet mature. Nikhil Garg, Software Engineering Manager at Quora, for one, told us recently that ‘I think most would agree that the single biggest bottleneck for all machine learning is software engineering. We all collectively in the tech industry are still figuring out the best practices, tools, abstractions, and systems that can enable large organizations to innovate in ML at a huge data scale.’

It is not necessarily that machine learning isn’t ready, though. As with any nascent technology, there is a lack of understanding and skills when it comes to knowing both where and how to apply it. Some even argue that there simply aren’t as many use cases as you would think. David Linthicum, for one, recently wrote on infoworld.com that, ‘Machine learning is valuable only for use cases that benefit from dynamic learning - and there are not many of those. The problem is if you have a hammer, everything looks like a nail. Vendors pushing machine learning cloud services say it's a good fit for many applications that shouldn't use it at all. As a result, the technology will be over-applied and misused, wasting enterprise resources.’ And when organizations have found a worthy application, there is then a shortage of qualified data scientists to contend with. The pool to draw from is not large, which leads to serious problems, especially when it comes to collecting the high-quality data necessary to train and trial machine learning algorithms. Jérôme Selles, Director of Data Science at Turo, says that, ‘Depending on the quality of the data that is being used, automating the learning loop can be a challenge and, today, requires manual supervision. A good illustration of that is what happened with the Microsoft chatbot Tay that became racist within 24 hours. For Machine Learning to achieve its own potential, the learning process needs to be kept under control and values need to be respected. Data quality for the models is as important as education values in our society and we need more automated and systematic ways to make this happen.’
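
To make Selles’ point about keeping the learning loop under control more concrete, here is a minimal sketch in Python of the kind of data-quality gate he describes: incoming examples are screened before they reach the training set, and anything suspect is held back for human review. The blocklist, threshold, and function names are hypothetical assumptions for illustration, not any company’s actual pipeline.

```python
# A minimal, hypothetical data-quality gate: screen user-generated text
# before it is fed back into a training loop. The blocklist, threshold,
# and function names are illustrative assumptions, not a real pipeline.

BLOCKLIST = {"offensive_term_a", "offensive_term_b"}  # hypothetical terms
MIN_TOKENS = 3  # discard degenerate, too-short inputs

def is_clean(example: str) -> bool:
    """Return True only if the example passes basic quality checks."""
    tokens = example.lower().split()
    if len(tokens) < MIN_TOKENS:
        return False
    return not any(token in BLOCKLIST for token in tokens)

def filter_training_batch(batch: list[str]) -> list[str]:
    """Keep examples that pass the gate; hold the rest for human review."""
    clean = [example for example in batch if is_clean(example)]
    flagged = len(batch) - len(clean)
    if flagged:
        print(f"{flagged} example(s) held back for manual review")
    return clean

# Usage: screen a batch of incoming messages before retraining.
batch = ["what a lovely day out there", "offensive_term_a to you", "hi"]
print(filter_training_batch(batch))
```

Even a crude gate like this makes the trade-off visible: the stricter the checks, the more data is discarded or queued for costly human review, which is exactly why the manual supervision Selles mentions is still hard to automate away.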

The difficulties with the data are many. Most companies undertaking machine learning projects already own and store vast quantities of data, but this data is often siloed and requires aggregating, which is a lengthy and difficult process that few are resourced for. This data also needs to be secure, and there are many regulations in place that hold organizations back from using personal or sensitive data to train their machine learning algorithms, particularly in finance and healthcare. We saw this recently in the UK, when the senior data protection adviser to the NHS deemed the access DeepMind was given to the London Royal Free NHS Trust’s data to be ‘legally inappropriate’. Such cases are likely to put organizations off embarking on similar projects in the future, and we need to think carefully as a society about what we are willing to allow if we are going to push the technology forward.
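
To give a rough sense of what that aggregation step involves, the sketch below joins two invented departmental extracts on a shared key and strips directly identifying fields before the result could be used for training. The silos, column names, and values are assumptions made for illustration, not a real schema.

```python
import pandas as pd

# Two hypothetical silos: a CRM extract and a billing extract.
# All names, columns, and values are invented for illustration.
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "name": ["A. Smith", "B. Jones", "C. Patel"],  # directly identifying
    "segment": ["retail", "retail", "business"],
})
billing = pd.DataFrame({
    "customer_id": [1, 2, 4],
    "monthly_spend": [42.0, 17.5, 103.2],
})

# Aggregate the silos on the shared key. In practice this is where
# mismatched IDs, duplicates, and conflicting schemas surface.
merged = crm.merge(billing, on="customer_id", how="inner")

# Strip directly identifying fields before anything reaches a training
# job, in line with the data-protection concerns discussed above.
training_view = merged.drop(columns=["name"])
print(training_view)
```

Even this toy join loses a customer who appears in only one silo, hinting at why reconciling real enterprise silos is the lengthy, resource-hungry process described above.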

Finally, companies need to be aware that they will not necessarily get immediate results. With such a hyped technology, businesses are always going to expect quick wins, particularly when they have invested so heavily, but there is a learning curve. Selles notes that, ‘When it comes to applications of machine learning, the expectations are usually very high. The full lifecycle of a machine learning project is not necessarily well understood and that can drive disillusion within an organization and for the users.’ This disillusion is likely to cause companies to abandon their projects and leave them reluctant to look at the technology again, but it shouldn’t. If companies are going to realize the potential of machine learning, they need to change their mindsets, skill sets, and infrastructure. They need to understand where it should be applied and ensure that they have people in place trained to implement it.
