Evidence of machine learning's potential to change the world is now everywhere, from lights-out factories to Netflix's recommendation engine. However, adoption is still not reaching the levels many predicted. In one SAP survey of 3,100 companies, just 7% said they are investing in the technology, compared with 50% of the companies the SAP Center for Business Insight defines as 'leaders'. A number of other surveys have yielded similar results, including Belatrix Software's ‘Powering the Adoption of Machine Learning’, in which only 18% of companies said they had already started a machine learning initiative, 40% said they were investigating it but had not yet started, and 43% said they had no plans to start one at all.
This has serious implications. Competitors who adopt earlier can gain a competitive edge that may prove impossible to recover from, leaving you constantly on the back foot, struggling to catch up. It also becomes harder to attract the best talent when you finally do turn to the technology, because who wants to work for a Luddite? However, you also need to be careful not to rush in blindly. The hype around machine learning, and AI in general, makes it tempting to crowbar the technology in anywhere, with no sense of how to actually implement it successfully. This has happened before with AI. In the 1980s, inflated expectations and subsequent disillusionment led to the so-called AI Winter, bringing investment to a grinding halt and pushing research ‘underground’. If this were to happen again, it could prove devastating for the technology.
We have outlined five questions you need to ask yourself before you set about adopting machine learning.
Do You Really Need It?
As with any new technology, there is a sense of keeping up with the Joneses, of adoption for adoption's sake. This can divert valuable resources away from areas and projects that could really drive growth. If the Joneses get a fancy new lawnmower, there's no point blowing all your money on one too if you don't have a garden. Look out the window first and check there's grass to cut. Often, you'll find there isn't any. As David Linthicum recently wrote on infoworld.com, ‘Machine learning is valuable only for use cases that benefit from dynamic learning - and there are not many of those. The problem is if you have a hammer, everything looks like a nail. Vendors pushing machine learning cloud services say it's a good fit for many applications that shouldn't use it at all. As a result, the technology will be over-applied and misused, wasting enterprise resources.’
Who Takes Ownership?
Machine learning requires board-level representation. This needs to be someone who sees its potential and believes in the project. They must have vision, openness, the ability to change, and a strong understanding of the business. They must also have, at the very least, a good understanding of the technology.
This could mean that someone already in the company takes charge, such as the Chief Information Officer, Chief Data Officer, or Chief Technology Officer, all of whom have a viable claim to ownership. Alternatively, it could be that an entirely new position exclusively focused on AI technologies is needed, particularly at larger organizations. Chief scientist at Baidu Research and AI guru Andrew Ng argues that a Chief AI Officer (CAIO) can fill a number of functions necessary for ensuring that an organization is well positioned for adoption. They can take a view across the entire company to best understand the technology's potential application in each department and the scale of the implementation challenge, allocating AI expertise and technology from a centralized team according to need. A good CAIO will set out a roadmap for integrating AI with the company's overall strategy and fight to ensure resources are available where required. A CAIO should also develop a deeper knowledge of how AI works, which helps in selecting the right technology and in not being bamboozled by the many salesmen trying to convince you that you have a need you really don't, and that their product will solve it, which often it won't.
Are People Ready?
You need to empower all employees, including management, to know how to develop and use models. This isn't to say that everyone needs to understand concepts like deep and shallow learning within a neural network, but if they have a basic understanding that it can produce better and more accurate results and decisions than gut instinct, they will be both willing and able to use it to its full potential.
This is no easy feat. In a recent survey of 1,600 senior managers by IT services specialist Infosys Ltd., 54% said that the biggest challenge to adopting AI remains ‘employee fear of change’. This may be because of a pervading fear that it will make certain jobs irrelevant, but equally the technology still needs to prove it actually works, or there will always be reluctance to use it for everyday tasks. You need to teach people to engage with the machines, a challenge that may not even be fully solvable. Julie Shah, an associate professor of aeronautics at MIT, says, 'What people don’t talk about is the integration problem. Even if you can develop the system to do very focused, individual tasks for what people are doing today, as long as you can’t entirely remove the person from the process, you have a new problem that arises — which is coordinating the work of, or even communication between people and these AI systems. And that interaction problem is still a very difficult problem for us, and it’s currently unsolved.'
The hope is that, in time, people will become accustomed to working alongside machines. For the moment, perhaps the best that companies can do is be transparent about how machine learning will be integrated and what it means for jobs, and provide training for people on how to engage with their new robot colleagues.
Can You Attract The Necessary Talent?
While machine learning implementation is a holistic endeavor and people can be trained to a degree, it still requires at least one person with a data science background or experience with machine learning. Such people are, unfortunately, thin on the ground. According to the job search website Indeed.com, June 2015 to June 2017 saw a 500% rise in the number of job postings in the field of AI. Of these postings, 61% were for machine learning engineers, 10% were for data scientists, and just 3% were for software developers. According to a survey from Tech Pro Research, though, just 28% of companies have some experience with AI or machine learning, and more than 40% said their enterprise IT personnel don’t have the skills required to implement and support it. Much of the talent is being swept up by major tech companies such as Google, often straight from academia, which further stifles the pipeline. Organizations need to be wary of this and act in ways that send more talent into the pipeline, not less, while making themselves an attractive proposition with interesting challenges and competitive salaries.
Are You Mature Enough In Your Data Efforts?
You need data to train your algorithms - lots of it. Furthermore, it needs to be of sufficient quality. Organizations that lack maturity in their data capabilities are simply not going to succeed in their machine learning efforts. You need to be collecting everything, and the data needs to be cleaned. Mistakes in the training data infect a system like urine in a swimming pool, polluting all the results and rendering any insights untrustworthy. They lead to wrong solutions, bad decision making, and potentially unethical AI. The data also needs to be removed from silos and aggregated, a lengthy and difficult process that few are resourced for. It needs to be secure, too, and you need to be aware of the many regulations that hold organizations back from using personal or sensitive data to train their algorithms, particularly in industries like finance and healthcare. We saw this recently in the UK, when the senior data protection adviser to the NHS deemed the access DeepMind was given to the Royal Free London NHS Trust’s data ‘legally inappropriate’. If you are not equipped to do all this, you're not ready for machine learning.
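To make the cleaning step concrete, here is a minimal sketch in Python of the kind of basic quality checks a team might run before feeding data into a training pipeline. The field names and the `quality_report` function are purely illustrative, not part of any particular tool; real pipelines would add many more checks (outliers, type validation, label consistency).

```python
# Minimal sketch of pre-training data quality checks.
# Field names ("age", "income") and this function are illustrative only.

def quality_report(rows, required_fields):
    """Flag missing values and exact-duplicate records before training."""
    # Rows with missing required fields can't be used to train reliably.
    missing = [i for i, row in enumerate(rows)
               if any(row.get(f) in (None, "") for f in required_fields)]

    # Exact duplicates over-weight some examples and bias the model.
    seen, duplicates = set(), []
    for i, row in enumerate(rows):
        key = tuple(sorted(row.items()))
        if key in seen:
            duplicates.append(i)
        else:
            seen.add(key)

    return {"missing": missing, "duplicates": duplicates}

rows = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},   # missing value
    {"age": 34, "income": 52000},     # exact duplicate
]
print(quality_report(rows, required_fields=["age", "income"]))
# {'missing': [1], 'duplicates': [2]}
```

Checks like these are cheap to automate, and running them continuously is one practical marker of the data maturity described above.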