The tendency of certain vested interests to make ludicrous claims has obscured machine learning's real potential in recent years, making it increasingly difficult to identify genuine, working applications. But while cutting through the marketing hype is a real challenge, the benefits of finding a product that actually works are more than worthwhile. Machine learning has the potential to be faster, more efficient, and more accurate than humans in a vast array of areas. Indeed, it is already defeating humans at games like chess, Jeopardy, and most recently poker, which is perhaps the most exciting development as it is the first example of the technology winning a game based on incomplete information.
One area where machine learning is seeing significant hype - and some of the boldest claims - is cybersecurity. However, are organizations ready to turn over control of their (and our) data to machines? Is it even really a good idea?
Cybersecurity is one of the most pressing problems facing enterprises today. According to research by Accenture, the average organization faces 106 targeted cyber-attacks per year, with one in three of those attacks resulting in a security breach. And the issue of cybercrime is not going away anytime soon. If anything, the perpetrators are evolving too rapidly for companies to keep up. A recent Tripwire survey found that 80% of security professionals are more concerned about cybersecurity in 2017 than they were in 2016, and many do not believe their organization is capable of a suitable response, with just 60% of respondents confident that theirs could implement fundamental security measures.
There are many reasons organizations are failing in their cybersecurity efforts. These range from a lack of gender diversity to - perhaps not coincidentally - a skills gap in the workforce. Frost & Sullivan estimate that by 2022 there will be a shortfall of 1.8 million people in the cybersecurity workforce. Organizations that have managed to find employees don't appear to have much faith in them either: just 42% of respondents to a Deloitte survey considered their employer extremely or very effective at managing cybersecurity risk.
Machine learning is increasingly being seen as the solution, dealing - or at least appearing to deal - with a number of the problems organizations are having implementing their cybersecurity initiatives. Former Department of Defense Chief Information Officer Terry Halvorsen believes that 'within the next 18 months, AI will become a key factor in helping human analysts make decisions about what to do.' This point of view is reinforced by significant investment in the field by the world's largest technology companies. MIT has been experimenting with the technology for some years, while IBM has trained its AI-based Watson in security protocols and has now made it available to customers. Amazon also recently acquired AI-based cybersecurity company Harvest.ai, which uses algorithms to identify a business's most important documents and intellectual property before combining user behavior analytics with data loss prevention techniques.
Machine learning is superior to conventional IT security measures in a number of ways. It goes some way toward solving the skills gap, but where it really reigns supreme is speed. Breaches often go unnoticed for months at a time, if they are noticed at all. Machine learning tools analyze the network in real time and develop and implement countermeasures immediately. Where conventional methods use fixed algorithms, machine learning is flexible and can adapt to combat dynamically evolving cyberattacks. Nature-inspired AI technologies can now even replicate the biological immune system, detecting and inoculating against intrusions through continuous, dynamic learning, much as living organisms do.
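To make the contrast with fixed algorithms concrete, here is a minimal, hypothetical sketch (not any vendor's actual product) of adaptive anomaly detection: rather than checking traffic against a hard-coded threshold, the detector continuously updates a statistical baseline of "normal" and flags observations that deviate sharply from it. The metric, class name, and z-score threshold are all illustrative assumptions.

```python
import math

class OnlineAnomalyDetector:
    """Toy illustration of adaptive detection: maintain a running
    baseline of a traffic metric (e.g. requests per minute) and flag
    values that deviate sharply from it. The baseline is updated with
    Welford's online mean/variance algorithm, so it adapts as new
    observations arrive - unlike a fixed, hard-coded threshold."""

    def __init__(self, z_threshold=3.0):
        self.n = 0            # observations seen so far
        self.mean = 0.0       # running mean of the metric
        self.m2 = 0.0         # running sum of squared deviations
        self.z_threshold = z_threshold

    def observe(self, value):
        """Return True if `value` looks anomalous, then fold it into
        the baseline so the model keeps learning."""
        anomalous = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(value - self.mean) / std > self.z_threshold:
                anomalous = True
        # Welford update: incorporate the new observation
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return anomalous

detector = OnlineAnomalyDetector()
normal_traffic = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99]
flags = [detector.observe(v) for v in normal_traffic]
spike_flagged = detector.observe(500)  # sudden spike, e.g. an exfiltration burst
```

Production systems use far richer models than a single z-score, but the principle is the same: the definition of "normal" is learned and revised continuously rather than fixed at deployment time.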
The problem is that these benefits are not only available to the good guys. Hilaire Belloc once wrote that ‘Whatever happens, we have got, The Maxim gun, and they have not’. This is not necessarily the case when it comes to machine learning, with open source tools widely available and costs fairly low. Among those to express concerns is Dr Deborah Frincke, head of the Research Directorate (RD) of the US National Security Agency/Central Security Service (NSA/CSS). She has noted that adversarial machine learning is ‘a thing that we're starting to see emerge, a bit, in the wild.’
Frincke may be overestimating the threat, at least for now: the resources available to large companies are likely to give defenders the edge for the time being. But as costs come down, that advantage is unlikely to last; hackers have already proved themselves more than capable of taking down huge multinationals with the bare minimum of equipment. There is also an argument that cybercriminals who rely on machine learning could actually make life easier for cybersecurity professionals. Behind every piece of malware is a person with a specific and very human intent, which machine learning cannot grasp because it still struggles to understand the complexities of human behavior. In a machine vs. machine battle, on the other hand - automated hacking against an automated defense - machine-intelligence countermeasures are likely to have far greater success simply because they think in the same way.
Towards the end of 2016, estimates put the number of new malware samples generated in a single quarter at around 18 million - as many as 200,000 per day. Organizations need to be quicker with their security countermeasures, and machine learning techniques can produce them at a speed that could prevent a repeat of the damage we have seen in recent years. But while the technology is getting there, we need to be careful not to overhype it, and we need to be wary of bad actors attempting to harness it for themselves. Machine learning is a fantastic tool, but it is not perfect, and it should not be treated as if it were.