The Future Of Humans And AI In Cyber Security

Google's director of information security has some surprising thoughts on the subject


One of the key uses of AI discussed over the past few years has been its potential for cybersecurity. The number of high-profile cyber attacks has increased at a terrifying rate, with the Equifax hack of 2017 now possibly the largest single data loss in history, and AI has frequently been put forward as the silver bullet that could stop this kind of hacking in the future.

The logic of this is clear: AI can work considerably faster than humans, it can identify anomalous activity on accounts, and it can run 24/7 without tiring. It can also look at everything at once, whereas a human operator can only really concentrate on a couple of areas at a time. This is one reason why hacks often go undetected for months — a breach is frequently found only when a human happens to notice it.

However, according to Heather Adkins, director of information security and privacy and a founding member of Google's security team, the reality is that AI may not be the solution that many have hoped it would be.

Google holds perhaps the most data of any company in the world, processing billions of searches every day, receiving billions more emails, and logging the 5 billion videos watched on YouTube every single day. It also has a strong record on data security, with no major hacks reported in the last 5 years, and only relatively minor ones reported before that. Yet at TechCrunch Disrupt 2017, Adkins revealed that ‘I delete all the love letters from my husband’ — not because she is embarrassed by them, but because she believes they may eventually be hacked. She went on to say that personal ‘stuff’ shouldn’t be included in emails because of the risk of hacking. This is despite Google being one of the leaders both in developing AI and in deploying it to protect the vast amount of data it holds.

It shows that Adkins has little faith in AI replacing humans in the near future, something she said explicitly at the event. Discussing the possibility of AI replacing humans in cybersecurity, she pointed out one of its major drawbacks: AI is fantastic at spotting anomalous behavior, but it throws up so many false positives that distinguishing the real threats from the false alarms is still a job only a human can do. For instance, if somebody tries 20 different variants of a password, is that a user who simply can’t remember it, or a hacker trying to guess it? This is something that an AI system would find almost impossible to work out at the moment.
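The ambiguity Adkins describes can be illustrated with a toy detector. The sketch below (a hypothetical example, not anything Google uses) flags accounts with an unusually high number of failed logins — and shows how a simple rule both raises a false positive on a forgetful user and misses a low-volume attacker, which is why a human still has to triage the alerts.

```python
from collections import Counter

# Hypothetical failed-login log: (account, attempted_password) pairs.
FAILED_ATTEMPTS = [
    # A forgetful user trying 20 variants of their own password.
    ("alice", f"hunter{i}") for i in range(20)
] + [
    # A quiet dictionary attack: only three tries, common passwords.
    ("bob", pw) for pw in ("123456", "password", "qwerty")
]

THRESHOLD = 10  # flag any account with more failures than this


def flag_anomalies(attempts, threshold=THRESHOLD):
    """Flag accounts whose failed-login count exceeds the threshold.

    The rule cannot tell a forgetful user from an attacker: it only
    counts events. Here it flags 'alice' (a false positive) and misses
    'bob' (a real, low-and-slow attack), so every alert still needs a
    human analyst to confirm it.
    """
    counts = Counter(account for account, _ in attempts)
    return sorted(account for account, n in counts.items() if n > threshold)


print(flag_anomalies(FAILED_ATTEMPTS))  # → ['alice']
```

Raising the threshold suppresses the false positive but makes the detector even blinder to patient attackers; lowering it floods analysts with alerts — the trade-off Adkins says keeps humans in the loop.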

Adkins points out that the vast majority of attacks have changed very little since the 1970s, yet AI still fails to notice many of them as they take place. It is often not the attack itself that throws AI off, but the software or algorithms used to mask it: a password generator may try a million different passwords before finding the right one, while disguising those attempts so that the system never registers them.

During the event, Adkins admitted that AI used in this way may be more useful to those instigating attacks than to those defending against them. This is because, regardless of the power of AI and machine learning in their current forms, they still require a significant amount of human input to operate. If a hacker were to exploit a system, it would then be easy to use AI to repeat that same method across millions of sites in minutes. For a defending AI system to identify the technique, by contrast, it would first surface within a batch of thousands of false positives, need to be confirmed by a human, and then be patched across the organization — and perhaps, eventually, across multiple other organizations.

At present, it seems AI has a big role to play in cybersecurity moving forward, but the reality is that the technology cannot yet work independently of humans, which holds it back. As humans point out false positives and other issues, the system learns and adapts to make fewer mistakes, so it is constantly improving. At the same time, however, AI is being adopted by hackers to improve their own tools and make their attacks more likely to break through. With its current capabilities, AI can only recognize patterns it has already encountered, which puts it at a disadvantage when it comes to detecting new threats. For that reason, cybersecurity professionals can feel, for the time being, that their jobs are still safe.
