The key problems facing security operations centres today and how AI will help to solve them
The security industry is facing a huge problem, and one which until recently looked intractable: there are not enough cybersecurity operatives in the world's Security Operations Centres (SOCs), and there are too many threats for the average organisation to deal with on a daily basis.
However, there are techniques, drawn from machine learning, a subdomain of Artificial Intelligence (AI), that can automate the work of security analysts in the SOC. Applying these tools alleviates the burden on overloaded business defenders and helps offset the scarcity of the very analysts who are holding these threats at bay.
There are more than three million cybersecurity jobs vacant in North America alone, and the number is much, much larger across the rest of the world. So we face a simple but frightening problem: if we do not develop Artificial Intelligence to accelerate threat identification, automate routine work and support the analysts we do have, we are going to fall ever further behind these massively increasing cyberthreats.
This is going to have potentially catastrophic effects on our industries. We are going to suffer bigger, more destructive breaches, and events like the one we saw here in the US with Equifax are going to become routine. The Equifax breach was simply a matter of not having the resources to deal with the threats coming through its systems.
We have to start using the only serious methodology modern organisations have to protect themselves from a successful cyberattack: give the repetitive, scripted tasks that we perform every day to machines. This acknowledges that machines are, and always will be, better than humans at certain things, such as high-frequency pattern matching.
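To make that concrete, here is a minimal sketch of the sort of scripted, high-frequency pattern matching a machine can run tirelessly across every incoming log line; the indicator patterns and sample logs are illustrative assumptions, not any particular product's rules.

```python
# Minimal sketch: the repetitive matching work best handed to a machine.
# The known-bad patterns and log lines below are illustrative assumptions.
import re

# Hypothetical indicators an analyst would otherwise eyeball all shift long.
KNOWN_BAD = [
    re.compile(r"Failed password for (root|admin)"),
    re.compile(r"\bpowershell(\.exe)? .*-enc\b", re.IGNORECASE),
    re.compile(r"curl .*\|\s*sh"),
]

def scan(lines):
    """Yield (line_no, line) for every line matching a known-bad pattern."""
    for n, line in enumerate(lines, start=1):
        if any(p.search(line) for p in KNOWN_BAD):
            yield n, line

logs = [
    "sshd[812]: Failed password for root from 203.0.113.7",
    "cron[113]: session opened for user backup",
    "proc: powershell.exe -NoProfile -Enc SQBFAFgA...",
]

for n, hit in scan(logs):
    print(f"line {n}: {hit}")
```

A machine applies these checks identically to the millionth line as to the first, which is exactly the consistency point being made here.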
We use AI and machine learning to scan the vast volume of data coming into the SOC and reveal patterns and inconsistencies. If you are looking for these patterns over time, across very large volumes of data, you can only keep up with these methods; a human cannot possibly do this job, but it is perfectly suited to AI.
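As a rough illustration of this kind of machine-scale pattern hunting, the sketch below runs an off-the-shelf anomaly detector (scikit-learn's IsolationForest) over hypothetical per-host log features; the feature names and numbers are assumptions for demonstration, not a description of any vendor's pipeline.

```python
# Minimal sketch: unsupervised anomaly detection over SOC telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-host features extracted from logs:
# [logins_per_hour, failed_login_ratio, bytes_out_mb, distinct_dest_ips]
normal = rng.normal(loc=[5, 0.05, 20, 8], scale=[2, 0.02, 5, 3], size=(500, 4))
suspicious = np.array([[40, 0.6, 900, 120]])  # e.g. brute force + exfiltration
events = np.vstack([normal, suspicious])

# An isolation forest isolates outliers cheaply in high-volume data --
# the high-frequency pattern matching a human cannot sustain over time.
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
scores = model.decision_function(events)  # lower = more anomalous

# Surface the most anomalous events for a human analyst to triage.
worst = np.argsort(scores)[:5]
for i in worst:
    print(f"event {i}: score={scores[i]:.3f}, features={events[i].round(2)}")
```

Note that the model only ranks what a human should look at first; the analyst stays in the loop, which is the point made next.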
Where a SOC analyst can get tired working the overnight shift without enough Red Bull, a machine can simply crunch through the data and find those patterns more quickly; a human just does not have the speed. What AI will not be able to do is take the human out of the loop.
We believe that in our lifetime, AI will not surpass the human ability to be the best defence against cyberattacks, at least the complex ones. We do believe that within the next five to ten years AI will be able to deal with the lower-level automated attacks, including much of today's financial crime. But against the truly targeted, government-grade attacker – where there is a human behind the keyboard – the best defence will remain a human behind the keyboard.
What AI will do is allow us to filter that advanced attacker out of all the noise of the automated, lower-level cybercrime attacks. This is where the industry is really struggling right now: how do I identify what I should care about, versus the malware that I see every Monday?
We had a phrase in cybersecurity that we used years ago: we were looking for the needle in the haystack. Today we have a stack of needles; there are thousands of these threats, and they are hidden in multiple haystacks. The real task is finding the 'sharpest', most dangerous needle in that stack of needles. So the game has totally changed, and that is what AI is going to help us with, as the sketch below illustrates.
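Here is a minimal sketch of surfacing the 'sharpest needle': it scores hypothetical alerts so that rare, severe events on valuable assets outrank the everyday malware. The alert fields and weighting scheme are assumptions for illustration, not any real triage engine's logic.

```python
# Minimal sketch of alert triage: rank the "stack of needles" so the
# sharpest one rises to the top of the analyst queue.
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: int         # 1 (low) .. 10 (critical), from the detection rule
    asset_value: int      # 1 .. 10, importance of the affected host
    daily_frequency: int  # how often this signature fires per day

def triage_score(a: Alert) -> float:
    # Rare, severe alerts on valuable assets outrank the malware
    # "seen every Monday", which fires constantly and scores low.
    rarity = 1.0 / (1 + a.daily_frequency)
    return a.severity * a.asset_value * rarity

alerts = [
    Alert("commodity adware", severity=3, asset_value=2, daily_frequency=400),
    Alert("phishing macro", severity=5, asset_value=4, daily_frequency=60),
    Alert("lateral movement to domain controller", severity=9,
          asset_value=10, daily_frequency=1),
]

for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{triage_score(a):7.2f}  {a.name}")
```

The rarity term is the key design choice: it is what pushes the once-a-day targeted intrusion above the four-hundred-a-day commodity noise.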
We are now in a full-on, 24×7 global cyberweapons arms race, and this is not mere speculation: the WannaCry tool was built from a cyberweapon stolen from the US Government's National Security Agency, then weaponised and used against the public and public companies. This cyberweapon proliferation is the new normal, and we are absolutely certain that government entities are using AI to develop new cyberweapons.
WannaCry was based on a weapon called EternalBlue, a roughly five or six-year-old piece of technology, so imagine what happens when newer versions of these technologies leak out, whether from the US or another government entity; the ramifications could be quite scary. AI is the only game in town that can keep us on top of the current threat landscape.
The author of this blog is Greg Martin, co-founder of Jask.