Artificial intelligence: our best hope for winning the cybersecurity battle
First the bad news: The consensus of experts is that we are losing the battle against cybercriminals. Growth in the size of the attack surface is outstripping organizations’ ability to plug all the potential holes – and criminals only need to penetrate the network successfully once to inflict serious damage.
Now the good news: Relief may be just around the corner. Machine learning presents a breakthrough opportunity to turn the tide against attackers and, if it cannot keep them out entirely, at least limit the damage they can do.
The biggest problem confronting the security community today is a lack of skilled people. The global shortage of cybersecurity professionals remains a major issue. This crisis comes amid an explosion of data that is seeing information volumes in many organizations double every year. As data volumes grow, so does the task of monitoring them to detect threats or theft. People are a far less scalable resource than machines, which is why AI is our best defense.
Security operations centers are already overwhelmed by data, and the flood will only accelerate as the internet of things contributes further growth. We have many ways to capture and store information, but until recently there have been few ways to derive insights from it, particularly as the volume grows. IT organizations have resorted to throwing more tools at the problem – the average enterprise now has 75 different security products – but poor integration prevents them from getting a unified view of all that’s going on inside their infrastructure.
The machine learning difference
Machine learning algorithms excel at processing large amounts of data, and recent breakthroughs in server design have made them even more powerful. Supercomputers built around graphics processing units (GPUs) from companies like Nvidia allow these algorithms to be parallelized and applied to massive databases. While GPUs aren’t as flexible as general-purpose CPUs, they’re extremely fast and designed to work in parallel, which yields almost limitless scalability. Nvidia is contributing technologies such as its NVLink interconnect to make supercomputer-like power available to nearly anyone.
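As a rough illustration of why that parallelism matters, here is a minimal sketch in Python using PyTorch; the 20 log-derived features and the tiny scoring model are hypothetical stand-ins, not a real detection system. The point is simply that the same code scores a large batch of records in one pass on whatever accelerator is available.

```python
import torch
import torch.nn as nn

# Use a GPU if one is present; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy anomaly-scoring model: 20 hypothetical log-derived features in,
# one risk score between 0 and 1 out.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),
).to(device)

# Stand-in for a large batch of log records (100,000 rows x 20 features).
batch = torch.rand(100_000, 20, device=device)

with torch.no_grad():
    scores = model(batch).squeeze(1)     # one score per record, computed as a single vectorized pass
    flagged = int((scores > 0.9).sum())  # records above an arbitrary threshold
print(f"{flagged} of {batch.shape[0]} records flagged for review")
```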
« Machine learning systems start with the same baseline but improve as they continuously iterate through data. With a little help from security administrators, they learn to identify patterns that characterize malicious behavior and to discard benign anomalies. Their analysis gets better and better over time, producing far fewer false alerts and giving administrators more precise targets. »
These advances come at a time when the security community is shifting its focus from prevention to detection and containment. The working assumption is that most major systems have already been penetrated, so the challenge is to isolate attackers before they can do much damage.
That requires sophisticated pattern analysis applied to log data to spot anomalies. Security administrators establish a baseline of “normal” activity, and intrusion detection systems scour network and database logs for unusual patterns that might indicate an intruder.
The problem with conventional intrusion detection is that humans must define what “normal” looks like, and that is exceedingly difficult because traffic is inherently unpredictable. The result is a deluge of false alerts, and many of them end up being ignored, which defeats the purpose of intrusion detection.
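To make that concrete, here is a minimal sketch, assuming scikit-learn and a handful of hypothetical per-session features (bytes transferred, failed logins, distinct ports touched). Rather than asking an administrator to write down what “normal” is, an unsupervised model such as IsolationForest infers the baseline from the traffic itself and flags sessions that deviate from it.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for routine sessions: [bytes transferred, failed logins, distinct ports].
baseline = np.column_stack([
    rng.normal(5_000, 1_500, 1_000),   # typical transfer sizes
    rng.poisson(0.5, 1_000),           # the occasional mistyped password
    rng.integers(1, 6, 1_000),         # a few ports per session
])

# Learn what "normal" looks like directly from the data.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Two new sessions: one routine, one that moves far more data,
# fails many logins and touches many ports.
new_sessions = np.array([
    [5_200, 1, 3],
    [250_000, 40, 120],
])
print(model.predict(new_sessions))   # 1 = consistent with baseline, -1 = flag for review
```

In practice such a model would be retrained or updated continuously as new logs arrive, which is exactly the iterative improvement described above.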
Intelligent intrusion detection is only one way AI changes the security equation. Cognitive systems can monitor external sources such as threat alerts and security blogs, surface the information that is useful to security staff, and warn them about new threats and solutions. Intelligent filters can analyze email messages for signs of phishing, much as machine learning is already applied to spam detection.
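For the email example, a minimal sketch might look like the following, assuming scikit-learn and a toy, hand-labeled corpus; a production filter would learn from millions of messages and many more signals (headers, embedded URLs, sender reputation) than the message text alone.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hand-labeled corpus: 0 = legitimate, 1 = phishing.
emails = [
    "Your invoice for last month is attached",
    "Team meeting moved to 3pm, see the updated agenda",
    "Urgent: verify your account now or it will be suspended",
    "You have won a prize, click this link to claim it",
]
labels = [0, 0, 1, 1]

# TF-IDF turns message text into features; logistic regression learns
# which terms are associated with phishing.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

# Score a new, unseen message (a prediction of 1 means "treat as phishing").
print(classifier.predict(["Please verify your account by clicking this link"]))
```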
Security leapfrog
As promising as these developments are, we can’t afford to become complacent. The bad actors will have access to the same technology and can use it against us. Take voice response systems, for example. Last year, a group of Chinese researchers demonstrated ways to fool voice assistants into executing malicious commands hidden in ordinary human speech and even music. Image and sound manipulation can be used to fool biometric authentication systems. AI can be applied to password-guessing algorithms to make them more accurate. And the same cognitive technology that monitors external sources for threat alerts can also be used to gather data for identity theft.
The risk is that AI could create a game of leapfrog in which white hats and black hats fight to a stalemate using a new set of tools.
The greatest asset we have in fighting cybercrime with AI is cooperation. Machine learning systems can do amazing things on their own, but when federated with other systems they can draw on collective knowledge to a degree that criminals will find difficult to match. IBM’s QRadar Advisor with Watson is just one example of how collaboration is being built into security products. You can expect to see much more activity in this area as the enormous potential of machine learning is realized.