Artificial Intelligence (AI) isn’t going anywhere anytime soon. With 20% of the C-suite already using machine learning and 41% of consumers believing that AI will improve their lives, wide-scale adoption is imminent across every industry – and cybersecurity is no exception. A lot has changed in the cyber landscape over the past few years, and AI is being pushed to the forefront of conversations. Its ability to aid the cybersecurity industry is increasingly being debated; some argue it has the potential to revolutionize cybersecurity, whilst others insist that the drawbacks outweigh the benefits.
With several issues facing the current cybersecurity landscape – a disappearing IT perimeter, a widening skills gap, increasingly sophisticated cyber attacks and data breaches continuing to hit headlines – a remedy is needed. The nature of stolen data has also changed: CVV and passport numbers are now being compromised, and, coupled with regulations such as GDPR, organizations are facing a minefield.
Research shows that 60% of respondents believe AI can find attacks before they do damage. But is AI the answer to the never-ending cybersecurity problems facing organizations today?
The cost-benefit conundrum
AI can be deployed by both sides – by the attackers and the defenders. It has a number of benefits, such as the ability to learn from and adapt to its environment and the threat landscape. If deployed correctly, AI could consistently collect intelligence about new threats, attempted attacks, successful data breaches, and blocked or failed attacks, and learn from it all, fulfilling its purpose of defending the digital assets of an organization. By immediately reacting to attempted breaches, mitigating and addressing the threat, cybersecurity could truly reach the next level as the technology would be constantly learning to detect and protect.
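To make the "constantly learning" idea concrete, here is a minimal sketch of a defensive model that updates itself from a stream of labelled events. All names (`observe`, `threat_score`, the example IP addresses) are illustrative assumptions, not a specific product's API; real systems would use far richer features than a single indicator.

```python
from collections import defaultdict

# Running tallies of how often each indicator (e.g. a source IP or file hash)
# has appeared in confirmed attacks versus benign traffic.
counts = defaultdict(lambda: {"malicious": 0, "benign": 0})

def observe(indicator, was_attack):
    """Update the model with one new labelled event (blocked, failed or successful attack)."""
    counts[indicator]["malicious" if was_attack else "benign"] += 1

def threat_score(indicator):
    """Fraction of past sightings that were malicious, with Laplace smoothing
    so unseen indicators get a neutral score of 0.5."""
    c = counts[indicator]
    return (c["malicious"] + 1) / (c["malicious"] + c["benign"] + 2)

# Feed in a stream of past events as they are observed.
for ip, attack in [("203.0.113.9", True), ("203.0.113.9", True),
                   ("198.51.100.4", False), ("198.51.100.4", False)]:
    observe(ip, attack)

print(threat_score("203.0.113.9"))   # high: seen only in attacks
print(threat_score("198.51.100.4"))  # low: seen only in benign traffic
```

The point of the sketch is the feedback loop: every new observation immediately changes the score the defender assigns next time, which is what distinguishes this approach from a static rule set.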
Additionally, AI technology can pick up abnormalities within an organization’s network and flag them faster than a member of the cybersecurity or IT team could; AI’s ability to understand ‘normal’ behaviour would allow it to bring attention to suspicious or abnormal user or device activity that may be malicious.
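The simplest version of this "learn what normal looks like" idea is statistical baselining. The sketch below, a toy example rather than a production detector, flags an activity count whose z-score against the user's historical baseline exceeds a threshold; the metric (daily file downloads) and the threshold of 3 standard deviations are assumptions for illustration.

```python
from statistics import mean, stdev

def flag_anomaly(history, current, threshold=3.0):
    """Return True if `current` deviates sharply from the learned baseline."""
    mu = mean(history)
    sigma = stdev(history)
    # A z-score above the threshold marks the observation as abnormal.
    z = abs(current - mu) / sigma if sigma else 0.0
    return z > threshold

# Baseline: a user's typical number of daily file downloads.
baseline = [10, 12, 9, 11, 10, 13, 8, 12, 11, 10]
print(flag_anomaly(baseline, 11))   # within normal range -> not flagged
print(flag_anomaly(baseline, 250))  # sudden spike -> flagged for review
```

In practice, systems track many such signals per user and device and combine them, but the underlying mechanism – compare current behaviour against a learned baseline and surface the outliers – is the same.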
As with most new technologies, for each positive there is an equal negative. AI could be configured by hackers to learn the specific defenses and tools that it runs up against, paving the way for larger and more successful data breaches. Viruses could be created to host this type of AI, producing malware that can bypass even more advanced security implementations. This approach would likely be favored by hackers as they don’t even need to tamper with the data itself – they could work out the features of the code a model is using and mirror them with their own. In this case, the tables would be turned, and organizations could find themselves in sticky situations if they can’t keep up with hackers.
As attack surfaces expand and hackers become more sophisticated, cybersecurity strategies must evolve to keep up. AI itself contributes to this expanding attack surface, so when it comes to deployment, the benefits must be weighed against the potential negatives. A robust, defense-in-depth Information Assurance strategy is still needed as the foundation of any defense designed to keep data safe.
What are your thoughts on AI? Get in touch to discuss your cybersecurity strategy with our team.