With business spending on artificial intelligence (AI) forecast to reach $125 billion by 2025, the rapid growth of AI and machine learning (ML) shows no sign of slowing.
These new technologies can do wonders for cybersecurity, but they also bring with them real risks that we must never underestimate.
On the one hand, AI-based tools can help security teams spot and mitigate cybersecurity threats far more quickly and accurately than human analysts could manage alone. Because AI is designed to mimic and improve upon human intelligence, it can perform the same analysis we can, but at machine speed and scale.
Take the problem of threat detection and response, for example. According to TechRepublic, the average mid-sized company is alerted to over 200,000 cyber events every day. That is far too many for any human team to deal with, meaning some threats will inevitably slip through the cracks.
That’s where AI comes in. It can rapidly analyse the events, pick out the threats and even create and implement a response. Additionally, AI really comes into its own when you consider that the more data it analyses, the better it becomes at spotting threats. So, it could build an accurate picture of employees’ security behaviours, an organisation’s cybersecurity posture and the security of systems, devices and networks, with little or no human input.
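To make the triage idea concrete, here is a minimal sketch of how an AI-assisted pipeline might reduce a day's event feed to a reviewable shortlist. It uses an off-the-shelf anomaly detector (scikit-learn's IsolationForest); the event features, volumes and thresholds are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of ML-based event triage, assuming each security
# event has been reduced to numeric features. The feature set
# (bytes transferred, failed logins, off-hours flag) is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Simulate a daily feed of ~200,000 events: mostly routine traffic,
# plus a handful with unusually large transfers and failed logins.
routine = rng.normal(loc=[5e4, 0.2, 0.1], scale=[2e4, 0.5, 0.3], size=(199_950, 3))
unusual = rng.normal(loc=[5e6, 8.0, 1.0], scale=[1e6, 2.0, 0.1], size=(50, 3))
events = np.vstack([routine, unusual])

# Isolation Forest scores how easily each event can be isolated from
# the rest; rare, extreme events isolate quickly and score low.
detector = IsolationForest(contamination=0.001, random_state=42)
detector.fit(events)
scores = detector.decision_function(events)  # lower = more anomalous

# Hand analysts only the 100 most anomalous events, not the full feed.
shortlist = np.argsort(scores)[:100]
print(f"{len(events):,} events reduced to {len(shortlist)} for human review")
```

The point of the sketch is the reduction ratio rather than the particular model: the detector turns a feed no human team could read into a shortlist a human team can, and retraining it on each day's data is what 'getting better at spotting threats as it analyses more' means in practice.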
On the other hand, as is always the case with new technologies, what can be utilised by cybersecurity professionals can also be exploited by cybercriminals.
In its ‘Smart cyber: How AI can help manage cyber risk’ white paper, Deloitte posited that smart cyber is a spectrum. It begins with robotic process automation, evolves to cognitive automation and then, finally, to full AI.
When AI technology was in its infancy, cybercriminals stood at the beginning of this spectrum – their attacks were designed simply to copy human actions, which made them easier for cybersecurity teams to spot and prevent. As the technology has matured, however, criminals have been able to move along the spectrum towards cognitive automation and full AI.
This means that cybercriminals now have the power to create ever more sophisticated attacks, introducing new threats and expanding the threat landscape.
This hasn’t gone unnoticed by cybersecurity professionals. According to a report from Forrester Consulting and Darktrace, 88% of decision-makers in the security industry think ‘offensive AI’ is coming, and two-thirds expect AI to lead to new attacks ‘that no human could envision’.
Cybercriminals can use AI in two ways – to create an attack and to commit an attack. The possibilities are almost endless, but one of the most alarming is using AI to create highly tailored attacks that can be operated at scale.
Consider a traditional phishing attack versus a sophisticated business email compromise (BEC) attack. A phishing attack can be sent to the masses, but its weakness is that it is not tailored to the recipient. With BEC, the opposite is true: it is tailored to the recipient, but that tailoring takes time and research, so it can only be aimed at a small number of targets.
With AI, there is no longer the need to pick between the two. If the technology can learn from, predict, and accurately mimic human behaviour, then sophisticated mass attacks become a frightening reality.
The really big issue with the use of AI in cybercrime is that attacks can be continually improved. With each success and failure, the attack methods become smarter, making them more difficult to detect and stop.
Of course, this has always been the case with cybercrime. Criminals learn from their attacks and come back stronger. The difference now, though, is that this learning experience happens much more quickly, making it harder for cybersecurity professionals to predict and prevent attacks.
As the World Economic Forum puts it, ‘only AI can play AI at its own game’. And so we find ourselves in a situation where AI is fighting AI. It may not look like the sci-fi vision of warring machines, but it is extremely dangerous nonetheless.
With cybercriminals increasingly using AI to inform and launch their attacks, organisations need to be able to respond with the same speed and intelligence, and they need AI to do that.
Are you using AI to improve your cybersecurity resilience? Get involved in the conversation over on Twitter by tagging @TheSecurityCo and using the hashtag #AIversusAI.