  • 26 October 2021
  • 4 min read

AI versus AI: Is artificial intelligence a shield or a sword for cybersecurity?


With business spending on artificial intelligence (AI) forecast to reach $125 billion by 2025, the rapid growth of AI and machine learning (ML) is not going to stop any time soon.

These new technologies can do wonders for cybersecurity, but they also bring with them real risks that we must never underestimate.


The Shield

On the one hand, AI-based tools can help security teams spot and mitigate cybersecurity threats much more quickly and accurately than humans could ever dream of doing alone. This is because AI is designed to mimic and improve upon human intelligence. It can do what we can do, but better.

Take the problem of threat detection and response, for example. According to TechRepublic, the average mid-sized company is alerted to over 200,000 cyber events every day – far too many for any human team to deal with, meaning some threats will understandably slip through the cracks.

That’s where AI comes in. It can rapidly analyse the events, pick out the threats and even create and implement a response. Additionally, AI really comes into its own when you consider that the more data it analyses, the better it becomes at spotting threats. So, it could build an accurate picture of employees’ security behaviours, an organisation’s cybersecurity posture and the security of systems, devices and networks, with little or no human input.
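To make this concrete, here is a minimal sketch of the kind of unsupervised anomaly detection that underpins many AI-driven threat detection tools. It uses scikit-learn's IsolationForest to learn a baseline from historical events and flag outliers; the three event features (bytes transferred, login hour, failed logins) are hypothetical stand-ins for the much richer telemetry a real system would ingest, not any particular vendor's model.

```python
# A minimal sketch of unsupervised anomaly detection over security events.
# The feature names below are hypothetical; real systems use far richer
# telemetry (network flows, auth logs, endpoint events, and so on).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "normal" events: [bytes transferred (KB), login hour, failed logins]
normal_events = np.column_stack([
    rng.normal(500, 100, 10_000),   # typical transfer sizes
    rng.normal(13, 2, 10_000),      # activity clustered around office hours
    rng.poisson(0.2, 10_000),       # occasional failed logins
])

# Train on historical events; the more data, the better the learned baseline.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_events)

# Score new events: a 3 AM login with a huge transfer and repeated failures
# looks nothing like the baseline and gets flagged for triage.
new_events = np.array([
    [520, 14, 0],     # ordinary working-hours activity
    [9000, 3, 12],    # large transfer at 3 AM with many failed logins
])
labels = model.predict(new_events)  # +1 = normal, -1 = anomaly
for event, label in zip(new_events, labels):
    print(event, "ANOMALY" if label == -1 else "ok")
```

The design point the article is making falls out of the sketch: nothing here is hand-written rules, so as more events are fed in, the learned baseline of "normal" behaviour improves with little or no human input.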


“Enterprises need to build security infrastructure leveraging the power of AI, machine learning, and deep learning to handle the sheer scale of analysis.”

Gaurav Banga, Founder and CEO at Balbix.

Balbix, ‘Using Artificial Intelligence in Cybersecurity’

The Sword

On the other hand, as is always the case with new technologies, what can be utilised by cybersecurity professionals can also be exploited by cybercriminals.

In its ‘Smart cyber: How AI can help manage cyber risk’ white paper, Deloitte posited that smart cyber is a spectrum. It begins with robotic process automation, evolves to cognitive automation and then, finally, to full AI.

When AI technology was in its infancy, cybercriminals stood at the beginning of this spectrum – their attacks were designed simply to copy human actions, which meant they were easier for cybersecurity teams to spot and prevent.

However, cybercriminals have now moved to fully embracing AI, with attacks that mimic human intelligence.

This means that cybercriminals have the power to create ever more sophisticated attacks, introducing new threats and expanding the threat landscape.

This hasn’t gone unnoticed by cybersecurity professionals. According to a report from Forrester Consulting and Darktrace, 88% of decision-makers in the security industry think ‘offensive AI’ is coming, and two-thirds expect AI to lead to new attacks ‘that no human could envision’.


How can AI be exploited for cybercrime?

Cybercriminals can use AI in two ways – to create an attack and to carry it out. Here are just a few of the endless possibilities:

  • They could use AI to create mutating malware that constantly adapts to avoid detection
  • They could target AI threat detection systems directly, learning the warning flags that the systems look for and adapting in response
  • They could use AI to identify weaknesses or vulnerabilities before an organisation even realises that they exist

Cybercriminals could also use AI to create highly tailored attacks that can be operated at scale.

Consider a traditional phishing attack versus a sophisticated business email compromise (BEC) attack. While a phishing attack can be sent to the masses, its weakness is that it is not tailored to the recipient. With BEC, the opposite is true. While it can be tailored to the recipient, this takes a lot of time and research, and so it can only be aimed at a handful of specific recipients.

With AI, there is no longer the need to pick between the two. If the technology can learn from, predict, and accurately mimic human behaviour, then sophisticated mass attacks become a frightening reality.


AI versus AI: What can we do?

The really big issue with the use of AI in cybercrime is that attacks can be continually improved. With each success and failure, the attack methods become smarter, making them more difficult to detect and stop.

Of course, this has always been the case with cybercrime. Criminals learn from their attacks and come back stronger. The difference now, though, is that this learning experience happens much more quickly, making it harder for cybersecurity professionals to predict and prevent attacks.

As the World Economic Forum says, ‘only AI can play AI at its own game’. And so, we find ourselves in a situation where AI is fighting AI. While it doesn’t really look like the sci-fi vision, it is nonetheless extremely dangerous.

With cybercriminals increasingly using AI to inform and launch their attacks, organisations need to be able to respond with the same speed and intelligence, and they need AI to do that.


Are you using AI to improve your cybersecurity resilience? Get involved in the conversation over on Twitter by tagging @TheSecurityCo and using the hashtag #AIversusAI.


Written by
Conor Mckenna
I have a variety of marketing experience across multiple sectors, from large consumer goods and FMCG businesses to working as a marketing consultant in the IT service management industry.

