  • 03 March 2023
  • 8 min read

AI and the metaverse: a match made in cyber hell?

Now that the metaverse and AI are set to work in tandem, security leaders and employees should expect increased cyber threat potential, and the possibilities and ramifications for the cyber security industry are set to be massive. What cyber security threats and risks do security leaders need to be aware of?

Over the last year or two, the concept of the metaverse has gained considerable traction in popular culture, and it is being adopted by both social media giants and retailers looking for new ways to attract customers and clients. Now, the metaverse is set to be enhanced with Artificial Intelligence, an emerging technology that we have written about extensively.

So, whilst Facebook and Mark Zuckerberg believe full-scale metaverse adoption is just five to 10 years away, do security leaders need to worry about the marriage between AI and the metaverse? Are we walking into a match made in cyber hell?

Refresher: what is the metaverse?

Before we dive into the relationship between AI and the metaverse, let’s refresh our understanding of the metaverse as it is an ever-evolving virtual space.

Firstly, the term ‘Metaverse’ comes from the classic sci-fi novel ‘Snow Crash’ by author Neal Stephenson. In Stephenson’s novel, a virtual world has become the sole haven for humanity in the face of decaying infrastructure and society.

However, the metaverse detailed in ‘Snow Crash’ is singular, whereas we are seeing competing metaverses in the real world. Most of the metaverses that we already know of exist in the gaming world (Second Life, Fortnite, Sandbox), but some social platforms have created, or are creating, metaverses that reflect society as opposed to gamified experiences.

To simplify the technology, the metaverse is a simulated environment in a virtual reality (VR) space. In this simulated world, users are represented by avatars (digital characters) and can take part in events that may have otherwise taken place in the real world, such as meetings, concerts, lessons, and get-togethers. Metaverses also incorporate aspects of Web 3.0, such as blockchain technology, cryptocurrencies, and NFTs (non-fungible tokens).

What cyber security challenges are posed by AI and the metaverse? 

While the metaverse offers a tantalising glimpse into the future of online communication and entertainment, it also presents significant challenges when it comes to cyber security and user security awareness. As with any online space, the metaverse is vulnerable to all the common cyber attacks, such as account hijacking, phishing, and malware. But the unique nature of the metaverse and AI also creates novel cyber risks we have not seen before.

1. Real time social engineering

In the metaverse, users will be able to create avatars and interact with one another in real time. This means that malicious actors could potentially use the metaverse to run sophisticated social engineering attacks. The metaverse is a highly social environment, which means that users, potentially unknown to each other, will be interacting constantly. This creates opportunities for social engineering attacks, where attackers use psychological tactics to manipulate users into giving up sensitive information or performing certain actions.

Now, of course, many metaverses are using AI detection systems to ensure that malicious actions and threat actors are quickly spotted and dealt with, but we have already seen instances of metaverse users being hounded by other private users in virtual harassment attacks that do not trigger AI security detection systems. It has also been suggested that AI could monitor the metaverse for unusual patterns of user behaviour, or serve as a rapid complaints system, which could help to detect and prevent cyber attacks and virtual assaults.
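To make that idea concrete, here is a minimal sketch of what behavioural anomaly detection could look like, using scikit-learn’s IsolationForest. Everything here is illustrative: the features (message rate, contacts per hour, average avatar proximity) are hypothetical stand-ins, not the telemetry of any real platform.

```python
# Minimal sketch: flagging unusual user behaviour with an Isolation Forest.
# All features and values are hypothetical; a real metaverse platform would
# derive its own behavioural features from platform telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-user features: messages sent per minute, distinct users
# contacted per hour, average distance (metres) kept from other avatars.
normal_users = rng.normal(loc=[2.0, 5.0, 3.0], scale=[1.0, 2.0, 1.0], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_users)

# A user messaging dozens of strangers at very close range looks anomalous.
suspect = np.array([[40.0, 60.0, 0.3]])
print(model.predict(suspect))            # -1 means flagged as anomalous
print(model.decision_function(suspect))  # lower score = more anomalous
```

In practice, a flagged account would feed a human moderation queue rather than trigger automatic punishment, which is one way to blunt the kind of undetected harassment described above.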

2. Real time virtual manipulation

In the metaverse, users can also create and manipulate digital objects in real time. Malicious actors could use cleverly placed digital objects to spread malware, or even manipulate the virtual world in order to cause an injury in the real world. For example, a user could create a digital object that appears harmless but contains a hidden virus, or a threat actor could change your virtual surroundings so that you step outside the virtual boundary and inadvertently hurt yourself in the physical world.

This type of manipulation requires direct access to a user’s headset or a backdoor route into the framework of that particular metaverse but, as you can see, it creates massive virtual security issues.

3. Adversarial attacks

AI systems used in the metaverse are vulnerable to adversarial attacks, where attackers manipulate the system by feeding it false or misleading data to dupe the algorithm. For example, an attacker could create a digital object that appears harmless to users but contains hidden code that tricks the AI system into thinking it is safe. This could allow the attacker to bypass security measures and launch a cyber attack undetected.

To address this risk, AI systems used in the metaverse must be trained on large and diverse data sets and hardened against manipulation. The AI systems must also be able to identify and respond to adversarial attacks quickly and accurately, which requires sophisticated detection algorithms and continuous monitoring.
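To illustrate the mechanics, the sketch below implements the fast gradient sign method (FGSM), one of the best-known techniques for crafting adversarial examples against neural classifiers. The toy model and feature vector are placeholders rather than any real metaverse component; with a trained model, a perturbation this small is typically imperceptible to users yet sufficient to flip the classifier’s decision.

```python
# Minimal FGSM sketch: a tiny, targeted input perturbation can flip a
# classifier's verdict. The model and data are toy stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical "safe vs malicious digital object" classifier.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 8, requires_grad=True)  # feature vector of a benign object
label = torch.tensor([0])                  # its true class: safe

loss = nn.functional.cross_entropy(model(x), label)
loss.backward()

# Nudge every feature in the direction that most increases the loss,
# bounded by a small epsilon so the change stays inconspicuous.
epsilon = 0.25
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Defences such as adversarial training work by folding examples like x_adv back into the training set, which is part of what training on large and diverse data sets means in practice.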

4. Social and racial bias in the AI code

AI systems are only as good as the data they are trained on; if the data is biased or incomplete, the AI model will not be able to accurately identify and respond to cyber security threats. Bias in the code is a massive issue for any development in the AI industry: an AI algorithm is ultimately a reflection of the people who built it and the data and text sets it was trained on, and you will see different biases in an AI’s output depending on what it was trained on.

We have already seen this with Microsoft’s Tay chatbot, which had to be pulled offline because it started spouting white supremacist talking points. AI chatbots like ChatGPT have also shown bias towards certain races and ethnicities in their responses, which calls the validity of all their responses into question. Other biases we have seen include Twitter’s AI-backed image preview algorithm, which highlighted white faces but ignored BIPOC faces because it was not trained to register them as people, and even DALL-E, an AI image generator, will lighten skin tones and show a preference for Caucasian features because it learned from Western art of a certain period.

Now imagine the AI detection systems used in metaverse spaces also showed bias in the way they rank incoming requests from users. What if an AI decides one user’s request ranks higher than another’s because of an arbitrary bias in its code? What security issues does this create for users? What if an AI algorithm used in the metaverse shows preference for certain demographics? Will AI lead to security inequality amongst users of the metaverse?
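As a concrete illustration, the sketch below shows the kind of simple fairness audit that can surface such a skew: comparing how often an AI triage system escalates reports from different demographic groups. The data and group labels are entirely hypothetical.

```python
# Minimal sketch of a fairness audit on an AI triage queue: compare how
# often reports from each demographic group are escalated.
from collections import defaultdict

# (group, escalated) pairs as might be logged by a moderation pipeline.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, escalated = defaultdict(int), defaultdict(int)
for group, was_escalated in decisions:
    totals[group] += 1
    escalated[group] += was_escalated

rates = {g: escalated[g] / totals[g] for g in totals}
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

# Demographic parity gap: a large gap is a red flag worth investigating.
gap = max(rates.values()) - min(rates.values())
print(f"parity gap: {gap:.2f}")
```

A real audit would use far more data and control for legitimate differences between groups, but even a check this simple can reveal whether a detection system is quietly under-serving some of its users.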

5. The elephant in the room: data and privacy

We cannot stress this enough: the metaverse and AI both bring with them massive privacy and data protection issues. For any AI system to be effective, it needs access to substantial amounts of user data, which often includes sensitive personal information. This raises concerns about how this data will be collected, stored, and used, and who will have access to it. We have seen this cause trouble in the past for AI-based organisations: ClearviewAI, an AI-based facial recognition software company, has been hit with heavy fines by the UK Information Commissioner’s Office and other European data protection regulators after reports revealed that its algorithm pulls facial recognition data from social media and private profiles.

For any AI-focused organisation or technology, as security leaders we must ask where the data is coming from, where it is being stored, and whether we can trust the people who have access to it. This is a massive security elephant sitting in the corner of the AI room, and one that can only be tamed by official regulations and protocols, which are still at a very embryonic stage.

In conclusion: we need official AI rules and regulations

The use of AI in the metaverse presents new challenges for cyber security. While AI can be used to detect and prevent cyber attacks, it also creates potential risks and vulnerabilities that need to be addressed, such as the so-called ‘darkverse’, a hidden, unmoderated layer of the metaverse analogous to the dark web. To ensure the safety and security of the metaverse, it will be necessary to develop sophisticated AI systems that can accurately detect and respond to cyber security threats while also protecting user privacy and data.

To address these challenges, we will need to develop robust governance frameworks for the use of AI in the metaverse. This will require collaboration between technologists, policy makers, and other stakeholders to ensure that AI is used in a responsible and ethical manner and to develop standards and best practices for the use of AI in the metaverse.

If you would like more information about how The Security Company can help your organisation by delivering security awareness training and employee development ... or how we can run a behavioural research survey to pinpoint gaps in your security culture ... or how we can improve your employee induction process, please contact Jenny Mandley.

Written by
Nas Ali
Cyber security and awareness content creator focused on emerging threats and the next wave of cyber security risks, such as AI, deepfakes and tech 4.0 initiatives, helping to build towards a more secure organisational culture.