If you are a regular reader of The Insider, you know TSC is all about spotting emerging threats and working to avoid them through analysis, awareness training and developing good cybersecurity behaviours. One cyber threat everyone needs to be on the lookout for is deepfakes!
Whilst deepfake technology is not yet the go-to cyber attack of choice for threat actors, it is growing in popularity. According to threat intelligence platform IntSights, there has been a 43% increase in hacker chatter around deepfakes on dark web forums since 2019!
The word deepfake is a combination of ‘deep learning’ and ‘fake.’ A deepfake can either be a synthesised voice or a fake image/video impersonating someone.
Deepfakes use a photo, video, or audio file of someone to recreate their likeness or voice. Deepfakes are produced using a combination of sophisticated technologies such as artificial intelligence (AI) and machine learning (ML).
Whilst impersonation is nothing new in the world of cybersecurity, deepfakes take that cyber threat to a whole new level. When deepfakes are deployed with the intention to deceive, they can be used to trick a way into protected locations, both physical and virtual.
Deepfakes make for very convincing imitations … so much so that even Hollywood is using deepfakes to resurrect actors who have long since passed!
AI generated art and written work is now impersonating artists and writers.
Video deepfakes are the most common form. Here, a library of images and video content is used to build a fake virtual mask of an individual, to be worn virtually by a threat actor.
Cloning and imitation of a person’s voice has been used in official media for a few years now. However, with machine learning, a person’s voice can be imitated from just a small selection of recorded words, phrases and sentences spoken by the original person.
Fake internet profiles are built across social media platforms, along with articles and blog posts, to create a non-existent character. This type of deepfake strategy is sophisticated and is meant for long-term trust building and financial gain.
Real-time deepfakes are live, on-the-fly software manipulation. Think of apps like FaceSwap on phones and other virtual reality filters. We will be talking below about a real-time deepfake used during the height of Russia’s invasion of Ukraine.
With any technological advancement come threat actors willing to use the tech for manipulation and illegitimate monetisation. Deepfake technology is no different. Rapid7 data reveals that deepfake chatter has grown exponentially in dark web circles, with dark web posts about deepfakes doubling year on year from 2019 to 2021.
In 2019, 72% of people were unaware of what deepfakes are. However, in the years since, deepfake technology has increased in popularity, and this trend looks set to continue. Deepfakes can be videos or audio files in which a person’s physical appearance and/or voice has been digitally altered so that they appear to be someone else, typically to deceive or to spread false information. Cybercriminals may use deepfakes to create false narratives and gain corporate access by posing as a trusted source.
Deepfakes can increase the effectiveness of phishing, vishing, and metaverse attacks. Deepfakes can also be used to commit identity fraud in both private and public circles, which can have massive ramifications for company finances and social reputation.
VMware’s Head of Security told SecurityWeek that some cybercriminals will move from ransomware attacks to deepfake attacks, which they can use to manipulate the market as a different means of financial gain. For example, a cybercriminal could deepfake an unsavoury video of a company’s CEO, damaging the reputation of its leadership and thereby influencing the company’s share price.
Furthermore, deepfake production tech is improving at a faster rate than deepfake detection with Europol warning: “Experts fear this may lead to a situation where citizens no longer have a shared reality or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy’ … where it becomes particularly difficult however, is when deepfakes are used against society to manipulate a crash in share value for a corporation. This process is further complicated by the human predisposition to believe audio-visual content and work from a truth default perspective.”
Deepfakes can also have massive impacts on political and social circles. There is nothing more powerful than a first impression or an established opinion. Imagine a video of someone in power circulating for hours to millions of eyes, only for it to be revealed as a deepfake. Will news that it is fake reach every person who saw it? How hard will it be to erase the opinions someone has already formed about the deepfaked individual?
A Chinese bank manager was tricked by the deepfaked voice of his CEO into transferring a whopping $35 million out of the bank to the threat actor. The threat actor said he was calling about an upcoming acquisition and needed his subordinates to authorise the transfers. They supported the deepfaked call by sending a legitimate-looking email from the ‘director.’
The first ever recorded deepfake cyberattack also saw a cybercriminal deepfake a CEO’s voice over a phone call, tricking employees into unknowingly transferring £200,000 to cybercriminals.
Deepfake scams are not going away. In fact, one recent example is when the mayor of Berlin thought he was having an online meeting with former boxing champion and current mayor of Kyiv, Vitali Klitschko.
When ‘Klitschko’ started saying some very out of character things relating to the invasion of Ukraine, the mayor of Berlin grew suspicious. When the call was interrupted, the mayor’s office contacted the Ukrainian ambassador in Berlin and discovered that whoever they had been talking to was not Klitschko.
The imposter also apparently spoke to other European mayors, but in each case, it looks like they had been holding a conversation with a deepfake, an AI-generated false video that looks like a real human speaking.
The saving grace for many regarding the threat of deepfakes is the sheer complexity of the technology behind them. The hope is that the hardware and knowledge needed for deepfake attacks are far too scarce for such attacks to become a regular risk. However, we are already seeing deepfake capabilities become accessible to the masses.
In fact, there are websites and applications that can now run a deepfake programme for you, impersonating not only someone’s face but also their voice. Deepfake attacks have, in effect, become a service that needs minimal monetary input from a threat actor for a potentially huge return.
Deepfakes, if left unchecked, will become the weapon of choice for cybercriminals – and they do not even have to be sophisticated threat actors anymore. It is also worrying that deepfake technology is developing faster than deepfake detection technology.
As a result, for organisations and individuals to stay safe from deepfake scams, they need to keep their ears to the ground and always verify the individuals they are talking to, regardless of whether they sound or look like a trusted individual.
In the end, we may see the adoption of a ‘zero trust’ policy to combat deepfake technology as deepfake detection tech lags.
If you would like more information about how The Security Company can help deliver security awareness training, raise awareness, increase security skills, and establish a secure culture, or how we can run a behavioural research survey to pinpoint gaps in your security culture, please contact Jenny Mandley.
© The Security Company (International) Limited 2022