Deepfakes are a growing threat to cybersecurity and society: Europol

Deepfakes, if left unchecked, will become the next big weapon of cybercriminals

Deepfake technology uses artificial intelligence techniques to alter existing audio or audiovisual content, or to create entirely new content. It has benign uses, such as satire and gaming, but is increasingly being used by bad actors for malicious purposes. And yet in 2019, research from iProov showed that 72% of people were still unaware of deepfakes.

Deepfakes are used to create a false narrative that appears to originate from trusted sources. The two main threats are against civil society (spreading disinformation to manipulate opinion towards a desired effect, such as a particular election result); and against natural or legal persons to obtain an economic return. The threat to civil society is that, if left unchecked, entire populations could be influenced by disinformation campaigns delivered with deepfakes that distort factual truth. People will no longer be able to distinguish truth from falsehood.

The cybersecurity threat to companies is that deepfakes could increase the effectiveness of phishing and business email compromise (BEC) attacks, facilitate identity fraud, and manipulate a company’s reputation to cause an unwarranted collapse in its share value.

Deepfake technology

A deepfake is developed by using a neural network to examine source material, discover the patterns needed to produce a compelling image, and build a machine learning model from them. As with all machine learning, the amount of data available for training is critical: the larger the dataset, the more accurate the model. Large sets of training data are now freely available on the Internet.

Two current developments have improved the quality of deepfakes and increased the threat they pose. The first is the adaptation and use of generative adversarial networks (GANs). A GAN operates with two models: one generative, one discriminative. The discriminative model repeatedly tests the generative model’s output against the original dataset. “With the results of these tests,” writes Europol (Law enforcement and the challenge of deepfakes, PDF), “the models are continuously improved until the generated content has the same probability of coming from the generative model as the training data”. The result is a fake image that cannot be detected by the human eye but is under the control of an attacker.
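
To make the generator/discriminator interplay concrete, here is a minimal sketch of a GAN training loop in Python (PyTorch). It is purely illustrative and not drawn from Europol’s report or any real deepfake tool: it learns to mimic a one-dimensional Gaussian rather than faces, and every layer size, learning rate and variable name is an assumption chosen for brevity.

# Minimal GAN training loop illustrating the generator/discriminator
# interplay described above. Illustrative only: it mimics a 1-D Gaussian,
# not faces, and all hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "training data": samples from N(3, 0.5)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator: learn to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator: learn to produce samples the discriminator accepts as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(1000, 8)).mean().item())  # should approach 3.0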

The second threat comes from 5G bandwidth and cloud computing power, which allow video streams to be manipulated in real time. “Thus, deepfake technologies can be applied in video conferencing environments, live video streaming services, and television,” writes Europol.

Cybersecurity threats

Few criminals have the expertise to develop and use convincing deepfakes, but this is unlikely to slow their adoption. The continued development of Crime-as-a-Service (CaaS) is expected to “evolve in parallel with current technologies, resulting in the automation of crimes such as hacking, adversarial machine learning and deepfakes,” says Europol.

Deepfake threats fall into four main categories: social (fueling social unrest and political polarization); legal (forgery of electronic evidence); personal (harassment and bullying, non-consensual pornography, and online child exploitation); and traditional cybersecurity (extortion and fraud and manipulation of financial markets).

Forged passports with a fake photograph will be difficult to detect. These could then be used to facilitate many other crimes, from identity theft and trafficking to illegal immigration and terrorist travel.

Deepfakes of embarrassing or illegal activities could be used for extortion. Phishing could be taken to a new level if the lure includes video or the voice of a trusted friend. BEC attacks could be supported by a video message and a voice identical to that of the genuine CEO. But the really serious threat could come from market manipulation.

VMware’s Tom Kellermann recently told SecurityWeek that market manipulation already outweighs ransomware in value to criminals. Currently, this is achieved through the use of stolen information that allows the criminal to profit from what is essentially insider trading. The use of deepfakes, however, could give criminals a more direct approach. False information, embarrassing revelations, accusations of illegal exports and much more could cause a dramatic collapse in the value of a company’s stock. Deep-pocketed criminal gangs, or even rogue nation states seeking to offset sanctions, could buy the stock while it is down and make a massive ‘killing’ when the value inevitably rises again.

Security is based on trust. Deepfakes provide trust where it shouldn’t exist.

Deepfake detection

The quality of deepfakes already exceeds the ability of the human eye to detect a fake. A partial solution uses the principle of provenance of the original source material, but this will help law enforcement keep deepfakes out of criminal evidence procedures more than it will prevent deepfake-enabled cybercrime.

Technological detection is another potential approach. Examples include biological signals (based on imperfections in the natural skin tone changes caused by blood flow); phoneme-viseme mismatches (that is, an imperfect match between mouth movements and the words being spoken); facial movements (where facial and head movements do not correlate correctly); and recurrent convolutional models that look for inconsistencies between the individual frames that make up a video.
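
The recurrent convolutional detectors mentioned above are too involved for a short example, but the underlying idea of checking temporal consistency can be caricatured in a few lines. The sketch below (Python with OpenCV) simply flags frames whose pixel-level change from the previous frame is anomalously large; the filename and threshold are assumptions, and real detectors rely on learned features rather than raw frame differences.

# Crude temporal-consistency check: flag frames whose pixel-level change
# from the previous frame is anomalous. Illustration only; real detectors
# use learned (e.g. recurrent convolutional) models, not raw differences.
import cv2
import numpy as np

cap = cv2.VideoCapture("suspect.mp4")   # hypothetical input file
prev = None
diffs = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        diffs.append(float(np.mean(cv2.absdiff(gray, prev))))
    prev = gray
cap.release()

diffs = np.array(diffs)
threshold = diffs.mean() + 3 * diffs.std()        # simple statistical cut-off
suspect_frames = np.where(diffs > threshold)[0] + 1
print("frames with anomalous changes:", suspect_frames)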

But there are difficulties. Just as a slight variation of malware can be enough to fool signature-based detection engines, a slight alteration to the method used to generate a deepfake could also fool existing detection. This could be as simple as updating the discriminative model within the GAN used to produce the deepfake.

Another issue could be caused by compression of the deepfake video, which reduces the number of pixels available to the detection algorithm.

Europol suggests that preventing deepfakes may be more effective than trying to detect them. Its first recommendation is to rely on audiovisual verification rather than audio alone. This may be a short-term fix until advances in deepfake technology, cloud computing power, and 5G bandwidth render it ineffective. Those developments will also nullify the second recommendation: requiring a live video connection.

The final recommendation is a form of captcha; that is, says Europol, “Requiring complicated random acts to be performed live on camera, for example moving hands across the face.”

The way forward

The simple reality is that deepfake production technology is currently improving faster than deepfake detection technology. The threat is both to society and to corporations.

For society, Europol warns: “Experts fear that this could lead to a situation where citizens no longer have a shared reality, or could create social confusion about which sources of information are reliable; a situation sometimes referred to as the ‘information apocalypse’ or ‘reality apathy’”.

Corporations are in a slightly stronger position as they can include context in any decision about whether to accept or reject an audiovisual approach. They could also insist on machine-to-machine communications instead of person-to-person, using zero-trust principles to verify the owner of the machine instead of the communication.
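
As a rough illustration of what machine-to-machine verification could look like, the sketch below assumes each authorised device holds a provisioned secret and authenticates its requests with an HMAC (Python standard library only). The device registry, key and payload are hypothetical, and key management, replay protection and transport security are deliberately omitted; the point is simply that nothing audiovisual is trusted, so a deepfaked face or voice gains the attacker nothing.

# Sketch of machine-to-machine verification: a request is accepted only if
# it carries a valid MAC computed with a key provisioned to an authorised
# device. Hypothetical names; key management and replay protection omitted.
import hmac
import hashlib

DEVICE_KEYS = {"cfo-laptop-01": b"provisioned-secret"}  # hypothetical registry

def sign_request(device_id: str, payload: bytes) -> str:
    key = DEVICE_KEYS[device_id]
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_request(device_id: str, payload: bytes, signature: str) -> bool:
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

payload = b'{"action": "wire_transfer", "amount": 250000}'
sig = sign_request("cfo-laptop-01", payload)
print(verify_request("cfo-laptop-01", payload, sig))         # True
print(verify_request("cfo-laptop-01", payload + b"x", sig))  # False: tampered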

Where it becomes particularly difficult, however, is when deepfakes are used against the company (or at least the shareholding part of the company) to manipulate a decline in the value of the corporation’s shares. “This process,” warns Europol, “is further complicated by the human predisposition to believe in audiovisual content and work from the perspective of truth by default.” The public may not immediately believe the corporation’s insistence that it’s all just fake news, at least not in time to prevent the stock from falling.

Deepfakes are already a problem, but they are likely to become an even bigger problem in the coming years.

Related: Becoming Elon Musk: the danger of artificial intelligence

Related: The art exhibition that tricks facial recognition systems

Related: Cyber Insights 2022: Adversarial AI

Related: Misinformation Problems Could Be Multiplied With ‘Deepfake’ Videos

Related: Coming to a conference room near you: Deepfakes

Kevin Townsend is a senior contributor at SecurityWeek. He has been writing about high-tech topics since before Microsoft was born. For the last 15 years he has specialized in information security; and he has published many thousands of articles in dozens of different magazines, from The Times and Financial Times to current and defunct computer magazines.
