Do deepfake cyber-attacks pose an imminent threat?


We’ve all seen the videos before. Today, most deepfake videos consist of influential people or celebrities being realistically face-swapped for comedic effect. However, security experts warn that deepfake cyber-attacks could soon pose an imminent threat to individuals, businesses, and even governments.

LIFARS Founder & CEO talking deepfakes on FOX5 NY

The word deepfake comes from a combination of the words “deep learning” and “fake.” It refers to using AI-based techniques, such as deep learning and face-mapping software, to alter images, audio, or video so convincingly that it’s almost impossible to tell the result isn’t authentic.

Until now, we haven’t seen any large-scale, in-the-wild use of deepfakes for cybercriminal activity. However, cybercriminals are always on the lookout for new opportunities and attack vectors. As the effectiveness of these technologies goes up and their costs come down, it’s only a matter of time before cybercriminals adopt deepfakes and they become a top security threat.

LIFARS’ interactive training modules deliver stimulating, engaging learning experiences that equip your employees with the tools and resources they need to be successful, active participants in the cybersecurity process.

What threats do deepfake cyber-attacks pose?

Human error still plays a leading role in many, if not most, cyber-attacks on organizations. Conventionally, cybercriminals have used social engineering techniques, such as phishing, credential theft, and impersonation, to get past an organization’s software-based and physical defenses. In fact, a report by Verizon found that 85% of data breaches involved a human element.

However, these techniques have their limitations. If the employee knows what the person is supposed to look or sound like, it’s much harder to deceive them successfully.

Advanced deepfakes can make it almost impossible to separate what’s real from what’s not. The first and most obvious threat this poses is the spread of fake news. There are already numerous convincing deepfakes of influential figures like Barack Obama, Donald Trump, and Nancy Pelosi.

This can sway public opinion, tarnish an individual’s or company’s reputation, or even be used for blackmail. Cybercriminals have targeted executives with blackmail before because it provides far more leverage.

Still, this might not pose a massive threat in a traditional, face-to-face working environment. However, we’re living in a time when more and more professional interactions and business are being conducted remotely.

Deepfakes are also getting sufficiently advanced that they can even be applied to live video or voice calls.

All in all, deepfake technology can be used in a variety of ways to manipulate individuals into acting in ways they shouldn’t: from blackmail to issuing false orders to spreading misinformation.

A recent example involved an executive at a UK-based energy firm who was scammed out of $243,000. A hacker used AI technology to impersonate the voice of his boss, demanding that he immediately transfer the amount to a “Hungarian supplier.”

How can you protect your organization against deepfakes?

There are a number of ways you can ensure your organization is as prepared as possible for potential cyber incidents involving deepfakes:

  • Training and awareness: Education is the best way to tackle cyberthreats that rely on human error as an attack vector. All company stakeholders should be informed of the existence, capabilities, and possible uses of deepfake technology. They should also be trained through live exercises to help them identify deepfake content and to learn what process to follow when a security incident occurs. NIST’s Phish Scale, which rates the difficulty of simulated phishing emails, is an example of a comparable training approach already used in workplaces to defend against phishing attacks.
  • Deepfake detection software: Just as deepfake technology is advancing, so is technology made to detect it. Microsoft already unveiled a deepfake detection tool last year that gives a confidence score based on how authentic it believes the content to be. While these tools can’t guarantee 100% accuracy, they can help deflect a significant number of deepfake attempts (see the sketch after this list for how such a confidence score might be used).
  • Implement an incident response plan: Every organization today should have an incident response plan for any type of cyberattack, deepfakes included. This plan should include reporting of the incident, damage mitigation, remediation, internal and external communications, and post-attack digital forensics.
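
To make the confidence-score idea from the detection bullet concrete, here is a minimal Python sketch of how per-frame detector outputs could be aggregated into a single authenticity score and a review flag. The `frame_authenticity` function is a hypothetical stand-in for a trained detector; it does not represent Microsoft’s tool or any real library API.

```python
# Minimal sketch: aggregating per-frame authenticity scores into a
# video-level confidence score, under the assumption that some trained
# detector returns a score per frame. All names here are illustrative.

from statistics import mean
from typing import Iterable, List


def frame_authenticity(frame: bytes) -> float:
    """Hypothetical per-frame detector: 0.0 = likely fake, 1.0 = likely real."""
    # A real deployment would decode the frame and run a trained model here.
    return 0.5  # placeholder value


def video_confidence(frames: Iterable[bytes]) -> float:
    """Average the per-frame scores into one authenticity confidence score."""
    scores: List[float] = [frame_authenticity(f) for f in frames]
    return mean(scores) if scores else 0.0


def flag_for_review(frames: Iterable[bytes], threshold: float = 0.7) -> bool:
    """Flag content whose confidence falls below a manual-review threshold."""
    return video_confidence(frames) < threshold


if __name__ == "__main__":
    sample_frames = [b"frame1", b"frame2", b"frame3"]  # stand-in frame data
    print(f"Confidence: {video_confidence(sample_frames):.2f}")
    print("Needs manual review:", flag_for_review(sample_frames))
```

In practice, the threshold and aggregation method would be tuned to your risk tolerance; the point is that detection tools produce a probability, not a verdict, so their output should feed a human review and incident response process rather than replace it.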