In this week’s bulletin, Charlie discusses deepfakes and how organisations can be better prepared for them.

Throughout history, images have been used to tell stories, to emphasise the importance of events and people, to flatter individuals, or to mock and belittle them. Deepfakes are the modern manifestation of this trend. Next week I am taking part in Databarracks’ wargame, “Defending Deepfakes and Disinformation”, so I thought that in today’s bulletin I would share a few thoughts on the subject and on what we as business continuity professionals should be aware of.

A deepfake is a video, audio clip, or image created by artificial intelligence which makes it appear that someone is doing or saying something they never actually did, and they can be very convincing. The threat came to public attention in February 2024, when Arup, the global engineering and design consultancy, lost £20m to a deepfake scam. A senior executive was asked to join a call with a number of colleagues whom they knew, including their UK-based Chief Financial Officer. However, the people on screen were AI-generated video avatars, designed to look and sound exactly like real Arup staff. During the meeting, the fake CFO instructed the executive to make several urgent money transfers to supposedly confidential accounts related to a secret corporate acquisition. The fraud was only discovered when internal checks flagged unusual activity.

What seems scary about this case is that the people on the call were familiar to the executive who made the transfers, so they must have been convinced that they were the real people. The only difficulty I have with this case, and I wonder if there is more to it than meets the eye, is that one person was allowed to transfer such large sums of money. If they were, it seems a large failure of controls that more people were not involved and that there was no sign-off chain for payments of millions of pounds.
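To make that missing control concrete, here is a minimal sketch in Python of the kind of tiered sign-off chain that forces extra, independent human approvals as transfer values rise. The thresholds, role names, and functions are hypothetical illustrations for this bulletin, not a description of Arup’s actual systems.

```python
# A minimal sketch of a tiered payment sign-off chain, assuming a
# hypothetical payments workflow. Thresholds, roles, and names are
# illustrative, not a description of any real organisation's controls.

APPROVAL_TIERS = [
    (10_000, 1),        # up to £10k: one approver
    (100_000, 2),       # up to £100k: two independent approvers
    (float("inf"), 3),  # anything larger: three, plus out-of-band checks
]

def required_approvers(amount: float) -> int:
    """Return how many independent sign-offs a transfer of this size needs."""
    for limit, approvers in APPROVAL_TIERS:
        if amount <= limit:
            return approvers
    return APPROVAL_TIERS[-1][1]  # defensive fallback; the last tier catches all

def can_release(amount: float, approvals: set[str]) -> bool:
    """Release a transfer only when enough *distinct* people have signed off.
    A familiar face on a video call counts for none of them."""
    return len(approvals) >= required_approvers(amount)

# A £5m "urgent, confidential" request approved by a single executive fails:
assert can_release(5_000_000, {"exec.a"}) is False
assert can_release(5_000_000, {"exec.a", "cfo.b", "treasury.c"}) is True
```

The design point is simple: no single convincing face or voice, however senior it appears on a call, should be able to release a large payment on its own.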

Deepfakes have been around for a few years; I wrote a bulletin about them back in 2019. But with AI becoming better and more accessible, they are now far easier and cheaper to produce.

The concept of using images for propaganda has been around since humans first started to reproduce images. Cave paintings of hunts may have been a way of telling stories and celebrating mastery over the environment. Kings and other prominent people used portraits to glorify and elevate themselves: Hyacinthe Rigaud’s 1701 portrait of King Louis XIV of France (“The Sun King”) shows him in a grand, majestic pose, with a long flowing robe, sword, and crown, to symbolise absolute power, and he is painted as young and strong even though he was ageing at the time. As well as glorifying themselves in the moment, this was the image such rulers wanted to project to subsequent generations. On the other hand, James Gillray, a political cartoonist working in the late 18th century, brutally mocked politicians such as Napoleon and King George III by portraying them with exaggerated physical features (big noses, hunched backs) to symbolise moral flaws or stupidity.

In a similar way, in the 1930s, Stalin had purged officials airbrushed out of photographs. Nikolai Yezhov, known as the “Bloody Dwarf” and once head of Stalin’s secret police, was famously airbrushed out of a photograph of himself walking with Stalin beside the Moscow Canal after he fell from grace and was executed. Leon Trotsky, an early revolutionary leader and rival to Stalin, was systematically removed from photographs of key Soviet events after his exile and eventual assassination. Photographs, which should be permanent records of events and people, became tools of propaganda, and visual history was rewritten to serve those in power.

Deepfakes have been produced for a number of different reasons:

  • Entertainment: Movies using deepfakes to de-age actors.
  • Fraud: As in the Arup case, corporate scams exploit synthetic video. Remote working has made these scams even easier to pull off, as employees often rely on video calls and emails without the same face-to-face checks, making it harder to spot when the person on screen isn’t real.
  • Disinformation: Fake political speeches designed to destabilise, especially when those creating them use bots to promote them on social media. In 2023, AI-generated images showing Donald Trump being violently arrested in New York went viral online. Though entirely fake, the images stirred political outrage and confusion and were widely shared on social media as real images.
  • Reputation attacks: Creating fake videos to damage individuals’ credibility. Malaysia in 2019 saw a deepfake video scandal targeting politician Azmin Ali, allegedly showing him in a compromising situation; while authenticity was never conclusively proven, the damage to his credibility and political standing was immediate.
  • Satire and parody: Sometimes obvious, sometimes dangerously subtle. Deepfake videos of celebrities like Tom Cruise performing absurd stunts on TikTok have gone viral, while a fake Boris Johnson giving absurd speeches has been shared online as satire, blurring the line between comedy and confusion.
  • Fake Brand Promotion: Deepfakes have been used to create false endorsements, with AI-generated videos of celebrities appearing to promote products they’ve never agreed to, misleading consumers and damaging brand trust. AI-generated videos of a fake Tom Hanks promoting dental plans circulated online, tricking viewers into believing he had endorsed the product.
  • Psychological Operations: Used by state or political actors to erode trust, spread confusion, or manipulate populations. At the beginning of the Ukraine war in 2022, a deepfake video of Ukrainian President Zelenskyy falsely urging troops to surrender spread rapidly across social media, and even appeared on a hacked news site.

Research by Vaccari and Chadwick (2020) found that, while deepfakes may not always deceive people outright, they create dangerous uncertainty. In an experiment involving a deepfake of Barack Obama, only a small percentage of participants fully believed the fake video. However, many more were left unsure, and that uncertainty significantly reduced their trust in political news on social media. Deepfakes, it seems, undermine confidence in what we see rather than completely fooling us.

So, as business continuity professionals, what can we do to combat deepfakes, both to recognise them and to respond when one is being used to damage our organisation?

  • Recognise deepfakes as an emerging risk in business continuity and cyber threat planning, and quantify your organisation’s exposure to them.
  • Update crisis communication and crisis management plans to include procedures for responding to synthetic media attacks, in particular being able to rapidly debunk disinformation.
  • Ensure that there are protocols and checks in place, such as multi-person payment authorisation and out-of-band verification, to prevent an Arup-type fraud (see the sketch after this list).
  • Run exercises that simulate deepfake scenarios to test decision-making under uncertainty.
  • Educate staff on how to spot deepfakes and raise awareness across leadership and critical teams.
  • Build partnerships with external experts, such as PR firms, legal advisors, and digital forensics specialists, so that you can respond rapidly when an incident occurs.
  • Promote a culture of healthy scepticism where verification is prioritised over speed when reacting to content.
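To illustrate the protocols-and-checks point above, one simple pattern is out-of-band verification: an instruction received over video, phone, or email is acted on only once it has been confirmed through a second channel whose contact details were registered in advance, never taken from the message itself. The sketch below is a hypothetical illustration of that rule; the directory, names, and numbers are assumptions rather than a prescribed implementation.

```python
# A sketch of an out-of-band verification rule: never act on an
# instruction from a call or email until it is confirmed on a channel
# whose details come from your own directory, not from the message
# itself. The directory, names, and numbers here are hypothetical.
from typing import Optional

KNOWN_CONTACTS = {
    "cfo": "+44 20 0000 0000",  # registered internally, in advance
}

def verified_out_of_band(requester: str, confirmed_via: Optional[str]) -> bool:
    """True only if the request was confirmed on a pre-registered channel."""
    registered = KNOWN_CONTACTS.get(requester)
    if registered is None:
        return False  # unknown requester: escalate, never pay
    return confirmed_via == registered

# A deepfaked CFO on a video call offers no out-of-band confirmation:
assert verified_out_of_band("cfo", confirmed_via=None) is False
assert verified_out_of_band("cfo", confirmed_via="+44 20 0000 0000") is True
```

The crucial detail is that the confirmation channel is looked up internally: a deepfake can supply a convincing face, but not the phone number your directory already holds.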

As the technology gets cheaper and easier to use, there will likely be an increase in the use of deepfakes for both good and bad purposes. As new threats emerge, we as business continuity professionals have to be aware of them!
