By Dmytro Koziatynskyi, Founder & CEO at RansomLeak in collaboration with Cyber Helmets.
In January 2024, a finance worker at the Hong Kong office of Arup (the London engineering firm behind the Sydney Opera House and Beijing’s Bird’s Nest) joined a video conference. The CFO was on the call, and so were a couple of colleagues he recognized.
They wanted him to quietly push through a “confidential transaction”: fifteen transfers to five Hong Kong bank accounts. Around HK$200 million (about US$25.6 million) was gone before he picked up the phone to head office and realized something was wrong.
Nobody on the call had been real: not the CFO, not the colleagues; every face he saw and every voice he heard were AI deepfakes built from publicly available video and audio of Arup executives. The kind of footage anyone can scrape off LinkedIn, YouTube, or a recorded earnings call.
Arup’s name didn’t surface publicly until CNN reported it in May 2024, four months after the firm quietly reported the incident to Hong Kong police. The Financial Times confirmed the same week that the previously anonymous “multinational” that Hong Kong police had been referring to since February was indeed Arup.
How the attack happened
Reading the post-mortems, what stands out is how little hacking was involved. Arup confirmed no internal systems were compromised: no malware, no exploit chain, nothing that would show up on a SOC dashboard. The attack ran on a video call and a believable story, like many other breaches that rely on social engineering.
The opening move was a phishing email supposedly from the UK-based CFO, asking the Hong Kong employee to handle a private transaction. He smelled something off. Good instinct; that’s what most security training drills into people. So he did what a careful employee does: he asked for a video call to confirm.
That’s where it fell apart. On the call, the “CFO” looked right and sounded right, and so did the other staff. Multiple familiar faces, all corroborating the request in real time, did what no email could have done: they overruled the employee’s earlier suspicion. He stopped questioning and started executing.
Rob Greig, Arup’s CIO, later told the World Economic Forum that out of curiosity after the incident, he tried to deepfake himself in real time using free, open-source tools. It took him about 45 minutes. His version wasn’t particularly convincing, but the floor for “good enough to fool somebody” keeps dropping while the ceiling keeps rising.
How widespread is this
Pick a source; the curve points in the same direction:
- KnowBe4’s Perry Carpenter told the SEC that AI is “rapidly diminishing the skill barrier for fraud.”
- Sumsub data, compiled by Keepnet Labs, shows deepfake fraud attempts rose roughly 2,137% over three years — from 0.1% to 6.5% of all fraud attempts.
- Pindrop’s 2025 Voice Intelligence + Security Report logged a 1,300% jump in deepfake fraud attempts during 2024, going from an average of one per month to seven per day.
- A McAfee survey found that one in four adults has either encountered an AI voice scam personally or knows someone who has.
- Eftsure’s roundup of 2024 figures put the average business loss from deepfake-related fraud just under $500,000, climbing to roughly $680,000 for large enterprises.
A caveat on stats like “2,137%”: they’re technically accurate, but they’re growing off a tiny base, because deepfake fraud was a rounding error a few years ago. Any growth at all looks dramatic in percentage terms, so the shape of the curve matters more than the headline number. The quick arithmetic below makes the base-rate effect concrete.
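A toy calculation, with hypothetical incident counts chosen only to mirror the headline figure (none of these numbers come from Sumsub, Keepnet, or Pindrop):

```python
# Why growth off a tiny base produces dramatic-looking percentages.
def growth_pct(before: float, after: float) -> float:
    """Percentage increase from `before` to `after`."""
    return (after - before) / before * 100.0

# 3 incidents growing to 67 yields a "2,133%" headline...
print(f"{growth_pct(3, 67):,.0f}%")         # 2,133%

# ...while 3,000 growing to 6,000 is "only" 100%, despite being a far
# larger absolute increase (3,000 extra incidents versus 64).
print(f"{growth_pct(3_000, 6_000):,.0f}%")  # 100%
```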
Arup wasn’t the only one, just the unlucky one
A few weeks before Arup’s name became public, The Guardian reported that scammers had tried something nearly identical against WPP, the world’s largest advertising group. They cloned CEO Mark Read’s voice, scraped YouTube footage of him, set up a fake WhatsApp account using his public photo, and booked a Microsoft Teams call with a senior agency leader. The pretext was setting up a new business venture; the goal was money and personal data.
That one failed. The targeted executive got suspicious, and Read later sent an internal email (also obtained by the Guardian) telling staff: “We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI, and deepfakes.”
The interesting comparison between WPP and Arup is the context, since the tech used was pretty much the same. Arup’s finance worker was already inside a workflow where money moves on executive instructions. WPP’s exec was being asked to do something unusual (start a new business), which gave him a beat to think, “Why would we do this?”
So one of the most important takeaways from the Arup incident is that the gap between “this fits what I do all day” and “this is weird” determines whether the attack lands.
Why did the security stack not help
After reading enough of these post-mortems, the same pattern shows up: the controls organizations have spent most heavily on are mostly the wrong shape for the threat.
MFA, EDR, email filters, and network monitoring all guard perimeters and endpoints. A deepfake scam call barely touches any of them; at most, they see the initial email or phone call that delivers the meeting request. And when business email compromise is added to the mix, so the request arrives from a genuine-looking account, it becomes almost impossible to uncover the attack before the money moves.
The gaps lie in how decisions are made: no required out-of-band confirmation for high-value transfers (and $25 million certainly qualifies); an implicit assumption that “I saw their face on the call” counts as identity verification, even though it hasn’t for at least two years now; and no clear escalation path when something feels off, or worse, a culture where escalating up the chain is more career-damaging than executing a bad transfer.
What helps
The answer is procedural rather than technical. Fixed callback numbers stored somewhere the requester can’t influence. A second authorizer on transfers above a threshold, reached through a separate channel from the original request. Code words for unusual asks. Yes, like in spy novels, and yes, they work. Greig himself, in the WEF interview, kept coming back to process and visibility rather than silver-bullet detection tools. A sketch of what that kind of policy gate might look like follows below.
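To make the shape of those controls concrete, here is a minimal sketch of a transfer-approval gate. It is illustrative only, not Arup’s or anyone’s real system; the threshold, channel names, and `callback_number` directory are all hypothetical:

```python
from dataclasses import dataclass

SECOND_AUTHORIZER_THRESHOLD = 50_000  # USD; hypothetical cutoff

@dataclass
class TransferRequest:
    requester: str        # who is asking for the money to move
    amount_usd: float
    request_channel: str  # e.g. "email", "video_call"

def callback_number(employee: str) -> str:
    """Fixed callback numbers from a directory the requester can't influence.
    In practice this is an HR system of record, not a hard-coded dict."""
    directory = {"cfo": "+44-20-0000-0000"}  # placeholder entries
    return directory[employee]

def approve_transfer(req: TransferRequest,
                     callback_confirmed: bool,
                     second_authorizer_channel: str | None) -> bool:
    # Rule 1: every transfer needs an out-of-band callback to the stored
    # number, never to a number or meeting link supplied in the request.
    if not callback_confirmed:
        return False
    # Rule 2: above the threshold, a second authorizer must confirm through
    # a channel different from the one the request arrived on. A video call
    # cannot verify a request that itself arrived on a video call.
    if req.amount_usd >= SECOND_AUTHORIZER_THRESHOLD:
        if second_authorizer_channel in (None, req.request_channel):
            return False
    return True

# The Arup scenario: the request and its "verification" lived on the same
# video call, so this gate refuses no matter how convincing the faces were.
req = TransferRequest("cfo", 25_600_000, "video_call")
print(callback_number("cfo"))  # step 1: dial this, not anything from the call
print(approve_transfer(req, callback_confirmed=False,
                       second_authorizer_channel="video_call"))  # False
```

The point of the design is that approval never depends on what the requester looks or sounds like, only on confirmations that travel through channels the requester doesn’t control.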
There’s a cultural piece too, which is harder. The Arup employee did the right thing initially by questioning the email. He just got overruled by the video call, which felt more authoritative. People need permission to keep questioning after the verification step, because that’s where the attack lives now.
This is the part that procedure alone can’t fix. You can write the policy and put the callback number on a sticker, and an employee will still freeze when six “colleagues” on a Teams call nod along to a request. The only thing that builds resistance is having sat through something that felt like the real attack and having made the call under pressure.
That’s the gap RansomLeak is built for. Most security training was designed for an era when the threat was a misspelled email with a dodgy link. RansomLeak runs interactive simulations of the attacks employees face now: deepfake whaling, vishing, business email compromise, and social engineering under decision pressure. So the first time someone hears a cloned voice asking them to move money, it isn’t the first time. Awareness slides do not prepare anyone for a face they recognize asking them for $25 million. Reps under pressure do.
Try out an interactive “Whaling with a deepfake” simulation to see for yourself how it feels to be on the receiving end of a deepfake attack.
What this means for organizations
For most of corporate history, “I spoke to them” has been the gold standard of confirmation. Voice on the phone, face on a call: decades of compliance regimes and everyday business have been built on it. That assumption is expiring, not in some distant AI future but in the current quarter’s incident reports. Arup is the most expensive headline so far. It will not be the most expensive for long.
About the author
Dmytro Koziatynskyi is Founder & CEO of RansomLeak. His work focuses on how modern attacks play out in practice, with an emphasis on social engineering, deepfake-enabled fraud, and decision-making under real-world pressure.