
A wave of AI-generated deepfake scams has targeted senior corporate leaders, including the chief executive of the Bombay Stock Exchange and executives at major global firms, as cybersecurity experts warn that synthetic audio and video attacks are rising sharply and becoming cheaper to execute.
At the start of the year, a video circulated on social media in India appearing to show Sundararaman Ramamurthy, chief executive of the Bombay Stock Exchange, advising investors on specific stocks. The video promised substantial returns. It was later identified as a deepfake created using artificial intelligence.
“It was in the public domain where many people could see it, and get cheated into buying or selling stocks, as if I’d recommended them,” Ramamurthy said. He added that the exchange files complaints with platforms such as Instagram to have such content removed and regularly issues market warnings about fake videos.
“We don’t know how many people have seen this video, it’s really difficult to find out, so we can’t really judge if it’s had a big impact or not,” he said. “What we want is for it to have had no impact at all. No one should incur a loss because they believe something that is untrue.”
Corporate Leaders Increasingly Targeted
Ramamurthy’s experience reflects a wider pattern. Karim Toubba, chief executive of LastPass, said the use of deepfakes in attacks has risen by nearly 3,000% over the past two years.
Toubba himself was targeted in 2024. An employee in Europe received an audio and text message on WhatsApp from someone claiming to be him and urgently requesting assistance. The employee grew suspicious because WhatsApp is not an approved communication channel for the company and the message was sent to a personal phone rather than a corporate-issued device. The incident was reported to LastPass’s cybersecurity team and no funds were lost.
Arup Loses $25m In Deepfake Call
In contrast, Arup fell victim to a large-scale deepfake attack in 2024. According to Hong Kong police, an employee received a message appearing to come from the firm’s London-based chief financial officer about a confidential transaction.
The employee joined a video call with individuals who appeared to be the CFO and other staff members. Following instructions given during the call, the employee transferred $25m (£18.5m) to five separate bank accounts. It later emerged that all participants on the call, including the CFO, were deepfakes.
“You would never want to simply jump on a video call with someone and transfer $25m,” said Stephanie Hare, a technology researcher and co-presenter of the BBC’s AI Decoded programme. She said companies are being forced to adopt additional safeguards to secure communications.
Deepfakes Becoming Faster And Cheaper
Matt Lovell, co-founder and chief executive of UK-based cybersecurity company CloudGuard, said generating high-quality deepfake video and audio can now take only minutes.
“For, say, a simple, single individual-led attack, you’re looking at $500 to $1,000 with the use of largely free tools,” Lovell said. “For a more sophisticated attack, you’re looking at between $5,000 and $10,000.”
As synthetic media tools improve, detection systems are also evolving. Verification software can analyse facial expressions, head movements and changes in blood flow beneath the skin to distinguish real individuals from AI-generated replicas.
“In your cheeks or just underneath your eyelids, we’ll be looking for changes in blood flow when a person is talking or presenting,” Lovell said, explaining how detection tools identify anomalies.
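The blood-flow cue Lovell describes is the basis of remote photoplethysmography (rPPG): a real face shows faint, periodic colour changes in skin pixels in time with the heartbeat, which AI-generated faces typically fail to reproduce coherently. The sketch below is purely illustrative and is not any vendor's actual detector; the function name, frequency band, and test signals are assumptions chosen to demonstrate the principle.

```python
# Illustrative rPPG-style check: measure how much of a skin-colour
# signal's energy falls in the human heart-rate band (roughly 0.7-4 Hz).
# A genuine face tends to show a coherent pulse peak there; a synthetic
# one often does not. All names and thresholds here are hypothetical.
import numpy as np

def pulse_band_energy(green_means, fps=30.0):
    """Fraction of spectral energy in the 0.7-4 Hz heart-rate band.

    green_means: per-frame mean green-channel value over a skin region.
    """
    signal = green_means - np.mean(green_means)          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    total = spectrum[1:].sum()                           # skip DC bin
    return spectrum[band].sum() / total if total > 0 else 0.0

# Simulated "real" face: a ~1.2 Hz pulse riding on sensor noise.
t = np.arange(300) / 30.0
real = 0.5 * np.sin(2 * np.pi * 1.2 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=300)
# Simulated "deepfake" face: noise with no coherent pulse.
fake = 0.1 * np.random.default_rng(1).normal(size=300)

print(pulse_band_energy(real) > pulse_band_energy(fake))
```

Production systems combine many such physiological and geometric signals (facial micro-expressions, head-pose consistency) rather than relying on one cue, since compression and lighting can also suppress the pulse signal.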
An Escalating Technology Race
Toubba described the situation as a race between attackers and defenders. “It’s a race, between who can deploy a technology and who can thwart that technology as quickly as possible,” he said, adding that investment in detection technologies is increasing.
Lovell expressed concern that defensive systems are not advancing quickly enough. “Attack vectors are accelerating faster than we can accelerate defence automation and protection,” he said.
Hare said demand for cybersecurity professionals is rising as deepfake incidents proliferate. She noted a global shortage of cybersecurity specialists and said more people are needed in the field.
She added that companies are beginning to treat the issue with greater urgency. “Now that we have these types of risks, with the leaders at companies, with CEOs, being deepfaked, I think company executives will be spending more time with their chief information security officers and teams than before,” she said.
