
How an AI-Generated Video of a Famous Dutch Person Can Steal Your Money

2025-11-21 fake news

Amsterdam, Friday, 21 November 2025.
In November 2025, convincing deepfake videos of prominent Dutch figures, including doctors and public personalities, surfaced on TikTok and were used to mislead victims and cause financial harm. Research has shown that criminals need only a few seconds of audio to clone a voice, and the technology has advanced so far that even familiar faces can no longer be trusted as authentic by default. The most alarming part? Videos you may have watched without realising it were not genuine interviews but sophisticated deceptions, broadcast online to persuade you to buy a health product or an investment. This is no longer science fiction: stay critical, because the line between reality and fabrication is now nearly invisible.

The Rise of Deepfake Scams via Social Media

In November 2025, multiple convincing deepfake videos of Dutch celebrities, including doctors and public figures, appeared on TikTok and were used for financial fraud by promoting the supplement shilajit from Wellness Nest [1]. These videos, viewed over 200,000 times, directed users to a product sold through an affiliate programme by Pati Group, a company active in the supplement industry [1]. The accounts hosting these videos were active from July to August 2025 and were only taken offline on 20 November 2025, the day before this article was published, following a report from Pointer Checkt [1]. The deepfakes employed are technically so convincing that they blur the line between reality and fabrication, making it nearly impossible for consumers to determine whether a person is genuinely present in a video [2]. Rather than using real identities, some fraudsters opt for entirely AI-generated videos featuring fictional influencers or doctors, further complicating the detection of deception [1]. These techniques are not limited to product promotion; they enable criminals to gain trust at scale via social media, where the volume of content and speed of dissemination are critical for success [2].

From Voice Cloning to Business Fraud: The Broad Threat of AI

The threat of AI-driven fraud extends beyond video content. In November 2025, deepfake voices were also used in phone scams, with criminals needing only a few seconds of audio to clone the voice of a director or bank employee [3]. Combined with number spoofing, these voices create phone calls that sound authentic and appear to originate from official sources, leading to fraud in which employees are instructed to transfer money [3]. In early 2025, the Fraud Helpdesk reported a 300% increase in reports of such phone scams, directly correlating with the use of AI-generated voices [3]. Several cases in the Netherlands have already been documented involving business conversations where deepfake voices were deployed, and according to KPMG and Europol, deepfake fraud is one of the fastest-growing forms of digital crime in Europe [3]. Additionally, criminals use AI to automate robocalls, write phishing emails in flawless Dutch, and even enhance SIM-swapping techniques by using smart scripts and personalisation to persuade helpdesk staff more quickly [3]. These developments signal a deeper shift: fraud is no longer based on clumsy traps, but on precision and scale only possible with generative AI [3].

The Arms Race: How AI Is Also Being Deployed to Combat Fraud

While AI is being exploited for fraud, it is increasingly being used as a defence. Organisations like SAS are testing Agentic AI to proactively detect new fraud scenarios online, enabling the identification of scams before they escalate [2]. McAfee offers a scam detection tool (https://www.mcafee.com/en-us/scam-detector/) that uses advanced artificial intelligence to automatically detect fraud across SMS, email, and video, including deepfake content [1]. The tool identifies suspicious messages, fake websites, and AI-driven deception before any damage occurs, embodying the principle of ‘using AI to fight AI’ [1]. KeenSystems advises organisations to adopt AI-driven anomaly detection systems and voice verification technologies to block suspicious calls and destinations [3]. The combination of data, decision logic, and analytical capabilities through a central platform is seen as essential for proactive monitoring and rapid response to fraud [2]. Olaf Passchier from SAS emphasises that it is not about blind trust, but about understanding why something happens and taking action: ‘Technology helps by providing insight into what is truly happening’ [2]. This approach underscores that AI is only effective when the underlying technology and decision-making processes are understood [2].
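To make the idea of anomaly detection concrete, here is a minimal sketch of the principle behind such systems: flag payments whose amounts deviate sharply from an account's history. This toy example uses a simple z-score; real fraud platforms such as those described by SAS or KeenSystems combine many more signals (device, location, timing) with machine-learned models, and all function names, thresholds, and data below are illustrative assumptions, not any vendor's actual product.

```python
import statistics

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Flag payment amounts that deviate strongly from an account's history.

    A toy z-score detector: an amount more than `threshold` standard
    deviations from the historical mean is treated as suspicious.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    flagged = []
    for amount in new_amounts:
        z = (amount - mean) / stdev if stdev else 0.0
        if abs(z) > threshold:
            flagged.append(amount)
    return flagged

# Typical invoice payments for a small business (illustrative data).
history = [120.0, 95.0, 110.0, 130.0, 105.0, 115.0, 125.0, 100.0]

# A deepfake-voice "director" urgently requests a 25,000 euro transfer:
# the routine payment passes, the outlier is flagged for review.
print(flag_anomalies(history, [118.0, 25000.0]))  # → [25000.0]
```

The point of the sketch is the design principle the article describes: rather than trusting who appears to be asking for a transfer, the system checks whether the transaction itself fits established patterns.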

Practical Tips for Readers: How to Protect Yourself from AI Fraud

To protect yourself against the growing threat of AI-driven fraud, there are concrete steps you can take. Users are advised not to use gift cards or cryptocurrency for payments, as legitimate organisations never request these [1]. Using public Wi-Fi while shopping is discouraged due to the high risk of hacking; instead, a secure or mobile connection is recommended [1]. Users can sign up for their McAfee account to scan for recent data breaches where their email address has been exposed, and are then guided through recovery steps [1]. For organisations, it is essential to foster collaboration across the supply chain: information about suspicious calls or emerging scams should be shared with providers and industry groups [3]. When using two-factor authentication (2FA), users are advised to opt for apps or hardware tokens instead of SMS, as SMS messages are vulnerable to SIM-swapping [3]. Additionally, it is important not to click on suspicious links, even if they appear to come from a trusted source, and to remain actively critical of any content seen on TikTok or other platforms [1]. McAfee warns: ‘The era when fraud tactics were easily recognisable is over’ [1].
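The 2FA advice above rests on a concrete standard: authenticator apps and hardware tokens generate time-based one-time passwords (TOTP, RFC 6238), derived locally from a shared secret and the current time, so there is no SMS message for a SIM-swapper to intercept. The sketch below shows the mechanism using only the Python standard library, verified against the published RFC 6238 test vector; it is illustrative, not a hardened implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 time-based one-time password, as used by authenticator apps.

    The code depends only on the shared secret and the current 30-second
    time window, so nothing travels over the phone network.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((t if t is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32-encoded below),
# time = 59 seconds, 8 digits -> expected code "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # → 94287082
```

Both the authenticator app and the server run this same computation and compare results, which is why TOTP survives SIM-swapping attacks that defeat SMS-based codes.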

Sources