
Why a well-known neuropsychologist is reporting a video he never made


Amsterdam, Friday, 21 November 2025.
Erik Scherder, a neuropsychologist at Vrije Universiteit Amsterdam, has filed a police report over a deepfake video in which he appears to recommend medications he has never endorsed. The footage shows him in his own office, speaking with a voice realistic enough to mislead viewers. More alarming still, over 40 doctors and public figures have become victims of a network using AI to promote supplements on TikTok. The videos are crafted so convincingly that people have been contacting Scherder directly to ask where to buy the products. The core issue: an AI-generated face and voice can be persuasive enough to undermine trust in medical experts, and this threat is far from over.

The rise of medical deepfakes: a threat to trust in healthcare

Neuropsychologist Erik Scherder of Vrije Universiteit Amsterdam has filed a police report over identity theft via deepfake videos in which he is falsely portrayed as recommending medications [1]. The videos, circulating on social media, show a strikingly realistic version of Scherder spreading false claims about medical treatments [1]. The case is a stark example of how generative AI techniques are being misused to undermine public trust in healthcare [1]. Deepfakes in the healthcare sector are no longer a futuristic scenario but an urgent, present-day problem: more than 40 doctors and well-known Dutch public figures have fallen victim to a network exploiting AI to promote supplements on TikTok [2]. The videos are so convincingly produced that people have been emailing Scherder directly to ask where to buy the products [3]. Their exceptional realism, including footage of Scherder in his own office, strengthens the illusion of authenticity and heightens the public health risk [2].

The business model behind the deepfake campaign: a network of Vietnamese entrepreneurs

The promotional campaign revolves around the brand Wellness Nest, operated by a network of Vietnamese entrepreneurs under the company Pati Group, headquartered in Colorado (USA) and Vietnam [2]. The company runs an affiliate programme that rewards partners on TikTok for selling its products [2]. The brand focuses on supplements such as shilajit, with deepfakes claiming the substance helps against fatigue or menopause-related symptoms [2]. A Pati Group employee denied the deepfake allegations, but the company sent no written response despite repeated attempts to reach it [2]. Alleged affiliate earnings, shown in screenshots of PayPal transaction records, range from $1,700 to $66,000 per month, with claims of profits exceeding $100,000 in a single month [2]. This financial incentive drives the spread of misleading content, which exploits the credibility of doctors and public figures such as Scherder and Diederik Gommers [3].

The medical community’s response and the role of social platforms

The Royal Dutch Medical Association (KNMG) has expressed serious concern over the growing misuse of deepfakes for deceptive health claims [2]. According to the KNMG, such misleading content can lead the public to make poor health decisions and poses a risk to public health [2]. The medical professionals involved insist these practices must not go unchecked. Scherder called it deeply distressing that people create such videos and stressed his decision to pursue legal action by filing a police report [1]. Intensive care physician Diederik Gommers of Erasmus MC likewise receives frequent messages from people who have seen him in deepfake videos ‘pushing nonsense’, undermining his professional credibility [3]. TikTok removed the videos only after the fact-checking platform Pointer Checkt approached the company, even though victims had reported the content two months earlier [2]. TikTok stated that ‘despite our extensive moderation processes, we cannot guarantee that all content complies with our guidelines’ [2].

How AI is used for both spreading and combating fake news

The technology behind the deepfakes relies on advanced AI models that can replicate faces, voices, and movements from limited real-world footage [2]. Techniques such as generative adversarial networks (GANs) and speech-generation models allow human appearance and speech to be synthesised, making the videos appear remarkably realistic [2]. For dissemination, these generative models are used to produce large volumes of videos in a short time, giving the deceptive campaign its scale [2]. AI is also being turned against misinformation: research institutions and platforms are developing AI-driven detection methods that analyse audio and video for signs of manipulation, such as unnatural eye movements or mismatched lip synchronisation [GPT]. One example is ‘digital watermarking’, which embeds an invisible code in files that recognition software can detect [GPT]. Although these methods are still under development, they are essential for building a defence against AI-generated fake news [GPT].
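To make the watermarking idea concrete, here is a minimal illustrative sketch in Python: an invisible bit pattern is written into the least significant bits of an image and later read back. The SIGNATURE bytes and the LSB scheme are simplifying assumptions for illustration only; production provenance watermarks are designed to survive compression and re-encoding, which this naive version would not.

```python
# Minimal sketch of invisible watermarking: hide a known bit pattern in the
# least significant bits (LSBs) of pixel values, then check for its presence.
import numpy as np

SIGNATURE = np.frombuffer(b"AUTH", dtype=np.uint8)  # hypothetical 4-byte mark

def embed_watermark(pixels: np.ndarray) -> np.ndarray:
    """Overwrite the LSBs of the first pixels with the signature bits."""
    bits = np.unpackbits(SIGNATURE)                    # 32 signature bits
    out = pixels.flatten().copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits  # clear LSB, set our bit
    return out.reshape(pixels.shape)

def detect_watermark(pixels: np.ndarray) -> bool:
    """Read the LSBs back and compare them against the expected signature."""
    bits = pixels.flatten()[:SIGNATURE.size * 8] & 1
    return np.array_equal(np.packbits(bits), SIGNATURE)

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in frame
marked = embed_watermark(frame)
print(detect_watermark(marked))  # True: the mark is present
print(detect_watermark(frame))   # almost surely False: no mark embedded
```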

Practical tips for readers to spot medical deepfakes

Readers can protect themselves against medical deepfakes with several concrete steps. First: check the source of the video. Does it come from an official hospital website or a recognised institution, or from a personal TikTok or YouTube channel? Videos from unknown sources warrant extra suspicion [GPT]. Second: pay attention to the language used. Deepfakes often make exaggerated or emotional claims, such as ‘100% natural’, ‘no side effects’, or ‘works within 24 hours’, that do not match scientific evidence [GPT]. Third: run a screenshot of the suspicious video through an image search engine such as Google Images or TinEye. If Scherder’s image appears on other websites in a different context, that is a red flag for manipulation [GPT]. Fourth: verify whether the endorsement is real. If the doctor or public figure has no official social media accounts or recent posts about the product, the recommendation is likely fake [GPT]. Fifth: report suspicious content immediately on the platform where it appears; TikTok, YouTube, and X (Twitter) have built-in reporting tools that help block misleading content [2].
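For technically inclined readers, the screenshot comparison in step three can also be approximated locally with a perceptual hash, which stays similar even when an image is resized or recompressed. The sketch below uses the Python Pillow and imagehash libraries; the file names and the distance threshold of 8 are placeholder assumptions, not values from this investigation.

```python
# Sketch: compare a frame from a suspicious video against a known authentic
# photo using a perceptual hash. Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

# Placeholder file names for illustration only.
original = imagehash.phash(Image.open("official_press_photo.jpg"))
screenshot = imagehash.phash(Image.open("suspicious_video_frame.png"))

# Small Hamming distance => visually near-identical images. A match that
# surfaces in an unrelated context (e.g. a supplement ad) is the red flag.
distance = original - screenshot
verdict = "likely the same source image" if distance <= 8 else "different images"
print(f"hash distance: {distance} ({verdict})")
```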

Sources