
Why a well-known doctor allowed himself to be faked in a medical deepfake

2025-11-21 fake news

Amsterdam, Friday, 21 November 2025.
A professor of neuropsychology at Vrije Universiteit Amsterdam has filed a complaint against AI-generated videos that show him recommending medications he never endorsed. Most alarming of all: the videos place him in his own office and department, which makes them extremely convincing. People have already lost money on products they bought after watching these fake videos. The case illustrates how quickly AI-generated deceptive content can erode trust in healthcare, and why urgent countermeasures are now essential.

How AI-generated deepfakes undermine trust in healthcare

Neuropsychologist Erik Scherder of Vrije Universiteit Amsterdam has filed a complaint over identity theft through AI-generated deepfake videos in which he appears to recommend certain medications [1]. The videos show Scherder in his own office and department, which reinforces the illusion of authenticity and has caused serious concern among patients and medical professionals [2]. People have contacted him saying they saw his "advice" and then bought products that did not work or lost money as a result [3]. The recommendations are entirely fabricated and are used to promote supplements on platforms such as TikTok, where the misleading content circulates at scale [1][2]. The practice undermines trust between patients and healthcare providers, because people now question the authenticity of medical communications, even when they come from a trusted figure [3]. The harm from such misuse is not only legal but also ethical: it causes financial damage and erodes trust in a sector where trust is fundamental to effective care [1][2].

An escalating issue: from fake illness stories to recommendations for liver and kidney treatments

Identity theft via deepfakes is no longer limited to harmless or trivial illness narratives, as seen in earlier forms of deceptive content [2]. Intensivist Diederik Gommers of Erasmus MC notes that the content of these deepfake videos has evolved from innocuous scenes to false recommendations for products claiming to treat liver and kidney conditions [2]. This shift has "significant real-world consequences" for public health, as patients may panic or choose inappropriate treatments based on false information [2]. Gommers emphasizes that his reputation has been "ruined" by these videos, in which he is portrayed as someone who "sells nonsense", a characterization that signals not only inaccuracy but also a clear breach of standard medical ethics [2]. The Royal Dutch Medical Association (KNMG) expressed concern during a Pointer Checkt broadcast about the growing misuse of medical deepfakes, in which doctors' faces and voices are used commercially without consent [3]. This points to a systematic expansion of misuse, with AI technology being leveraged to undermine trust in the healthcare system [1][2].

Scherder filed a formal complaint on Friday, 21 November 2025, after discussing possible action with Vrije Universiteit [3]. He stressed that he wants to "take action" because such incidents cannot be allowed to pass unnoticed, especially since the deceptive claims may mislead people into buying ineffective products or losing money [3]. Gommers has also filed a complaint and emphasized that he wants to protect people from the "kind of rubbish" being sold through these fake videos [1]. Erasmus MC has hired a lawyer who keeps sending letters about the deepfake practices, but Gommers describes this as "trying to bail out a sinking ship with a bucket": a response that is too late, too small, and ineffective at tackling the root problem [2]. This highlights the lack of proactive measures against AI-generated deceptive content, with the focus still on reaction rather than prevention [1][2]. The complaints filed by Scherder and Gommers mark important steps in a broader legal and institutional response to a technological challenge that has outpaced legislation and oversight [3].

Practical tips to recognise fake news and deepfakes

To avoid becoming a victim of false medical information, it is essential to approach online content critically [GPT]. Be wary of videos in which a familiar figure gives medical advice in a professional setting, such as an office or hospital, especially if the video circulates on platforms like TikTok or YouTube without an official source [1][2]. Check whether the video was published by an accredited institution, such as Vrije Universiteit or Erasmus MC, or by a reputable news outlet [3]. Look for signs of manipulation, such as unnatural mouth movements, inconsistent lighting, or an absence of blinking, characteristics often present in AI-generated videos [GPT]. Tools such as Google's AI-detection features, Microsoft's Video Authenticator, or the platform DeepTrace can help assess the authenticity of media [GPT]. If in doubt, consult the official website of the person or institution directly, or contact a trusted healthcare provider [3]. Combining media literacy with digital literacy is now crucial to limiting the impact of AI-generated deceptive content [GPT].

Sources