
How an AI-Generated Doctor Can Disrupt Your Health


Online, Saturday, 6 December 2025.
Imagine: a trusted doctor warns you about a dangerous illness, but it is not the real physician. In December 2025, AI-generated videos of real doctors spread on TikTok, promoting unverified health advice, particularly about supplements that lack scientific backing. The most alarming part? The footage consisted of highly realistic deepfakes, built from genuine recordings used without consent. The videos, which drew thousands of comments and millions of views, made it nearly impossible to distinguish fact from fiction. The danger lies not only in the misinformation itself but in the erosion of trust: when you no longer know whom to believe, even the doctor in the consulting room becomes a question mark.

The Rise of the AI Doctor: A Familiar Voice, a False Message

In December 2025, hundreds of videos appeared on TikTok featuring realistic deepfakes of well-known health experts, including Professor David Taylor-Robinson, a child health specialist at the University of Liverpool. These AI-generated videos, built from authentic footage of a 2017 Public Health England (PHE) conference and a parliamentary hearing in May 2025, claimed that Taylor-Robinson endorsed a supplement containing ‘10 scientifically supported plant extracts’ for menopausal symptoms, including a fictional condition dubbed ‘thermometer legs’ [1][3][5]. Deliberately trading on his authority, the videos drew thousands of comments and over 2.3 million views within 48 hours, particularly among younger users in the Netherlands [2][5]. According to Dr. Elise van Dijk of the Autoriteit Consument & Markt (ACM), this content ‘has no medical value and can be dangerous for the public’ [2]. The footage was created from existing videos of real doctors without permission, a serious violation of personal integrity and medical authenticity [2][5].

Uncovering the Deceptive Strategy: How Deepfakes Work and Why They Are So Dangerous

The theft of medical authority through AI-generated deepfakes is a striking example of how technology is weaponised to exploit trust. These deepfakes are created with AI models that generate realistic voices and facial expressions from limited source material, in this case the 2017 footage of Taylor-Robinson and other health experts [1][3][5]. The technology is so advanced that even insiders struggled to detect the deception: Duncan Selbie, former chief executive of Public Health England, described one of the videos as ‘completely fake’ and ‘not funny’ [3][5]. The deception trades on credibility: viewers saw a ‘familiar’ doctor delivering an apparently ‘scientifically backed’ message about supplements, while the content was entirely fabricated [1][2][3]. This tactic is not only misleading but dangerous, because it undermines trust in healthcare, fostering patient distrust of physicians and potentially prolonging waiting times in medical services [5].
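To make that mechanism concrete: the classic face-swap technique trains one shared encoder together with a separate decoder per identity, then routes person B's frames through person A's decoder, so A's face appears with B's pose and expression. The PyTorch sketch below illustrates only that architectural idea; it is a minimal sketch of the general approach, not the tool used in these videos, and every name and dimension in it is illustrative.

```python
# Minimal sketch of the classic face-swap autoencoder idea (illustrative only):
# one shared encoder learns a common face representation, and each identity
# gets its own decoder. Swapping = encode person B, decode with A's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared face embedding
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        z = self.fc(z).view(-1, 64, 16, 16)
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct faces of identity A
decoder_b = Decoder()  # would be trained to reconstruct faces of identity B

# After training both pairs, a "swap" renders identity A from B's frame:
frame_b = torch.rand(1, 3, 64, 64)    # stand-in for a real video frame
fake_a = decoder_a(encoder(frame_b))  # B's pose and expression, A's face
```

Because the encoder is shared, it learns pose and expression common to both identities, while each decoder learns one person's appearance; that split is what lets limited source footage of a real doctor be re-rendered saying something new.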

The Role of Social Platforms: Responsibility and Inadequate Action

Social media platforms such as TikTok played a central role in spreading this AI-generated misinformation. In December 2025, TikTok removed 124 videos following a report from the ACM and the fact-checking organisation Full Fact [2]. The takedown came on 5 December 2025, more than three months after Taylor-Robinson’s first complaint on 1 September 2025 [1][3][5]. Although TikTok confirmed the content violated its rules on harmful misinformation and impersonation, the platform’s response remained limited: no platform-wide policy was announced, and no automatic detection system for AI-generated content was implemented [1][3][5]. On 5 December 2025 the Dutch government announced an action plan requiring all social media platforms to label AI-generated health content by 15 December 2025, a deadline still pending as of 6 December 2025 [2]. The ACM’s agreement with TikTok foresaw action on AI-generated health content within 48 hours of detection, but that arrangement had not been implemented as of 6 December 2025 [2]. This reflects a widespread lack of accountability and speed in content moderation, despite the severity of the risks [2][5].

Fighting Misinformation: How Organisations and Citizens Can Respond

To counter this AI-driven erosion of trust, several initiatives have been launched. Netwerk Mediawijsheid, established in 2008 at the initiative of the Ministry of Education, Culture and Science, launched the follow-up to its campaign ‘Gecheckt? Wel zo gezond!’ on 3 December 2025, aimed at strengthening resilience against medical misinformation [5]. The campaign, running in over 400 general practitioner waiting rooms and health centres since its launch, includes an interview with dermatologist and misinformation fighter Annemie Galimont [5]. Additionally, on 4 December 2025 an online knowledge session titled ‘Stand Up Against Online Hate’ was organised with Diversion to strengthen digital literacy among young people [5]. Netwerk Mediawijsheid’s 2025 Stimuleringsregeling awarded grants to five winning initiatives, including #UseTheNews Nederland and MICA Leeuwarden, which build media literacy through educational tools [5]. Readers are advised to apply the ‘Pause Before You Post’ principle, introduced by the @dpcireland campaign, and to check information for reliability, source credibility, and potential conflicts of interest [4].

How to Recognise AI-Generated Videos Yourself: Practical Tips for Readers

To reduce the risk of being misled, there are concrete steps every user can take (a code sketch for the first check follows the list).

1. Verify the sources of the video or image material. If a video of a doctor circulates without a clear, verifiable origin, or without any indication of whether it is AI-generated, treat it with suspicion [2][5].
2. Watch for unnatural movements, inconsistencies around the eyes, or uneven lighting, all common indicators of AI-generated video [5].
3. Use online AI detection tools, such as those the Dutch ACM used on 5 December 2025 to identify and verify videos [2].
4. Ask yourself: ‘Would this doctor really talk about this supplement in this way?’ and ‘Is there a commercial conflict of interest?’ Many of the videos were linked to Wellness Nest, a company that claimed it had never used AI content but admitted it ‘could not control affiliates worldwide’ [1][3][5].
5. Report suspicious content via the ‘Klachtenknop’ (Complaint Button) of the Commissariaat voor de Media, available to both children and adults since 1 June 2025 [5].

These steps not only help protect your own health but also strengthen the integrity of digital information [5].
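For the first check, one practical technique is to compare sampled frames of a suspicious clip against footage from a source you trust, such as an original conference recording, since deepfakes built on recycled footage often produce near-identical frames. The Python sketch below is a minimal illustration of that idea using OpenCV, Pillow, and the imagehash library; the library choice, the threshold, and the file names are my assumptions, not tools named in this article.

```python
# Minimal sketch: compare sampled frames of a suspicious video against a
# trusted original using perceptual hashes.
# Requires: pip install opencv-python pillow imagehash
import cv2
import imagehash
from PIL import Image

def sample_hashes(path, every_n=30, max_frames=50):
    """Grab one frame every `every_n` frames and return perceptual hashes."""
    cap = cv2.VideoCapture(path)
    hashes, i = [], 0
    while len(hashes) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes

suspect = sample_hashes("suspect_clip.mp4")   # hypothetical file name
trusted = sample_hashes("phe_2017_talk.mp4")  # hypothetical file name

# For each suspect frame, find the closest trusted frame. Subtracting two
# imagehash values yields their Hamming distance; very small distances
# suggest the clip recycles the trusted footage.
for idx, h in enumerate(suspect):
    if not trusted:
        break
    best = min(h - t for t in trusted)
    if best <= 8:  # rough heuristic threshold, tune for your material
        print(f"suspect frame {idx}: near-match to trusted footage (distance {best})")
```

Perceptual hashes tolerate re-encoding and resizing, so near-matches can survive a re-upload; a match does not prove manipulation by itself, but it tells you the ‘new’ clip is built on old footage and is worth verifying against the original source.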

Sources