
How an AI-Generated Video of a Doctor Could Endanger Your Health

2025-12-03 fake news

The Netherlands, Wednesday, December 3, 2025.
Imagine a trusted doctor in a video recommending a supplement that has never been approved, and the video looks completely realistic. This is not science fiction: medical deepfakes are becoming increasingly sophisticated and are already being used to undermine trust in health information. Research from November 2025 identified 40 doctors and public figures who had been misleadingly faked. Most people can no longer distinguish these videos from genuine content, even under close scrutiny. That makes them extremely dangerous: people make decisions about medications, tests, or diets based on false information. The real shock? Most AI chatbots do not even provide proper warnings. What you see, however reliable it may appear, is often not what it seems.

The Rise of Medical Deepfakes: A Threat to Trust in Healthcare

Medical deepfakes, AI-generated videos in which purported doctors give misleading health advice, represent an increasingly serious threat to public health. According to a study by Pointer published on November 27, 2025, 40 doctors and public figures were identified as victims of such manipulations [1]. These videos feature well-known health experts such as Diederik Gommers and Dr. Tamara, who never said what the videos portray, making false claims like ‘type 1 diabetes does not exist’ or ‘sunburn causes cancer’ [2]. Despite their realistic appearance, these videos are entirely fabricated and are often used for commercial purposes, such as promoting supplements or medications, a practice that is ethically unacceptable under medical professional standards [2]. The technology is so advanced that facial expressions, subtle facial movements, and even an unnaturally perfect skin sheen make it nearly impossible to distinguish the fake from reality [2]. Robin Peeters, an internal medicine physician at Erasmus Medical Center, warns that people may make dangerous health decisions based on such videos: ‘The answers from a chatbot can make people anxious. Or, conversely, they may be unjustifiably reassured. That can be dangerous’ [1].

AI Chatbots: Helpful Tool or Threat to Patient Decisions?

Although AI chatbots like ChatGPT can provide general medical information, they are not capable of offering individualized patient care. Robin Peeters emphasizes that the value of medical advice lies in its relevance to the patient’s specific situation: ‘The trick is to understand what it means for each individual patient. That’s where a doctor excels, but chatbots are much less effective’ [1]. Only a small fraction of chatbots include explicit warnings that their information is general and that one should always consult a doctor [1]. Yet even when such warnings exist, patients may still make health decisions based on manipulated or incomplete information, such as demanding tests with no medical value or refusing medications like cholesterol-lowering pills, which are inaccurately labeled as ‘pure poison’ [1]. The combination of false information and the illusion of authority can lead to unwarranted anxiety or unjustified reassurance, undermining trust in healthcare [1].

The Role of Social Media and SEO in Spreading Fake News

AI is not only used to create deepfakes but also to spread fake news across internet platforms. French investigative journalist Jean-Marc Manach revealed that by November 2025, over 4,000 AI-generated fake news websites were already operating, primarily in French, with at least 100 in English [3]. An example is ‘leparisienmatin.fr’, a typosquatting site mimicking the legitimate French newspaper ‘Le Parisien’ and using AI-generated content to manipulate Google’s algorithm [3]. These websites employ advanced SEO techniques, such as netlinking schemes, where artificial backlinks are generated to deceive Google’s ranking system [3]. Despite Google explicitly prohibiting the purchase or automated generation of links, these practices remain widespread [3]. The financial incentive is substantial: successful sites generate thousands of dollars per day in advertising revenue through Google AdSense, particularly via Google Discover, which has become a major income source for these networks since 2022 [3].
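To make the netlinking mechanism concrete, the sketch below shows one crude way an auditor might surface such a scheme: measuring how large a share of otherwise unrelated pages all funnel links to the same target domain. The crawl data, domain names, and the 50% threshold are illustrative assumptions for this article, not a description of how Google's ranking system actually detects link schemes.

```python
from collections import Counter

# Hypothetical crawl data mapping each page to the external domains it
# links out to; in practice this would come from a crawler's link graph.
crawled_pages = {
    "blog-a.example/post1": ["leparisienmatin.fr", "cnn.com"],
    "blog-b.example/item9": ["leparisienmatin.fr"],
    "recipes.example/cake": ["leparisienmatin.fr", "bbc.co.uk"],
    "forum.example/t42": ["wikipedia.org"],
}

def backlink_concentration(pages):
    """Count how many distinct pages link to each external domain."""
    counts = Counter()
    for outlinks in pages.values():
        for domain in set(outlinks):  # each page counts once per domain
            counts[domain] += 1
    return counts

def flag_netlink_targets(pages, share_threshold=0.5):
    """Flag domains that collect backlinks from a suspiciously large share
    of otherwise unrelated pages: one crude signal of a link scheme."""
    counts = backlink_concentration(pages)
    total = len(pages)
    return [d for d, c in counts.items() if c / total >= share_threshold]

print(flag_netlink_targets(crawled_pages))  # -> ['leparisienmatin.fr']
```

Real link-scheme detection weighs many more signals (anchor-text patterns, registration data, content quality), but the concentration heuristic captures why artificial backlink networks leave a statistical footprint.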

How AI Accelerates the Spread of Fake News and How We Can Counter It

The combination of large language models (LLMs) and automation enables the mass production of fake news. A 2025 university study shows that LLMs are about 68% more effective than humans at detecting real news stories, but their performance in identifying fake news is comparable to humans — around 60% accuracy [4]. The study also examined tactics used by fake news creators to enhance credibility, such as using AI-generated images with visual artifacts, fabricated author profiles (like a ‘professor at IESEG School of Management’), and sensational headlines without sources [3]. Furthermore, the risk of ‘misinformation spill’ arises when journalists at legitimate media outlets republish AI-generated stories without verification [3]. To combat this, technological solutions are being developed, such as DeepGaze — an AI tool that analyzes videos, audio, and images frame-by-frame for signs of manipulation [5]. This system is designed for real-time verification in newsrooms and can be integrated via APIs or SaaS platforms [5].
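DeepGaze's actual interface is not documented in this article, but the frame-by-frame pattern it is described as using can be sketched generically. The snippet below is a minimal sketch assuming OpenCV is installed: it samples frames from a video and aggregates per-frame scores from a placeholder detector. The `score_frame` stub, the file name, and the thresholds are hypothetical stand-ins for a real trained model, not DeepGaze's API.

```python
import cv2  # OpenCV: pip install opencv-python

def score_frame(frame):
    """Placeholder for a real detector (e.g. a CNN trained on deepfake
    artifacts) that would return a per-frame manipulation probability."""
    return 0.0  # dummy score; a trained model goes here

def scan_video(path, every_nth=10, threshold=0.8):
    """Sample every Nth frame and aggregate per-frame scores, mirroring
    the frame-by-frame analysis described for tools like DeepGaze."""
    cap = cv2.VideoCapture(path)
    scores = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream or unreadable file
            break
        if idx % every_nth == 0:
            scores.append(score_frame(frame))
        idx += 1
    cap.release()
    return {
        "frames_scored": len(scores),
        "frames_flagged": sum(s >= threshold for s in scores),
        "max_score": max(scores, default=0.0),
    }

print(scan_video("suspect_clip.mp4"))  # hypothetical input file
```

Sampling every Nth frame keeps the scan fast enough for the real-time newsroom use the article describes, at the cost of possibly missing brief manipulated segments.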

Why Media Literacy Is Now Essential for Democracy and Health

The increasing realism of AI-generated media is putting the foundational principles of democracy and public health under strain. According to the Pew Global Media Trust Survey (2024), 69% of news consumers believe AI-altered media reduces trust in journalism [5]. Additionally, UNESCO’s Media Security Index (2025) reports global economic losses of $78 billion due to misinformation and deepfake-based campaigns [5]. Sander Duivestein, a researcher at Sogeti, stresses the importance of ‘common sense’: ‘Don’t trust only your eyes. Deepfakes have become so good that you can hardly detect them anymore’ [2]. Responsibility also lies with social media platforms, which often delay removing misleading videos — a practice Robin Peeters describes as ‘not okay’ [1]. The solution lies in a combination of technical detection, legislation with strict penalties for tech companies, and large-scale awareness campaigns teaching people how to recognize AI-generated content [2].

Practical Tips to Recognize and Avoid Fake News

To avoid being misled by AI-generated information, readers can follow a few practical steps. First, check the domain name: sites like ‘leparisienmatin.fr’ are typosquatted versions of legitimate outlets such as ‘Le Parisien’ and are typically fake [3]. Look for the author: AI-generated articles often lack any digital footprint on LinkedIn or Twitter, or rely on fabricated credentials such as the fake ‘professor at IESEG School of Management’ profile [3]. Watch for visual artifacts: AI-generated images may show unnatural shine, unusual shadows, or irregular details [3]. Verify the sources: legitimate articles provide clear references, while fake news usually cites none [3]. Rely on trusted sources such as thuisarts.nl, gezondheidenwetenschap.be, the TikTok and Instagram channel @doktersvandaag, and the podcast De Vragendokter [2]. Finally, tools like the browser extension developed by Jean-Marc Manach can help detect AI-generated websites, though new domains are constantly being registered [3].
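The first tip, checking the domain name, is also easy to automate. The sketch below compares a URL's host against a short allowlist of legitimate domains using simple string similarity; the domain list and the 0.75 cutoff are illustrative assumptions, and Manach's actual extension may well work differently.

```python
import difflib
from urllib.parse import urlparse

# Outlets to compare against; extend this allowlist with sources you trust.
KNOWN_DOMAINS = ["leparisien.fr", "thuisarts.nl", "gezondheidenwetenschap.be"]

def typosquat_suspects(url, cutoff=0.75):
    """Return known domains this URL's host closely resembles without
    matching exactly: a classic typosquatting signal."""
    host = (urlparse(url).hostname or url).removeprefix("www.")
    suspects = []
    for legit in KNOWN_DOMAINS:
        ratio = difflib.SequenceMatcher(None, host, legit).ratio()
        if host != legit and ratio >= cutoff:
            suspects.append((legit, round(ratio, 2)))
    return suspects

# 'leparisienmatin.fr' scores about 0.84 against 'leparisien.fr': close
# enough to confuse a reader, different enough to be a separate domain.
print(typosquat_suspects("https://leparisienmatin.fr/article"))
```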

Sources