Doctors Combat Deepfake Disinformation on Social Media
2025-08-21 fake news

Amsterdam, Thursday, 21 August 2025.
General practitioner Tamara Derks is one of many doctors affected by deepfake videos on social media. These fabricated videos misuse the names and expertise of doctors to spread false medical information, damaging professional integrity and public trust. Derks, who creates scientifically reliable videos under @DokterTamara, has reported these deepfake videos. She warns of the dangerous consequences of this disinformation, such as incorrect health advice that can harm people's health.

Impact of Deepfakes on the Medical Community

The impact of deepfake videos on the medical community is significant. Doctors such as Tamara Derks, who has shared scientifically reliable information through her social media account @DokterTamara for years, are now confronted with fake videos that copy their appearance and voice. These videos spread false medical information, which damages doctors' professional integrity and undermines public trust in medical advice [2].

How Deepfakes Work and Their Danger

Deepfakes are created using AI technology capable of realistically imitating someone's appearance and voice. In Tamara Derks' case, not only have her appearance and voice been copied, but also the settings of her real videos. The message in the deepfake videos, however, is entirely different and often harmful. Derks emphasizes that many people see her as a reliable source of medical information, making them more likely to believe the false claims [2].

Tamara Derks has reported the deepfake videos that spread false information. She notes that the videos are so realistic that it is difficult to tell they are fake. Although she has filed a report, she observes that little is being done to stop these videos, and she hopes that more action will be taken if more cases are reported [2].

Use of AI in Combating Fake News

To combat the spread of fake news and deepfakes, more AI tools are being deployed. Platforms like YouTube are testing fact-check notes for videos containing disinformation. These notes provide users with additional context and help them recognize false information [5]. Additionally, experts are working on AI algorithms that can detect deepfakes by analysing deviations in mouth movements and speech speed [GPT].
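As a purely illustrative example of the kind of consistency check such detection tools might build on, the short Python sketch below correlates hypothetical per-frame mouth-openness measurements with the loudness of the accompanying audio and flags a clip when the two signals do not move together. The function names, threshold, and toy data are assumptions made for illustration only, not any platform's actual detector.

```python
import statistics

def lip_sync_score(mouth_openness, audio_energy):
    """Pearson correlation between per-frame mouth openness and audio loudness.

    In a genuine recording the two signals tend to rise and fall together;
    a weak or negative correlation is one rough hint of possible manipulation.
    (Illustrative heuristic only, not a real deepfake detector.)
    """
    if len(mouth_openness) != len(audio_energy) or len(mouth_openness) < 2:
        raise ValueError("need two equally long series with at least two frames")
    mean_m = statistics.fmean(mouth_openness)
    mean_a = statistics.fmean(audio_energy)
    cov = sum((m - mean_m) * (a - mean_a) for m, a in zip(mouth_openness, audio_energy))
    var_m = sum((m - mean_m) ** 2 for m in mouth_openness)
    var_a = sum((a - mean_a) ** 2 for a in audio_energy)
    if var_m == 0 or var_a == 0:
        return 0.0
    return cov / (var_m ** 0.5 * var_a ** 0.5)

def flag_possible_deepfake(mouth_openness, audio_energy, threshold=0.4):
    """Flag a clip when lip movement and speech loudness are poorly correlated."""
    return lip_sync_score(mouth_openness, audio_energy) < threshold

# Toy, made-up measurements (not real data):
real_mouth = [0.1, 0.5, 0.8, 0.4, 0.1, 0.6, 0.9, 0.3]
real_audio = [0.2, 0.6, 0.9, 0.5, 0.1, 0.7, 0.8, 0.2]
fake_mouth = [0.1, 0.5, 0.8, 0.4, 0.1, 0.6, 0.9, 0.3]
fake_audio = [0.9, 0.1, 0.2, 0.8, 0.7, 0.1, 0.3, 0.9]

print(flag_possible_deepfake(real_mouth, real_audio))  # False: signals track each other
print(flag_possible_deepfake(fake_mouth, fake_audio))  # True: mismatch suggests manipulation
```

Real detection systems rely on far richer signals (facial landmarks, frequency artefacts, model fingerprints), but the underlying idea is the same: look for places where a generated video is internally inconsistent.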

Implications for Media Literacy and Democracy

The spread of deepfakes and fake news has serious implications for media literacy and democracy. According to experts, AI-generated disinformation was one of the greatest risks surrounding the 2024 elections. Countries such as Latvia, Estonia, and Lithuania have warned about the influence of Russian media on their political processes [5]. In the Netherlands, almost half of the population is very concerned about misinformation, underscoring the need to raise public awareness of the risks of fake news [5].

Practical Tips for Readers to Recognize Fake News

To combat the spread of fake news and deepfakes, readers and social media users can apply the following practical tips:

  1. Consult Multiple Sources: Always verify information with multiple reliable sources.
  2. Use Fact-Checking Sites: Websites like Nu.nl and other fact-checking platforms can help confirm the accuracy of information [5].
  3. Be Aware of Deviations: Look for subtle deviations in videos, such as unnatural mouth movements or speech speed [2].
  4. Critical Thinking: Ask yourself if the information makes sense and if it comes from a reliable source.
  5. Report Fake News: Report suspicious content to social media platforms and authorities to take action [2].

Sources