How an AI voice can undermine your reputation – and what's happening now
Amsterdam, Wednesday, 3 December 2025.
Imagine a video in which a well-known scientist claims that a major research project is being scrapped, despite never having said any such thing. This happened last week in the Netherlands – and it was no accident. Original material stolen via a phishing email was turned into a highly sophisticated deepfake that was viewed over 12,000 times. What is alarming is that even digital forensics experts found it difficult to detect the manipulation. These incidents demonstrate that AI-driven deception is no longer a distant future scenario but a real threat to scientific integrity and public trust. The complaint filed by scientists Erik Scherder and Diederik Gommers is a clear warning: we must learn to identify what is real more quickly – before the truth vanishes into the digital noise.
AI-driven manipulation threatens to take over the public sphere
A recent incident in the Netherlands illustrates how quickly AI technology can be misused to undermine public trust. On 27 November 2025, a fake video of Dr. Elise van Dijk, head of the Department of Clinical Neurology at Utrecht University Medical Center, appeared online, in which she claimed that a national research project would be cancelled on 1 December 2025. The original material was obtained via a phishing email sent to an unsecured email account on 25 November 2025; how much time passed between the theft and the video's dissemination is not known. By the time the complaint was filed on 28 November 2025, the video had been viewed 12,700 times. The manipulation was so advanced that even digital forensics experts, such as Dr. Liesbeth Verhoeven of the Rijksinstituut voor Volkenkunde, reportedly struggled to detect it, and the sources do not specify which detection techniques were applied. This case is a clear example of how AI-based deepfakes are no longer a futuristic concept, but a real challenge to scientific integrity and public communication [2][3].
The scientific community responds with complaint and call to action
In response to the spread of misleading AI content, scientists Erik Scherder and Diederik Gommers have filed a formal complaint over deceptive deepfakes that misuse their identities and authority [1]. The incident, exposed by the investigative journalism programme Pointer on KRO-NCRV, highlights a growing challenge for the integrity of public discourse [1]. The complaint represents an important step in defending truth in the digital age and underscores the urgency of better detection technologies and media literacy [1]. The incidents illustrate how AI-driven manipulation not only harms individuals but also undermines trust in scientific and public communication [1]. The scientists are not merely victims of technological abuse, but active guardians of truth in an era where the boundary between fact and fiction is becoming increasingly blurred [1].
The role of AI in developing ‘good’ and ‘bad’ deepfakes
The technology behind deepfakes is both a threat and an opportunity, depending on intent. Dr. Alex Connock of the University of Oxford presented a multilingual AI avatar capable of speaking several languages, including Arabic, Mandarin, Spanish, and German [4]. He describes this avatar not as ‘bad’, but as a tool that can be used for educational purposes, such as delivering lectures more quickly, adapting teaching materials to individual learning styles, and teaching in multiple languages simultaneously [4]. The avatar is based on an LLM-driven solution and was developed in collaboration with Cloudforce, Microsoft, and the Saïd Business School [4]. While the technology holds clear potential benefits, the risk of misuse is significant: if such an avatar is used to imitate an authority figure without consent, as in the case of the fake video of Dr. van Dijk, the technology becomes a tool for manipulation [2][4]. It is not the technology itself that is good or malicious, but how it is deployed [4].
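The sources do not describe the avatar's technical pipeline, so the sketch below is purely illustrative of one plausible building block: asking a general-purpose LLM to rephrase a single lecture point as spoken text in several languages. The model name, prompts, and lecture content are assumptions, not details of the Oxford/Cloudforce system.

```python
# Hypothetical sketch: generating one lecture point in several languages with a
# general-purpose LLM API. This is NOT the actual avatar pipeline described in
# the article; the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LECTURE_POINT = (
    "Deepfake detection looks for subtle audio-visual artefacts left behind "
    "by generative models."
)
LANGUAGES = ["Arabic", "Mandarin", "Spanish", "German"]

def lecture_segment(point: str, language: str) -> str:
    """Ask the model to rephrase one lecture point as spoken text in the target language."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[
            {"role": "system", "content": "You write short, spoken-style lecture segments."},
            {"role": "user", "content": f"Explain in {language}, in two sentences: {point}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for lang in LANGUAGES:
        print(f"--- {lang} ---")
        print(lecture_segment(LECTURE_POINT, lang))
```

In a full avatar system, text produced this way would presumably feed text-to-speech and lip-sync stages; none of those stages are specified in the sources.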
Research into deepfake detection and recognition is increasing
To mitigate the threat of deepfakes, new technologies are being developed to detect manipulation. At the GenProcc Workshop during NeurIPS 2025 in San Diego, Elio Quinton of Universal Music Group is scheduled to present research by Davide Salvi on singer identification in deepfakes on 6 December 2025, just days after this article's publication [3]. The study, titled ‘Not All Deepfakes Are Created Equal: Triaging Audio Forgeries for Robust Deepfake Singer Identification’, focuses on distinguishing authentic from fake voices using advanced AI algorithms [3]. The researchers aim to develop a classification system capable of detecting voice theft, even in high-quality deepfakes [3]. Although the results have not yet been made public, the work points to growing research into ways of countering AI-driven deception [3].
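The paper's method is not yet public, so the sketch below shows only a generic baseline for the same task: summarising voice clips as MFCC statistics and training a simple real-versus-fake classifier. The file paths, labels, and the choice of features and model are assumptions for illustration, not the approach of Salvi and colleagues.

```python
# Illustrative baseline only: classify voice clips as real or synthetic from
# MFCC summary statistics. Paths and labels below are placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def mfcc_stats(path: str, sr: int = 16000) -> np.ndarray:
    """Load a clip and summarise it as per-coefficient MFCC means and standard deviations."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder file lists – in practice these would be thousands of labelled clips.
real_clips = ["data/real/clip01.wav", "data/real/clip02.wav"]
fake_clips = ["data/fake/clip01.wav", "data/fake/clip02.wav"]

X = np.array([mfcc_stats(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))  # 1 = deepfake

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new, unseen clip: estimated probability that the voice is synthetic.
prob_fake = model.predict_proba(mfcc_stats("data/unknown/clip.wav").reshape(1, -1))[0, 1]
print(f"estimated probability of forgery: {prob_fake:.2f}")
```

A production system would need far larger, carefully curated datasets and more robust features; the point here is only to show what a voice-forgery classification pipeline looks like in outline.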
What you can do: practical tips to recognise fake news
The responsibility for recognising fake news lies not only with tech companies or governments, but also with each individual. As a reader, you can take the following steps (a minimal sketch after this list shows how the first two checks could be automated):
1. Verify the source of the information: is it an established news organisation, a research institution, or a personal webpage?
2. Pay attention to the timeline: if a video or post spreads unusually fast and lacks proper sourcing, the risk of deception is higher [1][2].
3. Use detection tools where available, such as the AI recognition system reportedly under development at Europol's European Cybercrime Centre, designed to detect digital manipulation [5].
4. Be critical of videos showing extreme emotions or unrealistic mouth or eye movements – these are often indicators of deepfakes [2].
5. Do not share content without verifying it yourself. When in doubt, consult platforms such as the official website of Pointer or the Dutch Research Council (NWO) [1][2].
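As a purely hypothetical illustration of the first two tips, the sketch below flags a post when its domain is not on a reader-maintained list of trusted sources or when it spreads unusually fast. The domain list and the speed threshold are arbitrary assumptions, not an official fact-checking standard.

```python
# Hypothetical helper for the first two tips: check the source domain and the
# spread rate of a post. The trusted-domain list and threshold are illustrative.
from dataclasses import dataclass
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"pointer.kro-ncrv.nl", "nos.nl", "rijksoverheid.nl"}  # example list
FAST_SPREAD_VIEWS_PER_HOUR = 2000  # arbitrary threshold for "unusually fast"

@dataclass
class Post:
    url: str
    views: int
    hours_online: float

def risk_flags(post: Post) -> list[str]:
    """Return human-readable warnings; an empty list means no obvious red flags."""
    flags = []
    domain = urlparse(post.url).netloc.lower()
    if domain not in TRUSTED_DOMAINS:
        flags.append(f"source '{domain}' is not on the trusted-domain list")
    if post.hours_online > 0 and post.views / post.hours_online > FAST_SPREAD_VIEWS_PER_HOUR:
        flags.append("spreading unusually fast for its sourcing")
    return flags

print(risk_flags(Post("https://example-videos.net/clip", views=12700, hours_online=3)))
```

A heuristic like this cannot prove that a post is fake; it only signals when extra verification is worth the effort.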