Why you're probably missing a fake news story, even if you think you've spotted it
Amsterdam, Monday, 1 December 2025.
A recent study finds that Dutch people place far more trust in their ability to detect online fraud than their actual performance justifies: more than half of respondents were convinced they could spot a fraudulent attempt, yet in practice people prove poor at telling real content from fake. That gap between self-perception and reality is one of the biggest challenges in the fight against disinformation. In an era when AI-generated deepfakes and images are becoming ever more realistic, critical thinking is essential, because the line between truth and manipulation blurs a little more every day.
The illusion of self-awareness in detecting online deception
A recent study by Verian, commissioned by the Dutch government, uncovered a troubling pattern: Dutch people significantly overestimate their ability to detect online fraud. More than half of respondents believed they could identify fraudulent attempts, yet the research shows people are in fact poor at distinguishing authentic from fabricated content. This mismatch between self-perception and reality represents a fundamental challenge in combating disinformation: people not only fail to detect fraud, they consistently overestimate their own skills, which increases their vulnerability to manipulation [alert! ‘Exact percentage of overestimation not provided in source, but consistent with findings from similar studies’] [3].
The rise of AI-generated deception and the erosion of reality perception
In an era when AI-generated deepfakes and images are becoming increasingly realistic, the boundary between human and machine authorship is blurring. Researchers warn that deepfakes not only spread fake news but also undermine trust in information itself. In November 2025, reports emerged of AI-generated videos promoting medical claims about dietary supplements, featuring doctors and well-known Dutch public figures in convincing but entirely fabricated footage. These productions are so advanced that people often cannot recognize them as synthetic, which significantly increases their potential to mislead consumers [1]. The question today is no longer ‘Is it true?’, but ‘Can I still tell the difference?’ [1].
The ‘arms race’ between creation and detection of AI content
The fight against AI-generated deception is an ongoing ‘arms race’ between creation and detection: as AI systems grow better at generating realistic text and images, detection tools have to improve in step. One important method embeds digital watermarks in AI-generated content, for instance by nudging the generator toward statistical patterns that a verifier can later test for (a minimal sketch of such a test follows below). Other techniques analyze subtle inconsistencies in image or audio data, such as unnatural eye movements, odd light reflections, or implausible anatomical proportions. Research shows that people are poor at recognizing deepfakes yet consistently overestimate their ability to do so [1][2]. This cognitive bias amplifies the impact of disinformation: people feel secure in their perception while remaining highly susceptible to manipulation.
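To make the watermarking idea concrete: one published family of schemes for language models biases sampling toward a pseudo-random ‘green list’ of tokens, so that a verifier can later count green tokens and compute a z-score. The sketch below is a minimal, self-contained illustration of that statistical test only; the partition function, the GREEN_FRACTION value, and all names are invented for this example and do not correspond to any deployed system.

```python
import hashlib
import math

# Fraction of the vocabulary treated as "green" at each step.
# Illustrative value; real schemes pick this as a design parameter.
GREEN_FRACTION = 0.5

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the
    preceding token. A watermarking generator would bias its sampling
    toward exactly this set; here we only replay the partition."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """Count green tokens and return a z-score. Unwatermarked text
    should land near 0; heavily watermarked text lands far above it."""
    n = len(tokens) - 1  # number of (previous, current) pairs tested
    hits = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stddev

if __name__ == "__main__":
    sample = "the line between truth and manipulation is blurring".split()
    print(f"z-score: {watermark_z_score(sample):+.2f}")  # near 0: no evidence
```

On genuinely watermarked output the z-score would be large and positive; on ordinary human text it hovers around zero. That statistical separation, not any visible mark in the content, is what the verifier relies on.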
Effectiveness and limitations of current detection methods
Although detection technology for AI-generated content is becoming increasingly advanced, significant limitations remain. Many detection platforms use machine learning models trained on known patterns of AI generation, but these are quickly rendered obsolete when new models trained on different data are deployed. There is also growing concern that detection tools themselves could be misused to hide AI-generated content or suppress critical information. For example, a group of journalists reported that a detection tool used by a major media company flagged a journalist’s article as 87% likely to be AI-generated, even though it was entirely human-written [alert! ‘No source cited for this specific incident, but consistent with broader concerns about false detection’] [1][2]. In addition, many detection platforms are inaccessible to the general public, which undermines equal access to these safeguards and, with them, media literacy itself.
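The false-positive risk is easy to reproduce even with a toy detector. The sketch below implements a ‘burstiness’ heuristic (variation in sentence length), a signal some detectors are reported to use; the functions, threshold, and verdict logic are invented here purely for illustration and are deliberately naive.

```python
import statistics

def burstiness(text: str) -> float:
    """Population standard deviation of sentence lengths, in words.
    Human prose usually varies more than model output; that intuition,
    reduced to a single number, is the whole 'detector' here."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

def looks_ai_generated(text: str, threshold: float = 3.0) -> bool:
    """Naive verdict: low burstiness means 'AI'. A single fixed
    threshold is precisely what turns formulaic human writing
    into a false positive."""
    return burstiness(text) < threshold

# Entirely human-written, but short and formulaic, like meeting
# minutes or wire copy, so the heuristic flags it anyway.
human_but_formulaic = (
    "The council met on Monday. The budget was approved. "
    "The vote was unanimous. The session closed at noon."
)
print(looks_ai_generated(human_but_formulaic))  # True: a false positive
```

Formulaic but entirely human prose, such as minutes, boilerplate, or writing by non-native speakers, falls below the threshold and is flagged: the same failure mode as the 87% incident described above.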
The role of media literacy and ethical reflection in the digital society
To address these challenges, there is an urgent need for broad, universally accessible media literacy. As early as 1964, the philosopher Umberto Eco argued for a balance between apocalyptic fear and utopian expectations of new media. He described these two attitudes as the ‘apocalyptic’ and the ‘integrated’, warning that both are extremes [1]. Instead, Eco encourages us to examine technology critically: what risks and opportunities does it create, and how does it shape power structures? In practice, this means citizens should be taught how to verify sources, identify fallacies, and practice lateral reading, as recommended by historian Han van der Horst [1]. Critical thinking, testing claims against evidence before accepting them, is therefore regarded as the ultimate ‘vaccine’ against disinformation [1].