Can You Still Tell What’s Real? This Is the Uncomfortable Truth About AI Images in the Netherlands
Netherlands, Tuesday, 18 November 2025.
Do images on social media sometimes feel off? It’s not your imagination: 71% of Dutch people can no longer tell the difference between a genuine photograph and a fully AI-generated image. Even more concerning? 81% of AI-generated images were judged to be real. This points to a profound shift in how we trust information, and to why media literacy is now an essential civic skill. In an era where AI surrounds us daily, this demands not only new skills but also transparency about how images are created and shared. What does this mean for your trust in what you see?
The Invisible Line Between Real and Fake
Today, a majority of the Dutch population can no longer confidently determine whether an image is real or AI-generated. According to a Norstat survey conducted on 17 November 2025, 71% of Dutch respondents cannot distinguish between a real photograph and an AI-generated image [1]. In the study, eight images were shown, four of which were AI-generated. An astonishing 81% of these AI-generated images were perceived as real, a figure that blurs the boundary between reality and simulation [1]. This loss of recognition is not limited to any single age group: even among young people aged 18 to 29, a significant portion struggle, with 59% finding it difficult to identify AI-generated content [1]. The technology behind these images is so advanced that it not only mimics visual aesthetics but also matches viewer expectations, whereas real photographs often display imperfections such as uneven lighting or unnatural shadows, which can make the genuine image look less convincing than the synthetic one [1]. This technological evolution is moving so rapidly that two-thirds of respondents expect it to be extremely difficult to distinguish real images from AI content within five years [1].
The Arms Race Between AI Creation and Detection
As AI technologies improve at generating convincing images, detection tools are emerging to identify them. Most detection tools search for subtle anomalies that human eyes cannot perceive, such as irregular pixel patterns, inconsistencies in light sources, or the absence of natural texture [1]. These techniques rely on machine learning models trained on large datasets of both real and AI-generated images. However, the effectiveness of these tools appears limited: research indicates that even advanced detection algorithms can be misled when AI models are combined with post-processing techniques that conceal the digital ‘signature’ of an image [1]. The arms race is therefore in full swing: every advance in generation is met with a new detection method, yet the creative capacity of AI is growing faster than the detection capabilities of existing systems [1]. The result is a constant lag, with detection tools often updated only after AI-generated content is already in circulation [1].
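To make this concrete, the sketch below shows how such a learned detector is typically used: an ordinary image classifier (here a ResNet-18 from torchvision, chosen purely for illustration) fine-tuned on labelled real and AI-generated photos, producing a probability for each class. The checkpoint file and the class order are hypothetical placeholders, not any specific vendor's product, and this is a minimal sketch rather than a production detector.

```python
# Minimal sketch of a learned "real vs. AI-generated" image classifier.
# Assumptions (not from the source article): a ResNet-18 fine-tuned on a
# labelled dataset, saved to the hypothetical file "detector_checkpoint.pt",
# with class order [real, ai_generated].
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # two classes: real, AI-generated
model.load_state_dict(torch.load("detector_checkpoint.pt"))  # hypothetical checkpoint
model.eval()

image = preprocess(Image.open("suspect.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)[0]
print(f"P(real) = {probs[0].item():.2f}, P(AI-generated) = {probs[1].item():.2f}")
```

As the research cited above suggests, even a well-trained classifier of this kind can be fooled when generated images are re-compressed, resized, or otherwise post-processed to erase the statistical traces it was trained to find.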
Transparency as an Essential Tool
In light of these developments, Norstat emphasizes that clarity about AI use in communication is a crucial step toward reducing confusion [1]. Norstat’s commercial director, Kjell Massen, highlights that transparency in image production—such as clear watermarks or metadata—can support the identification of AI-generated content [1]. This approach has already been adopted in various sectoral guidelines. For example, at the Netherlands Institute for Cultural Heritage (NICE), where AI reconstructions for the series ‘Straten van Toen’ are being developed, specific visual cues are incorporated into the images to indicate the positions of individuals and the historical context [2]. Although the series was originally scheduled to debut in early 2026, its launch has been delayed, and no official release date is currently known [2]. The reconstruction technique uses generative AI models trained on 12,000 historical plates, maps, and descriptions from the 1500–1800 period, combined with digital collections from the Rijksmuseum and the National Archives [2]. This demonstrates that even in historical reconstruction, transparency and proper sourcing are essential for credibility [2].
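As an illustration of what metadata-based transparency can look like in practice, the hedged Python sketch below inspects an image for embedded hints that it was machine-generated, such as the text chunks some PNG-producing generators write or common EXIF fields. The specific field names are examples rather than a standard; robust provenance would rely on signed schemes such as C2PA content credentials, and the absence of metadata proves nothing, since it is easily stripped.

```python
# Sketch: look for metadata fields that sometimes reveal machine generation.
# The field names checked here ("parameters", "prompt", "Software", etc.) are
# illustrative examples, not an authoritative list.
from PIL import Image, ExifTags

def generation_hints(path: str) -> list[str]:
    """Collect metadata entries that may hint an image was AI-generated."""
    hints = []
    with Image.open(path) as img:
        # PNG text chunks: some generation tools store prompts and settings here.
        for key, value in getattr(img, "text", {}).items():
            if key.lower() in {"parameters", "prompt", "software"}:
                hints.append(f"PNG text '{key}': {str(value)[:80]}")
        # EXIF tags such as Software or ImageDescription, common in JPEG files.
        exif = img.getexif()
        for tag_id, value in exif.items():
            tag = ExifTags.TAGS.get(tag_id, str(tag_id))
            if tag in {"Software", "ImageDescription"}:
                hints.append(f"EXIF {tag}: {value}")
    return hints

print(generation_hints("suspect.png") or ["no obvious generation metadata found"])
```

A check like this is only a weak, best-effort signal; the watermarking and metadata practices Norstat advocates are most useful when they are applied consistently by the party publishing the image rather than reverse-engineered by the viewer.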
The Broader Impact on Safety and Trust
The disorienting effect of AI images extends beyond the difficulty of identifying individual photos; it also undermines trust in digital information more broadly. A study by Rubrik Zero Labs reveals that organizations worldwide are experiencing a growing gap between the number of identity-related attack vectors and their capacity to recover, particularly due to the rise of AI agents in the workplace [3]. These agents generate additional identities that must be managed, placing increased pressure on security teams [3]. IT and security leaders count identity attacks among the greatest risks, and many organizations are considering hiring additional staff or switching to alternative Identity and Access Management providers [3]. During ransomware incidents, a large proportion of affected organizations admitted to paying ransoms, a sign of declining trust in their own recovery capabilities [3]. The gap between attack vectors and recovery is thus not only technical but also psychological: organizations fear they may never return to a secure state after an incident [3]. This has broader implications for societal trust, in which the line between real and fabricated content grows increasingly blurred [1].
Responsible Innovation and the Role of Society
The rapid rise of AI-generated content compels societal institutions, such as libraries and public information services, to play a stronger role in promoting media literacy [1]. Norstat’s findings underscore an urgent need to raise awareness across all demographics, from students to seniors, of how AI produces images and how to recognize them [1]. This requires not only technological solutions but also educational initiatives that foster critical thinking. In the Netherlands, organizations are already active in this domain: the Cronos Group, for example, engaged over 1,000 students during the ‘Hack The Future’ event to discuss future technological challenges [4]. Additionally, SAP emphasizes that the value of AI lies in its integration with business processes, not in the technology itself [4]. This suggests that transparency and responsible AI use are vital not just within tech companies but across the public sector as well. Generative AI spending in Europe is expected to grow by 78.2% in 2026, signaling a massive integration of these technologies into the economy [4]. It is therefore essential that society not only oversees how AI is used but also takes responsibility for the quality and authenticity of the information it consumes [4].