Why a pop star and a Shakespeare scholar help us understand ourselves
Amsterdam, Wednesday 3 December 2025.
In an unexpected fusion of art and scholarship, Professor Elly McCausland finds that Taylor Swift's songs have more in common with classical literature than most people realise. Her book 'Swifterature' argues that Swift's autobiographical narratives, rich in emotion and literary technique, match the works of Shakespeare in depth, offering profound insights into the human experience. The most striking conclusion? What many dismiss as 'superficial' is in fact a rich source of introspection and social reflection. McCausland advocates a broader definition of literature, one in which pop culture is taken as seriously as a medieval epic. And perhaps that is precisely the point: to help people rediscover that stories, however modern, are always about ourselves.
The story you can’t see: how AI creates and spreads fake news
In an era in which information is produced and shared at speed, artificial intelligence (AI) plays a dual role, as both creator and combatant of fake news. A striking real-world example occurred on 2 December 2025, when Google Search displayed an AI-generated summary falsely claiming that Dutch cabaret artist Niels van der Laan was married to actress Eva Crutzen and was the father of a fictional daughter [2]. The error stemmed from an AI hallucination: the model generated inaccurate statements from a mix of outdated, unverified, and sometimes incorrect sources [2]. The claim was integrated directly into search results without attribution or fact-checking, and presented to users with a disconcerting air of confidence [2]. This phenomenon, 'citogenesis', the generation of purported facts that are later accepted as truth, represents an escalating threat, because AI models not only make errors but also feed their own mistakes back into future generations of data [2]. The flaw in Google's AI answer feature was caught only by users, who shared the summary on LinkedIn as satirical commentary on the unreliability of AI-generated information [2]. Even after Google confirmed the correct facts, namely that Van der Laan is not married and deliberately keeps his family life private, the misunderstanding persisted in certain digital ecosystems [2]. The core problem lies not in the 'hallucinations' themselves but in the quality of the training data behind the AI: if the underlying data is incomplete, outdated, or manipulative, the output is inevitably flawed [2]. The result is an 'ever-degrading loop' in which flawed AI output is reused to train new AI systems, steadily eroding the quality of information [2]. In another case, AI was used to fabricate a criminal identity, with a model conjuring up a 'fictional murderer' with no basis in reality [2]. These examples show that the threat of AI-generated fake news extends beyond political manipulation and disinformation campaigns; it can also play out on a personal, emotional, and even legal level.
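To make that 'ever-degrading loop' concrete, here is a minimal toy simulation (my own illustration, not taken from the cited reporting; the starting error rate and per-generation hallucination rate are arbitrary assumptions). Each model generation is 'trained' on the previous generation's output, and each pass turns a fixed share of the remaining true claims into hallucinations, so falsehoods compound:

```python
# Toy model of the 'ever-degrading loop': generation n+1 is trained on
# generation n's output. If each pass hallucinates a fixed fraction of
# the claims that were still true, the share of false claims compounds.
# The rates below are arbitrary assumptions, for illustration only.

def degrading_loop(initial_error: float, hallucination_rate: float, generations: int) -> list[float]:
    """Return the fraction of false claims in the corpus per generation."""
    errors = [initial_error]
    for _ in range(generations):
        still_true = 1 - errors[-1]
        errors.append(errors[-1] + still_true * hallucination_rate)
    return errors

for gen, err in enumerate(degrading_loop(0.02, 0.05, 10)):
    print(f"generation {gen}: {err:.1%} of claims false")
```

Even with a modest 5% hallucination rate per pass, over 40% of the claims in this toy corpus are false after ten generations; errors never correct themselves, which is exactly why the loop only degrades.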
Why the public broadcaster is in crisis: cuts and erosion of trust
The Dutch public broadcaster is under unprecedented pressure, owing to the combination of budget cuts and the rise of AI-generated fake news. On 2 December 2025, the EO announced that, as a result of budget reductions, it would no longer produce new documentaries [1]. The decision, now in effect, seriously curtails a form of journalism regarded as 'one of the most truthful storytelling formats in mass communication' [1]. Documentaries, especially those made in collaboration with the Christian pillar and in regions such as the Bible Belt, offer critical, vulnerable, and profound stories told from within [1]. That unique access to specific communities and perspectives is now at risk, because documentary funding depends largely on separate grants that are no longer available after the cuts [1]. The essence of the public broadcaster, 'high-quality mass communication from, for, and by all of the Netherlands', appears to be on the verge of disappearing [1]. Geertjan Lassche, filmmaker and investigative journalist, argues in an opinion piece that the public is not served by these cuts, and asks: 'In short, who exactly is being served here?' [1]. The absence of deep, well-researched, critical narratives makes the public broadcaster more vulnerable to the spread of AI-generated fake news, in which small, plausible errors, the 'small hallucinations', are hard to detect [2]. That subtlety makes it nearly impossible for the public to distinguish truth from fabrication, especially when information is fed into search results or social media without source verification [2]. The combination of financial constraints and technological disruption makes it impossible for the broadcaster to fulfil its traditional role as a guarantor of reliability in an age of information overload [1]. Without documentaries, a vital instrument for critical reflection and societal awareness vanishes.
AI in practice: how major companies influence information provision
The impact of AI on information provision is shaped not only by technological advances but also by the strategic decisions of major technology companies. In response to competition, particularly from Google, OpenAI has declared 'code red' and decided to postpone other services, including its shopping assistant and advertising plans [2]. The move reflects the financial pressure on the company: ChatGPT acts as a 'bottomless pit' of costs that subscriptions do not cover, not even those of paying Pro users [2]. Google, by contrast, holds a significant competitive advantage through the integration of AI into its own products, such as Google Search, where AI-generated answers are displayed directly alongside integrated ads [2]. This gives users immediate, easily accessible information and reduces the need to visit other platforms such as ChatGPT [2]. The consequence is a decline in click-through traffic from search results, as AI summaries give users enough information without their having to visit the original sources [2]. Microsoft and Google are forcefully rolling out AI across their products, Teams, Office 365, and Google Workspace, creating a 'winner takes all' dynamic that keeps most users within a single ecosystem [2]. The strategy strengthens Google and Microsoft, which also operate their own data centres and custom-designed chips (TPUs), whereas OpenAI must rent hardware from Amazon, Microsoft, or Google [2]. Dependence on NVIDIA for AI chips remains a vulnerability, although custom silicon of this kind could reduce it; NVIDIA itself is set to unveil its next chip architecture, Rubin, shortly [2]. The current AI bubble is viewed as 'too big to fail', with dangerous financial consequences for pension funds, banks, and governments should the market collapse [2]. The cycle of investment between OpenAI, NVIDIA, and the cloud providers is described as 'circular funding' that artificially inflates market value [2]. This dynamic concentrates power in a few dominant players and reduces diversity in AI development, which over time may degrade the quality of information and public trust in AI [2].
The power of quality journalism: why documentaries cannot be replaced
Documentaries are not mere entertainment but a form of critical, in-depth, and insightful journalism that plays a unique role in society. In the Netherlands, the EO has unique access to specific communities, such as the Christian pillar and the Bible Belt region, which allows stories to be told from within: vulnerable, critical, and authentic [1]. These profound narratives, often years in the making, are rooted in long-term collaboration, trust-building, and deep contextual understanding [1]. The loss of this form of journalism through the EO's budget cuts means the disappearance of a vital instrument for social reflection and public trust [1]. The public broadcaster does not merely provide information; it serves as a guarantor of reliability in an AI-driven society [1]. When AI models generate information without source verification, depth, or ethical accountability, the public broadcaster's role as a critical counterweight becomes essential [1]. Most AI models are trained on a mix of reliable and unreliable sources, which leads to false or outdated information being accepted as fact [2]. An AI might, for example, cite an old, inaccurate media report about a person and present it as truth without checking whether it still holds [2]. A documentary maker, by contrast, investigates and verifies every claim, often against multiple witnesses, documents, and experts [1]. That level of diligence cannot be replicated by an AI model trained on a 'compilation' of online information [2]. The value of quality journalism lies not only in delivering facts but in articulating complexity, ambiguity, and the human story. Without this instrument, the public becomes more vulnerable to manipulation, whether through AI-generated fake news or a growing uncertainty about what is real [1]. The EO's halt to documentary production, announced on 2 December 2025, is not a technical matter but a societal loss [1].
How to spot fake news: practical tips for the digital public
In an age in which AI-generated fake news spreads quickly and plausibly, media literacy has never been more crucial. The most effective defence is to examine sources critically. The first and most important tip: do not trust AI-generated answers without verification [2]. If an AI response lacks sources, carries no date, or exudes excessive confidence, it may well be fabricated [2]. On 2 December 2025, for instance, Google Search claimed that Niels van der Laan was married to Eva Crutzen and had a daughter, a completely invented claim [2]. The only way to expose it was to check the actual sources; in this case, none existed. A second tip is to use multiple sources. If several reliable sources tell the same story, it is more likely to be true; if it appears only in an AI response or on a single website, it is probably a hallucination or deliberate manipulation [2] (a simple version of this cross-check is sketched below). A third tip is to watch for subtle errors, the so-called 'small hallucinations': minor, plausible inaccuracies that are hard to spot yet can alter the truth [2]. An AI might, for example, mangle a command in a PowerShell script by adding a double colon after a variable, rendering the script unusable [2]; such errors are caught only by an expert manually reviewing the code. A fourth tip is to trust people and investigative journalists rather than automated systems: a story from an EO documentary or an investigation by a reputable newspaper is more likely to be true than one that originates in an AI response [1]. A fifth tip is to use tools designed to detect AI-generated content, such as GPTZero or Turnitin, although these are not yet fully reliable [2]. Finally, learn to recognise your own emotions: if a message makes you angry, fearful, or overly excited without cause, it may have been designed to manipulate your feelings [2]. Doubting information is not a sign of weakness; it is a sign of intelligence.
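As a purely illustrative sketch of that second tip, the fragment below (my own example, not a real fact-checking tool; the function, keywords, and threshold are all assumptions) accepts a claim only when several independent sources repeat its key terms, so a lone AI answer never counts as corroboration:

```python
# Minimal sketch of the 'use multiple sources' tip. Purely illustrative:
# in practice the 'sources' would be articles from outlets you already
# trust, and the matching would be editorial judgement, not substrings.

def corroborated(claim_keywords: set[str], sources: list[str], min_sources: int = 2) -> bool:
    """Accept a claim only if at least `min_sources` independent sources
    mention all of its key terms; a single AI answer never qualifies."""
    hits = sum(
        1
        for text in sources
        if all(keyword.lower() in text.lower() for keyword in claim_keywords)
    )
    return hits >= min_sources

# The fabricated claim about Niels van der Laan fails this check:
# no independent source ever repeated it.
sources = [
    "Interview: Niels van der Laan deliberately keeps his family life private.",
    "Review of the new cabaret season, starring Van der Laan.",
]
print(corroborated({"van der laan", "married", "eva crutzen"}, sources))  # False
```

Real verification is, of course, editorial work rather than string matching, but the principle is the same: one unsourced answer is never enough.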