
How an AI-Generated Video Could Undermine a Democracy

2025-11-18 fake news

Amsterdam, Tuesday, 18 November 2025.
Imagine a video of a political leader, apparently filmed in a warzone, delivering a statement that was never actually made. By 2025, such a scenario is no longer science fiction but a tangible risk. Deepfakes, AI-generated fabricated images and video, are now being used to erode public trust, influence elections, and disrupt diplomatic negotiations. The most troubling aspect? The quality of this fabricated content is so high that even experts struggle to distinguish the real from the fake. When Hans Wijers stepped down as formateur, a private message was leaked; but what if that message itself had been a deepfake? This is precisely the danger: a misleading reality that appears undeniably authentic at first glance. The era of clear boundaries between fact and fiction is over, and that changes everything we think we know about truth and democracy.

The Blurring Line Between Fact and Fiction in the Digital Age

By 2025, the boundary between reality and simulation has become increasingly blurred, driven above all by the accelerated development of AI-generated content such as deepfakes. This technology, once confined to the realm of science fiction, is now actively deployed in geopolitical contexts to undermine public trust, influence elections, and disrupt diplomatic negotiations [1]. The quality of these fabricated images is so advanced that even experts find it difficult to tell genuine content from false, posing a fundamental challenge to democratic processes [1]. In Dutch politics, where a new House of Representatives (Tweede Kamer) was installed in November 2025, the social stack, the foundational layer of societal cohesion, was explicitly named as a core mission, with the threat of disinformation and deepfakes placed at its center [2]. The rise of hyperrealistic visuals, in which images and narratives appear more compelling than reality itself, makes it increasingly difficult for individuals to judge objectively what is true [3]. This phenomenon, historically traceable from colonial utopias to Cold War space races, is now accelerated by the integration of AI into policy and media [3]. In the AI era, the speed and saturation of these future visions have intensified through deepfake-level visualization and 24/7 media echo chambers [3]. The combination of emotional appeal and technological precision makes image and fact indistinguishable, so that the presentation quality of a message becomes more relevant than its truthfulness [3].

Deepfakes as Tools in Geopolitical Information Wars

By 2025, the use of deepfakes in geopolitical flashpoints has become a reality, fueled by the global AI race. States and other actors deploy AI-generated content as weapons in information wars, particularly around conflict zones, elections, and negotiations [3]. In Ukraine, where the American tech company SpaceX plays a crucial role in digital infrastructure via Starlink, dependence on foreign technologies has grown significantly. The recent €17 billion purchase of European radio frequencies by SpaceX strengthens the control of American tech companies over essential systems, weakening national regulation and democratic oversight [2]. This dependence creates a vulnerability that disinformation campaigns can exploit. It is now conceivable that a hyperrealistic video of a high-ranking political figure, shown in a warzone, could depict a military attack or a declaration of surrender that never took place, despite being entirely fabricated [3]. The impact of such manipulations extends beyond isolated events: they gradually erode trust in public institutions and damage the shared picture of the world [2]. The book 'The Mask of AI' emphasizes that the societal, ethical, and democratic implications of AI are now urgent, especially given the shifting relationship between technology and power [1]. Responsible handling of AI-generated content is therefore no longer a choice but an essential component of informed citizenship [1].

The Role of Big Tech in Undermining National Control

The growing influence of American tech giants such as Google, Facebook, Microsoft, Amazon, and Apple, whose market values exceed the GDPs of many European nations, poses a direct threat to national sovereignty [2]. These companies, expanding their reach through digital infrastructure, satellite internet, and radio frequencies, operate as 'freedom cities' or network states that function outside traditional democratic frameworks [2]. The €17 billion acquisition of European radio frequencies by SpaceX, owned by Elon Musk, is a concrete example of how foreign power holders are gaining control over critical systems essential to a state's security and communication [2]. Maxim Februari warns that 'Big Tech eats away at this national layer,' undermining national legal and political authority [2]. The result is a system in which public trust in the democratic rule of law is eroded by algorithms that generate personalized media experiences and progressively destabilize the collective worldview [2]. The digital infrastructure of the nation-state is thus effectively undermined by external actors, reinforcing the need for transparency and regulatory measures [2].

How AI Literacy Forms a Barrier Against Misinformation

Countering the threat of deepfakes and disinformation requires AI literacy as a foundation for media competence. In November 2025, the Catholic Church, through Sister Maria Catharina Al of the Dominican Order, delivered a lecture on the impact of AI on society and its ethical implications [4]. Such initiatives demonstrate that responsible AI use is no longer merely a technical issue but a pressing social and ethical responsibility [1]. In the book 'The Mask of AI,' teacher and AI expert Wim Casteels examines how algorithms reinforce bias, how AI-driven hiring perpetuates inequality, and how the ecological footprint of data centers weighs on the digital future [1]. Students and citizens are encouraged to think critically about the sources of information and the motivations behind AI-generated content [1]. As presentation quality becomes more important than truthfulness, readers must learn not only to look but also to ask: who created this, why, and to what end? [3] The combination of technological knowledge and critical reflection is the key to preserving a shared reality in an era where the line between fact and fiction has been erased [2].

Practical Tips for Recognizing Fake News and Deepfakes

Readers can defend themselves against disinformation by following several practical steps.

1. Verify the source. Does the message come from a reliable, established media organization, or from a private messaging app that may have been compromised? The leaked private message from Hans Wijers, which led to his resignation as formateur, illustrates how sensitive information, even when genuine, can be misused for political purposes [2].

2. Check the time and context. Does the message or video carry an implausible timestamp, unusual audio quality, or unnatural body cues such as unexpected eye movements or mismatched lip movements? Deepfakes often exhibit physical inconsistencies that an informed observer can spot [3].

3. Use deepfake detection tools. Open-source tools are now available that analyze AI-generated content based on micro-movements, light sources, and audio anomalies [1]. A minimal sketch of an automated first-pass check appears below this list.

4. Do not rely on a single source. Compare the message with other trusted sources and check whether multiple media outlets report it [2].

5. Before sharing a video or message, ask yourself: does this align with what I know about the person or event? If it feels too good (or too outrageous) to be true, it probably is not [3].

The rise of hyperreality means that the stories that spread are not always the most truthful, but the most convincing [3].
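To make steps 2 and 3 concrete, here is a minimal Python sketch, assuming ffprobe (part of FFmpeg) and the opencv-python package are installed. It is emphatically not one of the detection tools referenced in [1], and a clean result proves nothing: it merely automates two cheap sanity checks, reading the video container's embedded creation timestamp and flagging abrupt brightness jumps between frames. The filename suspect.mp4 and the threshold value are illustrative assumptions.

# Illustrative sketch only, NOT a real deepfake detector. It automates two
# cheap sanity checks: (1) read the container's embedded creation timestamp
# with ffprobe (part of FFmpeg, assumed installed), and (2) flag abrupt
# brightness jumps between frames, a crude proxy for splices or lighting
# inconsistencies. "suspect.mp4" is a placeholder filename.
import json
import subprocess

import cv2  # pip install opencv-python


def container_creation_time(path: str) -> str | None:
    """Return the creation_time metadata tag, if the container carries one."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(out).get("format", {}).get("tags", {})
    return tags.get("creation_time")  # None if absent; absence is itself a flag


def brightness_jumps(path: str, threshold: float = 25.0) -> list[int]:
    """Return frame indices where mean brightness changes by more than
    `threshold` (on a 0-255 gray scale) relative to the previous frame."""
    cap = cv2.VideoCapture(path)
    jumps, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mean = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
        if prev is not None and abs(mean - prev) > threshold:
            jumps.append(idx)
        prev, idx = mean, idx + 1
    cap.release()
    return jumps


if __name__ == "__main__":
    video = "suspect.mp4"  # placeholder: path to the video under scrutiny
    print("creation_time:", container_creation_time(video))
    print("suspicious frames:", brightness_jumps(video))

Note the limits of such a heuristic: ordinary scene cuts also produce brightness jumps, so every flag still needs a human judgment, and a missing or implausible creation_time tag is a prompt for further source checking (steps 1 and 4), not proof of manipulation. The dedicated detectors mentioned above analyze far subtler signals such as micro-movements and audio anomalies [1].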

Sources