
How AI is Transforming the Digital Battlefield – and Why Speed Now Determines Everything

2025-11-14 · fake news

Amsterdam, Friday, 14 November 2025.
Imagine a cyberattack that repairs itself, adapts in real time, and predicts how you’ll defend – all within milliseconds. Since last week, such autonomous, AI-driven attacks are no longer theoretical but a reality. In the Netherlands and the United States, over 1.2 million data records were intercepted by an AI-driven attack on a national identity registry. What makes this so dangerous? The adversary no longer operates at human speed, but at machine speed. In response, defensive strategies must also become automated – without relinquishing human judgment. The future of digital security lies not in more people, but in intelligent, collaborative AI systems that combine speed, intelligence, and control. This is not science fiction: it is the new normal.

AI as Weapon and Watchman: The Dual Role of Artificial Intelligence in Cybersecurity

Artificial intelligence is transforming cybersecurity from within and from without. On one hand, governments and organizations use AI to proactively detect and thwart threats; on the other, AI itself is being weaponized by state-backed and criminal actors. In the Netherlands, an AI-based attack was confirmed on [alert! ‘date not specified in source, but described as recent’] against the national digital identity registry, resulting in the interception of 1.2 million user records [1]. This incident is a concrete manifestation of a trend that has been accelerating since 2022: AI-driven cyberattacks have increased by 300% since then [2]. In the US and the EU, such attacks primarily target public services, with a 68% rise in automated attacks between 2024 and 2025 [1][2]. Beyond data interception, these attacks employ techniques such as synthetic phishing emails with deepfake voices and hyper-personalized content, which accounted for 68% of all targeted intrusion attempts in 2025 [2].

The implications of this shift are profound: it is no longer a question of reactive defense, but of a continuous, self-learning battlefield where the adversary is always one step ahead. The US military and US Cyber Command confirmed in a classified briefing on [alert! ‘date not exact in source, but described as last week’] that Russian and Chinese state-backed groups have launched AI-generated disinformation campaigns targeting the 2026 European elections [2]. The Netherlands takes this threat seriously: the National Cyber Security Council reported on [alert! ‘date not exact in source, but described as last week’] that 47% of all attacks identified between 15 October and 10 November 2025 were detected using AI-driven tools for proactive threat detection [1]. These figures demonstrate that AI is no longer a supportive tool but the central element of digital confrontation.

From Reaction to Automation: The Necessity of Autonomous Defense

The speed of modern cyberattacks is the primary challenge. While traditional security systems rely on human response, autonomous AI-powered attacks operate in real time and adapt dynamically to evade defensive measures [3]. Sam Kinch, former cyber officer in the US military and now Federal Field CTO at Tanium, emphasized that ‘when adversaries deploy autonomous agents capable of real-time adaptation, responses at human speed are futile’ [3]. This demands a fundamentally different approach: defensive operations must incorporate autonomous response capabilities, while human oversight remains strategic [3].

The US government recognizes this shift and is building a new foundation: IT operations must be conducted under a true weapons system approach, with guarantees for fundamental cyber hygiene, autonomous patching, and continuous compliance [3]. In this context, Tanium has been recognized since 13 November 2025 as a CVE Numbering Authority (CNA), meaning it can issue its own CVE identifiers for vulnerabilities discovered in its products [4][5]. This marks a step toward greater transparency, but also a necessary measure in a world where speed is critical. The next version of the Joint Cyber Defense Collaborative (JCDC) playbook must streamline information sharing between DHS, Cyber Command, and the FBI, while preserving verification standards, to keep pace with AI-driven threats [3].

For the Netherlands, this presents a clear parallel: collaboration between NCSC, Defence Cyber Command, and Team High Tech Crime must be further accelerated, as autonomous threats leave no room for bureaucratic inertia [3]. The future of defense lies not in more personnel, but in integrated, collaborative AI systems – a ‘vendor-in-depth’ approach, where multiple integrated AI agents reinforce each other and errors in one technology are compensated by another [3].
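The division of labor described above – machine-speed containment for clear-cut incidents, human judgment for ambiguous ones – can be sketched in a few lines. This is an illustrative sketch only, not any vendor’s product: the `Alert` type, the severity thresholds, and `quarantine_host` are all hypothetical stand-ins for a real EDR integration.

```python
# Minimal sketch of autonomous response with human-in-the-loop escalation.
# All names here (Alert, thresholds, quarantine_host) are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    severity: int  # 0 (noise) .. 10 (critical)
    kind: str

SEVERITY_AUTO = 7   # at or above: contain at machine speed
SEVERITY_HUMAN = 4  # in between: queue for analyst review

def quarantine_host(host: str) -> str:
    # Placeholder for a real network-isolation or EDR call.
    return f"quarantined {host}"

def respond(alert: Alert, review_queue: list) -> str:
    """Autonomous containment for high-severity alerts; strategic
    human oversight for everything in the grey zone."""
    if alert.severity >= SEVERITY_AUTO:
        return quarantine_host(alert.host)   # no human in the loop
    if alert.severity >= SEVERITY_HUMAN:
        review_queue.append(alert)           # analyst decides
        return "escalated to analyst"
    return "logged"                          # low severity: record only

queue = []
print(respond(Alert("db-01", 9, "lateral-movement"), queue))  # → quarantined db-01
print(respond(Alert("ws-17", 5, "phishing-click"), queue))    # → escalated to analyst
```

The key design choice is that the thresholds, not the humans, sit on the hot path: analysts tune `SEVERITY_AUTO` and `SEVERITY_HUMAN` strategically, while the loop itself runs at machine speed.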

AI and Fake News: A Fifth Column in Digital Democracy

The spread of fake news is one of the most direct threats AI poses to democracy and media literacy. AI-driven disinformation campaigns, such as those deployed by Russian and Chinese state-backed groups, are no longer isolated incidents but centralized, scalable, and highly effective tools for manipulating public opinion [2]. These campaigns are not only well-funded but also produced with AI: voice cloning, deepfake videos, and personalized messages are mass-produced and disseminated via social media, email, and devices [2]. In 2025, the volume of AI-driven disinformation increased significantly, particularly in the lead-up to the 2026 European elections [2].

In response, the European Commission announced the AI Cybersecurity Act (AICA) on 10 November 2025, a law requiring real-time AI threat detection systems in all public digital services by 30 June 2026 [2]. This legislation is a direct response to the growing severity of AI-based attacks on democratic institutions. The impact extends beyond legal frameworks; it is also psychological: people are losing trust in what they see and hear, and are increasingly skeptical of institutions such as the media and government. According to Dr. Elise van Dijk, Cybersecurity Director at NCSC-NL, ‘AI is no longer just a defensive tool – it has become the weapon of choice for both state and non-state actors’ [2]. She warns that ‘AI-driven attacks are already shaping the digital battlefield – democracies must respond with proportional, automated defenses, or risk systemic collapse’ [2]. Thus, the spread of AI-generated fake news is no longer merely a technical issue, but a civil and ethical dilemma that undermines the foundation of a functioning democracy.

How to Spot Fake News: Practical Tips for the Digital Citizen

As a citizen on the digital battlefield, it is essential to remain aware of the risks posed by AI-generated fake news. The first step is to ask questions: who is the source? What is the intent behind the text or video? The SANS Institute, a global leader in cybersecurity training, offers courses in 2025 specifically designed to manage the risks associated with generative AI and large language models [6]. The SEC545: GenAI and LLM Application Security course teaches professionals how to analyze security-sensitive AI applications and identify risks [6].

For the general public, media literacy remains the most important defense. Tips for identifying fake news include: verify the origin of information using reverse-search tools such as Google Lens or TinEye; scrutinize audio and video quality, as AI-generated material is often unnaturally smooth or flawless; check whether the content has appeared previously using resources such as Archive.org; and use your organization’s or government’s security channels, such as the Dutch government’s secure newsletters or NCSC-NL alerts.

On 5 November 2025, the European Commission warned that 34% of national cyber defenses across the EU still do not implement AI-based threat models, meaning many countries remain vulnerable to AI-fueled interference [1]. This underscores the need for collective responsibility: anyone who shares information bears part of the burden. The world of AI is no longer a separate domain: it is an integral part of daily life, and it is crucial to engage with it consciously.
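The Archive.org check in the tips above can even be automated. The sketch below queries the Internet Archive’s public Wayback Machine “availability” endpoint to see whether (and when) a page was archived before. Only the endpoint URL is real and documented; the helper names are our own, and a live lookup of course requires network access, so the example at the bottom uses an offline sample of the response shape instead.

```python
# Illustrative sketch: has this URL been seen by the Wayback Machine before?
# Real endpoint: https://archive.org/wayback/available?url=<page>
# Helper names (parse_closest_timestamp, earliest_snapshot) are ours.
import json
import urllib.parse
import urllib.request

WAYBACK_API = "https://archive.org/wayback/available"

def parse_closest_timestamp(payload):
    """Extract the timestamp of the closest archived snapshot, if any."""
    snapshot = payload.get("archived_snapshots", {}).get("closest")
    return snapshot.get("timestamp") if snapshot else None

def earliest_snapshot(url, timeout=10.0):
    """Ask the Wayback Machine whether `url` has a known snapshot (needs network)."""
    query = urllib.parse.urlencode({"url": url})
    with urllib.request.urlopen(f"{WAYBACK_API}?{query}", timeout=timeout) as resp:
        return parse_closest_timestamp(json.load(resp))

# Offline example of the response shape the API returns:
sample = {"archived_snapshots": {"closest": {"timestamp": "20201231235959",
                                             "available": True}}}
print(parse_closest_timestamp(sample))  # → 20201231235959
print(parse_closest_timestamp({}))      # → None (never archived)
```

A snapshot timestamp well before a story’s claimed publication date is a quick red flag that old content is being recirculated as “news”.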

Sources