How AI bots will change campaigning in The Hague from September — what voters need to know
The Hague, Monday 8 September 2025.
From September 2025, campaigns in The Hague will deploy AI bots on a large scale: from automated chatbots answering voters’ questions to algorithmically generated flyers and social media posts. Notably, these tools are no longer used only by major parties; mid‑sized campaigns and digital agencies are also testing thousands of message variants and microtargeting individual voter profiles. That increases efficiency and reach, but it also raises the risk of targeted disinformation, deepfakes and hidden manipulation. This article describes the techniques behind the bots, explains how journalism and public information provision may come under pressure, and sets out the ethical and regulatory questions involved. It offers practical tips for recognising and checking AI content, along with recommendations for libraries, media‑aware citizens and policymakers on safeguarding transparency, human oversight and democratic integrity as campaigns become digitally smarter.
The rise of AI bots in The Hague’s campaigns — who is involved?
Campaigners in The Hague are increasingly deploying AI bots from September 2025 onward: from automated chatbots that answer voters’ questions to algorithmically generated flyers and social media posts [1]. These tools are no longer confined to the largest parties; mid‑sized campaigns and specialised digital agencies are also experimenting with thousands of message variants and with microtargeting to test effectiveness [1]. The trend fits a broader movement in which news outlets, creators and bots redistribute public attention, causing traditional actors to lose market share if they do not anticipate the change [3].
Which techniques lie behind them?
The AI tools used by campaigns typically combine language models for text generation (chatbots, personalised posts), algorithms for A/B testing and automation tools to scale large numbers of variants — techniques that make it possible to quickly generate hundreds to thousands of messages and measure which variant produces the most engagement [1]. At the same time, AI is used to monitor social media reactions and detect behavioural patterns, which facilitates microtargeting and rapid adjustment of messages [1][3].
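To make the A/B‑testing loop concrete, the sketch below shows, in heavily simplified form, how a campaign tool might expand one base message into variants and keep the variant with the highest observed click‑through rate. All names and numbers here (generate_variants, simulate_engagement, the toy engagement rates) are hypothetical stand‑ins for a language‑model API and a platform’s real metrics, not a description of any actual campaign system.

```python
# Illustrative sketch only: a toy loop that expands one base message into
# variants and measures which variant gets the most simulated "clicks".
# All names and rates are hypothetical; real campaign tooling would call a
# language-model API for generation and an ad platform's metrics for results.
import random
from collections import defaultdict

BASE = "Vote for {party}: {slogan}"
SLOGANS = ["more affordable housing", "safer streets", "lower local taxes"]

def generate_variants(party: str) -> list[str]:
    """Expand one base message into variants (here: simple templating)."""
    return [BASE.format(party=party, slogan=s) for s in SLOGANS]

def simulate_engagement(variant: str) -> bool:
    """Stand-in for a real click/like signal; each variant gets a toy rate."""
    rate = 0.02 + (hash(variant) % 5) / 100   # somewhere between 2% and 6%
    return random.random() < rate

impressions, clicks = defaultdict(int), defaultdict(int)
for _ in range(10_000):                        # each iteration = one impression
    variant = random.choice(generate_variants("Party X"))
    impressions[variant] += 1
    clicks[variant] += simulate_engagement(variant)

# Keep the variant with the highest observed click-through rate.
best = max(impressions, key=lambda v: clicks[v] / impressions[v])
print("Best-performing variant:", best, f"({clicks[best] / impressions[best]:.1%} CTR)")
```

In practice the same loop runs continuously against live audiences, which is why hundreds or thousands of variants can be tried with very little manual work.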
Practical applications: from chatbot to personalised flyer
Concrete applications already appearing in campaigns include: chatbots that answer frequently asked questions and direct voters to polling stations or policy information; automatically generated flyers and posts tailored to demographic or psychographic profiles; and systematic testing of millions of ad variants to find the most effective phrasings — all with limited manual intervention [1]. This makes campaigns more efficient and scalable, and lowers the threshold for smaller actors to apply advanced digital tactics [1][3].
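As an illustration of the chatbot use case, the following sketch shows a minimal keyword‑based FAQ bot of the kind described above. The questions and answers are invented placeholders; real deployments would typically place a language model and official municipal information behind such an interface, ideally with clear disclosure that the voter is talking to a bot.

```python
# Minimal sketch of a keyword-based FAQ bot, assuming a hand-written answer
# table. The questions and answers are invented placeholders; a production
# deployment would typically put a language model and official municipal
# information behind this kind of interface, with clear bot disclosure.
FAQ = {
    "polling station": "Placeholder: link to the municipality's official polling-station finder.",
    "opening hours":   "Placeholder: refer to the municipality's page on voting hours.",
    "housing":         "Placeholder: summary of the party's housing programme with a link.",
}

def answer(question: str) -> str:
    """Return the first canned answer whose keyword occurs in the question."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return "I'm not sure; please contact a (human) campaign volunteer."

if __name__ == "__main__":
    print(answer("Where is my nearest polling station?"))
```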
Risks to information provision: disinformation, deepfakes and covert influence
The same technical features that bring efficiency also increase the risk of targeted disinformation: automated creation makes it easy to produce large numbers of convincing but misleading messages, and microtargeting can show such messages to vulnerable groups without broad visibility or debate [1][4]. There is also a broad international pattern of AI and automation being used to manipulate online dynamics, which puts the reliability of public information under pressure [3].
Impact on journalism and news consumption
News organisations lose ground to alternative information producers, such as creators, communities and AI‑driven interfaces, if they do not respond strategically in time, because attention is scarce and the public increasingly prefers concise, tailored content [3]. For journalism this means scaling up both verification capacity and product innovation: fact‑checking, source verification and clearer disclosure of the AI tools used become more essential to retaining trust [3][1].
Cyber and security risks in view
It is not only information integrity that is under pressure: the rise of AI‑driven automation and the growing scalability of digital tools are associated with greater cyber risks, such as sophisticated DDoS campaigns and automated attacks that can disrupt infrastructure and online campaigns; analyses show that AI is already being used to make such attacks more precise and effective [6]. This plays out in a geopolitical context in which digital attacks and influence operations can reinforce each other [6].
Where journalism is most vulnerable
Newsrooms are most vulnerable at three points: first, prioritising speed over verification when picking up viral material; second, failing to recognise synthetic sources or automated comments that create a false sense of consensus; and third, losing audience attention to concise, tailor‑made AI content, which leaves in‑depth journalism generating less revenue [3][1][4].
Ethics and regulation: what questions arise?
Key ethical questions concern transparency about AI use (should a chatbot always identify itself as a bot?), authority and final human control over political messages, and rules on microtargeting and data use for political purposes [1][3][4]. At the same time, there are concerns about the blurring line between legitimate personal communication and manipulative influence, and about policy options that could restrict freedom of expression if they intervene too deeply in digital campaigning practices [1][5].
Practical detection tips for media‑aware voters
Some concrete signs that help to recognise AI‑generated political content are: repetitive phrasing and slightly inconsistent tone across multiple posts (can indicate batch generation), sudden waves of similar accounts or reactions appearing simultaneously (possible bot activity), and lack of transparent source attribution or human contact information in chat interfaces [1][4][3]. Voters are advised to always trace messages back to primary sources and official documents, perform basic metadata checks and be wary of content that attempts to provoke strong emotional reactions [3][4].
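As a simple illustration of the “repetitive phrasing” signal, the sketch below flags near‑identical posts in a small batch using a plain text‑similarity ratio. The example posts and the 0.85 threshold are invented for illustration; real detection work combines many signals (timing, account age, metadata) and still requires human judgement.

```python
# Illustrative only: flag near-identical posts in a small batch as one possible
# sign of batch generation or coordinated posting. The example posts and the
# 0.85 threshold are invented; real detection combines many signals (timing,
# account age, metadata) and always needs human judgement.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    "Party X will make our neighbourhood safe again. Vote on election day!",
    "Party X will make our neighbourhood safe again!! Vote on election day",
    "Great turnout at the market this morning, thanks to all the volunteers.",
]

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; values near 1 mean the two texts are almost identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.85
for (i, a), (j, b) in combinations(enumerate(posts), 2):
    score = similarity(a, b)
    if score >= THRESHOLD:
        print(f"Posts {i} and {j} look near-identical (similarity {score:.2f})")
```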
What libraries, citizens and policymakers can do concretely
Practical recommendations are: deploy libraries and media coaches to improve digital literacy and offer tools for source verification; invite newsrooms to joint training programmes on AI detection; and urge policymakers to set transparency rules for politically targeted AI communication and consider stricter requirements on data use for microtargeting [3][1][5]. At the same time, it is important that regulation preserves space for innovation and legitimate information, and that human final responsibility is enshrined in law [1][5].
Warning about uncertainties
The speed at which campaigns adopt AI and the exact scale in September 2025 remain partly uncertain, because available reports vary and some observations are based on qualitative analyses and examples rather than complete quantitative datasets [1][3][4].