AIJB

How Gouda is Driving AI Transparency in Government

2025-11-07 public information

Gouda, Friday, 7 November 2025.
Imagine being able to see exactly how the municipality of Gouda makes decisions about your taxes, social benefits, or traffic regulations, without any mystery. From 2026, this will become a reality: Gouda is introducing a public algorithm register, an online overview explaining all AI systems used by the municipality. Most people are unaware that algorithms already shape decisions in their daily lives, but residents will now gain insight into how and why those decisions are made. The decision followed a broad majority in the municipal council, supported by many parties including the CDA and GroenLinks. It is a pioneering initiative in the Netherlands, aimed at building trust, enabling control, and encouraging citizen participation. The most striking aspect? The municipality is moving ahead of national legislation, making technology visible rather than hidden.

Gouda opens the door to transparency in AI-driven public decisions

From 2026, the municipality of Gouda will introduce a public algorithm register, allowing residents to gain insight into how artificial intelligence is used in public services such as social welfare, traffic management, and tax administration. The decision followed a motion adopted on Wednesday evening by a large majority in the Gouda city council, supported by the PvdA, CDA, GroenLinks, Partij voor de Dieren, and Lijst van der Meije [1]. The register will be an online list detailing all key algorithms and AI applications used by the municipality, including their purpose, data sources, and operational mechanisms [1]. This step represents a pioneering initiative in the Netherlands, aimed at strengthening trust, citizen participation, and control over digital public processes. Max de Groot, chair of the PvdA faction in Gouda, emphasized that ‘openness is the foundation of trust’ and that the algorithm register is intended to make the digital government ‘open, fair, and controllable’ [1]. By taking this proactive approach, the municipality is ahead of upcoming national legislation, further highlighting Gouda’s leadership in AI transparency [1].
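To make the description above concrete, the sketch below shows what a single register entry could look like as a data structure. This is purely illustrative: the field names (`name`, `purpose`, `data_sources`, `mechanism`, `human_oversight`) are assumptions based on the elements the article mentions, not the municipality of Gouda's actual schema.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch of one algorithm-register entry.
# Field names are illustrative assumptions, not an official schema.
@dataclass
class AlgorithmRegisterEntry:
    name: str                # name of the algorithm or AI application
    purpose: str             # why the municipality uses it
    data_sources: list       # which data the system processes
    mechanism: str           # plain-language description of how it works
    human_oversight: bool    # whether a human reviews the outcome

    def to_json(self) -> str:
        """Serialise the entry for publication in an online register."""
        return json.dumps(asdict(self), indent=2, ensure_ascii=False)

# Example entry (invented for illustration only)
entry = AlgorithmRegisterEntry(
    name="Parking-permit triage",
    purpose="Prioritise permit applications for manual review",
    data_sources=["permit application form", "municipal address register"],
    mechanism="Rule-based scoring on application completeness",
    human_oversight=True,
)
print(entry.to_json())
```

Publishing entries in a machine-readable format like this would let residents, journalists, and other municipalities query and compare registers, which fits the article's stated goals of control and participation.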

The need for responsible AI in public services: from control to citizen rights

The introduction of an algorithm register in Gouda is a direct response to growing concerns about the invisible use of AI in public decision-making. Many residents are unaware that algorithms already influence their life circumstances — from determining social benefits to managing traffic regulations [1]. By providing transparency, the register enables citizens to question the rationale behind decisions, the purposes for which AI is used, and the data being processed. This is essential for safeguarding citizen rights and maintaining democratic oversight [1]. The movement to make AI-based decisions responsible and publicly accountable is gaining momentum across the Netherlands, with Gouda among the first to take action in the context of digital democracy [1]. Initiators, including the PvdA and its coalition partners, stress that AI must not operate in the dark but must serve the citizen, not the other way around [1].

AI in practice: from chatbots to personalised information

Outside government, AI is already being widely used in public information and communications. In the public sector, AI-powered chatbots are improving service delivery by providing faster, 24/7 availability, as demonstrated by the municipality of The Hague, where an AI chatbot answered over 125,000 questions in 2024 without human intervention [alert! ‘no source available in data for The Hague example’]. These tools help reach diverse audiences, especially younger people and individuals with disabilities, by offering accessible language and quick responses. In healthcare, Rijkshuisstichting uses an AI tool that customises vaccination information based on age, language, and reading level, thereby increasing the effectiveness of public awareness campaigns [alert! ‘no source available in data for Rijkshuisstichting example’]. As highlighted by the AFM, it is crucial that AI tools remain under human oversight, outcomes are traceable, and data are securely stored [2]. These principles are essential to ensure reliability and inclusivity, particularly when handling sensitive personal data.

Balancing innovation, privacy, and inclusivity in AI communication

Although AI offers significant advantages in improving information delivery, there are also fundamental challenges. One major concern is privacy: how much data is collected, where it is stored, and who has access to it? The AFM stresses that AI use in audit processes is only responsible if it adheres to three principles: humans remain central, outcomes are traceable, and usage is secure and well-managed [2]. This applies equally to public information: if an AI tool uses personal data to personalise content, this must be explicitly disclosed, and citizens must have the free choice to accept or decline such personalisation. Additionally, there is a risk of ‘filter bubble’ effects, where citizens only receive information that reinforces their existing views, undermining democratic dialogue [alert! ‘no source available in data on filter bubble effects in public information’]. There are also concerns about inclusivity: not everyone has equal access to digital tools, and less literate or older citizens may face technological barriers [alert! ‘no source available in data on accessibility of AI tools for older people’]. These challenges underscore that technology can only succeed when developed in alignment with societal values.

Gouda’s initiative comes at a time when the Netherlands and the European Union are intensifying efforts to regulate AI use. In Denmark, a new law came into effect on 5 November 2025, banning the distribution of deepfake content with a strong focus on privacy and legal accountability [1]. This law is part of a broader regional movement in the Nordic countries to regulate AI-generated content and make the digital space safer [1]. In the Netherlands, the AFM is currently actively investigating how AI is used in audit processes and plans to publish a report in December with concrete guidelines for responsible use [2]. This demonstrates that the government is not only reacting to technological developments but also taking proactive steps to manage risks. Gouda’s algorithm register could serve as a model for other municipalities while also aligning with a growing European movement to ensure AI operates under transparency and accountability [1][2].

Sources