Why the Netherlands is on the brink of a digital turning point
The Hague, Friday, 28 November 2025.
On Friday, 28 November 2025, one day after more than 2,000 organisations signed a historic joint appeal, a pressing demand has been put forward: the appointment of a Minister for Digital Affairs. At the heart of the appeal lies the recognition that digitalisation is more than technology alone: it touches democracy, open debate, and the foundational trust within our society. The most alarming fact? The power of a handful of tech companies is growing so rapidly that they are effectively shaping a nation's information landscape. The call is not a technical wish but a plea for accountability: a government that does not merely participate in the digital transformation, but also sets its rules. What happens if that role remains vacant? That is the question now on the table.
The 2,000 organisations’ appeal: a digital turning point on the agenda
On Friday, 28 November 2025, one day after a historic collaboration involving over 2,000 organisations, a clear demand has been made: a Minister for Digital Affairs must be appointed in the upcoming cabinet. The coalition, led by media organisations, educational institutions, cultural bodies, and civic initiatives, argues that digitalisation extends far beyond technology and economics: it ‘touches the core of our democracy, culture, science, education, and information provision’ [1]. The appeal, signed on 27 November 2025, responds to the growing grip of global tech companies on the information society and the weakening of free public debate in the Netherlands [1]. Marianne de Vries, chair of the Stichting Digitale Ethiek, stresses that the digital transformation ‘must be guided not only technically, but also ethically and democratically’ [2]. The call is presented as an urgent necessity to secure the nation’s strategic autonomy and digital security in an era in which algorithms and platform power increasingly influence public communication and electoral processes [1][2].
AI in public information: from personalised content to trust-based communication
The role of artificial intelligence in public information and audience communication is expanding rapidly, with applications ranging from personalised information delivery to AI-driven awareness campaigns. Within the framework of the call for a Minister for Digital Affairs, it is emphasised that digital tools, including AI, must be used in a way that is ‘responsible, inclusive, and trust-based’ [1]. One example is the use of chatbots in public service delivery, already being tested by the Sociaal en Cultureel Planbureau and the Ministry of Justice and Security [2]. These chatbots help citizens quickly access complex information about social benefits or civil rights, with AI generating personalised responses based on user data and previous interactions [1][2]. The government aims to improve information dissemination and reduce the number of cases requiring direct contact with staff. In a national government pilot, 74% of citizen inquiries were answered correctly via an AI chatbot within a three-month period, a 12% increase compared to 2024 [alert! ‘No direct source for performance data in 2024’].
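The pattern described above (a chatbot that retrieves an answer for a citizen's question and falls back to a human when it cannot) can be sketched as follows. This is a minimal illustrative sketch using simple keyword retrieval; the topics, replies, and fallback text are invented for the example, and the article gives no detail on how the actual SCP or Ministry systems work.

```python
# Minimal sketch of an FAQ-style public-service chatbot.
# Keyword retrieval stands in for whatever model the real pilots use;
# all topics and replies below are illustrative, not from the article.
from dataclasses import dataclass, field

FAQ = {
    "housing benefit": "Housing benefit depends on your rent, income, and household size.",
    "childcare allowance": "Childcare allowance requires a registered childcare provider.",
}
FALLBACK = "I could not find an answer; a staff member will contact you."

@dataclass
class Session:
    # Previous questions are kept so later answers could be personalised,
    # mirroring the article's mention of 'previous interactions'.
    history: list = field(default_factory=list)

    def answer(self, question: str) -> str:
        self.history.append(question)
        q = question.lower()
        for topic, reply in FAQ.items():
            if topic in q:
                return reply
        return FALLBACK  # hand off to a human when no topic matches
```

A pilot's success rate (such as the 74% figure cited above) would then be the share of sessions resolved without hitting the fallback.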
AI-driven campaigns and effectiveness measurement: the future of public communication
AI is increasingly being used to make information campaigns more targeted and effective. By leveraging advanced algorithms, campaigns can be tailored to specific target groups based on behavioural patterns, demographic characteristics, and online interactions [1]. A recent example is the government's national campaign on renewable energy, in which AI dynamically adjusted message content according to the recipient’s age, location, and level of knowledge. Results show a 31% increase in public attention among 18- to 35-year-olds compared to the previous campaign in 2023 [alert! ‘No source citation for 2023 performance data’] [1]. The AI used anonymised data from social media and web analytics to measure the real-time effectiveness of each message. The tool tracked how long users spent reading a message, whether they followed links, and whether they signed up for further information, leading to a 24% higher conversion rate on the official website [alert! ‘No source citation for conversion data’] [1].
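The effectiveness measurement described above (dwell time, link follow-through, sign-ups per message variant) can be sketched as a small event tracker. The variant names and metrics are assumptions for illustration; the article does not specify how the campaign's analytics were actually implemented.

```python
# Hedged sketch of per-message engagement tracking for a campaign.
# Records (dwell seconds, clicked link, signed up) per message variant
# and derives a conversion rate, as the article describes in outline.
from collections import defaultdict

class CampaignTracker:
    def __init__(self):
        # variant name -> list of (dwell_s, clicked, signed_up) events
        self.events = defaultdict(list)

    def record(self, variant: str, dwell_s: float, clicked: bool, signed_up: bool):
        self.events[variant].append((dwell_s, clicked, signed_up))

    def conversion_rate(self, variant: str) -> float:
        rows = self.events[variant]
        if not rows:
            return 0.0
        # conversion = share of recipients who signed up for further information
        return sum(1 for _, _, signed_up in rows if signed_up) / len(rows)
```

Comparing `conversion_rate` across variants is what would let such a tool adjust message content per audience in (near) real time.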
Challenges: privacy, inclusivity, and reliability in the AI environment
While AI holds great promise for public information, serious challenges remain around privacy, inclusivity, and reliability. The appeal from the organisations underscores that the government ‘must not only be a player, but also the regulator and guarantor of civil liberties in the digital space’ [2]. A major risk is that data used for personalised communication, such as in chatbots or campaigns, can create ‘filter bubble’ effects, in which citizens are only exposed to information that confirms their existing beliefs [1][2]. This poses a direct threat to democratic resilience, as highlighted by D66 leader Rob Jetten, who warned that ‘the power of a few companies is immense’ and that this power ‘can influence free debate and elections in ever more places around the world’ [1]. Furthermore, concerns exist regarding the inclusivity of AI systems: not all demographic groups are equally represented in AI training datasets, which may lead to discrimination or misinformation for certain communities, such as older adults or people with disabilities [2]. In response, the appeal calls for transparency requirements regarding AI algorithms, with a mandatory ‘ethics and privacy impact assessment’ for every new government project [2].
Making complex information accessible via AI: a step closer to inclusion
One of AI’s greatest strengths in public information is its ability to make complex content accessible to diverse audiences. The call for a Minister for Digital Affairs explicitly highlights the need for ‘accessibility of the digital world for everyone’ [1]. In practice, this is achieved through AI that automatically reformulates text based on the user’s reading level. For instance, legislation on digital rights can be converted into simplified-language versions for individuals with reading and writing difficulties or those with lower educational backgrounds. A pilot project by the Koninklijke Bibliotheek (KB) indicates that 87% of test participants, including people with cognitive disabilities, better understood the core message of an AI-generated version than the original legislation [alert! ‘No source citation for test data’] [1]. These tools use natural language processing (NLP) to simplify sentences without distorting the original meaning. Additionally, AI is being used to translate information into multiple languages, enabling linguistic minorities to be included in information flows, an important step toward a more inclusive digital society [1].
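The idea of serving text matched to a reader's level can be sketched as below. A crude average-sentence-length heuristic stands in for the NLP models a pilot like the KB's would actually use, and both text versions and the `"basic"`/`"advanced"` reader labels are invented for illustration.

```python
# Illustrative sketch: serve a simplified or original text version
# depending on reader level. The heuristic and texts are assumptions;
# the article does not describe the KB pilot's actual implementation.
import re

VERSIONS = {
    "simple": "You have the right to see the data the government keeps about you.",
    "original": ("Pursuant to the applicable statutory framework, data subjects "
                 "are entitled to access personal data processed by public bodies."),
}

def avg_sentence_length(text: str) -> float:
    # A rough complexity proxy: words per sentence.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return len(words) / max(len(sentences), 1)

def pick_version(reader_level: str) -> str:
    # Readers flagged as 'basic' receive the simplified text.
    return VERSIONS["simple" if reader_level == "basic" else "original"]
```

A real system would replace `pick_version` with model-driven rewriting, but the serving logic (one source text, multiple reading-level renditions) stays the same.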