How the AI Act Safeguards Everyone's Fundamental Rights
The Hague, Friday 5 December 2025.
Imagine an algorithm deciding whether you get a loan, a job, or even a traffic fine, without you knowing. The Dutch Data Protection Authority has published an overview showing how the European AI Act safeguards fundamental rights such as privacy and the right to a fair process. Most notably, the law requires high-risk AI systems, including those used in public services, to be tested for bias and transparency. This means that as a citizen, you have the right to an explanation when a machine makes a decision about you. Over the coming months, regulators and organisations will put these rules into practice, potentially transforming how AI is used in our daily lives.
The overview, published on 4 December 2025, presents the regulation as a central pillar of the legal framework governing AI use in Europe, with a strong emphasis on the accountability of governments and organisations that develop and deploy AI, particularly in public services [2].
From Algorithms to Rights: The Role of the Dutch Data Protection Authority
The Dutch Data Protection Authority serves as the fundamental rights watchdog for AI systems in the Netherlands, collaborating with other supervisory bodies [2]. The overview, published on 4 December 2025, is intended as a practical tool for regulators, policymakers, and organisations implementing the law [2]. By embedding fundamental rights protection into AI regulation, the law imposes new responsibilities on supervisors, who must monitor not only product safety but also fundamental rights compliance [2]. The regulation aims to enable the use of AI within frameworks that limit risks to safety, health, and fundamental rights [2]. Because AI is deployed across diverse societal domains and may affect a range of rights, the Authority stresses that risk assessments must be conducted case by case [2].
AI in Public Services: Transparency and the Right to Explanation
In public services, AI systems must safeguard, at a minimum, the rights to privacy, non-discrimination, and a fair process. The European AI Act adds a fundamental rights dimension to existing product safety oversight, creating a new responsibility for supervisors [2]. The overview explains how the law shapes the protection of fundamental rights and where these safeguards are enshrined in the regulation, such as in provisions on AI literacy, prohibited AI systems, obligations for high-risk systems, and the use of testing environments [2]. The Authority states that the effectiveness of this protection depends on the further elaboration of the regulation through European technical standards, guidelines, and templates [2]. Moreover, the knowledge and skills of developers, providers, users, and supervisors are critical to detecting risks in time [2].
The Impact on Public Information and Communication
The application of AI in public information and communication is growing rapidly. According to the Government-wide GenAI Monitor conducted by TNO, the use of generative AI within Dutch public organisations increased from 8 applications in 2024 to 81 in 2025 [2]. This surge means AI is increasingly being used across sectors, including public information, for personalised information delivery, public service chatbots, and AI-driven communication campaigns [2]. For example, chatbots in public services can be available 24/7, significantly improving information accessibility [2]. AI helps reach diverse target groups, enhance information transfer, and measure the effectiveness of communication campaigns through data analysis [2].
Benefits and Challenges: Privacy, Inclusivity, and Reliability
While AI offers significant benefits, challenges must also be addressed. The regulation requires AI systems to be tested for bias and transparency, particularly in public services [2]. This is crucial for ensuring inclusivity, as algorithms may otherwise lead to unfair selection or access barriers [2]. Furthermore, the use of AI in public information must comply with privacy standards, especially concerning the processing of personal data [2]. The Authority emphasises that the protection of fundamental rights depends on the knowledge and competencies of all involved parties — from developers to end users [2]. Ongoing negotiations on the digital omnibus legislation could impact current fundamental rights protections, requiring additional attention from policymakers [2].
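Neither the overview nor the AI Act prescribes a specific bias test. As a minimal illustrative sketch, one common starting point for a bias audit is comparing selection rates between demographic groups (demographic parity); the function names and decision data below are hypothetical.

```python
# Illustrative bias check: compare positive-decision rates between two
# groups (demographic parity). This is one common audit metric, not a
# method mandated by the AI Act.

def selection_rate(decisions):
    """Fraction of positive decisions (1 = approved, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in selection rate between two groups."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Hypothetical loan decisions per group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Selection-rate gap: {gap:.3f}")  # 0.375
```

A large gap does not by itself prove unlawful discrimination, but it flags a system for the kind of case-by-case risk assessment the Authority calls for.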
AI as a Tool for Accessible Information
AI can help make complex information accessible to diverse audiences. Personalised information delivery can be tailored to users’ reading level, language use, and preferences, significantly improving the comprehensibility of legal, medical, or technical texts [2]. Chatbots can instantly answer simple questions, while AI-driven campaigns can measure effectiveness based on user behaviour, leading to more targeted and impactful communication [2]. The regulation requires high-risk systems to be tested for transparency and explainability, fostering greater trust in AI-driven decision-making [2].
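Tailoring information to a user's reading level presupposes some way of measuring text difficulty. As a minimal sketch (the overview does not name any particular metric), the classic Flesch Reading Ease formula scores text from average sentence length and syllables per word; the syllable counter here is a crude vowel-group heuristic, sufficient only for illustration.

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels (at least 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text (roughly 0-100)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

simple = "The cat sat. The dog ran."
complex_ = "Comprehensive administrative regulations necessitate extraordinary interdepartmental coordination."
print(flesch_reading_ease(simple) > flesch_reading_ease(complex_))  # True
```

A system could use such a score to decide when to offer a simplified version of a legal or medical text; production systems would use a vetted readability library rather than this heuristic.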