AIJB

Why AI in the judiciary can never replace humanity

2025-11-05 journalism

The Hague, Wednesday, 5 November 2025.
Although AI is increasingly used to summarise case files and draft rulings, the human element remains irreplaceable – AI may see a witness's tears, but it cannot understand why they flow. Advocate General Manuella van der Put emphasises that the technology is merely a tool: it should supplement, not replace, the sharpness, awareness, and courage of judges. That indispensable human dimension is precisely what the judiciary needs in an era of rapid digitalisation. The real challenge is not technological but ethical: how do we maintain the balance between efficiency and justice, between logic and empathy?

AI as a mirror: from efficiency to ethical responsibility

In an age of rapidly advancing AI technologies in the judiciary, the role of human decision-making is under growing pressure. Advocate General Manuella van der Put, based at the Court of Appeal ’s-Hertogenbosch, stresses that AI need not be a threat but can be an opportunity – provided it is used responsibly, transparently, and with due regard for the human dimension [link: https://www.itenrecht.nl/artikelen/op-13-november-start-de-zesde-editie-van-de-mr-s-k-martens-academie]. Her perspective is grounded in practical experience from her doctoral research on an AI system capable of independently assessing traffic fines, completed in 2024 and since expanded with recent real-world applications [link: https://www.mr-online.nl/rechtspraak-is-mensenwerk-juist-in-het-tijdperk-van-ai/]. She argues that AI ‘sees’ what happens – such as a witness's tears – but ‘does not understand’ why they fall, as it lacks empathy and contextual awareness [link: https://www.mr-online.nl/rechtspraak-is-mensenwerk-juist-in-het-tijdperk-van-ai/]. This gap between perception and understanding lies at the heart of the ethical challenge: how do we balance technological efficiency with human justice? The judiciary is developing an internal variant of ChatGPT tailored specifically for legal use; Van der Put expects the first presentation of this system imminently, a sign that the transition from experimentation to implementation is well underway [link: https://www.mr-online.nl/rechtspraak-is-mensenwerk-juist-in-het-tijdperk-van-ai/].

From automation to responsible use: the necessity of AI literacy

Responsible AI use in the judiciary depends not only on technology but also on the knowledge and skills of its users. The European AI Act, now in force, mandates a baseline level of AI literacy – a requirement already being implemented in the healthcare sector through AI competency frameworks for healthcare assistants and medical specialists [link: https://demedischspecialist.nl/nieuwsoverzicht/nieuws/verantwoord-gebruik-van-ai-wat-moet-een-zorgprofessional-weten-en-kunnen]. That competency is equally critical in the judiciary. On 28 October 2025, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) published a guidance document, ‘Further building AI literacy’, outlining the legal obligations under the AI Act and offering a practical action plan with real-world examples for a sustainable approach [link: https://theinnovativelawyer.ai/insights]. Advocate General Van der Put underscores that generative AI in legal proceedings is no automatic solution: ‘If you always decide A and suddenly decide B, you must be able to explain why. That is not a threat, but rather an opportunity to strengthen our reasoning.’ [link: https://www.mr-online.nl/rechtspraak-is-mensenwerk-juist-in-het-tijdperk-van-ai/]. On 4 November 2025, the regulators AFM and DNB warned of concentration and systemic risks in the financial sector arising from growing digital dependence on non-European IT suppliers – a signal of the broader need for digital autonomy and critical thinking in AI use [link: https://theinnovativelawyer.ai/insights]. Experts also point to a lack of oversight and coordination among regulators and are calling for a central supervisory hub, for instance by systematically expanding the SDT [link: https://theinnovativelawyer.ai/insights].

The boundary between tool and decision-maker: dangerous overreaches

Although AI is used to summarise case files, flag deadlines, and anonymise rulings, relying on AI-generated material as evidence is extremely risky. Van der Put warns explicitly: ‘If a lawyer suddenly waves an AI-generated video, we must be extremely critical. Who is actually speaking there? And does what we see even match reality?’ [link: https://www.mr-online.nl/rechtspraak-is-mensenwerk-juist-in-het-tijdperk-van-ai/]. The warning stems from the rise of generative AI that can produce not only text but also images and audio, undermining the authenticity and reliability of evidence. The use of AI-generated content in legal proceedings is being debated not only in the Netherlands but also in major international cases. On 4 November 2025, one day before this publication, the UK High Court ruled in Getty Images v. Stability AI, in which the company was accused of the unauthorised use of millions of photographs to train AI models [link: https://www.ai-forum.nl/]. The case is a clear signal that copyright and digitalisation have reached a critical crossroads. In another case, GEMA v. OpenAI, the German music rights organisation GEMA accuses OpenAI of unauthorised generative AI training, pointing to a growing legal debate over the boundary between legitimate use and misuse of data [link: https://www.ai-forum.nl/]. The AI Commission of the Federation of Medical Specialists stresses that healthcare professionals must be able to work safely and responsibly with AI, which requires a baseline level of AI literacy [link: https://demedischspecialist.nl/nieuwsoverzicht/nieuws/verantwoord-gebruik-van-ai-wat-moet-een-zorgprofessional-weten-en-kunnen]. Expanding such literacy in the judiciary is essential to prevent AI from ever being treated as a decision-maker rather than a tool.

The human touch in the shadow of the machine

At the core of the judicial process remains the personal interaction between judge, defendant, witness, and lawyer. Advocate General Van der Put emphasises: ‘AI may be smart, but justice requires more than logic. It’s about the person sitting in front of you. That’s why we do this.’ [link: https://www.mr-online.nl/rechtspraak-is-mensenwerk-juist-in-het-tijdperk-van-ai/]. This statement is not romantic but fundamental: justice is a matter not only of logical reasoning but also of empathy, context, emotional intelligence, and trust. In practice, this is reflected in the work of the Court of Appeal ’s-Hertogenbosch, which handles approximately 10,000 tax, family, commercial, and criminal cases on appeal each year [link: https://www.itenrecht.nl/artikelen/op-13-november-start-de-zesde-editie-van-de-mr-s-k-martens-academie]. Decisions are made through a bespoke, context-aware approach that accounts for underlying societal issues; each case has its own unique circumstances [link: https://www.itenrecht.nl/artikelen/op-13-november-start-de-zesde-editie-van-de-mr-s-k-martens-academie]. Meanwhile, the emergence of agentic browsers such as Perplexity’s Comet, which perform autonomous actions within the browser and can access logged-in sessions, introduces new security and privacy risks, further increasing the need for human oversight [link: https://theinnovativelawyer.ai/insights]. The judiciary now faces a choice: end the experiment or scale up this digital access to justice – but always on the condition that the human dimension remains central [link: https://theinnovativelawyer.ai/insights].
