How libraries can responsibly use AI without losing public value
The Hague, Thursday, 27 November 2025.
The Royal Library is currently conducting qualitative research into the responsible use of artificial intelligence in public library work. One of the most striking findings: AI systems often function as ‘black boxes’, whose path from input to result is impossible for people to explain. This poses a challenge to transparency and democratic access to knowledge, core values that libraries stand for. The research aims to identify concrete services and tools that can help the sector balance technological advancement with ethical responsibility. The goal is a practical framework that safeguards inclusivity, reliability, and public value without sacrificing the benefits of AI.
AI as a black box: the transparency challenge in libraries
Within public library work, the use of artificial intelligence (AI) is growing, but with this growth comes a fundamental challenge: the transparency of AI systems. Ivar Timmer, lecturer in Legal Management & Technology at the Amsterdam University of Applied Sciences, emphasizes that generative AI often functions as a ‘black box’: the process leading to a particular outcome is ‘practically impossible’ to explain to humans [1]. This is at odds with one of libraries’ core values: democratic access to knowledge. If users cannot understand how a recommendation or search result was produced, the information loses credibility. The Royal Library’s research therefore focuses on developing tools that ensure responsible AI use without compromising the public values of transparency and accountability [2]. This critical perspective is relevant not only for technology but also for the legal system: Timmer points out that the quality of the modern rule of law is significantly determined by the quality of its supporting technology [1].
From search systems to digital assistants: AI applications in practice
AI is already being used in various ways within modern libraries. AI-powered search systems can help users find relevant sources more quickly by using language models that analyze context and meaning [2]. Personalized recommendations, such as those offered at Kulturhus Borne - Bibliotheek, are generated from user behavior and preferences, enriching the visitor experience [3]. Automated cataloging enables more efficient organization of large volumes of materials, reducing staff workload [2]. Digital assistants, such as chatbots in digital service delivery, provide 24/7 support for questions about opening hours, terms of use, or book availability [4]. These applications enhance service delivery but require a strong focus on ethics and privacy. Since 2 February 2025, an AI literacy requirement has applied to the use of tools such as ChatGPT and Copilot within the information management sector in Flanders, the VVBAD notes, indicating that the sector is aware of the risks of uncontrolled use [5].
The changing role of library staff in the AI environment
As the use of AI in library work grows, the role of staff is shifting from executing cataloging tasks to acting as knowledge curators, supervisors of AI output, and coaches for users [2]. The Royal Library’s research focuses on developing guidelines that support the sector in responsible AI use, both technically and ethically [2]. Staff are encouraged to develop not only technical skills but also the critical thinking needed to assess AI outputs for reliability, bias, and inclusivity. Ivar Timmer emphasizes that AI ‘has no meaning and no worldview’ of its own and therefore always depends on human judgment for fair decisions [1]. This responsibility requires in-depth training, such as the VVBAD courses on effective AI use, its opportunities and risks, and the differences between tools [5].
Privacy, accessibility, and the democratization of knowledge
The accessibility of AI services is essential for the democratization of knowledge, yet also a source of concern. When using AI systems, user data can be collected, creating privacy risks—particularly when external tools like ChatGPT are used [5]. The Royal Library is therefore seeking services that ensure a responsible balance between innovation and privacy protection. The research focuses on applying public values such as inclusivity and democratic access, meaning that AI services must also be usable by people with disabilities or lower digital literacy [2]. This focus is also evident in VVBAD activities, such as the three-part course ‘Shape Your Archive!’ and the continuing education program Belevenisbibliotheek, both aimed at strengthening sector-wide competencies [5].
The future of libraries: a framework for responsible AI
The goal of the Royal Library’s research is to develop a practical framework for responsible AI use, based on expert consultations, literature reviews, and analysis of existing projects [2]. The project is part of the national ‘Werk aan Uitvoering’ (WaU) programme of the Ministry of Social Affairs and Employment, which aims to strengthen implementation organizations in public service delivery [2]. The outcome will be a research report with recommendations for new products and services that support the sector in responsibly managing AI and applying public values [2]. This approach aligns with the vision of Ivar Timmer, who warns against deploying AI without human oversight and stresses that technology can only make a positive contribution when embedded in an ethical framework [1].