How the Royal Library is Collaborating on Safe AI for Our History
London, Thursday 4 December 2025.
Imagine an artificial intelligence capable of reading, understanding, and summarising old books, newspapers, and archives—without damaging the original documents. That vision is now one step closer. The Royal Library has joined an international network of over 40 heritage institutions, from the British Library to the Vatican, to develop safe and transparent AI techniques. The most notable development? They are working on an open-source toolkit that will be available within a year. This means libraries worldwide can learn from one another—without risking historical documents. The agreement, signed in London, is not a technological spectacle but a responsible step toward preserving our cultural heritage for future generations.
An International Agreement for Safe AI in the Heritage Sector
The Royal Library (KB) signed an international cooperation agreement on 3 December 2025 with the AI4LAM initiative (Artificial Intelligence for Libraries, Archives and Museums), which focuses on developing safe, transparent, and ethical AI applications for cultural heritage institutions [1][3]. This agreement, signed in London, marks a crucial step in the digital transformation of cultural heritage and is supported by more than 40 institutions worldwide, including the Library of Congress, the Smithsonian Institution, national libraries from France, Germany, England, Sweden, Denmark, Belgium, Finland, the Vatican, and the Rijksmuseum [2][3]. The agreement aims to establish a shared AI usage framework with security protocols for using digital collections in AI applications, running for three years with a review scheduled for November 2028 [3]. According to Prof. dr. Anja van Dijk (Professor of Digital Heritage Management, University of Amsterdam), this agreement represents a ‘crucial step toward responsible integration of AI in preserving and making cultural heritage accessible’ [3]. The KB serves as coordinator of the initiative and has been allocated a budget of €2.3 million for the period 2025–2028 [3]. This collaboration builds on the ambition to connect Dutch initiatives with international developments in artificial intelligence [2].
Open-Source Toolkit for All Heritage Institutions
One of the most concrete outcomes of the agreement is the development of an open-source AI toolkit for heritage institutions, set to be available from 15 March 2026 [3]. This toolkit is designed to support libraries, archives, and museums worldwide in safely and responsibly applying generative AI to their collections, without compromising the integrity of historical documents [2][3]. The toolkit will include security protocols, ethical guidelines, and technical standards developed through close collaboration among participating institutions [3]. The project is part of a growing community committed to advancing open, reliable, and interoperable knowledge infrastructure, with the Barcelona Declaration on Open Research Information being recognised as essential for the future of scientific and cultural knowledge [3]. The open-source nature of the toolkit ensures transparency and accessibility, enabling smaller institutions to benefit from advanced technology without requiring major investments [3].
AI in Practice: From Curator Bot to Data Curation
The KB has long experimented with AI techniques, and the new agreement strengthens this practical approach. A recently tested ‘curator bot’ allows users to ask questions about collection materials, with the system able to analyse and summarise historical documents based on integrated knowledge [2]. Additionally, the KB is working on so-called data curation, where AI is used to better distinguish between individuals with the same name or to automatically link digital versions of the same title [2]. These applications are not only technically innovative but also essential for improving the discoverability and accessibility of historical sources. The KB is focused on developing tools that help researchers and visitors better understand, search, and explore the collections [2]. These practical examples illustrate how AI can contribute to more efficient and improved services, without undermining the core values of libraries—such as transparency, democratic access, and public value [1][2].
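To make the data-curation idea concrete: linking digital versions of the same title typically comes down to deciding whether two catalogue entries refer to the same work despite small differences in spelling or punctuation. The sketch below is a minimal, hypothetical illustration of that matching step using simple string similarity; the titles, threshold, and function names are invented for the example and do not reflect the KB's actual systems.

```python
from difflib import SequenceMatcher


def normalise(title: str) -> str:
    """Lowercase the title and strip punctuation so spelling variants compare equal."""
    return " ".join(
        "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).split()
    )


def likely_same_title(a: str, b: str, threshold: float = 0.9) -> bool:
    """Heuristic: treat two catalogue entries as the same work when their
    normalised titles are near-identical (similarity ratio >= threshold)."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold


# Hypothetical catalogue entries: the first two differ only in punctuation/case.
print(likely_same_title("De Avonden: een winterverhaal",
                        "De avonden - een winterverhaal"))  # True
print(likely_same_title("De Avonden: een winterverhaal",
                        "Max Havelaar"))                    # False
```

Production systems would add metadata signals (author, year, publisher) and, increasingly, embedding-based similarity, but the underlying decision has this same shape.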
Responsible AI in the Public Sector: The Dutch Context
The KB’s agreement fits within a broader trend of responsible use of generative AI in the Dutch public sector. According to the TNO report Government-wide Monitor Generative AI, published on 1 December 2025, 81 identified applications of generative AI are in use by Dutch government organisations [4]. Of these, 37 (46%) have been implemented, 29 (36%) are in the experimentation phase, and 8 (10%) have an unknown status [4]. Municipalities are the most active organisations, with 34 applications (42%), followed by cooperation networks (15) and central government (6) [4]. The majority of applications (63%) target government employees as end users, while 14 (17%) have citizens as end users [4]. There are also examples of successful collaboration, such as the GEM cooperation network of municipalities, which has worked since 2019 on a virtual assistant for residents and has been scaled up since 2025 with funding from the Ministry of the Interior and Kingdom Relations (BZK) [4]. These experiences demonstrate that a gradual, transparent, and user-centred approach to AI innovation is key to sustainable implementation [4].
Challenges: Privacy, Knowledge, and Technical Infrastructure
Despite progress, significant challenges remain. The TNO report highlights that 42% of applications use foundation models developed by US-based companies, with European providers such as DeepL and Mistral in the minority [4]. This creates dependency and risks related to data protection, particularly given the ambiguity surrounding legislation such as the EU AI Act and the GDPR [4]. Furthermore, there is a lack of AI literacy within public organisations, which hampers innovation, and insufficient technical infrastructure, including computing power and space for data centres, slows the expansion of AI applications [4]. Respondents in the TNO survey stressed the importance of space for innovation, financial resources, and joint efforts: ‘Only civil servants won’t make it. Only entrepreneurs won’t make it. Only universities won’t make it. You really need to work together, otherwise you won’t get anywhere’ [4]. These statements underscore that responsible AI is not only a technical issue but also requires a cultural and organisational shift [4].