
Erasmus MC Launches Ethics Lab for Responsible AI in Healthcare

2025-08-15 Press Release

Rotterdam, Friday, 15 August 2025.
Erasmus MC has established an ethics lab to ensure the responsible development and application of AI in healthcare. The lab will research and guide the ethical implications of AI, a pressing issue given the technology's growing use in the medical sector. Intensive care physicians Michel van Genderen and Diederik Gommers lead the initiative, which aims to create clear guidelines and standards for responsible AI use in healthcare.

An Urgent Necessity

The rise of AI in healthcare brings both opportunities and challenges. While AI can improve the diagnosis and treatment of patients, it also raises serious ethical questions that need to be addressed. Michel van Genderen, intensive care physician and innovator, emphasises the undeniable need for clear guidelines and standards. ‘Can we afford to leave aside technologies that mean so much? I don’t think so,’ says Van Genderen [1]. The ethics lab will therefore work on building a solid foundation: checking data, storing it properly, and ensuring that models do not generate false information and remain fair [1].

Collaboration and Innovation

The ethics lab is a collaboration between Erasmus MC, TU Delft, and Erasmus University, with support from software company SAS. The multidisciplinary team consists of nurses, doctors, data scientists, and ethicists working together on responsible AI applications in healthcare. In early 2025, TU Delft was appointed official advisor to the World Health Organization on AI in healthcare, strengthening the project's legitimacy and expertise [1].

Practical Applications

One of the ethics lab's first projects is research into the use of virtual reality headsets to treat stress and trauma after an ICU admission. Van Genderen initiated this research in 2018 and believes that AI will revolutionise the field, with applications such as summarising family conversations, predicting complications, and advising on treatments [1]. The lab will also work on bringing AI models into clinical practice, a step at which most efforts stall: fewer than two percent of AI models ever make it beyond the prototype stage into clinical use [1].

Ethical Challenges

One of the biggest challenges in implementing AI in healthcare is avoiding biases and unethical practices. Van Genderen explains: ‘If we build AI models based on thirty years of data, you will see those biases reflected.’ Therefore, the ethics lab is working on methods to check and refine data to ensure that AI applications are fair and reliable [1].

Future Perspectives

In 2024, the Erasmus MC AI Accelerator was established to consolidate all knowledge in this area. This initiative supports the development and implementation of AI applications in healthcare, with a strong focus on ethics and responsibility [1]. The ethics lab will also be part of larger initiatives, such as the EU’s 70 million euro Bigpicture project, where Erasmus MC plays a leading role in developing the world’s largest public dataset of digital pathology images [4].

Sources