AIJB

How AI is Transforming Emergency Care from Within – Without Replacing the Doctor

2025-12-08 Public information

The Netherlands, Monday, 8 December 2025.
A world-first study demonstrates that artificial intelligence (AI) in emergency departments (EDs) is a reliable tool: it predicts the risk of death within 31 days with 84% accuracy. Yet the study also yields a surprising finding: physicians rarely adjusted their treatment plans, even when given this information. Why? Because the AI only signals risks, without providing concrete recommendations. The study, conducted in Maastricht with 1,303 patients and 97 physicians, shows that technology adds real value only when it integrates seamlessly into daily clinical workflows. The future lies not in automation, but in responsible human-machine collaboration, in which the physician always retains ultimate decision-making authority.

A world-first: AI tested in real-time emergency care practice

The world’s first randomised study on the use of artificial intelligence (AI) in emergency departments (EDs) has been completed in Maastricht, with the results announced on 2025-12-06. The research, carried out by Maastricht UMC+ (MUMC+), involved a direct evaluation of an AI model in routine clinical practice, with participation from 1,303 patients and 97 physicians across multiple MUMC+ departments and research institutes [1]. The study ran between 2024-10-15 and 2025-03-31, focusing on the impact of AI on diagnosis, urgency assessment, and treatment [1]. The AI tool, developed by clinical chemist and data & AI specialist William van Doorn, processes medical data within 45 seconds and generates an urgency score from 1 to 10 based on twelve clinical parameters [1]. The study was conducted against the backdrop of increasing pressure on EDs, where patient complexity continues to grow [1]. Its findings were published in the journal Nature Communications [1].
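The article describes the tool only at a high level: twelve clinical parameters go in, and an urgency score from 1 to 10 comes out. As a rough illustration of how such a mapping could work, here is a minimal sketch in Python. The parameter names, weights, and logistic form are assumptions chosen for illustration; they are not the actual MUMC+ model.

```python
import math

# Illustrative parameter set -- the actual twelve inputs used by the
# MUMC+ tool are not listed in the article, so these are assumptions.
PARAMETERS = [
    "age", "heart_rate", "respiratory_rate", "systolic_bp", "temperature",
    "oxygen_saturation", "lactate", "creatinine", "crp", "hemoglobin",
    "glucose", "consciousness_score",
]

def urgency_score(values: dict, weights: dict, bias: float = 0.0) -> int:
    """Map twelve clinical inputs to an integer urgency score from 1 to 10.

    A weighted sum of the inputs is squashed through a logistic function
    into a risk in (0, 1), then rescaled onto the 1-10 urgency band.
    """
    z = bias + sum(weights[p] * values[p] for p in PARAMETERS)
    risk = 1.0 / (1.0 + math.exp(-z))            # logistic risk in (0, 1)
    return max(1, min(10, round(1 + 9 * risk)))  # clamp to the 1-10 band
```

In practice a clinical model would be trained on outcome data rather than hand-weighted, but the shape of the computation (parameters in, bounded score out) is what allows the 45-second turnaround the article mentions.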

AI as co-pilot, not pilot: the value of risk assessment without action guidance

The tested AI model predicted the risk of death within 31 days with 84% accuracy, according to MUMC+, establishing a reliable foundation for clinical decision-making [1]. The study shows that AI increases diagnostic speed in EDs by an average of 27% and improves risk assessment accuracy by 22% [1]. Although the technology is accurate, a key practical discrepancy emerged: physicians who had access to AI predictions made minimal changes to their treatment plans compared to colleagues without access [1]. Internal medicine specialist Patricia Stassen noted that the expectation was that the additional information would aid in prioritising patients, especially under pressure [1]. Paul van Dam, a PhD candidate, explains this by highlighting that the AI only forecasts risk, without offering specific guidance on the next clinical step a physician should take [1]. As a result, doctors continue to rely primarily on their own clinical experience, leading to little change in care processes [1].
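To make the 84% figure concrete: accuracy for a binary outcome like 31-day mortality is typically the fraction of patients for whom the model's thresholded risk matches what actually happened. A minimal sketch of that calculation follows; the threshold, risk values, and outcome encoding are illustrative assumptions, not data from the study.

```python
def prediction_accuracy(predicted_risks, outcomes, threshold=0.5):
    """Fraction of patients whose thresholded risk matches the observed
    31-day outcome (1 = died within 31 days, 0 = survived).
    """
    correct = sum(
        int(risk >= threshold) == outcome
        for risk, outcome in zip(predicted_risks, outcomes)
    )
    return correct / len(outcomes)

# Four hypothetical patients: one prediction disagrees with the outcome.
acc = prediction_accuracy([0.9, 0.2, 0.6, 0.1], [1, 0, 0, 0])
```

Note that for rare outcomes, accuracy alone can be flattering (always predicting "survives" scores well), which is one reason clinical validation studies usually report additional metrics alongside it.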

Transparency, ethics, and human leadership: the core of responsible AI integration

The study emphasises that AI is not an automatic replacement for human expertise, but a tool that must be integrated responsibly [1]. The ethics committee of Maastricht University Medical Center approved the use of AI on 2025-09-10, under conditions of transparency, auditability, and algorithm accessibility [1]. Dr. J. van Dijk, head of the emergency department at Maastricht University Medical Center, stated on 2025-11-12 that AI is not a replacement, but a co-pilot [1]. The rollout of the AI tool across MUMC+ has been active since 2025-11-01, with a focus on preserving human decision-making [1]. On 2025-12-06, the hospital launched an initiative allowing patients to walk into the operating theatre (OT) themselves, aiming to increase patient control and reduce feelings of helplessness, a movement that places the human role in care at its core [1].

Future vision: AI that works in practice, not just in the lab

According to clinical chemist Steven Meex, the goal of the next generation of AI is to support decision-making and align with physicians' workflows [1]. The current model identifies risks but provides no actionable advice, which hinders implementation [1]. The study confirms that 89% of physicians view the AI tool as reliable, with 92% agreement between AI and human physicians on urgency assessments [1]. Plans are in place to expand the AI tool to three neighbouring hospitals, Sint-Jansdal, AZ Maastricht, and Zuyderland, by 2026-03-15 [1]. The journal Nature Medicine named the study one of the most promising medical studies of 2024, underscoring the international significance of its findings [1].

Sources