
Why AI Stalls in Organisations – and How to Prevent It

2025-11-29 public information

Amsterdam, Saturday, 29 November 2025.
The real barrier to AI integration is not technology, but fear and uncertainty among employees. According to research from November 2025, psychological safety is the key to successful AI adoption: organisations where people feel safe to ask questions, make mistakes, and learn will grow faster in 2026. It is not about how intelligent the AI is, but about how safe people feel working with it. Notably, organisations that actively build this safety now will have the greatest advantage within a year, even in sectors such as healthcare, government, and finance. A strong culture, transparency, and ethics are proving more essential than ever.

Psychological Safety: The Invisible Key to AI Adoption

The rapid advance of AI technology is not being held back by flawed software or hardware, but by a deeper, human obstacle: fear of change and a lack of psychological safety within organisations. According to research by Danique Scheepers, a diversity and inclusion advisor with a focus on AI ethics, the level of psychological safety is the most important prerequisite for successful AI adoption in 2026 [1]. Organisations that are now actively building safe spaces, where employees can ask questions without judgment, make mistakes without punishment, and learn without shame, will have a lasting advantage over their competitors within a year [1]. This connection is not hypothetical: the research shows a direct relationship between the level of psychological safety and the speed at which organisations can learn and adapt in the AI era [1]. The central question for leaders heading into 2026 is therefore not 'How quickly can we implement AI?', but 'Do our employees feel safe enough to develop the skills they need to remain sustainably employable in 2026?' [1]

From Fear to Trust: How Organisations Can Support Their People

For many employees, AI raises questions that are not technical but emotional: 'What will remain of my job?', 'Do I understand how to work with these tools?', 'What if I make mistakes?', 'Will I have to compete with an AI agent?' [1]. This uncertainty leads to resistance, hidden errors, stalled learning, and even withdrawal, despite the technology being available [1]. To prevent this, four actions are essential: make mistakes visible and discussable; embed continuous support through buddies, coaches, and learning sprints; create radical clarity about what is changing; and make development accessible to everyone [1]. These measures are not optional extras but strategic necessities for closing the skills gap, which will be the biggest challenge in 2026 [1]. Continuous learning becomes a core component of sustainable work, supported by safety nets such as team coaches and internal experts [1].

AI in Public Communication: Personal, Accessible, and Measurable

In public communication and outreach, AI offers new opportunities to make complex information accessible to diverse audiences. Personalised information delivery, as illustrated by the DDMA knowledge bank, shows that customer experience in an AI-driven world remains personal through the human factor and engagement [2]. The new telemarketing legislation, effective from 1 July 2026, requires an opt-in even from existing customers, shifting the emphasis from discounts to relevant experiences [2]. This is a direct response to the growing demand for trust-based relationships and engagement in communication [2]. AI-powered chatbots for public services, such as those being tested in municipalities through the three-day training programme AI & Gemeente, help answer questions more efficiently without requiring technical knowledge [3]. These applications increase accessibility, especially for people with disabilities or those who struggle with bureaucratic language [3]. AI-driven communication campaigns can also be tailored to different target groups, with data analysis used to measure effectiveness and optimise results [2][3].

Ethics, Privacy, and Accountability: The Indispensable Frameworks

The application of AI in public communication brings not only benefits but also challenges; privacy, inclusivity, and reliability are crucial preconditions. On 27 November 2025, Italy blocked six websites that used AI bots and deepfakes for financial fraud, deploying convincing but false explanations of financial products via AI-generated videos and voices [4]. This action, part of Operation 'Phantom Ledger', highlights the growing threat of algorithmic deception in the public sphere [4]. Consob reports that these websites had already collected €17 million through AI-generated content [4]. At the European level, the Commission introduced the Digital Omnibus package on 19 November 2025, which includes amendments to the AI Act and the cookie law, bringing AI applications in communication under increasingly strict regulation [2]. The telemarketing legislation taking effect on 1 July 2026 reinforces this shift, strengthening organisations' responsibility for ethical communication [2].

Leadership in the AI Era: From Technology to Empathy

Effective leadership in the AI era requires not only technical skill but above all ambidexterity: a deep understanding of what AI can and cannot do, combined with a consistent focus on ethics, empathy, and human values [5]. Caroline Tervoort-Visser emphasises that AI shifts the focus of work from transactional, repetitive tasks to interaction and interpretation, which demands profoundly human skills [5]. 'Leaders must also roll up their sleeves!', as she put it in her speech at the Human Capital Congress [5]. It is not enough for leaders to set a good example; that example must be visibly reflected in daily decision-making [5]. Vanessa Jeurissen-Kohn highlights the importance of the 'tone at the top' and stresses that non-compliance today is primarily a human, cultural, and behavioural risk rather than a financial one [6]. She warns that if leadership does not openly discuss mistakes, dilemmas, and choices, compliance will remain a separate agenda item instead of an integral part of strategic decision-making [6].

Sources