Why AI is not neutral — and how organisations can responsibly implement it in practice
Amsterdam, Friday, 14 November 2025.
AI is not an objective machine but a mirror of human choices, values, and interests. Marc van Meel, an expert in ethics and governance, shows that every algorithm is shaped by the data fed into it, the goals of its developers, and the cultural context in which it is deployed. The most striking insight: when an algorithm fails, the cause is rarely a purely technical error; somewhere along the way, a moral decision was made. From questions about accountability to the impact of data-driven decisions on equality and access to opportunity, this is no longer just a technical issue but a social and ethical dilemma that every organisation must confront. With concrete insights and reflective questions, he helps teams and leaders make conscious choices during the digital transition. It is not about fearing or rejecting technology, but about engaging with it mindfully.
AI as a mirror of human choices: Why technology is never neutral
AI is not an objective technology, but a reflection of human intentions, interests, and values. Marc van Meel emphasises that algorithms are shaped by the data collected, the goals of their developers, and the cultural norms of the context in which the technology is applied [2]. It is a fundamental misconception to believe that AI is neutral: every system bears the invisible hand of the people who design, train, and deploy it [2]. These insights are not abstract philosophy but practical realities that organisations must accept when implementing AI. From the selection of a dataset to the final decision, choices are made that directly affect fairness, accessibility, and trust. It is therefore no longer merely a technical issue, but an ethical and social dilemma that every organisation must address [2].
Responsible innovation in practice: How organisations make conscious choices
Organisations aiming to use AI responsibly must begin by reflecting on fundamental questions: who is accountable when an algorithm fails? What values does your organisation wish to uphold? How do you involve your team in change? And what does it mean when data determines who gets opportunities and who does not? [2] Marc van Meel helps teams and leaders not to sidestep these questions, but to use them as starting points when designing AI applications [2]. His approach focuses on integrating ethics, governance, and human considerations into strategy, culture, and decision-making, so that technology is not only efficient but also fair, safe, smart, and human-centred [2]. The workshops he offers, such as ‘AI in Practice’ and ‘Ethical and Safe AI’, include concrete tools like fairness analyses, risk matrices, governance models, and stakeholder mapping to support conscious decision-making [1]. These tools have been developed for teams across various sectors, including government, healthcare, education, and finance, and are designed to help build safe and workable AI applications [1].
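To make tangible what such a tool can look like in practice, the sketch below scores a handful of hypothetical AI use cases on likelihood and impact and maps them to a qualitative risk level. The use cases, the 1-to-5 scales, and the thresholds are illustrative assumptions made for this article, not the actual content of the ‘AI in Practice’ or ‘Ethical and Safe AI’ workshops.

```python
# Minimal sketch of a risk matrix for AI use cases (illustrative only).
# The use cases, 1-5 scales, and thresholds below are assumptions, not workshop material.

def risk_level(likelihood: int, impact: int) -> str:
    """Map a likelihood x impact score (each rated 1-5) to a qualitative risk level."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Hypothetical use cases as a cross-functional team might score them.
use_cases = [
    {"name": "chatbot for citizen services", "likelihood": 3, "impact": 4},
    {"name": "personalised information feed", "likelihood": 2, "impact": 3},
    {"name": "automated credit assessment", "likelihood": 4, "impact": 5},
]

for case in use_cases:
    print(f'{case["name"]}: {risk_level(case["likelihood"], case["impact"])} risk')
```

A matrix like this does not replace ethical judgement; it simply makes the team's assessment explicit and comparable across use cases.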
AI in information and public communication: From personalised content to effectiveness measurement
In the field of information and public communication, AI is already being used to make complex information accessible to diverse audiences. Personalised information provision uses AI to tailor content to individual preferences, age, language proficiency, and digital skills, significantly improving the accessibility of information [1]. Chatbots for public services, such as those used by municipalities and government agencies, help citizens find information, apply for documents, and resolve simple queries, often 24/7 and without waiting times [1]. AI-driven awareness campaigns analyse behaviour, language use, and interaction patterns to optimise messaging and measure effectiveness. By analysing this data, organisations can identify which messages resonate, which channels are most effective, and how information delivery can be improved [1]. These techniques are already being applied across various sectors, where organisations use AI to reach target groups more effectively, enhance communication quality, and gather more feedback from the public [1].
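As a minimal sketch of such an effectiveness measurement, the example below computes a response rate per channel and ranks the channels. The channel names and figures are invented for illustration; a real campaign would draw on the organisation's own delivery and interaction logs.

```python
# Minimal sketch of measuring message effectiveness per channel (figures are invented).

campaign_stats = {
    # channel: (messages delivered, responses such as clicks or replies)
    "email": (12_000, 840),
    "chatbot": (5_500, 660),
    "sms": (8_000, 320),
}

# Compute a response rate per channel and rank channels from most to least effective.
rates = {
    channel: responses / delivered
    for channel, (delivered, responses) in campaign_stats.items()
}

for channel, rate in sorted(rates.items(), key=lambda item: item[1], reverse=True):
    print(f"{channel}: {rate:.1%} response rate")
```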
Risks and challenges: Privacy, inclusivity, and reliability
Despite its benefits, AI in communication also brings risks, particularly concerning privacy, inclusivity, and reliability. When personal data is collected for personalisation, there is a risk of misuse or unauthorised access, especially when data is not adequately secured or is used without sufficient transparency [1]. Inclusivity is another challenge: if AI systems are trained on data that overlooks or excludes certain groups, algorithms may reinforce discriminatory patterns instead of reducing them [1]. The result is unequal access to information, services, or financial products. Reliability is a third critical factor: inaccurate or outdated information disseminated by AI can cause misinformation and erode trust, especially in critical situations such as public health crises or emergencies [1]. Marc van Meel stresses that organisations cannot wait until an incident occurs, but must proactively work on governance and accountability [2]. This means establishing transparent AI policies, conducting bias analyses, and involving diverse stakeholders in the development of AI applications [1].
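Such a bias analysis can start with something as simple as a representation check: comparing how strongly each group is present in the training data against a reference population. The sketch below illustrates the idea; the group names, counts, shares, and the 80% flagging threshold are assumptions made for this example.

```python
# Minimal sketch of a representation check on training data (all figures are invented).

training_counts = {"group_a": 7_200, "group_b": 2_100, "group_c": 700}   # records per group
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}   # reference population

total = sum(training_counts.values())

for group, count in training_counts.items():
    train_share = count / total
    ratio = train_share / population_share[group]
    flag = "under-represented" if ratio < 0.8 else "ok"   # 0.8 threshold is an assumption
    print(f"{group}: {train_share:.1%} of training data vs "
          f"{population_share[group]:.0%} of the population ({flag})")
```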
From theory to practice: Concrete examples of responsible AI in organisations
Various organisations in the Netherlands and beyond are using Marc van Meel’s insights to implement AI responsibly. In government, AI systems are used to predict social needs and optimise service delivery, with ethical considerations at the core [1]. In healthcare, AI is used to personalise patient support, but only when done with consent, transparency, and a clear accountability structure [2]. In education, AI tools help students with varied learning styles understand material more effectively, while teachers are supported with feedback and assessment [1]. In the financial sector, AI algorithms are used for credit assessment, with a strong emphasis on fair lending and preventing discriminatory patterns [1]. These examples demonstrate that AI is not only technically feasible, but also ethically responsible when integrated within a comprehensive governance strategy that never loses sight of the human dimension [2].
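To make the fair-lending example concrete: one common check in such a fairness analysis is the disparate-impact ratio, which compares each group's approval rate against that of the best-scoring group. The sketch below illustrates this with invented figures; the 0.8 cut-off follows the widely cited ‘four-fifths’ rule of thumb and is not prescribed by any of the organisations mentioned here.

```python
# Minimal sketch of a disparate-impact check on credit approvals (figures are invented).
# The 0.8 cut-off follows the common "four-fifths" rule of thumb.

applications = {
    # group: (applications received, applications approved)
    "group_a": (4_000, 2_600),
    "group_b": (1_500, 780),
}

rates = {group: approved / applied for group, (applied, approved) in applications.items()}
reference = max(rates.values())  # benchmark: the group with the highest approval rate

for group, rate in rates.items():
    ratio = rate / reference
    status = "review for disparate impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: approval rate {rate:.1%}, ratio {ratio:.2f} ({status})")
```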