How AI and Brain Technology Can Work Together to Better Understand People
Twente, Tuesday, 4 November 2025.
Imagine an AI that not only analyses data but also understands how culture, emotional support, and social context shape an individual: a young Black man with depression, say, whose realistic treatment journey the AI simulates without using any real patient data. Imagine a brain implant, built on energy-efficient algorithms, that continuously adapts to the brain activity of a patient with epilepsy. The new generation of technologies, such as those in the EBRAINS-Neurotech project and Professor VanHook's research, goes beyond efficiency: it focuses on humanity, inclusivity, and ethics. Most striking? These systems are not meant to replace people but to create care, connection, and understanding; the true value lies not in the technology itself, but in the human values it strengthens.
AI as a Bridge in Public Awareness: From Standard Messages to Human Interaction
AI systems in the public sector are increasingly going beyond automating routine tasks. Rather than simply disseminating information, they are being used to connect people, a core principle of the new ethical framework developed by Dr. Kristy Claassen of the University of Twente [1]. This approach, inspired by the Ubuntu ethic ('I am because we are'), aims to ensure that technology does not replace people but strengthens their social and emotional connections [1]. In a practical project by the new organisation EthicEdge, for example, AI supports survivors of sexual violence through a chatbot designed with careful attention to cultural and emotional context [1]. The application illustrates how AI, when developed with a value-driven approach, not only improves accessibility but also creates a safe space in which vulnerable groups can share their stories without fear of judgment [1]. The focus is on integrity, creativity, and community orientation, values often overlooked in current debates about responsible AI [1].
Personalised Awareness: From Generic Campaigns to Individual Needs
In the healthcare sector, researchers are developing an innovative framework that uses generative AI to simulate personalised mental health care [2]. In a study by Professor Cortney VanHook of the University of Illinois Urbana-Champaign, a fictional client, Marcus Johnson, a young Black man with depressive symptoms, is used to generate realistic treatment pathways [2]. The AI integrates evidence-based models such as Andersen's behavioural model and the five components of access, analysing personal factors such as work-related stress, cultural barriers, and support networks [2]. The result is a safe, privacy-compliant environment in which clinicians, students, and trainees can practise without using real patient data [2]. According to VanHook, this approach offers a practical, evidence-based way to integrate AI into education and clinical practice, with the goal of improving culturally competent care [2]. In a MIND poll conducted in September 2025, over 40% of respondents reported having already spoken with an AI chatbot about mental health, and 44% found it helpful [2]. The advantages include low barriers to access, anonymous availability, and a non-judgmental nature, which help individuals articulate their feelings and prepare for therapy [2].
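The mechanics of such a simulation are easiest to see in miniature. The Python sketch below seeds a generative model with a fictional client profile along the lines the study describes. The field names, the prompt wording, and the labelling of the five access components (the commonly cited availability, accessibility, accommodation, affordability, and acceptability) are our assumptions for illustration, not VanHook's published framework.

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    """Fictional client used to seed a simulated treatment pathway.
    Field names are hypothetical, chosen to mirror the factors the
    article mentions (stressors, cultural barriers, support networks)."""
    name: str
    age: int
    presenting_symptoms: list[str]
    stressors: list[str]          # e.g. work-related stress
    cultural_factors: list[str]   # e.g. stigma around help-seeking
    support_network: list[str]    # e.g. family, peers

# Mapping the "five components of access" to prompt sections is our
# assumption; the labels follow the widely cited five-A formulation.
ACCESS_DIMENSIONS = [
    "availability", "accessibility", "accommodation",
    "affordability", "acceptability",
]

def build_simulation_prompt(client: ClientProfile) -> str:
    """Compose a structured prompt asking a generative model to produce
    a realistic, fully synthetic treatment pathway for the profile."""
    lines = [
        f"Simulate a treatment journey for a fictional client, {client.name} "
        f"({client.age}), presenting with: {', '.join(client.presenting_symptoms)}.",
        f"Account for stressors ({', '.join(client.stressors)}), "
        f"cultural factors ({', '.join(client.cultural_factors)}), and "
        f"support network ({', '.join(client.support_network)}).",
        "Structure the pathway along these access dimensions: "
        + ", ".join(ACCESS_DIMENSIONS) + ".",
        "Use no real patient data; all details must remain synthetic.",
    ]
    return "\n".join(lines)

marcus = ClientProfile(
    name="Marcus Johnson", age=24,
    presenting_symptoms=["depressive symptoms"],
    stressors=["work-related stress"],
    cultural_factors=["stigma around help-seeking"],
    support_network=["family", "peers"],
)
print(build_simulation_prompt(marcus))
```

In practice the composed prompt would be sent to a generative model; here it is simply printed, since the point is the structure of the synthetic profile rather than any particular model API.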
AI in Journalism and Information Provision: From Automation to Meaningful Stories
In journalism and information services, AI is increasingly being used to make complex information accessible to diverse audiences [1]. Research from the University of Twente, led by Dr. Claassen, emphasises that AI should not only promote efficiency but also embody humanity and inclusivity [1]. Within the 'Algorithms for All' series, which began on 4 November 2025, researchers are exploring how algorithms can contribute to a society in which technology connects people rather than drives them apart [1]. This approach extends beyond the Dutch public sphere: through the organisation EthicEdge it is also applied in South Africa, grounding AI ethics in an African perspective in education and business [1]. The 'Algorithms for All' project fuels public debate on AI design through interaction between academic thinking and real-world practice, with a focus on transparency and community orientation [1]. AI systems thus help translate technical information into language understandable to people without academic backgrounds, which is essential for inclusive public awareness [1].
Brain Technology and AI: The Next Generation of Human-Machine Integration
In the EBRAINS-Neurotech consortium, led by the University of Amsterdam and funded with €18.3 million from the NWO-LSRI programme, advanced brain technologies are being developed with careful integration of ethical and transparent principles [3]. The Centrum Wiskunde & Informatica (CWI) plays a key role by developing software that compresses large-scale brain models into energy-efficient algorithms for neuromorphic chips [3]. This technology is used in brain-computer interfaces (BCIs) that continuously learn and adapt to brain activity, with applications in neurological conditions such as epilepsy, depression, stroke, and Alzheimer's [3]. Nanotechnology is employed to miniaturise electrodes and increase the number of stimulation points, allowing pacemaker-like implants to control brain circuits more precisely [3]. Optogenetics, the use of light to influence neural activity, is combined with large-scale brain activity measurements to refine treatments [3]. The development of these implants focuses not only on clinical effectiveness but also on safety, fairness, and the avoidance of bias in algorithms, core values consistently highlighted in ethical AI development [3]. Professor Sander Bohté of CWI emphasises that the goal is to make neurotechnological devices not only more effective but also more practical for patients and researchers alike [3].
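To make the neuromorphic angle tangible, here is a minimal, self-contained sketch of a leaky integrate-and-fire (LIF) neuron, the textbook spiking unit that neuromorphic chips execute natively. It is a generic illustration of why event-driven spiking computation is energy-efficient, not CWI's actual compression software; all parameter values are invented for the example.

```python
def lif_neuron(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Integrate an input current over time; emit a spike (1) whenever
    the membrane potential crosses threshold, then reset. Between
    spikes the potential leaks back toward rest."""
    v, spikes = 0.0, []
    for i_t in inputs:
        v += dt * (-v / tau + i_t)   # leaky integration step
        if v >= v_thresh:            # threshold crossing -> spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A brief burst of input drives a handful of spikes; silence costs nothing.
drive = [0.3] * 30 + [0.0] * 20
print(sum(lif_neuron(drive)), "spikes over", len(drive), "timesteps")
```

Because such a unit produces output, and hence consumes communication energy, only at threshold crossings, long silent stretches are nearly free; that event-driven sparsity is what makes spiking models attractive for always-on, battery-constrained implants.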
The Challenges of AI: Privacy, Inclusivity, and the Limits of Objectivity
Despite the promising applications, serious challenges remain, particularly around privacy, inclusivity, and reliability. The use of AI in mental healthcare is restricted by legislation: Illinois' Wellness and Oversight for Psychological Resources Act stipulates that AI tools may be used only for educational and administrative purposes, not for direct clinical decision-making [2]. This underscores the need for clear guidelines and professional standards, which are still lacking while most AI systems remain in pilot phases [2]. Moreover, generative AI is not yet capable of mapping all the factors that influence mental health, although it can support the development of fairer systems [2]. In the EBRAINS-Neurotech project, explicit attention is given to avoiding bias in algorithms, securing data, and developing ethically, essential considerations for implants that interact directly with the brain [3]. The challenge lies in balancing technological advancement with human values: technology must not be accepted as neutral by default, but must be explicitly evaluated for inclusivity, transparency, and moral responsibility [1].
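What 'explicitly evaluating an algorithm for inclusivity' can mean in practice is illustrated below with one of the simplest fairness checks, the demographic parity gap. This is a generic textbook metric chosen here for illustration, not a published EBRAINS-Neurotech audit, and the data is toy data.

```python
def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rate across groups; 0.0 means all groups are treated alike."""
    counts = {}
    for y_hat, g in zip(predictions, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + y_hat)
    per_group = {g: pos / n for g, (n, pos) in counts.items()}
    return max(per_group.values()) - min(per_group.values())

# Toy example: binary predictions for two hypothetical demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5: a large gap, worth auditing
```

A single number like this is only a first screen; real audits would look at multiple metrics and at where the gap originates, but it shows that 'fairness' can be made measurable rather than asserted.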