AI Chatbots: New Friends or Potential Problems?
Brussels, Thursday, 30 October 2025.
More and more people are developing personal bonds with AI chatbots, raising questions about the benefits and harms of such relationships. Experts discuss the implications for human interaction and emotional well‑being, warning of risks such as addiction and greater difficulty in forming real relationships. It is a fascinating subject that forces us to think about the future of our social connections.
Intro — Why AI in public information is relevant now
AI chatbots and conversational models are increasingly prominent in public debate and in personal life: people form bonds with digital interlocutors, and organisations are exploring how AI can be used for information provision and public communication [1][3]. The phenomenon was recently highlighted in several VRT productions, including a podcast episode and investigative reports showing that AI relationships have both personal and societal implications [1][4][3].
How often and in what forms people seek AI relationships
Research and journalistic reports indicate that relational use of AI, whether friendship, therapeutic conversation or romantic interaction, constitutes a substantial share of total AI usage: according to a VRT report, about 31 per cent of AI use goes to relational experiences, and apps like Replika report more than 35 million users worldwide [3]. These figures show that AI relationships are not marginal but a widely adopted practice [3].
Case study: Replika and individual testimonies
Journalistic reconstructions show how people form intensive, long‑term relationships with chatbots. A profile in the Pano report describes a 61‑year‑old man who has lived with an AI partner for two years and says he experiences emotional support and daily companionship via the Replika app, which the report describes as widely used worldwide [3]. Such examples concretely illustrate how AI can function as an information provider or companion, but they also raise questions about dependency and continuity when commercial services change or shut down [3].
Risks visible in practice: abuse of voices and sexually themed bots
Investigative journalism from VRT shows that AI platforms can be abused for sexually themed conversations using fake voices of well‑known Flemish personalities, raising legal and ethical alarm; Studio 100 calls it ‘horrible’ and points to possible measures and controls against misuse of characters and voices on platforms like Character.ai [2]. The case demonstrates that public information and communication with AI cannot be considered in isolation from abuse risks and the need for active moderation and legal instruments [2].
Expert warnings: unreliability and transparency
Academic and professional voices warn that language models are fundamentally unreliable, and that services built on them therefore owe users honesty and transparency; Pascal Wiggers (lecturer in Responsible IT) emphasises that such models string words together without intent or consciousness, which means they sometimes produce incorrect or misleading output, and that users should remain critical [5]. Such expert remarks underscore that public information initiatives using AI should explicitly state the technology's limitations and reliability [5].
Opportunities for public information: personalised information and audience reach
AI offers concrete advantages for public communication: personalised information, 24/7 availability via chatbots for public services, and the ability to adjust language level and tone for different audiences. Recent coverage of AI relationships and public debate points to these applications as drivers of rapid adoption [1][3][5][4]. At the same time, effective deployment requires communicators to make design choices around inclusivity, accessibility and control mechanisms to counter misinformation and misuse [5][1].
Measuring effectiveness: what can AI add and where are the limits?
AI makes it technically possible to analyse responses, click behaviour and conversation outcomes, and thereby to map campaign effectiveness; however, concrete, generally applicable success measures and standards for public information are still missing from the coverage and must be developed case by case [GPT][alert! ‘no direct source in the provided links that describes methods and standards for measuring effectiveness in information campaigns’]. Journalistic examples do show that platforms and creators use data to moderate and improve content, reflecting how monitoring can work in practice [2][3].
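To make that case‑by‑case approach concrete, below is a minimal sketch of how conversation logs could be turned into simple effectiveness indicators; the field names (resolved, handed_off, user_rating) and the metrics themselves are illustrative assumptions, not standards taken from the cited coverage.

```python
from statistics import mean

# Illustrative conversation logs; the field names are hypothetical
# assumptions, not a schema from the cited coverage.
conversations = [
    {"resolved": True,  "handed_off": False, "user_rating": 4},
    {"resolved": False, "handed_off": True,  "user_rating": 2},
    {"resolved": True,  "handed_off": False, "user_rating": 5},
]

def campaign_metrics(logs):
    """Compute simple engagement indicators from chatbot conversation logs."""
    total = len(logs)
    return {
        # Share of conversations the chatbot resolved without human help.
        "resolution_rate": sum(c["resolved"] for c in logs) / total,
        # Share escalated to a human agent; a proxy for the bot's limits.
        "handoff_rate": sum(c["handed_off"] for c in logs) / total,
        # Average self-reported satisfaction (a 1-5 scale is assumed here).
        "avg_rating": mean(c["user_rating"] for c in logs),
    }

print(campaign_metrics(conversations))
```

Which indicators actually count as "success" for a given campaign would still need to be defined per case, as the paragraph above notes.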
Ethics, privacy and continuity: the challenge of commercial dependence
The Pano report and related coverage stress that the commercial models behind many relational AIs introduce risks: dependence on a commercial app can create emotional vulnerability if the service stops or changes, and the misuse of voices and characters raises additional legal and privacy questions [3][2]. Such cases show that public organisations and policymakers must think about safeguarding privacy, clarity about data ownership, and strategies for continuity of service [2][3].
Accessibility: making complex information understandable with AI
AI can make complex dossiers and technical explanations more accessible by simplifying text, giving examples, or offering information in multiple languages and formats; media coverage of AI and public interaction presents these possibilities as important added value [1][4][GPT]. At the same time, it remains essential that simplification does not lead to loss of nuance or the spread of incorrect facts; transparency about sources and errors therefore remains crucial [5][alert! ‘there is no specific case in the provided sources that describes a controlled evaluation of simplification methods’].
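As an illustration only, here is a minimal sketch of automated text simplification assuming the OpenAI Python SDK; the model name, the CEFR target level and the prompt wording are assumptions made for the example, not a method described in the sources.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simplify(text: str, level: str = "B1") -> str:
    """Ask a model to rewrite text at a target CEFR reading level.

    The instruction to preserve all facts reflects the warning above:
    simplification must not add, drop or distort information.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text at CEFR reading level {level}. "
                        "Preserve every fact; do not add or remove information."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

Even with such an instruction, the output would still need the source transparency and error disclosure the experts call for [5].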
AI, economy and scale: why investment is increasing
The rapid commercial valuation of AI‑related technologies and companies underlines the scale of investment flowing into the field. A recent example: Nvidia, a key player in AI hardware, became the first company ever to reach a market capitalisation above 5,000 billion dollars (5 trillion), up from more than 4 trillion dollars four months earlier according to coverage [7]. Based on those two amounts, the relative increase comes to roughly 25 per cent [7].
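As a check of that figure, the arithmetic is a simple relative increase; the sketch below treats ‘more than 4 trillion’ as exactly 4 trillion, so the 25 per cent is an approximation of the reported amounts [7].

```python
# Worked check of the percentage rise reported in [7].
old_cap = 4.0  # market capitalisation four months earlier, in trillions of dollars
new_cap = 5.0  # first market capitalisation above 5,000 billion (= 5 trillion) dollars

# Relative increase: (new - old) / old, expressed as a percentage.
pct_rise = (new_cap - old_cap) / old_cap * 100
print(f"{pct_rise:.0f}%")  # -> 25%
```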
Practical lessons for communicators and organisations
Recent journalistic cases and expert reactions point to several concrete lessons for organisations wishing to use AI in public information: explicit transparency about what AI can and cannot do; built‑in moderation and complaint mechanisms; clear safeguards against misuse of voices and personas; and preparation for scenarios in which commercial services disappear or change [2][3][5][1]. Such measures help both to improve the effectiveness of information delivery and to protect public trust [2][5][3].