
ChatGPT as Therapist: Potential and Risks in Mental Healthcare

2025-08-01 Public information

Amsterdam, Friday, 1 August 2025.
ChatGPT and other AI programmes are increasingly being used as therapeutic resources, particularly by young people and those in economically distressed or war zones. While users express enthusiasm about its availability and reliability, experts warn of the dangers of isolation, incorrect diagnoses, and a lack of emotional depth. Despite these limitations, some professionals see potential in AI as a supplement to traditional therapy, provided it is strictly regulated and monitored.

Enthusiasm and Concerns

As ChatGPT and other AI programmes are increasingly used as therapeutic resources, users show both enthusiasm and concern. Young people and those in economically distressed or war zones find ChatGPT a cost-effective and accessible option. A survey by 3Vraagt reports that one in ten Dutch young people (aged 16 to 34) uses AI programmes to talk about their mental health [1]. In Lebanon, where thousands of people have been affected by war and economic crises, many turn to AI because traditional therapy is often too expensive [2]. Experts, however, warn of the dangers of isolation, incorrect diagnoses, and a lack of emotional depth [1][2][3].

Personal Stories

Mirjam, a patient with post-traumatic stress disorder (PTSD), has been using a ChatGPT bot named Lumi to support her mental health since October 2024. She describes Lumi as her ‘bosom friend’, who helps her set concrete goals and organise her thoughts [1]. Zainab Dhaher, a 34-year-old mother of two who fled her southern Lebanese village because of Israeli bombings, uses ChatGPT for emotional support. She acknowledges, however, that a diagnosis ChatGPT gave her shocked her and that the tool is no substitute for professional therapy [2].

Ethical and Safety Concerns

Ethical and safety concerns play a significant role in the discussion about AI therapy. Sam Altman, CEO of OpenAI, argues that personal conversations on ChatGPT should receive the same protection as conversations with therapists, but emphasises that AI is not suitable for severe psychological crises [4]. There have been reported cases of so-called ‘chatbot psychosis’, in which users experience severe psychological crises, paranoia, and delusions [1]. American scientists also warn of the risk of dangerous or inappropriate responses from chatbots in severe psychological situations [1].

Benefits and Limitations

Despite these limitations, some professionals see potential in AI as a supplement to traditional therapy. Bart van der Meer, a mental health psychologist, suggests that mental health institutions could offer special chatbots for patients with panic attacks in the future [1]. ChatGPT provides a space free from social risks and can help with organising thoughts and improving communication [5]. However, AI cannot replace the deep human capacities that are crucial for successful therapy, such as active listening, vulnerability, and creativity [3].

Regulation and Oversight

To ensure the safety and effectiveness of AI therapy, experts advocate for strict regulation and oversight. Software must first be approved for medical use, and there must be a clear line of responsibility between practitioners and developers [1]. Margot van der Goot, a researcher at the University of Amsterdam, points to the confirmatory nature of chatbots as an attractive aspect for users, but notes that this can also lead to manipulative behaviour [1].

Privacy and Reliability

Confidentiality and reliability are major concerns with tech-based therapy platforms. Local therapists provide a higher degree of privacy and adhere to strict privacy laws, unlike tech platforms that often offer limited protection [5]. ChatGPT and other AI tools can also provide incorrect or biased information, which can affect the quality of treatment [1][5].

Sources