ChatGPT to Allow Erotica for Adults from December
2025-10-16 Journalism

San Francisco, Thursday, 16 October 2025.
Sam Altman, CEO of OpenAI, has announced that ChatGPT will allow erotica for verified adults from December. The decision has drawn significant attention and criticism, notably from Mark Cuban, who warns of potential negative consequences for children and a loss of parental trust. OpenAI has introduced new safety controls, but the move remains controversial amid ongoing legal actions and ethical debates about the role of AI in society.

Sam Altman Announces Major Changes

Sam Altman, CEO of OpenAI, announced on X that ChatGPT will allow erotica for verified adults from December. Altman emphasised that OpenAI is not the ‘self-appointed moral police of the world’ and that the decision was made to treat adult users as adults [1]. This means that the chatbot will permit more content, including erotica, as long as users are verified as adults [2].

Safety Measures and Reactions

In recent months, OpenAI has taken significant steps to improve the safety of ChatGPT. The company has introduced parental controls and is working on an age prediction system to apply appropriate settings for users under 18 [1][2]. Despite these measures, the announcement has drawn much criticism, especially from Mark Cuban, a well-known investor, who warns that the decision could ‘backfire badly’ and that parents will lose trust in the system [3].

OpenAI’s decision stands in stark contrast to recent legal actions against the company. In August 2025, OpenAI was named in a wrongful death lawsuit by a family who blamed ChatGPT for their 16-year-old son’s suicide [1][2]. Additionally, the Federal Trade Commission launched an investigation into OpenAI and other tech companies in September 2025 due to concerns about the negative impact of chatbots on children and teenagers [1].

Defence and Future Plans

Altman defended the decision, stating that OpenAI has been able to mitigate the serious mental health issues around ChatGPT and has developed new tools [1][2]. He also stressed the importance of substantial protection for minors and the need to treat adult users as adults [1][2]. OpenAI has also formed a council of eight experts to advise on the impact of AI on mental health, emotions, and motivation [1].

Community Responses

Haley McNamara, director of NCOSE, warned that sexualised AI chatbots are inherently risky and can lead to real mental health problems through synthetic intimacy, in a context of poorly defined industry safety standards [1]. Jay Edelson, the lawyer representing the family in the wrongful death lawsuit, called OpenAI’s announcement ‘an attempt to change the subject’ and demanded that ChatGPT be taken offline [2].

Impact on the AI Industry

OpenAI’s decision is seen as a sign that AI companies are prioritising short-term growth over long-term consumer loyalty, especially at a time when demand for OpenAI subscriptions in Europe is flat and user spending on ChatGPT has stagnated [3]. Despite the criticism, Altman remains determined to relax restrictions, a choice that could fundamentally change the future of AI and how we interact with it [1][2][3].

Sources