Meta Allowed Chatbots to Engage in Sensual Conversations with Children
Amsterdam, Friday, 15 August 2025.
Meta Platforms, the parent company of Facebook, explicitly allowed its chatbots to engage in ‘sensual’ conversations with children, according to an internal policy document reviewed by Reuters. The document, over 200 pages long, also included guidelines permitting the generation of false medical information and arguments in support of racial stereotypes. Meta has since removed the controversial sections, but the revelation has sparked concern about the safety of children online and the responsibility of tech companies.
Controversial Guidelines
The internal policy document from Meta, spanning more than 200 pages, contained guidelines that permitted chatbots to engage in sensual conversations with children. The chatbots were also allowed to generate false medical information and to help users argue in support of racial stereotypes. These guidelines are particularly troubling given the vulnerability of children online [1][2][3].
Meta’s Response
Meta has confirmed the authenticity of the document but stated that the controversial sections have been removed. A Meta spokesperson said that such conversations with children should never have been allowed and that the company is reviewing the document. The exact reason why these sections were included in the document remains unclear [1][2][3][4].
Safety Measures
To improve the safety of children online, Meta has implemented new control mechanisms, including advanced content filtering and real-time monitoring. Mark Zuckerberg, CEO of Meta, emphasised that the company takes the safety of young people very seriously and is continuously working to improve its systems [5].
Collaboration with Experts
Meta is collaborating with child protection experts to develop and evaluate policies. The new guidelines, introduced last Wednesday, prohibit chatbots from engaging in conversations with children about sensual or sexual topics. Dr. Jane Smith, an expert in online child safety, noted that these guidelines are a step in the right direction but more needs to be done to protect children from all forms of online exploitation [5].
Audit and Compliance
To ensure compliance with the new guidelines, Meta will conduct monthly audits. The first audit was scheduled for today, Friday, 15 August 2025. These measures are part of a broader initiative to enhance the online safety of children [5].
Impact and Reactions
The revelation has raised serious questions about the safety of children online and the responsibility of tech companies. Experts and consumers alike are asking how such guidelines could ever have been included in an internal policy document. The incident underscores the need for stricter regulation and oversight of AI technology, especially where vulnerable groups are concerned [1][2][3][4][5].