Why Meta Stopped Its Own Research into Social Media and Mental Health
Dubai, Sunday, 23 November 2025.
Last Sunday, a class-action lawsuit revealed that Meta had internally found causal evidence that using Instagram and Facebook worsens depressive symptoms in young people. The company’s own long-running research showed that stopping use led to improved emotional wellbeing within just one week. What most people don’t know: Meta halted the study not for lack of evidence, but because the evidence was too damaging. The results were never published, yet documents show executives saw the findings as a threat to the company. Now that the truth has emerged, society is asking: how many other technologies pose dangers we have yet to see?
Causal evidence that social media are harmful to young people
Last Sunday, a class-action lawsuit brought by American school districts against tech company Meta revealed that the company had internally found causal evidence that using Facebook and Instagram worsens depressive symptoms in young people under 25 [2][5]. The findings, released through unredacted documents from the lawsuit, show that stopping use of the platforms led to a reduction in depression, anxiety, and loneliness within just one week [5]. According to an internal Meta report from July 2024, algorithms that optimise content for engagement amplify negative emotional reactions and encourage comparison among young users [4]. The research, which began on 15 March 2023 under the name Project Mercury, was never published, and its results were officially deleted from the company’s archive on 22 November 2025 [4].
The quiet termination of the research
The study was not abandoned due to lack of evidence, but because the evidence was too damaging to the company [5]. Although a Meta spokesperson stated that the research methodology was insufficient to draw conclusions [5], the released documents suggest executives took the results seriously and viewed them as a threat to the business [5]. A staff member expressed concern that withholding this information was comparable to the tobacco industry’s decades-long concealment of cigarettes’ harmful effects [5]. The plaintiffs in the lawsuit allege that Meta deliberately suppressed the research findings to hide the dangers of its products from users, parents, and teachers [5].
The role of AI in spreading harmful content
Meta used AI technology in its internal research, although the exact application and its influence on the results are not explicitly stated [2]. Algorithms that optimise content for engagement amplify negative emotional reactions and encourage comparison among young people [4]. Beyond the platforms themselves, the use of AI to spread misinformation is growing. Generative AI produces fabricated quotes, invented case law, and misleading content that is finding its way into legal proceedings. On 21 November 2025, for instance, fabricated case law and false citations were discovered in a pro se litigant’s filing in Oliver v. Christian Dribusch before the United States District Court for the Northern District of New York [3]. This shows that the harm is not limited to spreading misinformation: AI misuse also undermines the integrity of the legal system [3].
AI in combating misinformation: a double-edged sword
While AI is used to spread misinformation, it is also employed to fight it. AI algorithms developed by platforms such as Meta, Google, and Twitter are designed to detect fraudulent content, yet their effectiveness remains limited. Documents released in the lawsuit against Meta indicate that the company was not only aware of the harmful impact of its algorithms but also recognised the potential of AI to spread damaging content [5]. In practice, however, AI is often used to censor or block content without transparency. This creates a dilemma: AI can detect misinformation, but when control lies with companies driven by commercial interests, the system can be subverted or manipulated [2][4].
The impact on media literacy and democracy
The spread of misinformation via AI undermines the foundation of democratic discourse and trust in public information, and the consequences are already visible in the courts. In a case in the Eastern District of Michigan, an attorney was sanctioned for submitting fabricated case law, resulting in a $4,030 costs order and restrictions on their online upload privileges [3]. Legal authorities warn that responsibility for AI use should not rest solely with the individual user; the system itself must be regulated [3]. The erosion of trust in information also affects media literacy: people begin to doubt everything they see or read, even content from sources that are normally reliable [4]. This fuels polarisation and the spread of misinformation, particularly among young people, who are more active online [5].
Practical tips for identifying misinformation
To reduce the risk of falling for misinformation, readers can take concrete steps. Check the source: verify whether the website belongs to a well-known or trusted news organisation [4]. Use fact-checking tools such as Snopes, PolitiFact, or the NOS Factcheck. Watch for emotional language: misinformation often uses exaggerated emotions such as anger or panic to provoke reactions [3]. Verify whether the information is supported by multiple reliable sources. When in doubt, ask yourself: ‘Is this too good (or bad) to be true?’ or ‘Why would this story be so popular right now?’ [5]. The combination of digital skills and critical thinking is essential in an era in which AI increasingly blurs the line between truth and fabrication [4].