Why young people will only be allowed on social media from age 16 starting in 2026
Brussels, Thursday, 27 November 2025.
The European Parliament has put forward a pioneering proposal: from 2026, young people must be at least 16 years old to access social media. The reason? Generative AI and addictive design, from deepfakes and manipulative chatbots to engagement-driven algorithms, pose a serious threat to youth mental health. According to research from 2019, one in four minors already showed signs of problematic smartphone use. The resolution, adopted on 26 November, sends a strong message that technology must not grow without boundaries. The proposed age limit is not arbitrary but based on scientific analyses highlighting the vulnerability of 14- to 16-year-olds. Implementation is scheduled for spring 2026, with an EU age-verification app on the way. This is not a panic measure, but a responsible step to protect young people in a world where AI is becoming increasingly influential.
The European Parliament’s legislative move: a response to the AI challenge
On Thursday, 26 November 2025, the European Parliament adopted a resolution with 483 votes in favour, 92 against, and 86 abstentions, proposing a harmonised minimum age of 16 for access to social media, video platforms, and AI companions [1]. This move is a direct response to the growing threat posed by generative AI, including deepfakes, emotionally manipulative chatbots, and addictive design principles that leave young users vulnerable to psychological harm and misleading content [1][4]. The resolution states explicitly that the mental and physical health of minors is seriously at risk from online content driven by engagement algorithms designed to exploit emotional reactions [1]. The proposed age is not arbitrary: it is based on scientific advice from the European Commission, published on 19 November 2025, which identifies an increased vulnerability of 14- to 16-year-olds to AI-driven manipulation [4]. The Commission is expected to submit a concrete legislative proposal within 30 days of 26 November 2025, setting a deadline of 26 December 2025; at the time of writing, no proposal had been put forward [4]. Implementation is planned for spring 2026, with the aim of protecting the younger generation from the risks of AI-driven manipulation [4][5].
Legislative proposals: tackling AI risks
The European Parliament has put forward a comprehensive set of measures to protect young users. A ban is proposed on the most harmful addictive practices, such as infinite scrolling, autoplay, ‘pull-to-refresh’, reward loops, gamification, ‘loot boxes’, ‘pay-to-progress’ mechanics, and financial incentives for kidfluencing [1]. The Commission is urged to extend the Digital Services Act to online video platforms such as YouTube and to ban engagement-based recommendation systems for minors [1]. The proposal places strong emphasis on responsible technology and on personal accountability for senior managers who allow serious and persistent infringements [1]. Within the framework of the upcoming Digital Fairness Act, the Parliament calls on the Commission to take robust enforcement action, including measures against persuasive technologies such as targeted advertising, influencer marketing, and ‘dark patterns’ [1]. On 19 November 2025, the European Commission presented a Digital Omnibus package with technical amendments to the GDPR, NIS2, the DGA, and the Data Act, aimed at reducing regulatory fragmentation [4]. The proposed EU age-verification app and the European digital identity wallet are seen as crucial tools for ensuring accuracy and privacy protection when young people access digital services [1].
International pressure and legal responses to social media
The European Parliament’s move comes at a time of increasing international pressure on tech companies. On 24 November 2025, over 1,800 parents and school leaders in California filed a class-action lawsuit against Meta, TikTok, Snapchat, and YouTube [2]. The complaint accuses these platforms of pursuing deliberate growth strategies that endanger young people’s mental and physical health [2]. According to the plaintiffs, Meta downplayed harm to teenage users when it conflicted with engagement targets and limited the effectiveness of youth safety features when they threatened growth [2]. Meta is accused of using distorted or selectively presented data in its research, while the company maintains it is fully invested in measures to protect young users [2]. Snapchat is criticised for haphazard age verification and for deliberately addictive features such as Snap Streaks [2]. TikTok is accused of deploying manipulative design choices to boost engagement among young users [2]. YouTube’s recommendation systems are alleged to repeatedly expose minors to harmful content, underscoring the need for a ban on recommendation systems for younger users [1][2]. The case is still pending, with no preliminary ruling issued and no final deadline set [2].
The role of generative AI in spreading and combating misinformation
Generative AI plays a central role in both spreading and combating misinformation. AI is used to create deepfakes: synthetic videos, audio, and text that convincingly mimic authentic content, threatening the identity and credibility of individuals and institutions [1][4]. Research shows that generative AI content spreads rapidly in the public domain, especially via social media, where algorithms amplify emotionally charged content and thereby accelerate the spread of misinformation [1][4]. The European Commission describes AI use on social media as a ‘serious ethical risk’ for users under 16, particularly because of the potential for emotional manipulation via chatbots and the generation of misleading content [5]. To combat this, new tools are being developed. On 21 November 2025, TikTok announced new features to limit ‘AI slop’ (low-quality AI-generated content) and increase transparency on its platform [5]. The European Commission is working on a Digital Omnibus package that includes technical amendments to the GDPR and other regulations, aiming to simplify privacy rules and reduce bureaucracy, thereby making a largely symbolic contribution to better control over AI-driven content [2][4]. The IAPP (International Association of Privacy Professionals) highlights the need for digital responsibility and AI governance, with a focus on balancing innovation with the protection of civil rights [4].
Practical tips for readers to spot misinformation
Readers can actively help identify misinformation by following a few simple yet powerful steps. First, check the source of a message: is it a reputable news organisation or an unknown site with a vague name or domain? Questions about authorship, intent, and the data used help assess reliability [GPT]. Second, verify the publication date: misinformation often spreads rapidly without verification, while credible news stories typically carry a date or timestamp [GPT]. Third, use fact-checking tools such as Snopes, FactCheck.org, or the European fact-checking platform EUvsDisinfo, which are specifically designed to verify online claims [GPT]. Fourth, be alert to emotional language: misinformation often uses strong emotions like anger, fear, or shock to manipulate, whereas serious reporting usually maintains a neutral tone [GPT]. Fifth, avoid sharing content without verification: every time you share something, you amplify its spread, even if you are not fully convinced of it yourself [GPT]. Finally, recognise that AI-generated content can be virtually indistinguishable from real content, especially in photos and videos. Tools such as Google reverse image search or Microsoft’s Bing visual search can show whether an image has appeared on other websites before [GPT]. The combination of media literacy and technological tools is crucial in an era where AI increasingly shapes how information is produced and consumed [GPT].
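For readers comfortable with a little scripting, the source and date checks described above can be partly automated. The sketch below is a minimal illustration in Python, assuming the third-party requests and beautifulsoup4 packages are installed; the meta tags it looks for (og:site_name, article:published_time and similar) are common publishing conventions rather than guarantees, and the example URL is purely hypothetical.

```python
# Minimal sketch: pull basic provenance signals (site name, author,
# publication date) from an article page. Assumes `requests` and
# `beautifulsoup4` are installed; many sites will not expose every tag.
import requests
from bs4 import BeautifulSoup

# Commonly used meta-tag conventions; not every publisher follows them.
META_KEYS = {
    "site": [("property", "og:site_name")],
    "author": [("name", "author"), ("property", "article:author")],
    "published": [("property", "article:published_time"), ("name", "date")],
}

def provenance_signals(url: str) -> dict:
    """Return whichever provenance-related meta tags the page exposes."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found = {}
    for label, candidates in META_KEYS.items():
        for attr, value in candidates:
            tag = soup.find("meta", attrs={attr: value})
            if tag and tag.get("content"):
                found[label] = tag["content"]
                break
    return found

if __name__ == "__main__":
    # Hypothetical example URL; replace with the article you want to check.
    print(provenance_signals("https://example.com/news/some-article"))
```

If the script returns little or nothing, that is not proof of misinformation, but it is a signal to look more closely at who published the page and when, using the manual checks above.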