How the EU will try to cleanse the web of fake videos from 2026
Brussels, Thursday, 13 November 2025.
Imagine a video of a well-known politician saying something they never said, viewed millions of times. Since 2023, AI-generated fake news, including so-called deepfakes, has increased by 300%. The European Union is responding with new legislation that, from 1 January 2026, will require all AI-generated media to be labelled, from political campaigns to ordinary social media posts, so that users can distinguish genuine content from fake. The legislation, anchored in the AI Act, is a direct response to the growing difficulty of telling truth from fiction, in particular because algorithmic dissemination on platforms such as TikTok and X (formerly Twitter) amplifies the impact of disinformation. Mandatory labelling is therefore not merely a technical step but a crucial measure for preserving democratic transparency and trust in the digital space.
The explosion of deepfakes: from technological marvels to a threat to democracy
Since 2023, AI-generated fake news, including so-called deepfakes, has increased by 300%, a figure that illustrates the speed at which digital disinformation spreads [1]. The technology can now produce videos and audio clips so realistic that even experts can hardly distinguish genuine from fake. One example: on 10 November 2025, 17 million AI-generated fake news items were identified during the European elections, primarily targeting the centre-right political coalition in Germany [1]. These campaigns used fabricated statements from political leaders, with videos so convincing that they immediately went viral on social media. The impact is not limited to politics: ordinary citizens are also targeted, with deepfakes circulating about health, finances and personal relationships. The mandatory labelling of AI-generated content from 1 January 2026 is a direct response to this development [2][4]. On 12 November 2025, the European Commission submitted a proposal for a legislative framework for AI that includes mandatory labelling of deepfakes [4]. The initiative is embedded in the AI Act and is intended to increase the transparency of digital content, especially on platforms where algorithms control distribution [1][4]. The combination of AI and algorithmic distribution creates a system in which fake news spreads faster and more pervasively than authentic content, endangering democratic processes [1].
Algorithms as distributors: how social media accelerate the spread of fake news
The spread of fake news is significantly amplified by the recommendation algorithms of social media platforms such as TikTok and X (formerly Twitter). According to a study by the MIT Media Lab published on 11 November 2025, these algorithms are responsible for 78% of the viral spread of fake news [1]. Dr Lena Vogt of Ruhr University Bochum calls this ‘the core of the problem’: ‘These algorithms do not stimulate truth but emotion and engagement – and that makes them a superpower for disinformation’ [1]. On 11 November 2025, the European Parliament’s Committee on the Internal Market and Consumer Protection (IMCO) approved new rules to protect children from dangerous toys and decided to investigate the impact of social media on young people [4]. The report ‘Impact of social media and the online environment on young people’ is scheduled for a committee vote in January 2026 and a plenary vote in April 2026 [4]. This shows that the EU is aware of the systemic consequences of algorithmic content distribution. Mandatory labelling of AI-generated media from 2026 is an attempt to break this dynamic by helping users distinguish real from fake [1][4].
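To see why engagement-optimised ranking is structurally indifferent to truth, consider the toy ranker below. It is a deliberately simplified sketch, not any platform’s actual code, and the weights are illustrative assumptions; the point is that the scoring function contains no term for accuracy at all.

    # Toy illustration, not any platform's real ranking code: posts are ordered
    # purely by engagement signals, so accuracy never enters the score.
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        likes: int
        shares: int
        comments: int

    def engagement_score(post: Post) -> float:
        # Shares propagate content furthest, so they are weighted most heavily;
        # the weights are arbitrary illustrative values.
        return post.likes + 3.0 * post.shares + 2.0 * post.comments

    posts = [
        Post("Sober fact-check of viral claim", likes=120, shares=10, comments=15),
        Post("Outrageous fabricated quote", likes=300, shares=250, comments=180),
    ]
    for post in sorted(posts, key=engagement_score, reverse=True):
        print(f"{engagement_score(post):7.1f}  {post.title}")

Because fabricated outrage reliably generates more shares and comments, it wins the ranking even when a correction exists, and that is exactly the dynamic the labelling obligation tries to counter.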
The legislative process: from proposal to the start of the labelling obligation
The process to implement mandatory labelling of AI-generated media from 1 January 2026 is underway but not yet finalised. On 12 November 2025, the European Commission submitted a proposal for a legislative framework for AI, including mandatory labelling of deepfakes from 2026 [4]. The proposal is part of the AI Disinformation Prevention Act (AIDPW), which must still be adopted by the European Parliament [4]. The Commission had already tabled the proposal on 1 November 2025, but the first vote in the European Parliament has not yet taken place [4]. The introduction of the labelling obligation is therefore a planned event that has not yet come into effect; the source mentions 31 January 2026 as a provisional target for adoption, but no definitive date has been confirmed [4]. The mandatory labelling applies to all digital content generated with artificial intelligence, including videos, audio, photos and text, and must be applied prior to publication [4]. The legislation builds on the AI Act and will be implemented through a combination of recognition technologies and accountability obligations for content providers [1]. A political campaign using a deepfake, for example, would have to mark the file in advance as ‘AI-generated’, both on the platform and in the file’s metadata [1][4]. This is a crucial step in preserving media literacy and democratic transparency [1].
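What labelling in metadata could look like in practice is sketched below: the snippet embeds a notice in an image’s EXIF ImageDescription field using the Pillow library. This is a minimal illustration under stated assumptions, not the technical standard the EU will prescribe; the label text, the choice of field and the file names are hypothetical, and real deployments point towards richer provenance schemes such as C2PA content credentials.

    # Minimal sketch, assuming the Pillow library; the label text and the use
    # of the EXIF ImageDescription field are illustrative assumptions, not the
    # standard the AI Act will mandate.
    from PIL import Image

    AI_LABEL = "AI-generated"  # hypothetical wording; the required notice may differ

    def label_image(src_path: str, dst_path: str) -> None:
        """Embed an AI-generation notice in the image's EXIF metadata."""
        img = Image.open(src_path)
        exif = img.getexif()      # existing EXIF data, if any
        exif[0x010E] = AI_LABEL   # 0x010E = ImageDescription (plain text)
        img.save(dst_path, exif=exif)

    # Hypothetical file names for a campaign video still.
    label_image("campaign_spot.jpg", "campaign_spot_labelled.jpg")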
Practical tips for readers: how to recognise fake news in a world of AI-generated content
In a world where deepfakes are becoming ever more realistic, it is important that you, as a user, stay critical and take concrete steps to recognise fake news. Start by checking the source: verify whether the video or post comes from a certified media organisation or an official website [1]. Use tools such as the European Digital Media Observatory (EDMO), which since 2023 has identified 17 million AI-based fake news items [1]. Also look for physical inconsistencies in videos: eyes that do not blink, lips that are not properly synchronised with the voice, or unnatural lighting effects, details that AI generators often get wrong [1]. Another approach is reverse image search or AI-detection tools, such as those being developed by start-ups supported by the EIC Accelerator [3]; the sketch below illustrates the idea behind reverse image search. The European Commission is also aiming for common specifications for AI content where harmonised standards are lacking, to make the labelling rules easier to enforce [4]. The Platform for Digital Democracy (PDD) reported on 2 November 2025 that 62% of Dutch citizens have felt uncertain about the authenticity of news items since June 2025 [1]. This underlines that media literacy is not only technical but also psychological: you must learn to doubt, to ask questions, and not to automatically believe what you see or hear [1].
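The core technique behind reverse image search is perceptual hashing: two images that look alike get nearby hash values even after resizing or recompression. The sketch below shows the idea, assuming the open-source Pillow and imagehash packages; the threshold of 8 is an illustrative value, not a standard cut-off, and real search services combine many more signals.

    # Minimal sketch, assuming the open-source Pillow and imagehash packages.
    # The threshold of 8 is an illustrative value, not a standard cut-off.
    from PIL import Image
    import imagehash

    def likely_same_source(path_a: str, path_b: str, threshold: int = 8) -> bool:
        """Compare perceptual hashes of two images; a small Hamming distance
        suggests one image is a (possibly edited) copy of the other."""
        hash_a = imagehash.phash(Image.open(path_a))
        hash_b = imagehash.phash(Image.open(path_b))
        return (hash_a - hash_b) <= threshold  # subtraction = Hamming distance

    # Hypothetical files: a suspicious post versus a known original photo.
    if likely_same_source("viral_post.jpg", "press_photo_original.jpg"):
        print("The viral image appears to be derived from the original photo.")

Because perceptual hashes stay stable under cropping-light edits and recompression, a recycled photo in a fabricated story can often be traced back to its original, which is exactly what reverse image search tools automate at scale.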