
When an image no longer matches reality: what the Netherlands is doing about deepfakes

2025-12-04 fake news

The Hague, Thursday, 4 December 2025.
Imagine a video in which a well-known person says something they never actually said. In the Netherlands, deliberately creating and disseminating such content may soon become a criminal offence: a bill to that effect is currently open for consultation. The proposal aims to protect individuals from digital manipulations that could damage their reputation, privacy, and self-image. Notably, the law would safeguard not only artists but also ordinary people and their deceased relatives. Penalties could include up to six months in prison, though exceptions exist for satire and parody, as long as these are clearly indicated. The proposed framework seeks to build trust in information, yet still leaves open questions about how boundaries will be drawn in practice. It represents a significant step in the fight against fake news and misleading content in the digital age.

A new law against deepfakes: what’s happening now

On 30 October 2025, a draft bill was put forward for consultation that would criminalise the deliberate creation, use, and dissemination of deepfakes involving natural persons. The proposal extends the Law on Neighbouring Rights beyond artists to cover everyone, including deceased relatives, who might be intimidated or offended by digital manipulations of image and sound [650]. Penalties for a criminal act could reach a maximum of six months' imprisonment, depending on the severity and circumstances of the use [650]. The legislator requires intent for criminal liability, but does not clarify whether the same requirement applies to merely disseminating or endorsing a deepfake [650]. The bill arrives at a time when image and audio manipulation using AI technology is becoming increasingly realistic, posing risks to reputations, privacy, and trust in information [650].

An important exception permits deepfakes for satire or parody, provided the use complies with standards of social conduct and is transparent: it must be clearly indicated that the content is a deepfake, in line with the AI Regulation (OJ:L_202401689) [650]. However, uncertainties remain about the practical application of this exception, particularly about what counts as ‘reasonably permissible’ in social interaction, which poses a risk to legal certainty [650]. To date, the Public Prosecution Service has not prosecuted any cases under the Law on Neighbouring Rights, so there is no case law to serve as a guide [650].
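
Neither the bill nor the AI Regulation, as described here, prescribes a specific labelling mechanism for the transparency requirement. Purely as an illustrative sketch, the Python snippet below shows one way a creator could attach a machine-readable disclosure to a PNG file using the Pillow library; the metadata key `ai_disclosure` and its value are conventions invented for this example, not a standardised field.

```python
# Illustrative sketch only: embeds and reads back a machine-readable
# "this is a deepfake" disclosure in PNG metadata using Pillow.
# The key name "ai_disclosure" is a made-up convention, not a standard.
# Requires: pip install Pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_deepfake(src_path: str, dst_path: str) -> None:
    """Copy an image, attaching a textual disclosure to the PNG metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_disclosure", "synthetic: AI-generated or AI-manipulated")
    img.save(dst_path, pnginfo=meta)

def read_disclosure(path: str) -> str | None:
    """Return the disclosure text if present, else None."""
    img = Image.open(path)
    # For PNG files, Pillow exposes textual chunks via the .text dict.
    return getattr(img, "text", {}).get("ai_disclosure")

if __name__ == "__main__":
    label_as_deepfake("original.png", "labelled.png")
    print(read_disclosure("labelled.png"))
```

A metadata tag alone would probably not satisfy the requirement that a deepfake is ‘clearly indicated’ to viewers; a visible on-screen label or a platform-level flag would likely be needed as well.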

How AI accelerates the spread of fake news – and how it can be countered

AI technology is used both to spread and to detect fake news. On platforms such as TikTok, audiovisual content and trends are already being generated with AI, enabling users to create videos such as a ‘SAGA STQ EXPOSÉ DOGGAD’ using standardised AI-based tools [TikTok]. These tools allow real-time manipulation of people’s voices and images, shifting the responsibility for authenticity from content creators to users and platforms. In practice, AI is also used to detect fake news: on 1 December 2025, the Dutch government launched an AI-powered search engine for the labour market, designed to identify unauthorised or misleading information in online job postings [oprijk.nl]. Although targeted specifically at employment, this illustrates how AI-based systems are being deployed to filter trustworthy information. Meanwhile, an international study by the International Centre for Counter-Terrorism (ICCT), commissioned by the National Coordinator for Counter-Terrorism and Security (NCTV), examines how AI platforms can assist in detecting and moderating terrorist, illegal, and implicitly extremist content without infringing on freedom of expression [112wwft.nl]. The report highlights the complexity of balancing security and freedom, particularly where content is not explicitly illegal but still poses a danger [112wwft.nl]. The findings will be used over the coming months to develop guidelines for online platforms and government agencies.
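
The article gives no detail on how such detection systems work internally. As a rough, hedged illustration of one common technique, the sketch below uses the open-source Hugging Face `transformers` zero-shot classification pipeline to score a job posting against candidate labels; the model, labels, and example text are all assumptions chosen for demonstration and bear no relation to the government's actual system.

```python
# Illustrative sketch only: this is NOT the Dutch government's system,
# just a common off-the-shelf approach to flagging suspicious text.
# Requires: pip install transformers torch
from transformers import pipeline

# facebook/bart-large-mnli is a widely used zero-shot classifier;
# any NLI-style model would work here.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical posting invented for this example.
posting = (
    "Earn 5000 euros per week from home, no experience needed. "
    "Send a copy of your passport and a 50 euro registration fee to start."
)

# Hypothetical labels chosen for this example.
labels = ["legitimate job posting", "misleading or fraudulent job posting"]
result = classifier(posting, candidate_labels=labels)

# The pipeline returns labels sorted by score, highest first.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

A classifier like this only produces a ranked suspicion score; any real moderation pipeline would combine such scores with human review, which is exactly the security-versus-freedom balance the ICCT report examines.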

What you need to know to spot fake news: practical tips for readers

To protect yourself from fake news, media literacy is essential. The first step is to assess the source critically: check whether the message comes from an official, recognised institution such as the NOS, VPRO, or a scientific research organisation [nos.nl]. Use tools such as the NOS fact-checking platform or the National Media Office to verify the truth of a video or article. When watching videos with AI-generated voices or images, look out for unnatural lip movements, irregular eye movements, or audio and visual quality that does not match the rest of the video. Keep in mind that some deepfakes are released as satire or parody, but these should carry clear warnings [650]. If you see a video of a public figure saying something highly unlikely, or discussing an event that has not yet occurred, first check whether the story is covered by multiple reliable sources. Tools such as reverse image search or AI detection tools, like those developed by research institutes, can also help uncover manipulations. The Dutch government also recommends taking time to review your digital footprint and to think carefully about what you share online [oprijk.nl].
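
Reverse image search is mentioned above without implementation detail. As a small illustrative sketch, not any specific tool, the snippet below uses the open-source `imagehash` library to compare perceptual hashes of two images; a small Hamming distance suggests one may be a lightly edited copy of the other. The file names and the threshold of 10 are assumptions invented for this example.

```python
# Illustrative sketch only: perceptual hashing is one building block
# behind reverse image search, not a deepfake detector by itself.
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

def looks_like_edited_copy(path_a: str, path_b: str, threshold: int = 10) -> bool:
    """Compare perceptual hashes; a small Hamming distance means the
    images are visually similar, e.g. a crop, re-encode, or light edit."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    distance = hash_a - hash_b  # imagehash overloads '-' as Hamming distance
    return distance <= threshold

if __name__ == "__main__":
    # Hypothetical file names for demonstration.
    print(looks_like_edited_copy("viral_frame.png", "original_photo.png"))
```

A near match against a known original does not prove manipulation by itself, but a near-identical frame turning up in a video with different audio is a classic red flag worth checking against multiple sources.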
