Marijke Synhaeve's Message: 'Deepfake Pornography Is Sexual Violence and Undermines Democracy'
Amsterdam, Thursday, 7 August 2025.
Former Member of Parliament Marijke Synhaeve has decided to speak openly about her experience as a victim of deepfake pornography. She emphasises that this type of content is not only a form of sexual violence but also undermines democracy. Synhaeve, who initially remained silent, has now filed a police report and advocates for stricter legislation to protect victims and combat the spread of deepfakes.
Marijke Synhaeve’s Experiences
Former Member of Parliament Marijke Synhaeve has been targeted three times by deepfake pornographic videos using her image without her consent. A year and a half ago [2024-02-06], she was first warned by the House of Representatives’ security team. Six months later [2024-08-06], she received another warning, and approximately six months ago [2025-02-06], she was informed about the images for the third time. Each time, Synhaeve filed a police report with the support of the House of Representatives’ security team [1].
Impact on Personal and Professional Life
Synhaeve stresses that this form of sexual violence has affected her not only as an individual but also as a mother. She states that it is a violation when someone uses her images without her consent, especially in such a context. Although she has not viewed the images herself, as she does not see how it would help her, she has decided to speak openly about the impact of deepfakes on victims and society [1].
Democratic Implications
Synhaeve warns that these forms of intimidation can narrow political debate and make it less diverse. She emphasises that deepfake pornography undermines democracy by eroding trust in media and digital content. She wants to send a clear message that this behaviour will not be tolerated and refuses to be intimidated [1].
International Context
Actress Scarlett Johansson has also experienced deepfake pornography. She notes that while these images are humiliating, the legal battle to combat them is often futile. The technology uses AI to superimpose a person’s face onto existing video material, which is difficult to stop. These issues are prevalent worldwide and require international cooperation [2].
Legal Measures in Denmark
Denmark has introduced a bill to better protect citizens against deepfakes through copyright law. The bill grants people copyright over their own face, voice, and body, meaning deepfakes can only be created or distributed with explicit consent. This is a response to a deepfake video where the Danish Prime Minister supposedly announced the abolition of Easter and Christmas, causing social unrest and political controversy [3].
Future Perspective in the Netherlands
The Netherlands lacks a similar bill. Existing legal protection is fragmented and often ineffective. Organisations like Stop Online Shaming and politicians such as VVD MP Rosemarijn Dral support a comparable approach. It is uncertain when or whether such a law will be introduced in the Netherlands, but inaction is not an option [3].
Technological Solutions
Developers are working on tools to detect and remove deepfakes. Platforms like TikTok, Meta, and X (formerly Twitter) can request proof of consent at upload; videos posted without consent are removed. Such technological measures are crucial for protecting victims and preventing abuse [3].
Practical Tips for Readers
To identify fake news and deepfakes, it is important to critically assess sources. Always check the original source and look for multiple confirming reports. Watch out for spelling errors, poor language use, and unlikely claims. Use fact-checking websites and stay informed about the latest technologies and media trends [GPT].