Deepfakes and the Law: Legal Implications Explored
Amsterdam, Tuesday, 19 August 2025.
Legal Implications of Deepfakes
In a recent blog post, Arnoud Engelfriets discusses the legal implications of deepfakes, focusing on the boundaries of personal data and the application of the General Data Protection Regulation (GDPR). Notably, the Court of Justice of the EU ruled in 2023 and 2024 that there is no automatic right to compensation for unlawful processing of personal data; actual damage must be demonstrated. As a result, victims of data breaches often struggle to quantify their damage and therefore frequently forgo damage claims [1].
Legislation and Deepfakes
The discussion about deepfakes is not limited to the GDPR. In Denmark, there is a proposal to make deepfakes punishable by bringing images and sound under copyright law. In the Netherlands, Professor Dirk Visser proposed an anti-deepfake law in 2024, which would require permission for creating or publishing deepfakes. This proposal would sit in the Neighbouring Rights Act rather than the Copyright Act [2].
International Measures
Internationally, steps have also been taken to combat deepfakes. A French legislative proposal, pending since late 2024, targets sexually explicit deepfakes. Additionally, the EU's Cyber Resilience Act (CRA) sets strict security requirements for smart products, with exceptions for open-source stewards. The law requires open-source stewards to have a cybersecurity policy, respond to requests from market supervisors, and cooperate in reporting vulnerabilities [2].
Detection of AI-generated Content
Alongside legislation, technological solutions have been developed to detect deepfakes. New methods and tools have recently improved detection effectiveness. One of the most promising techniques is the use of machine learning classifiers trained on large datasets of real and fake videos. These models can detect subtle deviations in facial expressions and movements that are not visible to the naked eye [GPT].
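In essence, such detectors treat deepfake detection as binary classification: features are extracted from video frames, and a model learns to separate real from fake. The sketch below illustrates only the classification step, with invented feature names (blink-rate irregularity, blending-artefact score) and synthetic data; real systems use deep neural networks on far richer inputs, so this is a toy illustration of the principle, not a working detector.

```python
import math
import random

random.seed(0)

def make_sample(fake):
    """Generate one synthetic sample.

    The two features are hypothetical stand-ins for signals a real
    detector might use (blink-rate irregularity, blending-artefact
    score). We assume, purely for illustration, that fakes score
    higher on both.
    """
    base = 0.7 if fake else 0.3
    features = [base + random.gauss(0, 0.1), base + random.gauss(0, 0.1)]
    label = 1 if fake else 0
    return features, label

data = [make_sample(i % 2 == 0) for i in range(200)]

# Train a simple logistic-regression classifier with batch gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(500):
    gw, gb = [0.0, 0.0], 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y                      # gradient of log-loss w.r.t. logit
        gw[0] += err * x[0]
        gw[1] += err * x[1]
        gb += err
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

def predict_fake(x):
    """Return True if the model classifies the sample as a deepfake."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b))) > 0.5

accuracy = sum(predict_fake(x) == (y == 1) for x, y in data) / len(data)
```

On this cleanly separated synthetic data the classifier reaches high accuracy; the practical difficulty, as the next section notes, is that real deepfake generators evolve precisely to erase such separable signals.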
Effectiveness and Challenges
While these new methods and tools offer significant advantages, challenges remain. The rapid evolution of AI technologies makes it difficult to stay one step ahead. Deepfake creators continuously adapt their methods to circumvent detection tools. Moreover, there is still a lack of standardisation in detection technologies, affecting interoperability and reliability [GPT].
Future Perspectives
To address the arms race between AI creation and detection, a multidisciplinary approach is needed. Scientists, lawyers, technologists, and policymakers must collaborate to tackle both the technological and legal aspects of the problem. Developing robust legislation and technologies is crucial to minimise the negative impact of deepfakes and ensure the integrity of digital information [GPT].