Why a video of your face can be online without your consent
2025-11-11 fake news

Brussels, Tuesday, 11 November 2025.
Steffi Mercie, known for her honest and personal content, reveals in her new VRT series how realistic but fabricated videos, known as deepfakes, are already being used today to threaten and blackmail people. The most shocking part: she discloses that deepnudes of her have been circulating for years, even though she never created such images herself. Her series, ‘Merci Mercie’, goes beyond the technology itself: it explores how society can learn to cope now that we can no longer simply trust what we see. Read on for ways to protect yourself, and why media literacy is more crucial than ever.

How deepfakes are made and why they are so dangerous

Deepfakes are video or audio recordings generated with artificial intelligence to mimic a person, often without their knowledge or consent. The technology relies on algorithms that analyze large volumes of existing images or audio recordings to produce realistic but false representations. In her series ‘Merci Mercie’, Steffi Mercie reveals that deepnudes of her have been circulating for years, even though she never created those images [1]. Such false images are often used to extort young people, with the aim of extracting money or damaging reputations. The process has become accessible thanks to open-source AI tools for manipulating images and video, enabling even technically inexperienced users to produce deepfakes [1]. The impact is not limited to personal humiliation: deepfakes can also be used to impersonate political figures, influence election outcomes, or undermine public trust in the news. According to experts, the pace at which the technology is evolving poses a significant challenge to the reliability of information online [1].
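
To make the mechanism concrete, here is a minimal sketch of the classic face-swap architecture often described in this context: one shared encoder trained on faces of two people, with a separate decoder per person. All names, dimensions, and weights below are illustrative assumptions, not the actual tooling referenced in the series.

```python
# Conceptual sketch (assumed, simplified): classic deepfake face swapping
# trains ONE shared encoder with TWO person-specific decoders.
# Swapping a face = encode a frame of person A, decode it as person B.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Compress a 64x64 RGB face crop into a small latent vector.
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, 128),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Reconstruct a face crop from the shared latent space.
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of person A
decoder_b = Decoder()  # would be trained only on faces of person B

# After training, a "swap" is: encode a frame of A, decode it as B.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
fake_frame = decoder_b(encoder(frame_of_a))
```

Because the encoder learns features shared by both faces (pose, lighting, expression) while each decoder learns one identity, large photo collections of a person are enough raw material, which is why public figures are targeted so often.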

The role of social media and the dissemination mechanism

Social media is the primary channel through which deepfakes spread. Because these images appear realistic, they can go viral before being corrected or debunked. Steffi Mercie emphasizes in her series that constant scrolling on smartphones, a phenomenon known as ‘doomscrolling’, increases the likelihood that people unknowingly encounter deepfakes [1]. Platform algorithms such as those of Instagram and Facebook are designed to maximize screen time, which accelerates the spread of sensational or shocking content, including deepfakes [1]. This automated dissemination is hard to escape: even if a user never shares such content themselves, the algorithm may still recommend it based on previous interactions [1]. At the Bisou Awards 2025, where Steffi Mercie was named Influencer of the Year, her role in highlighting these risks was recognized as a significant contribution to public awareness of digital dangers [2]. The fact that such images circulate in the public domain without legal consequences or technical barriers makes their spread particularly difficult to contain.
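
As a rough illustration of that dissemination mechanism (the fields and weights below are invented for the example, not any platform's actual ranking model), an engagement-ranked feed can be thought of as sorting posts by predicted attention, which favors sensational items regardless of whether they are true:

```python
# Toy model (assumed, simplified) of engagement-based feed ranking.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_time: float  # seconds the model expects you to spend
    predicted_shares: float      # expected number of reshares

def engagement_score(post: Post) -> float:
    # A feed optimizing screen time weighs watch time and shares heavily;
    # these weights are made up purely for illustration.
    return 1.0 * post.predicted_watch_time + 5.0 * post.predicted_shares

feed = [
    Post("Local council meeting recap", 8.0, 0.1),
    Post("Shocking celebrity video (actually a deepfake)", 45.0, 3.0),
]

# The sensational item is ranked first, truth playing no role in the score.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):6.1f}  {post.title}")
```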

How to protect yourself from deepfakes

There are concrete steps you can take to protect your image or voice from being used in deepfakes. Niels Van Paeme, policy officer at Child Focus, advises proactively uploading images you want to protect to the website Takeitdown [1]. The tool assigns each image a unique code; if someone then tries to upload the same image to a platform connected to Takeitdown, the upload is automatically rejected [1]. The system is specifically designed to block sexually explicit deepfakes, such as deepnudes [1]. Steffi Mercie has used the tool herself since discovering that images of her had been circulating for years [1]. In addition, it is crucial not to pay extortionists who use deepfakes, as their sole objective is to extract money [1]. The temptation to pay can be strong under emotional pressure, but paying emboldens the extortionist and invites further abuse [1]. No current legislation or platform offers a complete guarantee against false images appearing, so personal preventive measures remain essential [1].
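
The matching mechanism behind such a tool can be sketched as follows. This is a simplified assumption about how hash-based blocking works in general; real services of this kind use robust perceptual hashes rather than the plain cryptographic hash used here, so that re-encoded or resized copies are also caught:

```python
# Hedged sketch of hash-based upload blocking (assumed simplification).
import hashlib

blocklist: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    # In such schemes the image itself never leaves the user's device;
    # only this irreversible code is shared with participating platforms.
    return hashlib.sha256(image_bytes).hexdigest()

def register_protected_image(image_bytes: bytes) -> None:
    # Step 1: the person who wants protection registers a fingerprint.
    blocklist.add(fingerprint(image_bytes))

def upload_allowed(image_bytes: bytes) -> bool:
    # Step 2: participating platforms reject any upload whose
    # fingerprint appears on the shared blocklist.
    return fingerprint(image_bytes) not in blocklist

my_photo = b"...raw image bytes..."  # placeholder for real image data
register_protected_image(my_photo)
print(upload_allowed(my_photo))        # False: the upload is rejected
print(upload_allowed(b"other image"))  # True: unrelated images pass
```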

Media literacy as a democratic foundation

The rise of deepfakes is changing the way we trust news and information. The ability to manipulate a video without it being visibly apparent undermines trust in public communication, particularly in journalism and political discourse [1]. In her series, Steffi Mercie not only examines the technology but also explores how citizens can learn to navigate this uncertainty [1]. She highlights the need for media and digital literacy, especially among young people, who are the most vulnerable group when it comes to digital manipulation [1]. In conversations with experts and peers, she demonstrates how mindful smartphone use can help reduce exposure to harmful or false content [1]. The series ‘Merci Mercie’ is seen as a crucial step in strengthening critical thinking in an era of rapidly evolving AI technologies, and forms part of a broader societal discussion about the responsibility of platforms, political institutions, and citizens [1][2].

Practical tips to recognize fake news and deepfakes

There are concrete signs that can help you spot fake news and deepfakes. Watch for unnatural movements of the eyes, lips, or hands, and for inconsistent shadows or light sources in videos [1]. Deepfakes often fall short of full realism, particularly during fast movements or complex facial expressions. Use tools such as reverse image search via Google Images or TinEye to trace an image back to its origin [1]. Steffi Mercie advises using your smartphone more mindfully, for example by limiting screen time and avoiding ‘doomscrolling’, to become less susceptible to sensational or false content [1]. It is also important to verify sources: check whether the information comes from a reliable, recognized media organization and whether more than one source confirms the same story [1]. The series ‘Merci Mercie’ offers practical tips based on expert discussions, covering both technical and behavioral recommendations [1].
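
The idea behind reverse image search can also be demonstrated locally with perceptual hashing. This sketch assumes the third-party Pillow and ImageHash packages are installed, and the file names are placeholders; online services apply the same principle at web scale:

```python
# Hedged sketch: compare a perceptual hash of a suspect image against a
# known original, the core idea behind reverse image search.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_frame.png"))      # placeholder
original = imagehash.phash(Image.open("known_original.png"))    # placeholder

# Perceptual hashes of near-identical images differ in only a few bits,
# even after resizing or recompression, so a small Hamming distance
# suggests the suspect image was derived from the original.
distance = suspect - original
print(f"Hamming distance: {distance}")
if distance <= 8:  # common rule-of-thumb threshold, not a guarantee
    print("Images are likely derived from the same source.")
```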

Sources