Meta Under Fire: AI Avatars Spark Wave of Deepfakes and Revenge Porn

Amsterdam, Thursday, 4 September 2025.

Meta is facing criticism after AI avatars of celebrities were used for sexual conversations on its platforms, such as Instagram and Facebook. Experts warn of a rise in deepfakes and revenge porn, with significant ethical and legal implications. The avatars, largely created by users, can share sexual images and conversations, raising concerns about privacy and abuse. Meta acknowledges that this was not the intention but is keeping the feature available [1].

Use of AI Avatars on Meta Platforms

Meta’s AI avatars of celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez are engaging in sexual conversations with users on Instagram, WhatsApp, and Facebook without the celebrities’ consent. Most of the avatars are created by users, but a Meta employee also created AI clones of Taylor Swift and Lewis Hamilton. Reuters revealed that the avatars send risqué or sexual messages, including highly detailed AI-generated images of the stars in lingerie or in the bath [1].

Reactions and Measures from Meta

Meta acknowledges that the avatars were never intended to send sexual images and has removed some of them, but the feature that lets users create their own avatars remains available. In 2023, Meta launched a series of ‘AI Companions’ based on famous individuals, but withdrew them after several months due to poor performance [1].

Experts warn of an influx of deepfakes and revenge porn. Rob Heyman, coordinator at Knowledge Centre Data & Society, states: ‘It is not difficult to imagine how this AI will be used to create revenge porn or other deepfakes’ [1]. Privacy expert Matthias Dobbelaere-Welvaert (Ministry of Privacy) emphasises that a parody should contain humour and clearly differ from the original, which is not the case here [1].

International Concerns

The controversy over Meta’s AI avatars is not confined to one country. In Belgium, portrait rights are better protected, making Meta’s arguments less convincing there. Walker Scobell, a 16-year-old actor, was also cloned, which is particularly troubling given that he is a minor [1].

Practical Tips for Readers to Identify Fake News

To help limit the spread and impact of deepfakes and revenge porn, here are some practical tips for identifying fake news:

1. Check the Source

Ensure the information comes from reliable and verifiable sources. Check if other reputable news outlets report the same story [GPT].

2. Look for Illogical Elements

Watch out for illogical or unlikely elements in the content. Deepfakes may show subtle inconsistencies, such as movements or lip motion that do not match the speech [GPT].

3. Use Fact-Checking Tools

Use fact-checking tools and websites to verify the authenticity of information. Websites like Snopes and FactCheck.org can help verify claims [GPT]. A short script for automating such a lookup is sketched after these tips.

4. Check the Metadata

Check the metadata of images and videos to see when and where they were created. Tools like InVID and Image Inspector can assist with this [GPT]. A minimal metadata-reading sketch also follows these tips.

5. Stay Critical

Always remain critical and ask yourself if the information is logical and coherent. Be wary of information that seems too good or too bad to be true [GPT].
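
For readers comfortable with a bit of Python, here is a minimal sketch of how a fact-check lookup (tip 3) could be automated. It assumes an API key for Google’s Fact Check Tools API, which is not mentioned in the article, and the endpoint and response fields shown should be verified against Google’s documentation; Snopes and FactCheck.org themselves are consulted through their websites.

    # Sketch: querying Google's Fact Check Tools API for published fact-checks.
    # "YOUR_API_KEY" and the example query are placeholders, not values from
    # the article; verify the endpoint and fields against Google's API docs.
    import requests

    API_KEY = "YOUR_API_KEY"  # obtain from the Google Cloud console
    URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

    def search_claims(query: str) -> None:
        """Print any published fact-checks whose claim text matches the query."""
        resp = requests.get(URL, params={"query": query, "key": API_KEY}, timeout=10)
        resp.raise_for_status()
        for claim in resp.json().get("claims", []):
            for review in claim.get("claimReview", []):
                publisher = review.get("publisher", {}).get("name", "unknown")
                print(f"{publisher}: {review.get('textualRating')} - {review.get('url')}")

    if __name__ == "__main__":
        search_claims("Meta AI avatars of celebrities")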
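
And a minimal sketch for tip 4, reading image metadata with Python and the Pillow library (an assumption; the article itself mentions InVID and Image Inspector, which are separate point-and-click tools). The file name is a placeholder, and images re-encoded by social platforms or generated by AI often carry no metadata at all, which is itself worth noting.

    # Sketch: reading EXIF metadata from an image with the Pillow library.
    # "photo.jpg" is a placeholder path, not a file referenced in the article.
    from PIL import Image
    from PIL.ExifTags import TAGS

    def print_exif(path: str) -> None:
        """Print the EXIF tags an image carries (camera, software, timestamps)."""
        exif = Image.open(path).getexif()
        if not exif:
            print("No EXIF metadata found; common for edited or AI-generated images.")
            return
        for tag_id, value in exif.items():
            name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
            print(f"{name}: {value}")

    if __name__ == "__main__":
        print_exif("photo.jpg")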

Sources