
Flemish TikTok Moderator Warns: AI Moderation Could Make Platform Less Safe

2025-08-02 fake news

Berlin, Saturday, 2 August 2025.
A Flemish TikTok moderator who has worked in Berlin for three and a half years warns that replacing human moderators with AI could make the platform less safe. According to him, AI cannot always distinguish real news from fake and may overlook subtle forms of discrimination. He stresses that TikTok already intervenes too late in dangerous trends, a problem that will only worsen with more automation. TikTok recently laid off its Berlin moderation team and plans to replace human moderators with AI and external contractors.

AI Moderation Brings New Challenges

Sander (*), a Flemish TikTok moderator who has worked in Berlin for three and a half years, warns that replacing human moderators with AI could make the platform less safe. In his experience, AI cannot reliably tell real news from fake and overlooks subtle forms of discrimination, which means dangerous trends are often noticed and addressed too late. TikTok recently laid off its Berlin moderation team and will replace human moderators with AI and external contractors [1][2].

Current Challenges in Moderation

Sander emphasises that the current moderation team already intervenes too late in dangerous trends, whether through understaffing or a lack of communication from management. The Berlin team consists of 150 people monitoring the platform's safety, but the pressure is high. 'You don't always have enough time to check something thoroughly,' says Sander. TikTok had already phased out a moderation team in Amsterdam in October 2024, leaving Dutch-language moderation understaffed [1][2].

AI in the Fight Against Fake News

AI is increasingly being used in the fight against fake news. TikTok claims that nearly 90% of videos removed for violent content are taken down automatically by AI. Sander counters, however, that AI fails to detect subtle or indirect forms of discrimination and lacks empathy. 'AI doesn't always see the difference between an orange and a nipple, and it doesn't notice subtle forms of discrimination,' says Sander [1][2].

Implications for Media Literacy and Democracy

The growing role of AI in moderation has significant implications for media literacy and democracy. Research indicates that the spread of fake news can fuel misinformation and polarisation in society. Olivier Cauberghs, an AI expert, fears the platform will see even more inappropriate and illegal content, such as hate speech, fake news, and sexually explicit videos. 'Medical disinformation is a major problem and will increase,' Sander stresses [2].

Practical Tips to Recognise Fake News

To recognise fake news, users can apply the following practical tips:

- Check the sources of the information and look for other reliable media outlets that confirm the same story.
- Look for spelling and grammar errors, which are common in fake news.
- Verify the date of the article and check for recent developments.
- Think critically about the content and the intent behind the message.

[GPT]

Sources