How AI Makes Your Videos on TikTok and Instagram Viral — And Why That Should Worry You
Amsterdam, Wednesday, 26 November 2025.
A new generation of AI tools analyses your video performance in real time, predicts viral trends, and adapts content using features like face swap and automatic editing. The most striking part: these tools are so advanced that 68% of viewers can no longer distinguish between a real person and an AI-generated face. While creators benefit from this technology, experts are questioning how it blurs the line between authenticity and deception — and who ultimately bears responsibility for content circulating globally. The digital public sphere is increasingly shaped by algorithms optimised for clicks, and anyone with a smartphone can now contribute to this flood of content. What does this mean for your trust in what you see?
AI tools that predict and shape viral videos
Apps like ‘Blow Up - Go viral using AI’ analyse short-form videos in real time to predict their potential for going viral on platforms such as TikTok, Instagram Reels, and YouTube Shorts [1]. The app uses artificial intelligence to decode social media algorithms, identify patterns in successful content, and provide recommendations for minor adjustments that can boost engagement [1]. These tools offer features such as AI-optimised subtitles, suggestions for trending hashtags, and feedback on retention issues within videos [1]. According to the App Store description, the app’s goal is to help users increase views, likes, comments, and shares [1]. The AI analyses large datasets of social media videos, including content from TikTok, Instagram, and YouTube, to identify emerging trends [1]. The app was launched on 25 November 2025 and requires a subscription or free trial for full access to AI analysis [1]. Developed by Thomas Corry, it is available for iOS 17 or later [1].
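How exactly such an app scores a video is not disclosed. As a minimal sketch of the general idea, assuming a tool that combines engagement, retention, and trend signals into a single number, a scoring heuristic might look like the code below; every field name, weight, and threshold here is an illustrative assumption, not Blow Up's actual model.

```python
# Hypothetical sketch of a "viral potential" score. All fields and weights are
# illustrative assumptions; they do not describe Blow Up's real model.

from dataclasses import dataclass

@dataclass
class VideoStats:
    views: int
    likes: int
    comments: int
    shares: int
    avg_watch_fraction: float    # 0.0-1.0: how much of the video viewers watch on average
    trending_hashtags_used: int  # how many currently trending hashtags appear in the caption

def virality_score(v: VideoStats) -> float:
    """Combine engagement rate, retention, and trend alignment into a rough 0-100 score."""
    if v.views == 0:
        return 0.0
    engagement_rate = (v.likes + 2 * v.comments + 3 * v.shares) / v.views
    retention = v.avg_watch_fraction
    trend_bonus = min(v.trending_hashtags_used, 5) / 5
    # Weights are arbitrary here; a real tool would learn them from large datasets of past videos.
    return round(100 * (0.5 * engagement_rate + 0.35 * retention + 0.15 * trend_bonus), 1)

print(virality_score(VideoStats(10_000, 900, 120, 300, 0.62, 3)))  # ~40.9
```

In a real product, the weights would be learned from large collections of past videos rather than fixed by hand, which is what the app's description suggests its AI does with TikTok, Instagram, and YouTube content [1].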
The rise of AI generation in social media and the shifting definition of authenticity
The proliferation of AI tools for video creation, such as face swap and automated editing, is changing how people share content online [1][6]. A 2025 study analysed by YouScan found that 68% of users can no longer tell the difference between a real person and an AI-generated face, a further erosion of the boundary between reality and fabrication [6]. The trend is growing measurably: between the first quarter of 2024 and the third quarter of 2025, the volume of TikTok content using AI filters increased by 300% [6]. The growth is reinforced by the rising popularity of apps like ‘SocialAI - AI Social Network’, which offers a social networking experience built on AI-generated interactions and conversations [7]. Apps such as ‘HUME: Your Personal AI’ and ‘NOMI: AI Companion with a Soul’ show that AI is used not only for video but also for emotional and personal interaction through speech and chat models [7]. These developments show that AI is no longer limited to technical background functions but now plays a central role in shaping digital identity and communication [7].
The role of platforms in spreading AI-generated content
Major social media platforms are actively involved in AI-driven content creation. TikTok launched a new feature on 24 November 2025 allowing users to limit AI-generated content in their ‘For You’ feed, indicating the platform is aware of the impact AI outputs have on user experience and credibility [2]. YouTube expanded access to generative AI features, including video generation for Shorts and music tools, on 25 November 2025 [2]. Meta integrated more AI-based options into its ad flow on the same date, making AI recommendations the default for more advertising elements [2]. Additionally, new features such as ‘virtual try-on’ for furniture on Facebook Marketplace are currently in testing, showing that AI is also being used in e-commerce and realistic simulations [2]. The responsibility for the quality and authenticity of AI-generated content lies not only with users but also with the platforms themselves, which develop the algorithms that further propagate this content [2].
Ethics, deception, and the reinforcement of click-driven algorithms
The growth of AI tools for social media raises concerns about the spread of misleading content and the erosion of authenticity [1][6]. Reddit users describe social media as an ‘antisocial wrecking ball’ where AI-generated noise and deepfakes undermine the quality of digital communication [5]. Experts warn that the combination of engagement-focused algorithms and AI tools optimising content for virality creates an environment where truth is less important than attention [6]. A YouScan analysis reveals that 68% of users cannot differentiate between real and AI-generated faces, representing a direct threat to media literacy [6]. These developments reinforce existing algorithms tuned for engagement, leading to a cycle of intensified polarisation and loss of trust [6]. The lack of transparency around how AI data is used exacerbates the unease: since October 2025, Google Gemini has had automatic access to private communications via Gmail without explicit user consent, a fact unknown to 99% of users [8].
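The claim that engagement-tuned ranking sidelines truth can be made concrete with a toy example. The sketch below is purely hypothetical and does not reflect any platform's real ranking system; it only shows that when a feed is sorted by predicted watch time and shares alone, nothing in the objective distinguishes an accurate clip from a deepfake.

```python
# Illustrative, hypothetical engagement-first ranking loop. No platform's actual ranker.

posts = [
    {"id": "calm-explainer",   "predicted_watch_time": 12.0, "predicted_shares": 0.01},
    {"id": "outrage-deepfake", "predicted_watch_time": 31.0, "predicted_shares": 0.09},
]

def engagement_rank(post):
    # Nothing here measures whether the content is true or synthetic;
    # only predicted attention counts toward the ranking.
    return post["predicted_watch_time"] + 200 * post["predicted_shares"]

feed = sorted(posts, key=engagement_rank, reverse=True)
print([p["id"] for p in feed])  # the deepfake ranks first purely on predicted engagement
```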
Data use and privacy in AI model training
Tech companies use personal data to train AI models, but access to this data varies by platform. Meta's new policy, effective 16 December 2025, adapts content and ads based on users' interactions with Meta AI, although private messages, photos, and stories from Instagram, WhatsApp, or Messenger are not used for AI training [8]. Meta can, however, use public content such as photos, posts, comments, and reels for AI training, with no opt-out option for users [8]. LinkedIn began using data from U.S. users, including profile information and public content, to train content-generating AI models on 3 November 2025 [8]. Google asks for explicit consent for Gemini Deep Research, but in October 2025 it made access to Gmail, Drive, and Chat active by default, so users must actively revoke that permission [8]. Google does not use data from users under the age of 13 for AI training [8]. Krystyna Sikora, a researcher at the Alliance for Securing Democracy, warns that a lack of transparency can lead to confusion and misinformation about what is and is not permitted [8]. The United States has no federal privacy legislation, so users have no standard right to opt out of AI training [8].
Practical tips for identifying fake news and AI-generated content
Useful checks for readers who want to identify fake news and AI-generated content:

- Verify the source of a video or message, especially if it originates from an unfamiliar or new account.
- Use tools such as YouScan or Google Lens to analyse logos or objects within the footage [6].
- Watch for unnatural facial movement in videos, particularly with face-swap techniques, which often show mismatched eyes, incorrect lip movements, or inconsistent lighting and reflections [6].
- Check whether a message or video has unexpectedly high engagement on a platform where the account rarely posts; this can indicate an AI-driven viral campaign (see the sketch after this list) [1].
- Stay critical of algorithms that promote content based on emotion or polarisation: the more you watch, the more of it you are shown [6].
- Regularly review your privacy settings on platforms such as Meta, Google, and LinkedIn, and use opt-out options where available, such as disabling ‘Data for Generative AI Improvement’ on LinkedIn [8].
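The tip about unexpectedly high engagement can be turned into a rough check. The sketch below is a hypothetical illustration only: the comparison against an account's past view counts, the field names, and the factor-of-20 threshold are all assumptions, not a feature offered by any of the platforms mentioned.

```python
# Hedged sketch: flag a post whose views far exceed an account's usual level.
# Thresholds and field names are illustrative assumptions only.

from statistics import mean

def looks_anomalous(past_view_counts: list[int], new_view_count: int, factor: float = 20.0) -> bool:
    """Return True if a new post's views exceed the account's historical average by `factor`."""
    if not past_view_counts:
        return False  # nothing to compare against; fall back to manual checks
    return new_view_count > factor * mean(past_view_counts)

# An account that normally gets a few hundred views suddenly hits 250,000:
print(looks_anomalous([300, 450, 520, 280], 250_000))  # True -> worth a closer look
```

A flagged post is not proof of manipulation; it is simply a signal to apply the other checks above before sharing.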