
How an AI-Generated Video Could Spark National Tension

2025-11-30 fake news

The Hague, Sunday, 30 November 2025.
A deepfake video of Indian army chief Gen Upendra Dwivedi, fabricated to show him making a statement about an activist, was disseminated by Pakistani propaganda accounts. The fake video, created with advanced AI technology, appeared so convincing that international media had to verify its authenticity. The Indian government confirmed the digital evidence was fabricated and intended to undermine trust in the armed forces. The incident highlights how vulnerable democratic institutions are to information warfare in the digital age, and how quickly a credible-looking message without evidence can turn into a political explosion. The question is no longer whether AI can do this, but whether we can recognise it.

The Rise of a Digital Storm: An AI-Generated Video of a Military Chief

On Wednesday, 26 November 2025, Pakistani propaganda accounts spread a deepfake video of Gen Upendra Dwivedi, India’s chief of the army staff. The video, generated using advanced AI technology, depicted the military chief delivering an official statement expressing ‘deep regret’ over the alleged ‘death of activist Sonam Wangchuk in state custody’—a claim that was entirely false. Wangchuk, a well-known climate activist, had indeed been under arrest since September 2025 under the National Security Act (NSA), but remained alive and was not killed in state custody [8]. The video rapidly spread across social media platforms such as TikTok, X (formerly Twitter), and Telegram, with targeted focus on regions in South Asia and the Middle East, where trust in India’s military institutions was already strained [8]. Within 24 hours, the Indian government, through the Press Information Bureau (PIB), confirmed the video was ‘digitally fabricated’ and ‘used AI technology to create a forgery’ [8]. According to the PIB, the intent was ‘to spread disinformation and undermine confidence in the Indian armed forces’ [8]. The campaign was identified by Indian intelligence agencies on 25 November 2025, with the first AI-generated content appearing on social media on 24 November 2025 [8].

AI as a Weapon in Information Warfare: From Manipulation to Hybrid Warfare

The technology behind the video was not simple video editing, but an advanced generative AI model trained on public video and audio archives of Indian military officials, including senior officers [8]. This enabled a highly convincing simulation of Gen Dwivedi, down to his tone of voice, facial expressions, and body language. The Indian Ministry of Defence described the activity as a ‘coordinated effort in information warfare’ aimed at ‘destabilising regional and international confidence’ [8]. An analysis by the Indian Cyber Security Agency identified 148 pieces of AI-generated content in the campaign, of which 127 were detected on social media between 24 and 26 November 2025 [8]. This development marks a new phase in hybrid warfare, where digital manipulation functions both strategically and psychologically: ‘We are witnessing a new phase in hybrid warfare where AI-generated content is weaponised to create chaos and erode institutional trust,’ said an unnamed military intelligence officer [8]. The campaign was combined with other disinformation strategies, such as fake social media accounts and targeted messaging, to amplify its impact [8]. While the technology behind deepfakes is evolving rapidly, security measures continue to lag behind [2]. Researchers have, however, found ways to detect deepfakes by examining physical cues such as blood vessels in the face, providing a scientific foundation for detection [2].

International Reactions and Media Trust: A Test for Democracy

The video reached international media outlets, including WION, which identified the material as false and warned about its misleading content [8]. Yet the fabricated video continued to spread, demonstrating how quickly a credible-looking message without proof can escalate into a political crisis [8]. The Indian government announced the activation of a special counter-disinformation unit on 30 November 2025 to track and neutralise deepfake videos [8]. This response illustrates how quickly democratic institutions are being forced to adapt in an era where AI not only generates information but also manipulates it [8]. Globally, concern over this form of manipulation is growing: a 2025 US survey shows that 45% of the population is concerned about ‘disinformation’ caused by AI, and 46% about ‘human manipulation’ via AI [4]. This concern is not limited to India; in the United Kingdom, creating sexual deepfakes has been a criminal offence since 25 November 2025 [2]. In Australia, a man was fined €190,000 on 15 November 2025 in the first case involving deepfake pornography [2], indicating a growing legal response to these technologies. The EU has asked tech platforms, including Meta, how they manage AI risks, with a request for response by 15 December 2025 [2].

How to Spot Fake News: Practical Tips for Readers

The speed at which AI content can be created makes it essential for citizens not only to be critical but also to know the indicators of fake news. The first step is verifying the source: false content is often spread through accounts with little or no history, or via platforms lacking fact-checking mechanisms [2][8]. Examine the language: AI-generated text is often overly perfect, overly generic, or excessively emotional, lacking the typical imperfections of human writing [8]. For audio and video content, watch for unnatural movements, such as uneven eye movements, strange lip sync, or blood vessels that don’t align with lighting sources [2].

Platforms like Facebook are developing methods to detect deepfake software, though these are not yet fully effective [2]. Researchers at Oigetit, an AI fact-checker, developed an algorithm that analyses news articles for factual accuracy, bias, and source credibility in fractions of a second, assigning a reliability score [3]. The tool is available via a free app for iOS and Android and uses a database of historical articles for comparison [3].

For journalists and content creators, there is a clear warning: an experiment by LAist Studios on 29 November 2025 showed that an AI-generated fictional news story about a man-made island off the Santa Monica Pier was so convincing that it was difficult to distinguish from real journalism, even with full knowledge of the source [7]. The experimenter asked: ‘Can my work be created by a machine?’—a question now more relevant than ever [7].
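To make the checklist above concrete, here is a minimal sketch of how such indicators could be combined into a rough credibility score. All rules, field names, and weights are hypothetical illustrations for this article; this is not Oigetit’s algorithm or any real fact-checker’s method.

```python
# Hypothetical rule-based credibility scorer, for illustration only.
# The fields and weights below are invented examples of the indicators
# discussed in the text (account history, named sources, emotional tone,
# independent confirmation); a real fact-checker uses far richer signals.

from dataclasses import dataclass


@dataclass
class Post:
    account_age_days: int      # how long the posting account has existed
    has_named_source: bool     # does the post cite a verifiable source?
    exclamation_count: int     # crude proxy for emotionally charged language
    confirmed_by_outlets: int  # independent outlets confirming the claim


def credibility_score(post: Post) -> int:
    """Return a 0-100 heuristic score; higher suggests more credible."""
    score = 50
    if post.account_age_days < 30:
        score -= 20            # brand-new accounts are a red flag
    if not post.has_named_source:
        score -= 15            # unverifiable claims lose points
    if post.exclamation_count > 3:
        score -= 10            # heavily emotional phrasing is suspicious
    # independent confirmation helps, capped at three outlets
    score += min(post.confirmed_by_outlets, 3) * 10
    return max(0, min(100, score))


# Example: a fresh anonymous account shouting an unconfirmed claim
viral_post = Post(account_age_days=5, has_named_source=False,
                  exclamation_count=6, confirmed_by_outlets=0)
print(credibility_score(viral_post))   # low score: treat with suspicion
```

The point of the sketch is not the specific numbers but the workflow: each red flag from the paragraph above subtracts from a neutral baseline, and independent confirmation adds to it, so a reader (or a tool) can triage which claims deserve verification first.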

Sources