

India Takes Strict Measures Against Deepfakes
2025-10-23 fake news

New Delhi, Thursday, 23 October 2025.

Strict Rules for AI Companies

The Indian government has proposed new rules to limit the spread of deepfakes. Companies active in artificial intelligence and social media must clearly label AI-generated content, with labels covering at least 10% of a visual display's surface area or the first 10% of an audio clip's duration. These measures respond to the growing risks of deepfakes and misinformation, and are seen as one of the first explicit attempts worldwide to establish a quantifiable visibility standard [2].
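As a rough illustration of what such a quantifiable standard would mean in practice, the sketch below computes the proposed thresholds for a concrete image and audio clip. The function names, the 1920x1080 resolution, and the 90-second clip length are illustrative assumptions, not part of the proposal.

```python
# A minimal sketch of the proposed 10% visibility thresholds; the function
# names, example resolution, and clip length are illustrative assumptions.

def min_label_area(width_px: int, height_px: int, coverage: float = 0.10) -> float:
    """Minimum pixel area a visual label would need to cover under a 10% rule."""
    return coverage * width_px * height_px

def min_label_duration(total_seconds: float, share: float = 0.10) -> float:
    """Length of the leading audio segment that would need an audible disclosure."""
    return share * total_seconds

if __name__ == "__main__":
    # Example: a 1920x1080 frame and a 90-second clip (hypothetical values).
    print(min_label_area(1920, 1080))   # 207360.0 pixels, roughly a 1920x108 banner
    print(min_label_duration(90.0))     # 9.0 seconds
```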

Industry Response

The government has invited the public and companies to submit feedback on the new rules until 6 November 2025. While most companies have reacted positively to these measures, some major players such as OpenAI, Google, and Meta have not yet answered questions from Reuters [2]. OpenAI CEO Sam Altman stated in February 2025 that India is the second largest market for OpenAI, with user numbers tripling over the past year [2].

Impact on Media Literacy and Democracy

The spread of deepfakes and misinformation has significant implications for media literacy and democracy. In India, high-profile legal cases are pending, including those involving Bollywood stars Abhishek Bachchan and Aishwarya Rai Bachchan [2]. These cases illustrate how deepfakes can cause personal damage and influence public opinion and political decision-making. It is therefore crucial that users learn to identify fake news and think critically about the information they encounter [GPT].

Practical Tips for Identifying Fake News

To combat the spread of deepfakes and misinformation, users can apply the following practical tips:

  1. Check the source: Ensure that the information comes from reliable and well-known sources.
  2. Look out for spelling and grammar errors: Fake news articles are often poorly written and contain many errors.
  3. Verify images and videos: Use reverse image search tools such as Google Images to check whether an image has been manipulated or reused out of context (see the sketch after this list).
  4. Consult multiple sources: Compare information from various sources to get a broad overview.
  5. Be critical: Doubt sensational headlines and information that seems too good to be true [GPT].
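One technical complement to tip 3, beyond reverse image search, is perceptual hashing: comparing a suspect image against a known original. The sketch below uses the Python Pillow and imagehash libraries; the file paths and the distance threshold of 8 are illustrative assumptions, and a small distance only suggests, never proves, that two images share the same source.

```python
# A minimal sketch of checking a suspect image against a known original using
# perceptual hashing (Pillow + imagehash). Paths and the threshold are
# illustrative assumptions; this complements, not replaces, reverse image search.
from PIL import Image
import imagehash

def likely_manipulated(reference_path: str, suspect_path: str, threshold: int = 8) -> bool:
    """Return True when the suspect image's perceptual hash differs markedly
    from the reference image's hash (Hamming distance above the threshold)."""
    reference_hash = imagehash.phash(Image.open(reference_path))
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    return (reference_hash - suspect_hash) > threshold

if __name__ == "__main__":
    # Hypothetical files: an agency original and a copy circulating on social media.
    print(likely_manipulated("agency_original.jpg", "social_media_copy.jpg"))
```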

Sources