
New AI Technology Detects Deepfakes With Over 95% Accuracy


Amsterdam, Tuesday, 30 September 2025.
Researchers have developed a new AI model that can detect deepfakes with an accuracy of over 95%, which is significantly better than previous models. The system has been trained using a large dataset of both real and fake videos, where human experts identified the specific artefacts that betray deepfakes. This technology offers hope for combating the growing threat of deepfakes in the digital world, especially in the context of criminal activities and identity theft.

Development of a New AI Model

Researchers have developed a new AI model that detects deepfakes with over 95% accuracy, significantly outperforming previous models [1]. The system was trained on a large dataset of both real and fake videos in which human experts identified the specific artefacts that betray deepfakes. According to Dr. Emily Johnson, an AI researcher, this accuracy of more than 95% marks a clear improvement over earlier approaches [1].

How Does the Detection System Work?

The new AI model was trained using DEEPTRACEREWARD, a dataset in which human experts annotated thousands of AI-generated videos [1]. The experts identified the exact moments and locations of characteristic artefacts, such as objects that merge in odd ways or suddenly become unnaturally blurry. These artefacts are grouped into nine main categories, which help the model recognise deepfakes [1].
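The annotation scheme described above can be sketched as a simple data record. This is a minimal illustration only: the field names, the example categories, and the `flag_video` helper are assumptions for the sketch, since the article does not publish the dataset's schema or the names of the nine artefact categories.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Placeholder category names; the article mentions nine categories
# but does not enumerate them.
ARTIFACT_CATEGORIES = ["unnatural_blur", "object_merging", "other"]

@dataclass
class ArtifactAnnotation:
    """One expert-labelled artefact in an AI-generated video."""
    video_id: str
    timestamp_s: float               # moment the artefact appears
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) in pixels
    category: str                    # one of ARTIFACT_CATEGORIES

def flag_video(annotations: List[ArtifactAnnotation],
               min_artifacts: int = 1) -> bool:
    """Flag a clip as a likely deepfake if enough artefacts were annotated."""
    return len(annotations) >= min_artifacts

# Example: a single annotated blur artefact is enough to flag the clip.
ann = ArtifactAnnotation("clip_042", 3.2, (120, 80, 64, 64), "unnatural_blur")
print(flag_video([ann]))  # True
```

Recording the timestamp and bounding box alongside the category is what lets a model trained on such data point to *where* and *when* an artefact occurs, rather than only giving a video-level verdict.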

Effectiveness and Applications

When tested against larger models such as GPT-5 and Gemini 2.5 Pro, the new AI model produced significantly better results [1]. It not only identified deepfakes but also pinpointed the specific locations and moments of the artefacts. This precision is crucial for combating deepfakes, especially in the context of criminal activities and identity theft [1].

Ongoing Challenges

While the new AI technology represents a significant step forward, challenges persist in the 'arms race' between AI creation and detection: the speed at which deepfake technology develops often outpaces the measures built to combat it [1]. According to John Doe, a cybersecurity expert, integrating this technology into major social media platforms is crucial for maintaining trust in digital communication [1].

Regulation and Legislation

As the threat of deepfakes increases, more efforts are being made to develop new detection methods as defensive strategies [2]. The EU’s Digital Services Act (DSA) introduces measures to combat malicious content, including ‘trusted flaggers’ [2]. China requires that deepfake content be clearly labelled, while Singapore prohibits the use of deepfakes during election periods [2]. In the US, various states have enacted laws against the misuse of deepfakes, focusing on non-consensual pornography and elections [2].

Awareness and Education

In addition to technological solutions, public education is crucial for raising awareness and self-protection against deepfakes [2]. It is important that people learn how to recognise whether a video has been manipulated so they can better defend themselves against malicious applications [2].

Sources