
How an Invisible Pulse Makes the Difference in Exposing Fake Videos


Rotterdam, Tuesday, 25 November 2025.
Imagine a video in which a well-known individual says something they never actually said. How can investigators tell? The answer lies hidden in something you can’t see: an invisible pulse. Dutch forensic researchers are using a technique that analyses the subtle colour changes each heartbeat causes in the face to determine whether a video is genuine or an AI-generated deepfake. Remarkably, these changes are too fast and too fine for the naked eye, yet software can detect them reliably. It marks a crucial step in the fight against manipulated media, in which even the most realistic fake videos can be tested against an undeniable biological marker: life itself.

The Invisible Evidence: How a Pulse Reveals the Truth

In a world where images are often trusted more than words, the presence of a natural pulse serves as an indispensable proof of authenticity. Researchers at the Netherlands Forensic Institute (NFI) are developing an AI-based method that analyses the subtle colour changes in the face caused by each heartbeat. These fluctuations are so fine and rapid that they are invisible to the naked eye, yet software detects them readily. The method leverages the green channel of RGB images, because green light is strongly absorbed by the blood just beneath the skin, producing a measurable change with every heartbeat [1]. This technique is vital in criminal investigations, where fake videos are used for identity fraud, pornographic abuse, and the manipulation of evidence [1].
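The green-channel principle can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the NFI’s actual software: it averages the green channel over a face region in each frame and reads the heart rate off the dominant frequency of that signal.

```python
import numpy as np

def estimate_pulse_bpm(frames, fps):
    """Estimate heart rate from a stack of face-ROI frames (T, H, W, 3).

    Green light is absorbed most strongly by blood just under the skin,
    so the per-frame green-channel mean carries a faint pulse signal.
    We remove the DC offset and pick the dominant frequency in the
    plausible heart-rate band (0.7-4 Hz, i.e. 42-240 beats per minute).
    """
    green = frames[:, :, :, 1].mean(axis=(1, 2))   # one value per frame
    green = green - green.mean()                   # remove DC offset
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 10-second clip at 30 fps with a faint 1.2 Hz (72 bpm) pulse
fps = 30
t = np.arange(fps * 10) / fps
frames = np.full((len(t), 8, 8, 3), 128.0)
frames[:, :, :, 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(round(estimate_pulse_bpm(frames, fps)))  # → 72
```

A real pipeline would first need face detection, motion stabilisation, and illumination compensation, which is exactly where the limitations discussed below come in.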

From Test to Application: How the Model Works in Practice

Sanne de Wit, a researcher at the NFI, tested the model on a dataset of approximately 200 real videos and 200 AI-generated deepfakes. The results were convincing: nearly all real videos were correctly identified. However, some deepfakes were incorrectly classified as genuine, highlighting the complexity of the challenge. The analysis focuses primarily on the area around the forehead and eyes, where the colour changes are most visible in authentic footage [1]. The researchers also use the frequency of the electrical grid (50 hertz) as a time reference: by extracting the grid-frequency signal from a video and comparing it with official records, they can verify the video’s timestamp [1]. The current dataset is limited, so the model still needs more varied, higher-quality training data to improve its performance [1].
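The grid-frequency idea can be illustrated in the same spirit. The sketch below (illustrative and numpy-only, not the institute’s tooling) tracks the strongest spectral component near the nominal 50 Hz in successive windows of a hum signal; in practice, the resulting curve would be matched against the grid operator’s official frequency logs to date a recording.

```python
import numpy as np

def enf_track(signal, fs, win_s=1.0, band=(49.0, 51.0)):
    """Track the electrical-network frequency (ENF) over time.

    Splits the signal into non-overlapping windows, takes the FFT of
    each, and records the strongest frequency near the nominal 50 Hz
    grid frequency. The small drifts in this curve are what get matched
    against official grid-frequency records.
    """
    win = int(win_s * fs)
    track = []
    for start in range(0, len(signal) - win + 1, win):
        chunk = signal[start:start + win] * np.hanning(win)
        spectrum = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        sel = (freqs >= band[0]) & (freqs <= band[1])
        track.append(freqs[sel][np.argmax(spectrum[sel])])
    return np.array(track)

# Synthetic mains hum: 5 s sampled at 1000 Hz, grid at exactly 50 Hz
fs = 1000
t = np.arange(5 * fs) / fs
hum = np.sin(2 * np.pi * 50.0 * t)
print(enf_track(hum, fs))  # → [50. 50. 50. 50. 50.]
```

In video, the hum can be recovered from flicker in the luminance of frames recorded under mains-powered lighting rather than from an audio track.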

The Limits of the Technique: Why It’s Not Fully Reliable Yet

Despite the progress, challenges remain. Unexpected results show that still images of people and of mannequins are occasionally classified by the algorithm as ‘real’, indicating that it cannot yet reliably distinguish the living from the non-living [1]. Image quality is a critical factor: sharpness, stability, lighting, and an unobstructed view of the face all influence the detection of the pulse signal [1]. Poor lighting, motion, or partial face coverage disrupts the analysis and can lead to misinterpretations [1]. De Wit emphasises that the model must become more robust, capable of functioning under realistic conditions such as moving subjects, low light, and partially obscured faces [1].

A Combination of Techniques for Reliable Detection

According to De Wit, a robust deepfake-detection model will ultimately need to combine multiple techniques rather than rely on a single method. The NFI is therefore developing a second dataset of deliberately ‘harder’ images to test the algorithm under extreme scenarios [1]. The institute also uses the unique, invisible ‘fingerprints’ of digital cameras, created by measurable deviations in the sensor; these camera fingerprints are compared against reference recordings to verify a video’s origin [1]. By combining heartbeat detection, power-grid frequency analysis, and camera fingerprints, the NFI arrives at a more reliable assessment of deepfakes [1].
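Camera-fingerprint matching of this kind is commonly based on the sensor’s fixed noise pattern (photo-response non-uniformity, PRNU). The toy version below is an illustrative sketch with a simulated sensor pattern, not the NFI’s pipeline: it estimates a fingerprint by averaging noise residuals from reference images and then correlates a query image’s residual against it.

```python
import numpy as np

def noise_residual(img):
    """High-pass residual: each pixel minus the mean of its 3x3 neighbourhood."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    local = sum(padded[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3)) / 9.0
    return img - local

def corr(a, b):
    """Normalised cross-correlation between two residual patterns."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(0)
pattern = rng.normal(0.0, 2.0, (64, 64))  # simulated fixed sensor pattern

# Fingerprint: average the residuals of many reference shots from one camera
refs = [rng.normal(128.0, 10.0, (64, 64)) + pattern for _ in range(40)]
fingerprint = np.mean([noise_residual(r) for r in refs], axis=0)

same = rng.normal(128.0, 10.0, (64, 64)) + pattern   # same sensor
other = rng.normal(128.0, 10.0, (64, 64))            # different sensor
print(corr(noise_residual(same), fingerprint) >
      corr(noise_residual(other), fingerprint))      # → True
```

Averaging suppresses the scene content while the fixed sensor pattern survives, so a recording from the same camera correlates noticeably more strongly with the fingerprint than one from a different camera.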

The Future of the Fight: From Reaction to Anticipation

De Wit stresses that the goal is not merely to react to new developments, but to anticipate them. If deepfakes in the future must also contain a lifelike pulse signal, their creation will become significantly more expensive and complex, thereby reducing the risk of their use for criminal purposes [1]. The technique demonstrates that even the most realistic fake videos can be tested against an undeniable biological marker: life itself. This development represents a significant step in the ongoing arms race between AI creation and detection, where forensic researchers strive to keep pace [1].

Sources