Innovative Technique from NFI Detects Deepfakes via Subtle Heartbeat Differences
2025-05-27 Fake news

The Hague, Tuesday, 27 May 2025.
The Netherlands Forensic Institute (NFI) has developed a promising method to assess the authenticity of suspected deepfake videos by using physical characteristics such as the heartbeat. The technique focuses on the minute colour variations that the heartbeat causes in the face; measurement points such as the veins around the eyes, the forehead and the jaw prove particularly effective. Although scientific validation is still ongoing, the research highlights not only the innovative power of forensic science but also the growing need to raise awareness of the influence of deepfakes on media and communication. The discovery could shape future detection technology and legal proceedings, and it offers hope in the fight against misinformation in our digital world. The approach was first considered in 2012; improved imaging techniques now make the required precise measurements possible.
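The NFI has not published the technical details of its method, but the general principle it builds on, often referred to as remote photoplethysmography (rPPG), can be illustrated with a small sketch: the heartbeat causes tiny periodic colour changes in facial skin, and averaging the pixel values of a skin region across many video frames recovers that rhythm. The function below is a hypothetical illustration of this idea, not the NFI's implementation; the region bounds, filter settings and frequency band are assumptions chosen for the example.

```python
# Minimal rPPG-style sketch: recover a heart-rate-like rhythm from the mean
# colour of a facial skin region over time. Illustrative only; parameters are
# assumptions, not the NFI's method.
import numpy as np
from scipy.signal import butter, filtfilt, periodogram


def pulse_signal_strength(frames, roi, fps):
    """Estimate how strongly a heartbeat-like rhythm is present in a video region.

    frames : iterable of HxWx3 RGB frames (numpy arrays)
    roi    : (top, bottom, left, right) bounds of a skin region, e.g. the forehead
    fps    : frame rate of the video (must comfortably exceed 8 fps)
    """
    top, bottom, left, right = roi

    # Mean green-channel value per frame; green carries the strongest pulse signal.
    trace = np.array(
        [frame[top:bottom, left:right, 1].mean() for frame in frames], dtype=float
    )
    trace -= trace.mean()

    # Band-pass filter to the plausible heart-rate range (~42-240 beats per minute).
    b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
    filtered = filtfilt(b, a, trace)

    # Power spectrum: a genuine face recording tends to show a clear peak in this
    # band, while many synthetic faces lack a consistent pulse rhythm.
    freqs, power = periodogram(filtered, fs=fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_bpm = freqs[band][np.argmax(power[band])] * 60.0
    prominence = power[band].max() / (power[band].mean() + 1e-12)
    return peak_bpm, prominence  # estimated heart rate and peak prominence
```

In practice such a trace would be taken from several regions at once (around the eyes, the forehead, the jaw), because a rhythm that is consistent across regions is much harder for a video generator to reproduce; this matches the measurement points mentioned above. A clip of roughly ten seconds at a normal frame rate already provides enough samples for the filter and the spectrum estimate.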

The Role of AI in the Spread and Combat of Fake News

Artificial intelligence (AI) plays a dual role: it is used both to spread fake news and to fight it. On the one hand, generative AI applications such as deepfakes make it possible to produce highly realistic yet misleading content, manipulating images and videos to the point where distinguishing them from authentic material is nearly impossible [1][2]. On the other hand, the same technology is used to detect and counter these misleading practices, as exemplified by the Netherlands Forensic Institute's (NFI) recently developed method for identifying deepfakes by measuring the subtle colour differences caused by the heartbeat [3].

Concrete Examples of AI Use

A striking example of AI being used to spread fake news is the emergence of advanced AI apps that generate realistic videos, such as those recently introduced by Google, which raises concerns about their potential misuse for disseminating misinformation [4]. Conversely, the NFI's innovative method specifically targets deepfakes by focusing on physiological characteristics such as the heartbeat, showing how AI can also be employed to guard against the very threat it helps create [3].

Implications for Media Literacy and Democracy

The rise of AI-generated fake news poses a serious challenge to media literacy and democracy. Because AI can create content that is almost indistinguishable from real footage, the public finds it increasingly difficult to separate fact from fiction, which risks undermining trust in news sources [5]. Effective AI-based countermeasures, such as the NFI's deepfake detection method, can help restore that trust by accurately identifying inauthentic material [3]. It remains crucial that citizens stay critical and become more media literate.

Practical Tips for Recognising Fake News

To arm themselves against fake news, readers can watch for several signals. First, check the source of the information and whether it is reliable. It is also advisable to see whether other reputable news organisations are reporting the same story. Examining the quality of images and videos can help too, as deepfakes sometimes contain subtle errors such as unnatural eye movements or odd shadow patterns [GPT]. Finally, looking at reactions and discussions on social media platforms can provide clues about the credibility of a news item.

Sources