Research: People Struggle to Identify Deepfake Audio
Amsterdam, Sunday, 3 August 2025.
A recent study from University College London finds that people correctly identify deepfake voices only 73% of the time, even after training. More than a quarter of deepfake voices go completely undetected, which has serious implications for the detection of fake news and disinformation. The study underscores the need for new technologies and training to identify these forgeries.
Research on Deepfake Audio
According to the study, participants correctly identified only 73% of the deepfake voices presented to them, even after training. In other words, more than a quarter of the deepfake voices went completely undetected [1]. The researchers did not use the most advanced speech-synthesis technology available, suggesting that the share of undetectable deepfakes in real-world scenarios is likely higher [1]. Kimberly Mai, one of the researchers, noted that participants relied primarily on intuition and subjective cues such as naturalness and robotic tone when making their decisions [1].
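As a rough illustration of the statistical wiggle room behind a headline figure like 73%, the sketch below computes a Wilson score confidence interval for a detection rate. It is not taken from the study itself; the trial count used here is a hypothetical placeholder.

```python
# A minimal sketch (not from the UCL study) showing how to put an error
# bar on a reported detection rate such as 73%. The sample size below is
# a hypothetical placeholder, not a figure from the paper.
from math import sqrt

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - margin, centre + margin

# Hypothetical example: 365 correct calls out of 500 trials (73%).
low, high = wilson_interval(365, 500)
print(f"73% detection rate, 95% CI: [{low:.1%}, {high:.1%}]")
```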
Implications for Media Literacy and Democracy
These findings have serious implications for the detection of fake news and disinformation. In an era where information spreads quickly and widely, undetectable deepfake audio can fuel significant misinformation. Kimberly Mai emphasised that collective judgements, such as seeking others' opinions, can still help in identifying deepfakes [1]. It is also important to compare a clip against verified reference audio whenever its authenticity is in doubt [1].
Automated Detection
While automated deepfake detectors performed somewhat better than humans, they were still not reliable enough to be used as trusted tools [1]. It is therefore crucial that governments and organisations develop robust rules and policies in this area. Technology companies such as Keysight, recently recognised by Frost & Sullivan for its work in 6G testing and measurement, play a vital role in developing advanced detection technologies [2].
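To make "automated deepfake detector" concrete, here is a deliberately minimal sketch of the basic idea: summarise each clip with averaged MFCC features and fit a linear classifier. This is not the detector evaluated in the study, and real systems use far richer models; the file paths, labels, and choice of libraries (librosa, scikit-learn) are assumptions for illustration only.

```python
# A minimal sketch of the kind of automated detector the article alludes
# to: average MFCC features per clip, then a linear classifier. Real
# deepfake detectors are far more sophisticated; this is illustrative.
# File paths and labels below are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str, sr: int = 16000) -> np.ndarray:
    """Load a clip and summarise it as the mean of its MFCC frames."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical training data: paths to known-real and known-fake clips.
real_paths = ["real_01.wav", "real_02.wav"]
fake_paths = ["fake_01.wav", "fake_02.wav"]

X = np.stack([clip_features(p) for p in real_paths + fake_paths])
y = np.array([0] * len(real_paths) + [1] * len(fake_paths))

clf = LogisticRegression(max_iter=1000).fit(X, y)
prob_fake = clf.predict_proba(X)[:, 1]  # per-clip probability of being fake
```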
Practical Tips for Readers
To reduce the risk of being misled by deepfake audio, readers can apply the following practical tips [1][2]:
- Consult Multiple Sources: Always verify information by consulting multiple reliable sources.
- Check Reference Audio: Compare the clip with verified reference audio to confirm its authenticity (a rough programmatic version of this check is sketched after this list).
- Collaborate: Seek others’ opinions to get objective feedback.
- Improve Media Literacy: Participate in training and workshops to better recognise signs of forgery.
- Stay Informed: Keep up-to-date with the latest developments in deepfake detection technologies.
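For the "check reference audio" tip, one crude programmatic variant is sketched below: compare averaged MFCC fingerprints of a suspect clip and a verified recording of the same speaker using cosine similarity. This is a screening heuristic, not a forensic method; the file names are hypothetical, and any similarity threshold you might apply would need to be calibrated.

```python
# A naive sketch of the "check reference audio" tip: compare a suspect
# clip against a verified recording of the same speaker via cosine
# similarity of averaged MFCCs. A crude heuristic, not a forensic tool;
# the file names below are hypothetical.
import numpy as np
import librosa

def voice_fingerprint(path: str, sr: int = 16000) -> np.ndarray:
    """Summarise a clip as the mean of its MFCC frames."""
    audio, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20).mean(axis=1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

suspect = voice_fingerprint("suspect_clip.wav")       # hypothetical path
reference = voice_fingerprint("verified_speech.wav")  # hypothetical path
similarity = cosine(suspect, reference)
print(f"similarity: {similarity:.2f} (low values warrant extra scrutiny)")
```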