
How a simple video scan can save trust in news

2025-12-02

Amsterdam, Tuesday, 2 December 2025.
Imagine watching a video online that isn’t real but an AI-made creation. Reality Defender has launched RealScan, a tool that detects within seconds whether an image, video, or audio clip has been manipulated. Its most striking feature is its ability to spot unnatural eye movements and vocal anomalies; in one test, a photo of three people was flagged as fake with 93% probability. Designed for journalists, governments, and media organisations, RealScan helps identify false content during sensitive moments such as elections. The tool is accessible via a web interface and delivers clear, legally usable reports. In an era of rapidly growing AI-generated fake content, RealScan offers an essential means of protecting truth, without requiring technical expertise.

RealScan: the digital detective tool against AI-generated misinformation

On Tuesday, 2 December 2025, Reality Defender launched RealScan, a web application capable of detecting deepfakes in image, video, and audio within seconds. Designed for journalists, government agencies, and media organisations vulnerable to digital content manipulation, particularly during sensitive events such as elections or disasters [1], RealScan operates through a simple drag-and-drop interface. Users can upload video or audio files, or scan URLs of online content to verify authenticity [1]. The tool analyses uploaded content with multiple AI-powered detection models that identify physical and behavioural inconsistencies, such as unnatural eye movements, irregular lip movements, or deviations in vocal tone and speech patterns [1]. In one test case, a photo of three individuals was flagged with a 93% probability of manipulation, and the analysis detailed which detection methods fired and which regions of the image had been manipulated [1]. An in-depth report is automatically generated, including an authenticity score ranging from low to critical, and can be downloaded for internal or legal use [1].
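
Reality Defender does not document RealScan’s internals, but the workflow described above (several detection models each scoring a file, with the scores combined into one probability and a severity band) can be sketched in a few lines. The sketch below is purely illustrative: the model names, thresholds, and averaging rule are assumptions, not anything published by the company.

```python
# Hypothetical sketch of an ensemble deepfake-scoring workflow, loosely
# modelled on the behaviour the article describes. None of these names or
# thresholds come from Reality Defender; they are illustrative only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class DetectorResult:
    model: str          # e.g. "eye_movement", "lip_sync", "voice_timing"
    probability: float  # model's estimate that the media is manipulated, 0..1
    regions: list[str]  # regions or segments the model flagged

def severity(p: float) -> str:
    """Map an aggregate manipulation probability to a severity band,
    mirroring the low-to-critical scale the report is said to use."""
    if p < 0.25: return "low"
    if p < 0.50: return "medium"
    if p < 0.75: return "high"
    return "critical"

def aggregate(results: list[DetectorResult]) -> dict:
    """Combine per-model scores into one report entry (a simple mean here;
    a real system would weight models by calibration and media type)."""
    p = mean(r.probability for r in results)
    return {
        "manipulation_probability": round(p, 2),
        "severity": severity(p),
        "flagged_by": [r.model for r in results if r.probability > 0.5],
        "regions": sorted({reg for r in results for reg in r.regions}),
    }

# Example mirroring the article's test case: a three-person photo, 93% overall.
report = aggregate([
    DetectorResult("eye_movement", 0.97, ["face_1", "face_3"]),
    DetectorResult("skin_texture", 0.91, ["face_2"]),
    DetectorResult("shadow_consistency", 0.91, ["background"]),
])
print(report)  # {'manipulation_probability': 0.93, 'severity': 'critical', ...}
```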

Accessibility and subscription models for diverse user groups

RealScan is available as a SaaS web platform and requires a registered account, limiting usage to verified users [1]. Two subscription tiers are offered. The ‘Analyst’ package costs $399 per month (approximately €350), or $319 per month (around €275) with an annual subscription [1]. It includes up to 250 scans per month for up to three users, along with chat support and full access to detailed analysis reports [1]. For larger organisations, an Enterprise subscription is available, offering unlimited users, specialised support via Zoom, Teams, or Webex, and a dedicated customer success manager [1]. These models are designed to support both small editorial teams and large governmental institutions in managing risks associated with misinformation [1].
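
For teams weighing the two Analyst billing options, the arithmetic implied by the figures above works out as follows; the per-scan cost assumes the full 250-scan allowance is used each month.

```python
# Illustrative cost comparison for the Analyst tier, using the prices
# quoted above. Per-scan figures assume all 250 monthly scans are used.
monthly_rate = 399       # USD per month, billed monthly
annual_rate = 319        # USD per month, billed annually
scans_per_month = 250

yearly_saving = (monthly_rate - annual_rate) * 12   # $960 per year
discount = 1 - annual_rate / monthly_rate           # ~20% off
cost_per_scan = annual_rate / scans_per_month       # ~$1.28 per scan

print(f"Annual billing saves ${yearly_saving} per year ({discount:.0%} off).")
print(f"Effective cost per scan: ${cost_per_scan:.2f}")
```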

The technology behind RealScan: how AI is used against AI

RealScan’s deepfake detection relies on multiple advanced AI algorithms trained on large datasets of both authentic and manipulated media [1]. The software identifies subtle inconsistencies invisible to the human eye, such as irregular skin textures, unnatural shadow patterns, or inconsistent lip movements during speech [1]. In audio analysis, the tool examines vocal tone, frequency, amplitude, and the timing of sounds for anomalies indicative of AI generation [1]. During its beta testing phase, RealScan employed a combination of models, including those specifically developed to analyse eye movements and detect audio signal timing discrepancies — both critical for identifying deepfakes [1]. Although the technology is highly advanced, the company does not disclose exact details about model architecture or training epochs, which may limit full transparency [1].
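
The audio checks listed above (vocal tone, frequency, amplitude, timing) correspond to standard signal-processing measurements. As a rough illustration of the kind of features involved, not of Reality Defender’s actual models, the following sketch computes a few of them with NumPy; the onset rule and all thresholds are invented for the example.

```python
# Toy illustration of the audio features mentioned above (frequency,
# amplitude, timing). This is generic signal processing, not Reality
# Defender's detection pipeline; the flagging heuristics are invented.
import numpy as np

def audio_features(samples: np.ndarray, sample_rate: int) -> dict:
    """Extract simple spectral and amplitude statistics from a mono clip."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    dominant_hz = freqs[np.argmax(spectrum)]              # strongest frequency
    rms_amplitude = float(np.sqrt(np.mean(samples ** 2)))  # overall loudness
    # Crude timing proxy: gaps between loud peaks (above 4x the RMS level).
    onsets = np.where(np.abs(samples) > 4 * rms_amplitude)[0]
    gaps = np.diff(onsets) / sample_rate if len(onsets) > 1 else np.array([])
    return {
        "dominant_hz": float(dominant_hz),
        "rms_amplitude": rms_amplitude,
        "onset_gap_std": float(gaps.std()) if gaps.size else 0.0,
    }

# Synthetic 1-second clip at 16 kHz: a 220 Hz tone with mild noise.
rate = 16_000
t = np.linspace(0, 1, rate, endpoint=False)
clip = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(rate)
print(audio_features(clip, rate))  # dominant_hz should be ~220
```

A detector would compare such measurements against the distributions seen in authentic recordings; the article indicates RealScan does this with trained models rather than fixed rules.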

The ongoing challenge: the arms race between creation and detection

The rapid evolution of generative AI tools presents a continuous challenge for detection systems like RealScan. While RealScan reported a 93% manipulation probability for one specific test image, it remains unclear whether that performance will hold against new generations of deepfakes produced with improved techniques such as GANs or diffusion models (like those behind Stable Diffusion or OpenAI’s Sora) [1]. OpenAI recently tightened its deepfake policies following criticism from Hollywood, a sign that even major tech companies recognise the risks of AI-generated content [1]. Yet generation tools continue to outpace detection models in development speed. If a new AI technique can simulate natural eye movements or natural-sounding speech without leaving detectable traces, RealScan’s effectiveness could decline significantly [1]. The critical question is whether detection will remain perpetually behind, or whether a low-latency system with continuous updates and real-time feedback can maintain the balance [1].

The role of technological responsibility in public communication

RealScan is designed to preserve trust in information, especially in contexts involving elections, public safety, or legal evidence [1]. The tool generates reports that are legally admissible, which is vital for governments and media organisations required to provide proof against misinformation [1]. It has already been integrated into the workflows of major institutions, such as government agencies tasked with preventing identity fraud or disinformation campaigns [1]. RealScan’s role thus extends beyond technical detection: it contributes to a broader culture of responsibility within the information society. By offering an accessible tool, it lowers the barrier to detection technology for non-experts, which is essential for the wider protection of truth [1].

Sources