Republican Candidate Holds Controversial Debate with AI Version of Opponent
Richmond, Friday, 24 October 2025.
John Reid, the Republican candidate for lieutenant governor in Virginia, organised a YouTube debate featuring an AI-generated version of his opponent, Ghazala Hashmi. The move, which used a ‘deepfake’ rendition of Hashmi, highlights the growing role and ethical implications of AI in political campaigns. Although Hashmi’s campaign denounced the stunt as a ‘shallow gimmick’, experts warn this may be only the start of a new phase in political communication using AI.
The incident in Virginia: what happened
Reid hosted a 40-minute YouTube debate against an AI-generated version of his opponent, Democratic state senator Ghazala Hashmi; campaigns and media described the video as a ‘deepfake’ and a ‘mock debate’ [1][2][3][4][5].
How the AI version of Hashmi was created
Reid’s campaign compiled responses for the AI version of Hashmi from previously published interviews, material on her campaign website and, according to reporting, text from far-right sites; the result was a set of robotic, composite answers used in the broadcast [1][5][4].
Public reactions and political repercussions
Hashmi’s campaign called the stunt a ‘failed use of deepfakes’ and ‘a desperate move’, the Democratic Party of Virginia declared the AI version the winner, and critics pointed to the lack of consent for the use of her likeness. Experts and campaign staff warned that such stunts raise questions about transparency and ethics during campaign season [1][2][3][5].
Legal framework and policy responses
The incident renewed attention on HB2479, a bill that would have required disclosure of AI use in political advertisements. Governor Glenn Youngkin vetoed the bill, citing enforceability and constitutional concerns, a sign that legal responses to AI-generated political content remain contentious and incomplete [1][5].
Concrete examples of AI in political misinformation
AI has been used before to create politically misleading or sensational images and videos: recent examples include a viral AI video featuring a well-known politician and deepfakes of candidates abroad, illustrating how quickly such tools can be deployed to influence campaigns [6][1].
AI as both weapon and defence against fake news
AI expands both the production of misleading content (rapid deepfakes, automated disinformation campaigns) and the capacity to detect it (deepfake detection algorithms, metadata analysis and watermarking). Media organisations and academics are developing detection tools, but their effectiveness varies by technique and is continually challenged by faster generation models [6][1][3][5][alert! ‘the precise effectiveness of detection tools varies by model and dataset and is constantly evolving, so this is an area of high uncertainty’].
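To make the ‘metadata analysis’ idea concrete, the minimal Python sketch below scans an image file for generator tags that some AI tools embed in EXIF fields or PNG text chunks. It is illustrative only: the Pillow package and the keyword list are assumptions, the filename is hypothetical, and a clean result proves nothing, since metadata is easily stripped or forged.

```python
# Minimal sketch: look for traces of an AI generator in an image's
# embedded metadata. Some tools write their name into EXIF "Software"
# or into PNG text chunks; absence of a tag is NOT proof of authenticity,
# since metadata is trivially stripped or forged.
# Assumes Pillow is installed (pip install pillow).
from PIL import Image, ExifTags

# Hypothetical keyword list; real generators vary in what they embed.
SUSPECT_KEYWORDS = ["stable diffusion", "midjourney", "dall-e", "generated"]

def scan_metadata(path: str) -> list[str]:
    """Return metadata entries that mention a suspect keyword."""
    img = Image.open(path)
    findings = []

    # Format-specific metadata (e.g. PNG text chunks) lands in img.info.
    for key, value in img.info.items():
        if any(k in f"{key}={value}".lower() for k in SUSPECT_KEYWORDS):
            findings.append(f"{key}: {value}")

    # EXIF tags (e.g. "Software") for JPEG/TIFF files.
    for tag_id, value in img.getexif().items():
        name = ExifTags.TAGS.get(tag_id, str(tag_id))
        if any(k in str(value).lower() for k in SUSPECT_KEYWORDS):
            findings.append(f"{name}: {value}")

    return findings

if __name__ == "__main__":
    hits = scan_metadata("campaign_clip_frame.png")  # hypothetical file
    print(hits or "No generator tags found (not proof of authenticity).")
```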
Implications for media literacy and democracy
The Virginia example shows how deepfakes can undermine trust in political discourse: when voters cannot be sure whether a video is real or synthetic, the shared factual basis that supports democratic debate fragments. Experts warn that, without clear norms and rules, AI-produced political content could become even more common and realistic by 2026 [6][1][alert! ‘prediction based on experts from sources — no certainty about the extent and timing of increase’].
Practical tips to recognise fake news and deepfakes
1) Check the source: reliable news organisations and official campaign communications are preferable; unknown channels or accounts with little history require caution [2][3].
2) Look for unnatural imagery and audio: stuttering lip movements, ‘robotic’ voices and unnatural lighting are warning signs of deepfakes [6][1].
3) Verify quotes and context: compare statements with earlier, well-documented interviews or press releases from the person themselves [5][1].
4) Look for disclaimers or disclosures of AI use; in many cases that transparency is still missing despite legislative debate [1][5][3].
5) Use fact-checkers and reverse image/video search to see whether content circulated previously or has been manipulated (see the sketch below) [GPT].
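As a concrete illustration of tip 5, the short Python sketch below uses a perceptual hash to check whether a suspect frame matches a previously published image, the same principle reverse image search relies on. This is a sketch under stated assumptions, not a verification tool: the third-party imagehash package and Pillow are assumed, and the filenames and distance threshold are hypothetical.

```python
# Minimal sketch: compare two images with a perceptual hash, so that
# re-encoded or lightly edited copies of the same picture still match.
# Assumes Pillow and imagehash are installed (pip install pillow imagehash).
from PIL import Image
import imagehash

def looks_like_same_image(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """True if the two images are perceptually near-identical."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    # Subtraction gives the Hamming distance between 64-bit hashes;
    # a small distance means visually similar content.
    return (hash_a - hash_b) <= max_distance

if __name__ == "__main__":
    # Hypothetical filenames: a frame from a viral clip vs. a frame from
    # a previously published, well-documented interview.
    if looks_like_same_image("viral_clip_frame.jpg", "interview_frame.jpg"):
        print("Frames match previously published imagery.")
    else:
        print("No match; the frame may be new or heavily altered.")
```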
What this means for journalists and voters
Journalists and news consumers must tighten verification procedures and actively demand transparency from campaigns and platforms. There is also growing pressure on lawmakers to craft enforceable rules without infringing free speech, a balance that has so far produced political friction and vetoes [1][5][3].