
Why AI-Generated Deepfakes Will Be Criminalised in America

2025-12-09 fake news

Washington, Tuesday, 9 December 2025.
A new legislative proposal in the US aims to criminalise the use of AI-generated deepfakes for fraud, identity theft, and political manipulation. It is a direct response to the increasing realism of fake videos, audio recordings, and messages that are nearly indistinguishable from authentic content. What stands out most is that the proposal targets not only fraud but also the impersonation of public officials, a notably strong measure signalling that AI-driven deception is now seen as a serious danger to democracy. This step could set a new benchmark for how societies manage synthetic media.

A New Threat in the Digital Sphere: AI-Generated Deepfakes

The speed at which artificial intelligence (AI) can produce fake videos, audio recordings, and messages has blurred the line between truth and fiction. According to a recent warning from Google, scammers have been actively using AI-based fraud since November 2025, impersonating real employers, technology companies, and even government agencies [4]. These forgeries are so realistic that they no longer display the classic telltale signs of inauthentic content, such as unnatural hands or inconsistent lighting, that were previously reliable indicators of fake material [1]. Instead, AI-generated imitations are now used to mislead individuals into applying for fake jobs, downloading malicious apps, or transferring money. The threat is significant enough that the US federal government is now introducing legislation that would make the creation and distribution of such false content a criminal offence, particularly when used for financial fraud, identity theft, or political manipulation [1]. Notably, the proposal explicitly brings the impersonation of public officials under the law, a legal move signalling that AI-driven deception is now regarded as a direct threat to democracy [1].

The new legislative proposal, known as the AI Fraud Deterrence Act, seeks to update existing laws to reflect the modern reality of AI-generated forgeries. Rather than focusing solely on traditional forms of fraud, the definition of ‘fraud’ is being expanded to explicitly include the use of AI for deception [1]. Penalties are to be strengthened: the maximum fine for AI-assisted fraud would double, and the law would establish clear criminal liability for deepfake imitations of public officials, whether in audio or video form [1]. This measure is not a reaction to a single incident, but an integrated strategy to address the growing crisis of public trust in digital information. The government aims not only to limit abuse but also to restore confidence in digital media and democratic information systems [1]. This legal approach is a direct response to the increasing complexity of online deception, where technology companies, creative studios, and online platforms are coming under scrutiny for how AI is deployed [1].

AI as a Weapon in the Hands of Criminals, and a Shield in the Hands of Security Teams

While AI is being used as a tool for fraud, it is increasingly being deployed to combat that same fraud. Financial institutions in Europe and the US are now implementing a layered AI strategy comprising three levels: predictive, generative, and agent-based AI [2]. Predictive AI uses historical transaction and behavioural data to detect anomalies, reducing the volume of false alerts by up to 80% [2]. Generative AI synthesises complex data from multiple sources to accelerate investigations, while agent-based AI, such as the Sensa Summary Agent, can execute end-to-end workflows, including summarising investigations, conducting web research, and generating investigative reports such as Suspicious Activity Reports (SARs) [2]. According to Marcus, Group Head of Compliance at a European bank with €350 billion in assets, the old method of manually stitching data together, which took weeks, can now be completed in minutes with AI assistance [2]. The technology enables three times faster customer onboarding and can process millions of records within minutes, producing audit-ready reporting [2]. This demonstrates that AI is not merely a threat but also a powerful weapon in the fight against financial crime, especially in an era of increasingly sophisticated cyber threats [2].
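For the technically inclined, the sketch below gives a rough feel for what the 'predictive' layer does: it learns an account's normal transaction behaviour from history and flags outliers. This is a minimal illustrative sketch using the open-source scikit-learn library, not a description of Sensa's or any bank's actual system; the two features, the synthetic data, and the 1% outlier rate are all assumptions made for the demonstration.

    # Minimal sketch of a predictive anomaly-detection layer (illustrative only).
    # Assumptions: two toy features per transaction and a 1% outlier rate;
    # real bank systems use far richer behavioural features.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=0)

    # Synthetic history for one account: [amount_eur, hour_of_day].
    history = np.column_stack([
        rng.normal(80, 25, 1_000),   # typical spend around EUR 80
        rng.normal(14, 3, 1_000),    # mostly daytime activity
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(history)

    # Score incoming transactions: -1 means anomalous, 1 means normal.
    new_tx = np.array([
        [75.0, 13.0],     # consistent with history
        [9500.0, 3.0],    # large transfer at 3 a.m.
    ])
    print(model.predict(new_tx))  # expected: [ 1 -1] -> second one is flagged

A real deployment would feed the flagged transactions into the generative and agent-based layers the article describes, rather than printing them; the point here is only that 'predictive AI' boils down to learning what normal looks like and surfacing deviations.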

The Danger of Fake News: How AI Undermines the Media Ecosystem

The spread of AI-generated content has had a profound impact on trust in the media and in democratic information structures. A Pew Research Center report published in December 2025 shows that public trust in the US federal government has reached a historic low: only 22% of the population trusts the government, the lowest figure in nearly seven decades of polling [5]. This decline in trust creates fertile ground for misinformation, particularly when AI is used to disseminate false messages about political events or government policies [5]. In this climate, fake job postings, review extortion, and fraud-recovery scams are increasingly being used to deceive individuals [4]. For example, fraudsters impersonate official agencies and promise to recover stolen funds, while in reality demanding an upfront payment [4]. Such scam operations have been documented in databases like that of Cryptolegal.uk, which on 7 December 2025 published a list of over 336 reported domains linked to crypto and investment fraud [3]. These websites often follow recognisable scam-related domain patterns, such as 'recovery scam', 'clone', or 'fake email', and are frequently targeted at users in the US, Hong Kong, and China [3]. The combination of technological realism and declining public trust makes it increasingly difficult for readers to distinguish genuine information from fake news [5].

What Can the Individual Do? Practical Tips to Spot Fake News

To protect themselves from AI-driven deception, it is crucial that citizens develop media literacy. Google advises individuals to apply only through verified company websites and never to pay in advance for 'training' or 'processing' related to job postings [4]. When downloading apps, it is important to use only official app stores or verified domains, and to avoid deals that seem too good to be true [4]. For unexpected delivery notifications or emails demanding money transfers, it is essential to verify the sender's authenticity immediately: these often come from fake email addresses or domains that mimic legitimate ones [4]. The website Cryptolegal.uk offers a Scam Check service via https://www.cryptolegal.uk/scam-check-services, allowing users to check whether a website or domain appears in its fraud database [3]. Individuals can also use Ctrl+F (or Command+F on a Mac) to search for a specific domain name within the published list of over 300 scam websites [3]. Finally, it is important to know that legitimate government agencies never charge fees to initiate a fraud investigation, and that genuine AI tools never request unnecessary access to phone files or cameras [4]. By following these simple steps, individuals can form a first line of defence against the rise of AI-generated forgeries [4].
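For readers comfortable with a few lines of code, the Ctrl+F check can also be scripted. The sketch below looks a suspicious link up in a locally saved copy of such a scam-domain list. The file name scam_domains.txt is hypothetical (one domain per line, for example pasted from a published list), and the example domain is invented; this is not an official Cryptolegal.uk API or endpoint.

    # Scriptable version of the Ctrl+F check: is this domain on a scam list?
    # Assumption: "scam_domains.txt" is a local file with one domain per line,
    # e.g. saved from a published fraud database. Not an official API.
    from urllib.parse import urlparse

    def normalise(url_or_domain: str) -> str:
        """Reduce a URL or bare domain to a lowercase hostname without 'www.'."""
        host = urlparse(url_or_domain).netloc or url_or_domain
        return host.lower().removeprefix("www.")

    def is_listed(url_or_domain: str, list_path: str = "scam_domains.txt") -> bool:
        """Return True if the domain appears in the locally saved scam list."""
        with open(list_path, encoding="utf-8") as f:
            listed = {normalise(line.strip()) for line in f if line.strip()}
        return normalise(url_or_domain) in listed

    if __name__ == "__main__":
        # Hypothetical example domain, used purely for illustration.
        print(is_listed("https://www.example-recovery-clone.com/claim"))

Normalising both the list entries and the input matters: scam databases mix bare domains and full URLs, and a naive string search would miss a match that differs only by a 'www.' prefix or an appended path.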

Sources