AIJB

International cooperation crucial for effective regulation of deepfakes

2025-09-18 fake news

Amsterdam, Thursday, 18 September 2025.
The online magazine Netkwesties discusses the challenges and possible solutions for regulating deepfakes. Researchers point to the need for international cooperation and faster enforcement, as rapid technological developments are outpacing current regulation. The European Union and the United States have already introduced new legislation, but no concrete international working group has yet been assembled to address the challenges.

Why regulating deepfakes is an international task

The online magazine Netkwesties notes that the rapid development of deepfake technology makes regulation complex and that national legislation is easily circumvented by cross-border production and distribution, making international cooperation crucial for effective enforcement [1]. Researchers emphasise that enforcement, detection and the attribution of responsibility must operate across borders to prevent measures in one jurisdiction from having little effect on content created or hosted elsewhere [1][6].

How AI is used in the spread of fake news

Modern generative AI makes it cheap and scalable to produce and personalise realistic audio, video and text, enabling disinformation to be spread faster and in a more targeted way; this mechanism is explicitly identified as a driving factor behind mass and personalised disinformation campaigns [6][1]. Examples in the literature and journalistic analyses show that AI‑generated audio and deepfake videos have already been deployed around elections and political events to undermine trust and manipulate public opinion [6][1].

Concrete policy responses: EU, US and national initiatives

Netkwesties reported that the European Union has proposed a new regulatory framework to tackle deepfakes, and that the United States earlier this summer enacted legislation criminalising the use of deepfakes with deceptive intent — policy steps that illustrate that both regulatory and criminal instruments are being considered and implemented [1]. In addition, Dutch public debate has discussed setting up a national deepfake monitoring system within three months of mid‑September 2025; the status of that plan is still unclear and requires progress monitoring [1][alert! ‘Netkwesties mentions the plan but the current implementation status is not confirmed in the available text’].

How AI helps detect and combat fake news

AI is not only used to produce fake news but also to detect it: machine‑learning models can identify manipulations by flagging pixel artefacts, inconsistent lip‑sync, and anomalies in audio frequencies; sources highlight this dual role of AI as both weapon and counter‑weapon in the information battle [6][1]. Netkwesties also notes that research and detection tools are crucial for enforcement, but that the pace of technological development makes it difficult for detection methods to keep up with new generation techniques [1][6].
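As an illustration of the pixel‑artefact approach described above, the sketch below scores video frames by their high‑frequency residual energy and flags outliers. This is a deliberately simplified heuristic for didactic purposes, not any detector named in the sources; the threshold, the box‑blur residual, and the function names are illustrative assumptions, and real forensic systems use trained models rather than a single statistic.

```python
import numpy as np

def high_freq_energy(frame: np.ndarray) -> float:
    """Crude manipulation heuristic: mean squared residual after
    subtracting a local average. Sudden spikes in high-frequency
    energy can accompany splicing or generative upsampling artefacts.
    Illustrative stand-in for real forensic models."""
    f = frame.astype(np.float64)
    # Cheap 5-point local average (greyscale frame assumed; edges wrap).
    blurred = (
        f
        + np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
        + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1)
    ) / 5.0
    residual = f - blurred
    return float(np.mean(residual ** 2))

def flag_suspicious(frames, threshold=50.0):
    """Flag frames whose residual energy deviates strongly from the
    clip's median score; the threshold is an illustrative assumption."""
    scores = np.array([high_freq_energy(fr) for fr in frames])
    median = np.median(scores)
    return [i for i, s in enumerate(scores) if abs(s - median) > threshold]
```

A heuristic like this only illustrates the principle the researchers describe: detection looks for statistical traces that generation pipelines leave behind, which is also why detectors must be retrained as generation techniques evolve.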

Examples of societal risks

The literature and journalistic reports point to concrete risks: deepfakes can damage reputations, facilitate financial fraud, and erode public trust in the media and political institutions — effects that undermine democratic decision‑making and social cohesion [6][1]. Netkwesties warns that protecting deepfakes through neighbouring rights (for example as commercial interests) may be short‑sighted and could obstruct prioritising enforcement and public safety [1].

Implications for media literacy

The rise of convincing AI generation requires a higher level of media literacy: citizens must learn to recognise technical and contextual signs of manipulation, verify sources and scrutinise claims before sharing — Netkwesties emphasises that education and public information should be part of any regulatory approach [1][6].

Practical tips to recognise fake news and deepfakes

Read critically and check immediate evidence:

1) Verify the original source and publication date.
2) Watch for visual artefacts such as unnatural facial movements, inaccurate lip‑sync or inconsistent lighting.
3) Compare audio/video with reliable archival recordings.
4) Use fact‑checking services and look for multiple independent confirmations.
5) Be cautious about sharing sensational media without verification.

These recommendations align with analyses warning about the scale and speed of AI‑driven disinformation and the need for detection tools and education [6][1][alert! ‘The precise technical detection methods are technologically complex and evolve rapidly; these tips are practical rules of thumb and not a substitute for forensic analysis’].

The role of platforms, insurers and security experts

Platforms play a central role in removal and labelling, but Netkwesties and other analyses note that commercial interests and international hosting make uniform enforcement difficult to achieve [1][6]. Insurers and cybersecurity analysts also see growing demand for protection against digital attacks and misuse of personal data, illustrating that technical and market mechanisms (such as cyber insurance) are part of the broader ecosystem around digital risks [3][7].

Why international cooperation and monitoring systems are necessary

As deepfakes are produced and distributed across borders, experts point to the need for joint monitoring, shared detection tools and standardised legal definitions to enable effective enforcement; Netkwesties and academic sources name international coordination as a precondition for keeping regulation from being merely symbolic [1][6]. At the same time, those same sources note that no concrete international executive body has yet been established to oversee development and enforcement at a global scale [1][6][alert! ‘Publications call for international teams but do not confirm an operational, permanent intergovernmental executive body as of the available sources’].

Sources