
New Course Helps Recognise Fake News with AI

2025-10-24 fake news

Amsterdam, Friday, 24 October 2025.
DNA Next has launched a new course focused on developing digital skills, including recognising fake news and understanding AI. The course aims to improve media literacy and promote safe online behaviour. Through practical examples and interactive sessions, participants learn how to assess the reliability of information and spot fake news.

Course offering and DNA Next’s aim

DNA Next offers a corporate course ‘Digital Skills — Start’ that helps organisations work safely online and improve media literacy; the training is bespoke, price on request, and participation is arranged via the business contact information on the site [1].

Why a course on fake news and AI is relevant now

The combination of rapidly evolving generative AI and widely shared social media makes it easier to produce and quickly spread misleading messages and deepfakes; this increases demand for training in source evaluation and safe online behaviour, a theme also reflected in journalistic discussions and education offerings [3][1].

Concrete examples of AI-driven deception

Recent cases show how AI is used to make deception more believable: according to investigative reports, deepfake videos were used to simulate political messages (for example: a fake video of an Irish presidential candidate circulated online) and large-scale fraud campaigns used advanced deepfakes and personalised spoofing to financially exploit victims [4][5].

Small, viral misinfo examples that illustrate practice

Popular media podcasts and social feeds demonstrate how difficult it is in daily life to separate fact from fiction: radio and entertainment shows regularly discuss viral stories that later turn out to be partly or entirely fabricated, underlining the need for public training in verification [7][4].

How AI helps detect and fight fake news

Research shows that modern multimodal, explainable AI models can deliver significant gains in recognising misinformation. A recent peer-reviewed article describes HEMT-Fake, a hybrid explainable multimodal transformer that improved Macro‑F1 scores, showed resilience against adversarial paraphrasing and AI‑generated misinformation, and produces explanations that can support fact-checkers [2].
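The core idea of explainable detection is that a model should report not just a verdict but the cues behind it. The minimal sketch below is purely illustrative and is not the HEMT-Fake model: it uses a hypothetical keyword-weight table to score a text and returns the cues that fired as its "explanation".

```python
# Illustrative sketch only: a toy explainable misinformation scorer.
# All keywords and weights are hypothetical examples, not taken from
# any real detection model such as HEMT-Fake.

SUSPICION_WEIGHTS = {
    "shocking": 2.0,                        # emotionally charged language
    "share now": 3.0,                       # urgency / viral lever
    "they don't want you to know": 3.0,     # conspiracy framing
    "100% proven": 2.5,                     # absolute, unverifiable claim
}

def score_text(text, threshold=3.0):
    """Return (is_suspicious, explanation) for a piece of text.

    The explanation lists which cues fired, mimicking the idea that a
    detector should tell fact-checkers *why* it flagged something.
    """
    lowered = text.lower()
    fired = {kw: w for kw, w in SUSPICION_WEIGHTS.items() if kw in lowered}
    total = sum(fired.values())
    return total >= threshold, {"score": total, "cues": sorted(fired)}

flag, why = score_text("Shocking cure, 100% proven! Share now!")
```

A real multimodal system would learn such signals from text, images and audio jointly, but the contract is the same: a decision plus human-readable evidence.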

Limitations and risks of AI detection

Although AI detection models are making progress, limitations remain: models can be biased, struggle with low‑resource languages and can be circumvented by sophisticated attacks. Moreover, experts warn that relying on automated detection without human verification carries risks of false positives and false negatives; model performance depends on the dataset, the language and the attacker, so human validation remains necessary [2].

Implications for media literacy and democracy

When misinformation can circulate at scale, it harms informed decision‑making, the quality of public debate and trust in institutions; this issue also features on political agendas that emphasise strengthening the press and combating disinformation in policies and digital skills programmes [3][8].

Practical, immediately applicable tips to spot fake news

1. Check the source: look for the original publication and contact details, and be cautious with content that has no traceable origin [1][3].
2. Compare multiple independent sources before trusting or sharing a story [3].
3. Scrutinise audio and video: watch for unnatural facial movements, lip‑sync issues and odd lighting (deepfake signals), and verify against other footage or press statements [4][2].
4. Beware of unusual urgency or emotionally charged language that encourages sharing; this is a common viral lever for misinformation [3].
5. Use tools and training: take courses or workshops on AI and verification practices to build skills (comparable offerings exist in education and library workshops on AI and source verification) [6][1].
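The checklist above can be sketched as a small scoring helper. This is an illustrative sketch only; the check names and labels are assumptions invented for the example, not part of any published verification standard.

```python
# Illustrative only: a tiny verification checklist for a claim,
# mirroring the manual tips above. Check names are hypothetical.

CHECKLIST = [
    ("has_original_source", "traceable original publication found"),
    ("independent_confirmations", "confirmed by independent sources"),
    ("media_verified", "audio/video checked against other footage"),
    ("no_urgency_pressure", "no unusual urgency or emotional pressure"),
]

def verify_claim(checks):
    """Return (verdict, unmet) given a dict of boolean check results.

    Any unmet check keeps the claim in the "share with caution" bucket,
    reflecting that these tips are cumulative, not alternatives.
    """
    unmet = [label for key, label in CHECKLIST if not checks.get(key, False)]
    verdict = "share with caution" if unmet else "passes basic checks"
    return verdict, unmet

verdict, todo = verify_claim({
    "has_original_source": True,
    "independent_confirmations": False,
    "media_verified": True,
    "no_urgency_pressure": True,
})
```

The point of the sketch is the workflow, not automation: each unmet item is a concrete next step for a human verifier.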

How courses like DNA Next’s can help in practice

Corporate courses that combine practical examples and interactive sessions can make participants more skilled in source checking, recognising manipulation and responsible sharing; such training supports both individual employees and organisations in building resilience against misinformation and in formulating internal guidelines for online communication [1][3].

Sources