Fontys Tilburg Journalism Students Share Views on AI and Reliability
Tilburg, Friday, 10 October 2025.
Students from the Journalism programme at Fontys Tilburg share their thoughts on the impact of AI on their field. Despite the challenges of fake news and social media, they remain optimistic about the future of journalism. Liz Heeren and Fiene Lehmann emphasise the need to use AI as a tool and to preserve core journalistic functions, while acknowledging the growing role of social media in news distribution.
Introduction: why one application of AI takes centre stage
A striking application of artificial intelligence (AI) in contemporary journalism is the rise of ‘virtual humans’, or digital humans, deployed as presenters, interview partners or interactive sources in media productions. This technology is explicitly presented and examined during Dutch Design Week 2025 through Fontys projects that showcase digital humans and virtual doppelgangers, raising questions about trust and empathy in media [2][6].
The technology: what are ‘virtual humans’?
Virtual humans are composite systems that combine synthetic speech, deep‑learning‑driven facial animation and dialogue powered by language models and multimodal AI. Fontys describes projects at DDW that showcase digital doppelgangers and realistic AI personas, making the underlying combination of image, speech and text generation visible [2][6].
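To make that composition concrete, here is a minimal illustrative sketch in Python, assuming hypothetical placeholder components (the names are invented for explanation and are not part of any Fontys or DDW project): a language model drafts a reply, a speech synthesiser renders it as audio, and an animation step maps the audio to facial keyframes.

    # Illustrative only: the components below are stubs standing in for a language
    # model, a text-to-speech engine and a facial-animation model.
    from dataclasses import dataclass

    @dataclass
    class VirtualHumanTurn:
        text: str        # reply drafted by the language model
        audio: bytes     # synthetic speech rendered from that reply
        keyframes: list  # facial-animation parameters derived from the audio

    def generate_reply(prompt: str) -> str:
        return f"(model reply to: {prompt})"  # stand-in for a language-model call

    def synthesise_speech(text: str) -> bytes:
        return text.encode("utf-8")           # stand-in for text-to-speech output

    def animate_face(audio: bytes) -> list:
        return [len(audio)]                   # stand-in for audio-driven keyframes

    def run_turn(prompt: str) -> VirtualHumanTurn:
        text = generate_reply(prompt)
        audio = synthesise_speech(text)
        return VirtualHumanTurn(text, audio, animate_face(audio))

    print(run_turn("Explain today's top story.").text)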
How this AI is applied in practice in news production
Practical applications range from automated newsreaders and animated infographics to interactive Q&As with a virtual reporter. Fontys projects at DDW demonstrate scenarios in which digital humans operate in education, healthcare and media, showing concrete production formats that news organisations can adopt or test [2][6].
Young journalists and the daily use of AI
Journalism students at Fontys Tilburg indicate that they already use ChatGPT and similar AI tools routinely for information searches and assistance with assignments, and that this behaviour affects where and how they consume and produce news — an observation recorded in multiple Fontys publications and interviews with students [1][4][5].
Benefits for news production and consumption
Benefits of virtual humans and generative AI in news work include scalability (faster production of simple items), accessibility (24/7 interactive explanation via chat interfaces) and personalisation (adapting presentation to audience preferences). Fontys projects particularly illustrate how digital means can make complex topics more accessible [2][6].
Potential drawbacks and risks
Risks include the amplification of disinformation through hyperrealistic but misleading videos or quotes, reduced trust when the origin of a story is unclear, and the danger that readers accept AI‑generated content as human reporting. These concerns are also voiced by first‑year students, who state that social media and AI are the greatest challenges for the profession and make it harder to distinguish real from fake [1][4].
Ethical considerations for newsrooms
Ethical issues include transparency (clearly indicating when an AI entity speaks or has been used), accountability (who is liable for errors or deception?) and verification (additional fact‑checking of AI‑generated sources). These themes are explicitly raised in public debate and in Fontys initiatives that investigate questions of virtual humans and trust [2][6].
How education and practice respond
Within Fontys programmes, AI is treated not only as a threat but also as a tool: students learn verification skills and reflect on the role of social media and AI in news consumption. That this hands‑on journalistic education leads the way is also evident from recent student productions and awards, such as the Fontys News Prize awarded this week to two students for critical investigative work, cited as an example of the enduring value of traditional investigative skills [3][1][5].
Practical guidelines for newsrooms and educational institutions
Recommended steps emerging from the combination of Fontys projects and student experiences are: integrating AI skills into the curriculum (testing and assessing tools), mandatory verification procedures whenever AI sources are used, and clear labelling of AI‑generated content to preserve public trust. These recommendations align with students’ observations and with the themes of Fontys exhibitions that discuss the societal implications of digital humans [1][2][6].
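As a purely hypothetical illustration of the labelling step, a newsroom system could attach machine‑readable disclosure metadata to every item; the field names below are invented for this example and do not reflect a standard adopted by Fontys or any newsroom.

    # Hypothetical disclosure label for an article item; field names are illustrative.
    import json

    item_metadata = {
        "headline": "Example item",
        "ai_assistance": {
            "used": True,
            "components": ["text_draft", "voice_synthesis"],  # which steps involved AI
            "human_review": True,                              # an editor checked the output
            "disclosure": "Parts of this item were produced with AI tools and reviewed by an editor.",
        },
    }

    print(json.dumps(item_metadata, indent=2))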