AI enriches journalism education at Fontys
2025-10-31 journalism

Eindhoven, Friday, 31 October 2025.
Fontys Journalism uses artificial intelligence to enrich the journalism programme, according to educationalist Margreet ter Horst. AI is used for news analysis and the generation of teaching materials, offering students new opportunities to prepare better for the workplace. Ter Horst emphasises that AI does not diminish the profession, but enriches it.

Practice: how Fontys Journalism uses AI within the curriculum

Fontys Journalism uses artificial intelligence in several ways within the programme: for news and data analysis, generating teaching materials and drafting sample texts for students to practise with, according to educationalist Margreet ter Horst of Fontys [1]. Ter Horst states that this use better prepares students for the workplace because routine tasks are automated and space is freed up for deeper journalistic craftsmanship [1].

The technology behind the application

The tools used in education and journalistic practice often run on language models and data-analysis pipelines that can process large volumes of news feeds and documents to generate summaries, search queries and draft texts; at Fontys, AI is explicitly mentioned for data analysis and the generation of teaching materials [1]. Such systems function as a supportive layer: they aggregate and structure information, but require human quality control and educational evaluation to identify errors and bias [1][4].
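
To make the idea of such a supportive layer concrete, the following minimal Python sketch aggregates a few news items, produces a placeholder summary (a simple extractive heuristic standing in for a language model) and flags every draft for human review. All names and data in it are illustrative assumptions, not a description of an existing Fontys system.

# Minimal sketch of an AI "supportive layer" for news analysis.
# All names (NewsItem, summarise, needs_human_review) are hypothetical;
# a real deployment would call a language model instead of the heuristic below.
from dataclasses import dataclass

@dataclass
class NewsItem:
    source: str
    title: str
    body: str

def summarise(item: NewsItem, max_sentences: int = 2) -> str:
    """Stand-in for a language-model summary: keep the first sentences of the body."""
    sentences = [s.strip() for s in item.body.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def needs_human_review(summary: str) -> bool:
    """Every AI-generated draft is flagged for human quality control."""
    return True  # in this sketch, nothing passes without a human check

items = [
    NewsItem("wire", "Council approves budget",
             "The city council approved the 2026 budget. Spending on culture rises "
             "by 4 percent. Opposition parties voted against."),
]

for item in items:
    draft = summarise(item)
    status = "NEEDS HUMAN REVIEW" if needs_human_review(draft) else "ok"
    print(f"[{item.source}] {item.title}\n  draft summary: {draft}\n  status: {status}")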

What this means for news production

In the newsroom AI can take over routine source monitoring and first-draft writing, giving journalists more time for verification, in-depth investigation and contextualisation — tasks for which professionals remain indispensable according to Fontys and external educationalists [1][4]. At the same time, integrating AI into editorial processes requires explicit agreements on role distribution, quality control and ethical review, because the technology can rapidly change working methods [1][4].
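
One way such agreements could be made explicit, purely as an illustration and not as any newsroom's actual system, is to encode the role distribution in the editorial workflow itself. In the Python sketch below, an AI-generated draft cannot be published before a named human has verified it; the stages and field names are assumptions chosen for the example.

# Hedged sketch of "explicit agreements on role distribution and quality control"
# encoded in an editorial workflow. Stages and fields are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Stage(Enum):
    AI_DRAFT = "ai_draft"
    HUMAN_VERIFIED = "human_verified"
    PUBLISHED = "published"

@dataclass
class Article:
    headline: str
    ai_generated: bool
    stage: Stage = Stage.AI_DRAFT
    verified_by: Optional[str] = None

def verify(article: Article, journalist: str) -> None:
    """A named journalist takes responsibility for verification."""
    article.stage = Stage.HUMAN_VERIFIED
    article.verified_by = journalist

def publish(article: Article) -> None:
    # Role-distribution rule: AI drafts may never skip human verification.
    if article.ai_generated and article.stage is not Stage.HUMAN_VERIFIED:
        raise PermissionError("AI-generated draft must be verified by a journalist first")
    article.stage = Stage.PUBLISHED

piece = Article("Fontys integrates AI in journalism curriculum", ai_generated=True)
verify(piece, journalist="desk editor (hypothetical)")
publish(piece)
print(piece.stage, piece.verified_by)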

Effect on education and cognitive engagement

Educational research indicates that using systems like ChatGPT for assignments such as essay-writing can lead to reduced cognitive engagement and lower intellectual effort among students, which can affect the learning process and the conversion of information into long-term knowledge [1]. Fontys teachers therefore stress that AI should be used as a learning tool that challenges students’ thinking rather than replacing it, and that educationalists together with subject experts must remain actively involved in curricula and assessment [1][4].

Impact on news consumption and public trust

AI-generated content can increase the speed and diversity of news consumption, but also carries risks: errors, bias and unintended framing can scale up quickly and put the trust of the public and of colleagues under pressure. International research into professional acceptance of AI shows that even professionals such as doctors experience a ‘competence penalty’ when they are seen to rely heavily on AI, indicating that societal and professional perceptions are crucial for acceptance [2]. This phenomenon illustrates that technological reliability alone is not sufficient; cultural adoption and positioning AI as a supportive partner remain essential [2].

Benefits: efficiency, scalability and practice environments

The benefits of AI for journalism education and news production are concrete: faster searchability of sources, automated summaries that shorten study time, and scalable practice environments where students can train with realistic newsroom scenarios and receive immediate feedback on style and fact-checking skills [1][4]. Such applications can shorten the learning curve and make students workplace-ready for editorial tasks sooner [1].
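
As a purely hypothetical illustration of such immediate feedback, the short Python sketch below checks a practice draft against two simple rules (overlong sentences and unattributed claims). The rules are assumptions chosen for the example, not the feedback criteria Fontys actually uses.

# Illustrative sketch of "immediate feedback on style" in a practice environment.
# The rules (sentence length, hedging words that need a source) are assumptions;
# a real training tool would use richer checks or a language model.
import re

def style_feedback(text: str) -> list[str]:
    feedback = []
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    for s in sentences:
        words = s.split()
        if len(words) > 30:
            feedback.append(f"Long sentence ({len(words)} words), consider splitting: '{s[:40]}...'")
        if re.search(r"\b(reportedly|allegedly|sources say)\b", s, re.IGNORECASE):
            feedback.append(f"Attribution needed, name a verifiable source in: '{s[:40]}...'")
    return feedback or ["No style issues detected by this (very limited) checker."]

draft = ("The council reportedly spent millions on the project. "
         "Residents say they were not consulted.")
for note in style_feedback(draft):
    print("-", note)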

Risks and ethical considerations

Key drawbacks and ethical issues include: diminished critical thinking skills from overreliance on AI, the spread of inaccurate or biased information, and professional stigma when users appear dependent on algorithms [1][2]. For this reason education professionals at Fontys advocate clear guidelines, transparency about the use of AI in coursework and editorial processes, and ongoing human quality control (‘quality assurance is and must remain human work’) [1].

Recommendations for education and editorial practice

Concrete steps that Fontys and similar programmes adopt or recommend are: integrating AI tools into practical assignments with explicit assessment criteria, training in evaluative judgement and information literacy, and collaboration between educationalists and subject experts to ensure both technical and ethical competencies [1][4]. This aligns with calls from the education field to position AI as a supporting partner and not as a replacement for professional judgement [1][2].

Uncertainties and points of attention

There is uncertainty about the long-term effects of large-scale AI use on the development of journalistic skills and on professional cultures within newsrooms; many forecasts and risk assessments are based on ongoing research and predictions, and these long-term effects are not yet sufficiently empirically substantiated [1][2][4].

Sources