AIJB

How journalists now collaborate with AI without compromising their integrity
2025-11-12 journalism

New York, Wednesday, 12 November 2025.
The Associated Press has introduced new guidelines clarifying that AI-generated texts must always be treated as unverified source material, no matter how polished they appear. The timing is no accident: in May 2025, an American regional newspaper published an AI-written special on a heatwave that contained fabricated quotes and errors. The most striking rule? AP states that AI-generated images may only be published as press photos if the subject itself is an AI, and then only with a clear caption. This preserves transparency without blunting AI’s usefulness: humans remain accountable, but gain a powerful collaborator in writing, editing, and visualising news.

AI as a creative partner in the newsroom

In June 2025, the Italian newspaper Il Foglio conducted an experiment that redefined the boundaries between human and machine creativity. For a month, a four-page supplement was filled entirely with articles written by ChatGPT Pro under the direction of editor-in-chief Claudio Cerasa [1]. Cerasa treated the chatbot not as a replacement but as a colleague producing content on politics, culture, and social issues, with two human editors refining the output [1]. He emphasised that the goal was not to replace human intelligence but to enhance creative processes: ‘Anyone who tries to use AI to replace human intelligence ultimately fails. AI is meant to be integrated, not replaced’ [1].

This model of collaboration exemplifies how generative AI can function as a proactive partner rather than a merely reactive writing tool. OpenAI’s experimental ‘Tasks’ feature, introduced in January 2025, lets users schedule assignments for automated execution, so that ChatGPT can summarise content, monitor information, and send concise updates at set times or under specific conditions [1]. This shift from reactive to proactive could reshape journalistic workflows.
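
What such a scheduled monitoring task might look like can be sketched in a few lines of Python. The sketch below is illustrative rather than the actual ‘Tasks’ feature: it uses the public OpenAI SDK, and the feed URL, model name, interval, and prompt are all assumptions.

```python
"""Minimal sketch of a scheduled news-monitoring task, in the spirit of
the 'Tasks' feature described above. Hypothetical: the feed URL, model
name, interval, and prompt are illustrative, not the Tasks API itself."""
import time
import urllib.request

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEED_URL = "https://example.com/politics.rss"  # hypothetical feed
INTERVAL_SECONDS = 60 * 60                     # check once per hour


def summarise(raw_feed: str) -> str:
    """Ask the model for a short editorial briefing on the feed contents."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("You are a newsroom assistant. Summarise the "
                         "following feed items in three bullet points and "
                         "flag anything that needs human verification.")},
            {"role": "user", "content": raw_feed[:8000]},  # keep prompt small
        ],
    )
    return response.choices[0].message.content


while True:
    with urllib.request.urlopen(FEED_URL) as resp:
        feed_text = resp.read().decode("utf-8", errors="replace")
    print(summarise(feed_text))  # in practice: push to the editor's inbox
    time.sleep(INTERVAL_SECONDS)
```

Note how the human stays in the loop by design: the model only drafts a briefing and flags items for verification, consistent with AP’s rule that AI output is unverified source material.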

Transparency and accountability in the AI era

On 12 November 2025, the Associated Press (AP) introduced new guidelines for the use of generative AI in journalism, stressing that AI-generated texts must always be regarded as ‘unverified source material’ to be checked against journalistic standards [1]. The rules respond directly to incidents such as the May 2025 case in which an American regional newspaper published an AI-written heatwave special containing fabricated quotes and errors, sparking public concern [1]. AP states that AI-generated images may only be published as press photos if the subject itself is an AI, and then only with a clear caption to prevent deception [1]. This requirement is essential for transparency and for maintaining public trust in the media [1]. Final responsibility for content and editorial decisions remains with humans, even when AI is used for writing, editing, or visualising articles [1]. This marks a clear rejection of the idea that AI can replace humans: technological tools remain aids within an ethical framework of accountability.

The risk of deception and the rise of digital fiction

The rapid spread of AI-generated content has produced new forms of digital fiction and deception, blurring the line between reality and simulation. A notable example is the satirical experiment by the Danish government, which on 6 November 2025 announced that its ‘AI Minister’ was pregnant with 83 digital children [2]. Although entirely fictional, the announcement was released as an official press release from the Ministry of Digital Affairs in Copenhagen and attracted international attention [2]. The government used it as a visual metaphor to alert the public to the ethical challenges of AI, such as the legal status of digital entities and the responsibility individuals bear when creating AI-generated content [2]. The 83 digital children represented the total number of AI personas the ministry introduced in 2025, highlighting the growth of automated content and the risks of ‘deepfake’ misuse [2]. Although the event never took place, the experiment illustrates how quickly generative AI can create illusions that appear authentic at first glance, reinforcing the need for media literacy in society [2]. The media’s responsibility to identify and expose such fiction has never been greater.

The role of AI in the news production process

In the news production process, AI now plays an increasingly significant role in both content creation and personalised news delivery. Since 9 November 2025, BuzzFeed has been using OpenAI’s technology (ChatGPT) to write full articles and develop personalised quizzes, as announced by CEO Jonah Peretti [1]. As early as 2023, he described AI as opening ‘a new era of creativity’ and planned its integration into business operations to enable more efficient production and interactive formats [1]. In this approach, AI is not just a tool for rapid output but a partner that can strengthen creative ideas.

Apple introduced an AI feature in October 2025 that automatically summarises news stories into push notifications, following multiple instances of erroneous headlines falsely claiming a politician had been arrested [1]. This shows that automation without human oversight carries risks, but also that AI, used appropriately, offers substantial benefits. In the near future, AI is expected to proactively detect events and alert journalists in real time with key details, tailored to specific information needs [1]. Companies already using such a system include Crown Van Gelder, MPS Systems, and Landa, which activate AI notifications for topics such as ‘all local political news in my province’ or ‘news about EU climate reports’ [1].
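
A minimal sketch of how such topic subscriptions could be routed is shown below. The subscriber addresses, keywords, and plain substring matching are purely illustrative assumptions; a production system would use semantic matching and a real delivery channel rather than a print statement.

```python
"""Illustrative sketch of topic-based alert routing, as described above.
Hypothetical: subscriptions and matching logic are assumptions, not any
vendor's actual system."""
from dataclasses import dataclass


@dataclass
class Subscription:
    subscriber: str
    keywords: list[str]  # every keyword must appear for an alert to fire


# Hypothetical subscriptions mirroring the examples in the text.
SUBSCRIPTIONS = [
    Subscription("province-desk@example.com", ["local", "politics"]),
    Subscription("climate-desk@example.com", ["eu", "climate report"]),
]


def route_alert(headline: str) -> list[str]:
    """Return the subscribers whose topic filter matches the headline."""
    text = headline.lower()
    # Naive substring matching; semantic matching would reduce false hits.
    return [s.subscriber for s in SUBSCRIPTIONS
            if all(keyword in text for keyword in s.keywords)]


print(route_alert("EU climate report warns of record-breaking heat"))
# -> ['climate-desk@example.com']
```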

Ethics, privacy, and the future of journalism

The growth of AI in journalism coincides with a developing legal and ethical framework that defines the limits of technological use. On 19 November 2025, the European Commission announced a major package of new rules under the name ‘Digital Omnibus’, integrating existing legislation such as the GDPR, the Data Act, and the Open Data Directive [3]. One controversial aspect is a relaxation of the GDPR that allows companies to use personal data more easily to train AI systems on the basis of their ‘legitimate interest’, often eliminating the need for explicit consent [3]. Rules around cookies and tracking are also being eased, and the Commission proposes that browsers automatically transmit privacy preferences to websites, a measure intended to reduce so-called ‘cookie fatigue’ [3]. Media publishers, however, are granted an exception and are not required to respect these automatic signals, with the aim of protecting journalistic independence [3]. Critics warn that this undermines personal data protection and digital rights, threatening the European Union’s role as a global leader in privacy [3]. In the Netherlands, the parliamentary committee on Digital Affairs (DiZa) is focusing on the implications of AI for democracy, security, and privacy, having submitted a policy note on ‘Safe Online’ in November 2025 [4]. These developments show that the future of journalism is shaped not only by technology, but also by legislation and ethics.
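
For illustration, such an automatic signal could work much like today’s Global Privacy Control header. Below is a minimal sketch assuming, hypothetically, that the preference arrives as a Sec-GPC-style request header; the Omnibus proposal itself does not mandate a specific mechanism.

```python
"""Minimal sketch of a site honouring an automatic browser privacy
signal. Assumption: the signal arrives as a header similar to today's
Global Privacy Control (Sec-GPC); the proposal does not fix a name."""
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat 'Sec-GPC: 1' as an opt-out of tracking for this request.
        opted_out = self.headers.get("Sec-GPC") == "1"
        body = b"tracking disabled" if opted_out else b"tracking enabled"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)


# Serve on localhost:8000; a browser sending the signal is opted out.
HTTPServer(("localhost", 8000), Handler).serve_forever()
```

Under the proposed publisher exception, a news site could lawfully ignore this signal; the sketch shows what respecting it would involve for everyone else.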

Sources