Axel Springer Transforms Journalism with AI: Opportunities and Ethical Questions
Berlin, Thursday, 4 September 2025.
Axel Springer is implementing artificial intelligence (AI) in the journalistic process, automating workflows and improving fact-checking. Internally, ChatGPT is used for research and text production, offering new opportunities but also raising ethical questions about transparency and responsibility. An example of this is an article from Bild that was partially AI-generated and contained errors. Experts advocate for mandatory labelling of AI-generated content to maintain trust in the media.
Technology and Application
Axel Springer is using ChatGPT for research and text production, significantly altering editorial workflows. The tool assists in gathering information, generating story concepts, and supporting fact-checking. By integrating ChatGPT into internal processes, editorial teams can work faster and more efficiently without compromising journalistic standards [1].
Impact on News Production and Consumption
The implementation of AI has a significant impact on how news is produced and consumed. Automated workflows speed up the production of news articles, allowing journalists to focus more on in-depth research and investigative reporting. Additionally, AI offers new possibilities for personalisation, ensuring readers receive targeted content that better aligns with their interests [1][2].
Benefits of AI in Journalism
The integration of AI provides several benefits for journalism. Firstly, it increases the efficiency and speed at which news articles are produced. Secondly, AI improves the accuracy of fact-checking by providing quick access to reliable information. Thirdly, it offers new tools for data analysis, making complex information easier to visualise and understand [1][3].
Ethical Questions and Potential Disadvantages
Despite the benefits, the implementation of AI also raises ethical questions. For example, a Bild article that was partially AI-generated contained errors, showing that AI output is not always reliable and can spread incorrect information. Experts therefore advocate mandatory labelling of AI-generated content to ensure transparency and maintain trust in the media [1][4].
Transparency and Responsibility
Transparency is crucial when using AI in journalism. There is growing concern among readers about manipulation through AI-generated content. Professor Jessica Heesen from the University of Tübingen notes that a loss of trust in media communication could be a serious setback for democratic society. It is therefore important that media companies clearly indicate when content is AI-generated, as required by the EU AI Act [1][4].
Data Protection and Bias
In addition to transparency, data protection and bias are important considerations when using AI. AI models can contain harmful biases and stereotypes, depending on the datasets they are trained on. Moreover, user privacy must be protected, especially when sensitive information is processed. Media companies must strive for a balance between technological innovation and ethical responsibility [1][3].
Future Perspectives
The future of journalism with AI is both promising and challenging. Routine tasks will likely be automated, while complex investigations and in-depth commentary will remain the domain of human journalists. Regulation will play a crucial role in establishing a framework for the responsible use of AI. The success of this transformation depends on how well media companies adhere to ethical principles and transparency [1][2][4].