Hilversum Explores C2PA: Towards Reliable Digital Content
Hilversum, Tuesday, 22 July 2025.
Media professionals in Hilversum discussed the implementation of C2PA, a technology that verifies the origin of digital content, during an expert session. This comes at a time when AI-generated media, deepfakes, and disinformation are becoming increasingly common. The meeting, organised by MediaCampus NL, emphasised the need for education and collaboration to restore trust in digital content [1].
C2PA: A Technology for Transparency
C2PA, developed by the Coalition for Content Provenance and Authenticity, is a technical specification that makes the origin of digital content transparent. By using digital signatures and metadata, C2PA can show who created or edited the content and whether it has been altered since signing. According to Laura Ellis, Head of Technology Forecasting at the BBC, C2PA is particularly valuable when multiple parties—from editors to platforms—contribute to a chain of trust [1][2].
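The core idea—a signed manifest that records who made the content and a hash that reveals any later alteration—can be sketched in a few lines of Python. This is an illustrative simplification, not the C2PA format itself: real C2PA manifests are embedded in the asset and signed with X.509 certificates, whereas this sketch uses a shared HMAC key and a plain JSON manifest purely to show the verification logic.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; C2PA itself uses
# certificate-based signatures, not a shared secret.
SIGNING_KEY = b"newsroom-demo-key"

def sign_manifest(asset: bytes, creator: str, actions: list) -> dict:
    """Build and sign a provenance manifest for an asset (illustrative only)."""
    manifest = {
        "creator": creator,
        "actions": actions,  # edits applied, e.g. crop, color-correct
        "content_hash": hashlib.sha256(asset).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset: bytes, manifest: dict) -> bool:
    """Check the signature AND that the asset is unaltered since signing."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_hash"] == hashlib.sha256(asset).hexdigest())

photo = b"raw image bytes"
manifest = sign_manifest(photo, creator="Editorial Desk",
                         actions=["crop", "color-correct"])
print(verify_manifest(photo, manifest))              # True: intact
print(verify_manifest(b"tampered bytes", manifest))  # False: altered after signing
```

Note how verification fails on the second call: the manifest's signature is still valid, but the stored content hash no longer matches the (tampered) asset—exactly the "altered since signing" check described above.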
Critical Remarks and the Role of Education
During the session, critical remarks were also raised. The technology is still vulnerable to misuse, depends on large platforms, and is not widely adopted in the news sector. Participants stressed that success hinges on education: both editorial teams and the public must understand what origin labels mean [1].
Concrete Applications and Next Steps
In breakout sessions, participants explored concrete applications, such as AI-tagging in editorial workflows, verification during distribution, and audience-facing tools like ‘Verify Me’. There was a clear call to start small, experiment internally, and seek collaboration around standards and infrastructure [1]. MediaCampus NL will explore follow-up steps with partners. Would you like to stay informed about new sessions? Email us at info@mediacampus.nl with the subject ‘C2PA Interest’ [1].
Impact of AI in Journalism
The rise of AI in journalism brings both opportunities and challenges. AI-generated content (AIGC) can significantly speed up the production of news and media, but the risk of disinformation and deepfakes increases. For example, Netflix launched a new original series called ‘Dreamscapes’ on 15 July 2025, which uses generative AI. Ted Sarandos, Chief Content Officer at Netflix, emphasised that they are aware of the ethical implications and have set up a team of ethicists to find the right balance [3].
Advantages and Potential Drawbacks
The use of AI in journalism offers several advantages, such as the ability to produce and distribute news quickly and efficiently. AI can also help analyse large datasets and identify relevant trends. However, the potential drawbacks cannot be ignored. AI-generated content can be difficult to distinguish from real content, increasing the risk of disinformation. Additionally, AI systems can be biased, leading to inaccurate or unbalanced reporting [3][4].
Ethical Considerations and Future Perspectives
Ethical considerations play a crucial role in the use of AI in journalism. It is essential to ensure transparency and accountability in AI workflows. Media and information literacy (MIL) is crucial for navigating and critically engaging with information in the digital age. According to the World Economic Forum, MIL is a fundamental skill for navigating increasingly complex digital information landscapes [4].