NPO plans drastic changes: NPO 2 and NPO 3 overhauled
Amsterdam, Monday, 27 October 2025.
The Dutch public broadcaster NPO (Nederlandse Publieke Omroep) is in talks with various broadcasters about sweeping plans for the public channels and their programming. From 2027, NPO 2 and NPO 3 will change significantly: youth programmes move to NPO 2, and NPO 3 will focus on live events. Budget cuts of almost €160 million from 2027 force the NPO to make painful choices, putting many beloved programmes at risk.
Opening: why this change matters for news media
The announced reorganisation of NPO 2 and NPO 3 affects not only television programmes but also the media landscape in which news organisations operate: cuts and re-profiling of channels influence where public money is allocated and what resources broadcasters retain for news production and innovation [2][6][1]. This context is relevant to the debate about the deployment of artificial intelligence (AI) in journalism: budgetary pressure increases the incentive to automate processes and to use technology for news production and distribution [GPT].
A concrete AI use case: automatic video summaries and clipping for news broadcasts
A widely used application of AI in news production is automatic video analysis and the generation of short summaries (clipping) of longer broadcasts, so that key points can be shared quickly online and on social channels [GPT]. This is technically feasible through a combination of speech‑to‑text, natural language processing and video frame segmentation: speech is converted to text, algorithms identify the most important segments, and short edited clips suitable for digital platforms are generated from those segments [GPT].
How that technology works in detail
The pillars of automatic video summarisation are: 1) automatic speech recognition (ASR) to transcribe spoken text; 2) NLP (natural language processing) to extract key sentences and themes; 3) vision AI to detect speaker changes, graphics and visual highlights; and 4) automatic editing software that links timecodes to selected fragments and produces short clips [GPT]. This chain makes it possible to produce multiple sub‑clips from a one‑hour broadcast within minutes and to feed them to social channels or on‑demand video portals [GPT].
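To make this chain concrete, here is a minimal sketch covering pillars 1, 2 and 4 (the vision step is omitted for brevity). It assumes the open‑source openai‑whisper package and a locally installed ffmpeg binary; the file name broadcast.mp4, the word‑frequency scoring and the top‑3 cut‑off are illustrative stand‑ins for the editorially tuned models a production chain would use.

```python
# Minimal sketch of the ASR -> NLP -> editing chain described above.
# Assumptions: the openai-whisper package (pip install openai-whisper)
# and a local ffmpeg binary; scoring is a naive stand-in for a real model.
import subprocess
from collections import Counter

import whisper


def transcribe(path: str) -> list[dict]:
    """Pillar 1, ASR: return timed segments with 'start', 'end', 'text'."""
    model = whisper.load_model("base")
    return model.transcribe(path)["segments"]


def pick_highlights(segments: list[dict], top_n: int = 3) -> list[dict]:
    """Pillar 2, NLP stand-in: rank segments by summed word frequency."""
    freq = Counter(w.lower() for seg in segments for w in seg["text"].split())
    ranked = sorted(
        segments,
        key=lambda s: sum(freq[w.lower()] for w in s["text"].split()),
        reverse=True,
    )
    # Keep the chosen fragments in broadcast order.
    return sorted(ranked[:top_n], key=lambda s: s["start"])


def cut_clip(path: str, seg: dict, out: str) -> None:
    """Pillar 4, editing: ffmpeg copies the fragment between the timecodes.
    Stream copy cuts on keyframes; re-encode if frame accuracy matters."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(seg["start"]), "-i", path,
         "-t", str(seg["end"] - seg["start"]), "-c", "copy", out],
        check=True,
    )


if __name__ == "__main__":
    segments = transcribe("broadcast.mp4")  # hypothetical input file
    for i, seg in enumerate(pick_highlights(segments)):
        cut_clip("broadcast.mp4", seg, f"clip_{i}.mp4")
```

Because ffmpeg copies the streams rather than re‑encoding them, each clip is produced in seconds, which is what makes the "multiple sub‑clips within minutes" claim above plausible.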
What this means in a time of cuts at the NPO
In an environment where publicly funded broadcasters face substantial cuts and channel reorganisation, such as the shift of youth programming to NPO 2 and the transformation of NPO 3 towards live events, AI can be seen as a means to achieve the same output with fewer staff, or to distribute content more widely at a lower cost per clip [2][5][4]. At the same time, scaling up automated workflows means newsrooms must develop different skills, such as AI oversight and quality control [GPT].
Benefits: speed, scale and multiplatform distribution
The immediate benefits of automatic video summarisation are measurable in time saved and reach: algorithms can create highlights within minutes that editorial staff would otherwise have to cut and tag manually, allowing content to go online faster and reach more platforms [GPT]. For radio and television broadcasters that must merge channels or reduce programming, automation can help deliver more short news products simultaneously for on‑demand and social media, partially compensating for audience reach losses [2][3][6].
Risks and downsides: quality, loss of context and bias
Automation also brings clear risks: AI‑generated clips can lose nuance and context because algorithms tend to select conspicuous or frequent patterns rather than editorially relevant moments [GPT]. Moreover, errors in speech recognition or incorrect summaries can lead to misrepresentation of statements — a risk that is significant for sensitive news items [GPT][alert! ‘It is not publicly confirmed that the NPO uses these specific AI applications; any link between NPO plans and AI use is speculative without explicit source clarification’]. Additionally, models replicate and amplify existing biases from training data, which can affect which speakers or themes are highlighted more often [GPT].
Ethical considerations: transparency, accountability and authorship
Key ethical issues concern transparency towards viewers (should a clip indicate that it was produced automatically?), accountability for incorrect representations (who corrects them, and who is held responsible?) and the preservation of editorial standards [GPT]. Public broadcasters, which are in talks about their programming and budgets, carry additional public responsibilities because they are entrusted with publicly funded resources to provide truthful reporting and pluralism [1][2][5].
Operational impact on editorial teams
In practice, AI changes editorial work: repetitive tasks such as clip selection and subtitle preparation shift to automated processes, while staff spend more time on quality control, editorial decision‑making and correcting algorithmic errors [GPT]. At the same time, this can mean a reduction in traditional editing and administrative roles, and a shift in the skills required within broadcasters that are rethinking their channel profiles because of the cuts [2][4][5].
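As an illustration of one such repetitive task, the sketch below converts timed transcript segments (the same start/end/text shape the pipeline sketch above produces) into an SRT subtitle file, so that an editor reviews and corrects rather than types from scratch; the sample Dutch segments are invented.

```python
# Hedged sketch: render timed ASR segments as an SRT subtitle file.
def _ts(seconds: float) -> str:
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"


def segments_to_srt(segments: list[dict]) -> str:
    """Render segments with 'start', 'end', 'text' as numbered SRT blocks."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{_ts(seg['start'])} --> {_ts(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)


# Invented sample segments, for illustration only.
print(segments_to_srt([
    {"start": 0.0, "end": 2.5, "text": "Goedenavond, dit is het journaal."},
    {"start": 2.5, "end": 5.0, "text": "Vanavond: de bezuinigingen bij de NPO."},
]))
```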
Balancing cost savings and the public remit
The financial incentive to use AI is evident when public broadcasters must realise large savings; cutting or merging channels and programme titles increases the emphasis on efficient content distribution [2][6][1]. At the same time, the public remit of broadcasters requires careful consideration of whether cost savings through AI come at the expense of pluralism, local and investigative journalism, or editorial depth, issues that will also be discussed between broadcasters and the NPO [1][5].
Access and public trust
Automatically generated clips can increase reach among younger audiences who prefer short videos, but if quality, context or transparency are lacking, this can undermine public trust, especially when viewers feel that content has been manipulated or is artificial [GPT]. Public broadcasters restructuring their channel offerings therefore have the task of integrating AI into their news output credibly and under editorial control [2][3][5].
What newsrooms must concretely arrange when implementing AI
Practical safeguards include: clear audit logs of automatic edits; human editorial oversight for sensitive items; quality testing for speech recognition and subtitling; policies for correction and rectification of AI errors; and open communication to the public about the use of AI in news production [GPT]. Because the NPO and broadcasters are currently in talks about programming and cuts, such governance issues will need to be part of any modernisation plan that includes AI [1][2][4].
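One of those quality tests can be made concrete: the sketch below computes the word error rate (WER) of an automatic transcript against a human reference and gates automatic publication on a threshold. The 10% threshold and the sample sentences are illustrative assumptions, not an established norm.

```python
# Hedged sketch: word error rate (WER) as a speech-recognition quality gate.
def wer(reference: str, hypothesis: str) -> float:
    """Word-level edit distance between reference and hypothesis,
    divided by the reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming table of edit distances between word prefixes.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Illustrative gate: route a clip to human review above a chosen threshold.
score = wer("de npo bezuinigt vanaf 2027", "de npo bezuinigd vanaf 2027")
if score > 0.10:  # the threshold is an editorial policy choice, not a given
    print(f"WER {score:.0%}: route clip to human review")
else:
    print(f"WER {score:.0%}: within tolerance")
```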
Closing remark: technological choice embedded in policy debate
The debate about channel re‑profiling and cuts at the NPO creates a practical and normative framework in which choices about AI use in journalism are made: efficiency gains and reach expansion weigh against risks to quality, trust and editorial diversity — matters that unmistakably play a role in talks between broadcasters and the NPO [1][2][5][4].