Ethical boundaries in AI defence: Who bears the responsibility?
The Hague, Tuesday, 28 October 2025.
In a recent episode of the BNR podcast De Strateeg, analyst Sofia Romansky and professor Jeroen van den Hoven discuss the ethical implications of AI in defence. They emphasise the need for accountability when autonomous weapons make mistakes and argue that international cooperation can help prevent an AI arms race. One of the most intriguing questions they raise is who is responsible when an autonomous weapon makes an error.
Ethical issues in the podcast: who bears the responsibility?
In a recent episode of the BNR podcast De Strateeg, HCSS analyst Sofia Romansky and professor of ethics and technology Jeroen van den Hoven discuss the moral and legal limits of AI in defence, and explicitly ask who should be held accountable when autonomous weapon systems fail or strike civilians [1]. The speakers point to international agreements and ethical guidelines as means of preventing an escalating arms race [1].
A concrete topic for journalists: generative language and speech technologies
In journalism, the AI applications most visible to the public and to editorial teams are generative large language models (LLMs) and advanced text-to-speech (TTS) systems, technologies that in 2025 feature prominently across a wide range of companies and platforms, from large cloud and model providers to specialised TTS vendors [2]. These systems can draft full articles, produce synthetic audio that imitates real voices, and personalise news for individual readers or listeners [2].
How the technology works (brief and accessible)
Broadly speaking, LLMs rely on large-scale statistical patterns learned from text data to generate plausible sentences; TTS systems use neural networks trained on paired text and audio to synthesise speech from written input. Many of these tools run on cloud infrastructure and specialised hardware, and are offered via enterprise and cloud platforms that provide the compute power and APIs for newsroom integration [2][5].
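To make the "statistical patterns" idea concrete, the toy sketch below (our own illustration, not any vendor's actual model) counts which word tends to follow which in a tiny corpus and then samples a continuation word by word. Production LLMs do something analogous with neural networks trained on vastly larger datasets.

```python
# Toy illustration of the autoregressive idea behind LLMs: estimate how often
# words follow each other in a corpus, then repeatedly sample the next word.
import random
from collections import defaultdict, Counter

corpus = (
    "the newsroom publishes the story and the editor reviews the story "
    "before the newsroom publishes the audio version of the story"
).split()

# Count word-to-next-word transitions (a crude stand-in for the patterns a
# large model learns over billions of sentences).
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a plausible continuation word by word."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```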
Applications in everyday news production
Newsrooms use LLMs to produce first drafts, summarise fact lists, answer reporters' search queries and generate personalised newsletters or short local updates; TTS is deployed to convert written pieces quickly into audio for podcasts and newsreaders, or to make content accessible to blind and visually impaired audiences [2][5]. These workflows are possible because media companies and software vendors offer AI services and cloud solutions that integrate with editorial systems [2][5].
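As a rough illustration of how such a workflow might be wired together, the sketch below uses hypothetical placeholder functions (draft_article, summarise and synthesise_audio stand in for whatever vendor APIs a newsroom actually integrates; none of them are real endpoints):

```python
# Hypothetical newsroom workflow wrapper; the three helper functions are
# placeholders for real LLM and TTS APIs, not actual vendor calls.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    notes: str          # reporter's fact list
    draft: str = ""
    summary: str = ""
    audio_path: str = ""

def draft_article(notes: str) -> str:
    """Placeholder for an LLM call that turns a fact list into a first draft."""
    return f"DRAFT based on notes: {notes}"

def summarise(text: str) -> str:
    """Placeholder for an LLM summarisation call used for newsletters."""
    return text[:120] + "..."

def synthesise_audio(text: str, voice: str = "neutral-nl") -> str:
    """Placeholder for a TTS call; returns a path to the generated audio file."""
    return f"/audio/{abs(hash((text, voice)))}.mp3"

def produce(story: Story) -> Story:
    # A human editor still reviews story.draft before publication.
    story.draft = draft_article(story.notes)
    story.summary = summarise(story.draft)
    story.audio_path = synthesise_audio(story.draft)
    return story

print(produce(Story("Council budget vote", "Vote passed 7-2; protest outside hall")))
```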
Benefits for news production and consumption
AI offers tangible gains: speed in producing content, scalability of personalised news offerings and improved accessibility through automatic read‑aloud functions and transcription. In addition, automatic summaries can help readers find the core points of complex stories more quickly, increasing efficiency for both journalists and consumers [2][5].
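As one concrete example of the summarisation side of this, the snippet below uses the open-source Hugging Face transformers library; the choice of library is ours for illustration, since the cited sources do not prescribe specific tooling, and the default summarisation model is downloaded on first use.

```python
# Minimal automatic-summary example with Hugging Face transformers.
# Requires: pip install transformers torch
from transformers import pipeline

summariser = pipeline("summarization")  # downloads a default model on first use

article = (
    "The city council approved the new budget on Tuesday after a lengthy debate. "
    "Opposition parties criticised the cuts to public transport, while the mayor "
    "argued the plan keeps the city financially healthy for the coming years."
)

result = summariser(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```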
Risk of misinformation and misuse — and the link to defence ethics
The same techniques that make news faster and more accessible can also be abused: synthetic audio can put false statements into the mouths of public figures, generated articles can spread disinformation, and automated personalisation can reinforce echo chambers. These risks raise questions of responsibility and regulation similar to those in the defence debate. Who is liable in cases of misuse: the model developer, the cloud infrastructure provider, the editor who publishes, or the malicious actor who deploys the outputs [1][2][3][5]?
Regulation, governance and international cooperation
Debates about AI regulation point to the need for common standards and governance — ranging from technical measures (such as watermarks and provenance metadata) to legal frameworks and international agreements — similar to the call for coordination around military AI applications [3][1]. Publishers and policymakers cite digital sovereignty and trustworthy cloud providers as elements in the safe adoption of AI within news organisations [5][3].
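To illustrate what provenance metadata can mean in practice, the sketch below shows a deliberately simplified, hypothetical scheme (real standards such as C2PA are far richer): it attaches a signed record stating how a piece of content was produced, so that later tampering or relabelling can be detected.

```python
# Simplified provenance metadata: attach a tamper-evident, signed record of
# how a piece of content was produced. Illustrative only.
import hashlib, hmac, json
from datetime import datetime, timezone

SIGNING_KEY = b"newsroom-demo-key"   # in practice a managed secret, not a constant

def provenance_record(content: str, tool: str, human_reviewed: bool) -> dict:
    record = {
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "generated_with": tool,
        "human_reviewed": human_reviewed,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: str, record: dict) -> bool:
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content.encode()).hexdigest())

article = "AI-assisted draft about the council budget vote."
rec = provenance_record(article, tool="LLM draft + human edit", human_reviewed=True)
print(verify(article, rec))              # True: record matches the content
print(verify(article + " edited", rec))  # False: content no longer matches
```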
Ethical guidelines within newsrooms: transparency and human final responsibility
Practical steps newsrooms can take include: explicitly indicating when content (text or audio) is partially or fully generated by AI; instituting human final review before publication; and implementing technical detection for manipulated audio or deepfakes. These measures align with the broader plea for accountability in AI systems in sensitive domains such as defence [1][5].
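A minimal sketch of how such a policy could be enforced in an editorial system follows; it is our own illustration, with hypothetical fields for disclosure and reviewer sign-off, and implies no specific newsroom software.

```python
# Hypothetical publication gate enforcing two editorial rules: AI involvement
# must be disclosed, and a human editor must sign off before publishing.
from dataclasses import dataclass

@dataclass
class Article:
    body: str
    ai_generated: bool
    ai_disclosure: str = ""   # e.g. "Audio generated with TTS, reviewed by the desk"
    reviewed_by: str = ""     # name of the human editor who approved it

def ready_to_publish(article: Article) -> tuple[bool, list[str]]:
    problems: list[str] = []
    if article.ai_generated and not article.ai_disclosure:
        problems.append("AI-generated content must carry a visible disclosure label")
    if not article.reviewed_by:
        problems.append("a human editor must sign off before publication")
    return (not problems, problems)

ok, issues = ready_to_publish(Article(body="...", ai_generated=True))
print(ok, issues)  # False, with both rules flagged
```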
Threats and uncertainties requiring attention
Uncertainties remain about how effective international rules can be given rapid technological developments, and about the extent to which market actors are willing to impose voluntary limits. This makes it difficult to predict whether technological countermeasures and policy initiatives will prove timely and sufficiently robust; the available sources offer no forecasts on future compliance with, or the effectiveness of, international AI agreements [3][1][2].
Practical recommendations for news organisations (brief overview)
Journalistic organisations should: (1) label and make AI use transparent to the public, (2) require human final editorial control on sensitive topics, (3) implement technical verification tools for audio and images, and (4) collaborate with reliable cloud and model providers to retain control over data and models [5][2][1].