Why AI ethics does not need too many rules
Den Haag, Thursday, 4 December 2025.
Former Minister David van Weel warns against the danger of overregulating AI, stressing that a global, coordinated approach is essential to prevent a ‘9/11 moment’ in artificial intelligence. The central tension is that overly strict rules can stifle innovation, while excessively loose rules undermine ethics. The Netherlands plays a key role in fostering smart, ethical collaboration between governments, businesses, and science. The challenge is not to stop AI, but to guide its growth in a way that balances safety, responsibility, and progress: a balance that can only be achieved internationally.
The threat of an AI ‘9/11 moment’
Former Minister of Foreign Affairs David van Weel warned during the forum ‘How to Trust AI? The Good, the Bad and the AI’ of the risk that the world could be overwhelmed by a ‘9/11 moment’ in artificial intelligence [1]. He emphasized that the difference between AI and the atomic bomb lies in the pace of impact: whereas the nuclear bomb revealed its power in a single, dramatic test, the consequences of AI unfold gradually but irreversibly, with unpredictable effects on safety, democracy, and geopolitical relations [1]. Van Weel cautioned that a sudden, unforeseen catastrophe, such as mass misuse of AI-powered drones or disruptive deepfakes in an election campaign, could trigger a societal trust crisis that would be difficult to recover from [1]. The comparison to 9/11 is not merely rhetorical: it serves as a warning about inadequate preparation and the need for a proactive, coordinated response to technological disruption [1].
Overregulation as a barrier to innovation
Van Weel warned that overregulation of AI poses a real threat to technological advancement and economic competitiveness [1]. He stressed that rules are necessary, but must be ‘smart and targeted’: not so strict as to hinder innovation, nor so loose that ethics fall out of sight [1]. This balance matters to the Netherlands in particular: the Dutch government is actively involved in international initiatives such as REAIM (Responsible AI in the Military Domain), organized together with South Korea, and participates in UN expert groups, where the Netherlands aims to take a leading role in developing responsible AI [1]. Without a global approach, the EU risks introducing too many rules under its AI Action Plan, which could hinder the rapid growth of AI technology in the Netherlands and the region [1][2]. The danger is that overly strict regulation could lead to an ‘innovation drain’, in which tech companies like Anthropic, valued at $300 billion, shift their investment strategies to countries with more flexible regulations [1][2].
International cooperation as the foundation for ethics
According to Van Weel, a global, coordinated approach is essential to preserve opportunities and safeguard ethics without stifling innovation [1]. He emphasized that defining AI ethics is not the responsibility of any single country; collaboration between governments, businesses, and science is critical [1]. This is not only an ethical necessity but also a strategic one, given the cross-border nature of AI technology and the risks it poses in automated warfare, data fraud, and mass media manipulation [1]. The Dutch approach focuses on building a bridge between diplomacy and technology, with an emphasis on safety, democratic values, and preventing opaque ‘black box’ models [1]. The recent introduction of criminal liability for creating and spreading deepfakes of natural persons, with a maximum penalty of five years in prison or a fine of €250,000, is a step in this direction, and it is embedded in an international framework such as the European AI Action Plan [3].
The challenge of rapid technological advancement
Progress in AI technology is outpacing the legislative capacity of governments. In just five months, daily AI usage among Dutch citizens has almost doubled, underscoring the need for swift, responsible adaptation [3]. Over $100 billion has already been invested in OpenAI to keep the company operational, highlighting an extreme level of financial dependency and investment interest, particularly in the United States [3]. There is also an immediate security threat: a vulnerability in recent versions of OpenAI’s Codex CLI software allows attackers to execute code undetected, a risk that particularly threatens regulated sectors such as finance, healthcare, and energy [3]. This pace of development makes a fully controlled, pre-regulated AI policy impossible. Van Weel therefore advocates a dynamic framework that adapts to new developments, rather than static rules that become outdated within a few years [1].
The role of businesses and science
To achieve a balance between safety, innovation, and ethics, open, ethical, and smart collaboration between governments, businesses, and science must be established [1]. Businesses play a crucial role, and their responsibility is becoming increasingly clear. The SAP user group, for instance, warned that the ‘low-hanging fruit’ of AI use is often not what it seems, as implementation frequently takes years and requires deep process transformation [3]. The Australian government, meanwhile, caught Deloitte using AI poorly in a delivered report, prompting the company to refund the payment, an example of how trust in AI technology can quickly erode through misuse [3]. Insurers are increasingly refusing to cover risks associated with AI agents and chatbots because such risks are difficult to assess, highlighting the economic consequences of irresponsible AI development [3]. Science and technology experts must therefore collaborate proactively with policymakers, so that opportunities and threats can be properly weighed and knowledge does not remain confined to laboratories [1].