Ethics in AI: Balancing Technology and Justice
Brussels, Thursday, 21 August 2025.
The use of AI in military contexts raises pressing ethical questions. Israeli AI systems such as Lavender select targets with unprecedented speed, while decision-making is increasingly driven by technical capability rather than considerations of justice. Experts advocate for a balance in which technology serves to promote peace and justice rather than undermine them.
Rapid Target Selection
Israeli AI systems such as Lavender select targets with unprecedented speed, allowing military personnel just a few seconds to authorise a bombing [6]. Lavender is used to mark suspected Hamas fighters as potential airstrike targets, so that decision-making is increasingly shaped by what the technology can do rather than by what justice requires. This raises serious ethical questions about the role of technology in military decision-making.
Ethical Frameworks and Justice
Experts advocate for a balance in which technology serves to promote peace and justice rather than undermine them [6]. Dooyeweerd’s theory of aspects is cited as a tool for guiding the development and use of military technology, with emphasis on the legal aspect, which demands that such technology aim at peace and justice [6]. In this view, technology should be assessed not only on its efficiency but also on the human and ethical implications of its application.
Transparency and Accountability
One of the greatest challenges in using AI in military contexts is the need for transparency and accountability. The exact reasons why an AI system selects a particular target are often unclear to the military personnel involved [6]. This can lead to situations in which technical capability, rather than justice, determines decision-making. Strict oversight and regulation are therefore crucial to ensure that AI systems are used transparently and responsibly.
Impact on Military Personnel
AI also has a significant impact on the role and responsibility of military personnel. The task of a military analyst is increasingly reduced to approving AI-generated kill lists [6]. This can create distance between decision-makers and the consequences of their decisions, raising moral dilemmas. It is therefore essential that military personnel receive adequate training in the ethical implications of AI use.
International Debate
Military applications of AI are also the subject of international debate. Dozens of world leaders, academics, and technology companies came together at the AI Safety Summit in Bletchley Park (UK) to discuss the safety aspects and ethical implications of AI [2]. These discussions underscore the urgency of a global approach to regulating and guiding the development and application of AI.
Concluding Remarks
The use of AI in military contexts presents both opportunities and challenges. While the technology can contribute to efficiency and safety, it is crucial that its applications always aim to promote justice and peace. By embedding ethical frameworks, transparency, and accountability, we can ensure that technology plays a positive role in the military domain.