The EU AI Act, Lethal Autonomous Weapons, and the Imperative for Human-Centric AI
Abstract
The rapid advancement of artificial intelligence (AI) and robotics introduces profound challenges to the military sector, particularly regarding the development of Lethal Autonomous Weapon Systems (LAWS). These systems, capable of identifying and engaging targets without direct human intervention, raise critical ethical and legal questions concerning accountability and human oversight. The integration of LAWS into modern arsenals necessitates a rigorous examination of the prevailing international legal and ethical landscape, particularly as these technologies challenge the foundational tenets of International Humanitarian Law (IHL). Central to this discourse is the inherent difficulty autonomous robotic systems face in adhering to the principle of distinction: the technical and moral challenge of reliably differentiating between active combatants and civilians, or between healthy soldiers and those who are hors de combat due to injury. This study investigates the significant regulatory gap resulting from the explicit exclusion of military and defense applications from the European Union AI Act (Regulation (EU) 2024/1689) (Artificial Intelligence Act, 2024). It analyzes how the transition from automation to full algorithmic autonomy challenges the fundamental principles of IHL, specifically the requirements of distinction and proportionality. Furthermore, the article examines the strategic implications of automation bias and the potential erosion of human judgment in high-stakes decision-making, a risk compounded by the fact that no commonly agreed definition of LAWS currently exists.
Ultimately, the current fragmentation of the regulatory landscape, characterized by the exclusion of military AI from the EU AI Act of 2024, underscores the urgent need for a unified international governance body to ensure that the rapid evolution of autonomous force does not outpace the ethical and legal frameworks it is intended to serve.
References
2. United Nations. (1980). Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (with Protocols I, II and III). http://disarmament.unoda.org/en/our-work/conventional-arms/convention-certain-conventional-weapons
3. International Committee of the Red Cross. (1977). Protocol Additional to the Geneva Conventions of 12 August 1949 and relating to the Protection of Victims of International Armed Conflicts (Protocol I). https://ihl-databases.icrc.org/ihl/INTRO/470
4. United Nations. (1945). Charter of the United Nations. https://www.un.org/en/about-us/un-charter/full-text
5. European Union. (2012). Consolidated version of the Treaty on European Union. Official Journal of the European Union, C 202/13. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A12012M%2FTXT
6. Center for Security and Emerging Technology. (2023). U.S. and Chinese military AI purchases. https://cset.georgetown.edu/publication/u-s-and-chinese-military-ai-purchases/
7. Arms Control Association. (2025). Geopolitics and the regulation of autonomous weapons systems. https://www.armscontrol.org/act/2025-01/features/geopolitics-and-regulation-autonomous-weapons-systems
8. Center for Strategic and International Studies. (2025). Technological evolution on the battlefield. https://www.csis.org/analysis/chapter-9-technological-evolution-battlefield
Copyright (c) 2026 Eirini Dellagrammatika Bizmpiki

This work is licensed under a Creative Commons Attribution 4.0 International License.


