07.04.2025
Regulating the Future of Warfare: Legal and Ethical Challenges of Autonomous Systems

Autonomous weapons are often promoted as precise and efficient — yet without robust legal guardrails, they risk becoming a threat to the very human rights they are intended to protect.

Last week in London, Dr. Anna Mysyshyn, Director of the Institute of Innovative Governance, joined a distinguished community of military leaders, defense experts, and technology innovators at the Military Robotics and Autonomous Systems Conference. The event served as a critical platform for discussing the intersection of emerging technologies and international law in modern warfare.

Dr. Mysyshyn delivered a presentation on one of the defining challenges of our time: How should artificial intelligence (AI) and autonomous systems in armed conflict be regulated — legally, ethically, and in line with human rights principles?

With Ukraine serving as a real-time case study for the deployment of advanced military technologies, including AI-powered drones, surveillance platforms, and situational awareness systems, the need for clear regulatory frameworks has never been more urgent. Technological advancement must go hand-in-hand with legal clarity, democratic accountability, and robust protections for fundamental rights.

Her presentation addressed several core issues:

1. The application — and limitations — of International Humanitarian Law (IHL) and the Geneva Conventions in the age of autonomous warfare;

2. Significant legal gaps surrounding autonomous targeting decisions, liability, dual-use AI, and the lack of defined accountability chains;

3. The increasing use of facial recognition technologies and military-grade surveillance tools, and their implications for privacy, democratic participation, and post-war civil rights;

4. The cybersecurity vulnerabilities of battlefield AI systems, including risks from data breaches, misinformation, and system manipulation by hostile actors.

Dr. Mysyshyn also presented a set of targeted legal and policy recommendations to support governments and defense actors in navigating these challenges. These included:

1. Implementing mandatory human rights impact assessments for all AI military deployments;

2. Establishing internal oversight mechanisms within defense institutions;

3. Creating regulatory AI environments (e.g., sandboxes) that foster responsible innovation while safeguarding democratic values.

As Ukraine and its allies continue to innovate in defense technologies, the Director emphasized the importance of ensuring that these advancements remain grounded in law, ethics, and respect for human dignity. The future of warfare must not only prioritize operational effectiveness — it must also uphold accountability, transparency, and the principles of international law.