The Duty to Take Precautions in Hostilities, and the Disobeying of Orders: Should Robots Refuse?

Grimal, Francis and Pollard, Michael (2021) The Duty to Take Precautions in Hostilities, and the Disobeying of Orders: Should Robots Refuse? Fordham International Law Journal, 44 (3). pp. 671-734. ISSN 0747-9395

Text (Accepted Version): Grimal Pollard.THE DUTY TO TAKE PRECAUTIONS IN HOSTILITIES AND THE DISOBEYING OF ORDERS SHOULD ROBOTS REFUSE.Fordham.docx
Available under License Creative Commons Attribution Non-commercial No Derivatives.

Official URL: https://ir.lawnet.fordham.edu/ilj/vol44/iss3/3/

Abstract

This Article not only questions whether an embodied artificial intelligence (EAI) could give an order to a human combatant but, more controversially, examines whether it should also refuse one. A future EAI may, for example, be capable of refusing to follow an order that appeared manifestly unlawful, in breach of International Humanitarian Law (IHL) or national Rules of Engagement (ROE), or even immoral or unethical. In opening this much-needed discussion, the authors examine the legal parameters and, by way of a solution, provide a framework for overriding and disobeying. In short, the Article examines whether human error can be corrected and overridden, but for the better rather than for the worse. An aircraft's anti-stall mechanism, which takes over and corrects human error, is seen as nothing less than "positive". Such an argument has traction in the strategic realm in terms of "system of systems": the premise that more advanced technology can potentially help overcome Clausewitzian "friction" and the "fog of war".

Within the broader discussion of the "duty to take precautions" (Article 57 of Additional Protocol I (API); ICRC Customary Rule 15), the authors analyze the concept of obeying and disobeying orders through the lenses of Autonomous Weapons Systems (AWS) and EAI, two "robots" which, for the purposes of the current Article, are distinguished as follows: an EAI is capable of giving, or of refusing to follow, an order to apply force, while an AWS, once activated, makes targeting decisions and applies force independently. Central to this discussion are state-specific ROE within the concept of the "duty to take precautions". At present, the guidelines governing a human combatant's right to disobey orders are contained within such doctrine but vary widely. In the United States, for example, a soldier may disobey an order only when the act in question is clearly unlawful. In direct contrast, Germany's "state practice" imposes specific additional requirements concerning human dignity and whether the "order" is of "use for service".

At its heart, this Article introduces a future-thinking discussion of the practical process of disobeying orders between various "individuals" within the chain of command (human to AWS; AWS to human; EAI to AWS; human to EAI). Taken to its extreme, the authors envisage an EAI being able to override and overturn a nuclear launch. Towards the end of the Article, the authors extend the discussion of "robot" refusal to wider applications, including robot Private Military Contractors (PMCs), robot spies, and the more provocative concept of robot insubordination. Would robot PMCs operate according to existing implicit biases, following orders regardless of personal "opinion", owing to the financial implications of failing to do so? Additionally, the authors consider whether an EAI, being non-human, should apply a higher threshold before "deciding" whether to disobey an order, owing to its immortality.

By way of an overall solution, the authors propose the crafting of "robot" rules of engagement (RROE) with specific regard to the disobeying of orders. In addition to ensuring that the EAI is programmed to run an indefinite proportionality-assessment feedback loop, the authors also propose a novel test: the EAI is to discount the human "traits" which lead to human error. If this test is satisfied, an order should be disobeyed, and human error overturned.
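Purely as an illustrative sketch, and not part of the Article itself, the proposed RROE disobedience test might be modelled as a decision procedure: refuse a manifestly unlawful order, discount orders traceable to error-prone human traits, and continuously re-check proportionality. All names, inputs, and thresholds below are hypothetical assumptions.

    # Hypothetical sketch of an RROE disobedience test. Every name and
    # threshold here is an illustrative assumption, not the authors' spec.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Order:
        expected_military_advantage: float   # commander's estimate
        expected_civilian_harm: float        # EAI's own assessment
        manifestly_unlawful: bool            # e.g. an order to target civilians
        human_error_traits: List[str] = field(default_factory=list)
        # e.g. ["fatigue", "fear", "revenge"]

    def should_disobey(order: Order, proportionality_threshold: float = 1.0) -> bool:
        """Return True if the EAI should refuse the order under the sketched RROE."""
        # 1. A manifestly unlawful order is always refused (mirrors the US standard).
        if order.manifestly_unlawful:
            return True
        # 2. Novel test: discount human "traits" that lead to error; an order
        #    attributable to such traits is disobeyed.
        if order.human_error_traits:
            return True
        # 3. Indefinite proportionality feedback loop (Article 57 API / ICRC
        #    Customary Rule 15): refuse if expected civilian harm is excessive
        #    relative to the anticipated military advantage.
        return order.expected_civilian_harm > (
            proportionality_threshold * order.expected_military_advantage
        )

    # Example: a lawful but disproportionate strike order is refused.
    strike = Order(expected_military_advantage=1.0, expected_civilian_harm=5.0,
                   manifestly_unlawful=False)
    assert should_disobey(strike) is True

In a real system the loop would re-run this check as battlefield information changes; the sketch only fixes the order of the three tests the abstract describes.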
In the broader sense, the authors question whether warfare should remain an utterly human preserve, in which human error is an unintended but unfortunate consequence, or whether the duty to take all feasible precautions in attack requires a human commander to utilize available AI systems to routinely question human decision-making and, where applicable, prevent mistakes or disobey orders, whether those mistakes are unintended or in fact punishable as war crimes. Ultimately, the overarching question posed is whether EAIs are to be afforded the same combat privileges as human combatants when it comes to the disobeying of orders.

Item Type: Article
Uncontrolled Keywords: Embodied artificial intelligence; International Humanitarian Law; Rules of Engagement; disobeying; Autonomous Weapons Systems
Subjects: K Law > K Law (General)
T Technology > T Technology (General)
Divisions: School of Law
Depositing User: Freya Tyrrell
Date Deposited: 27 Nov 2024 09:34
Last Modified: 27 Nov 2024 09:34
URI: http://bear.buckingham.ac.uk/id/eprint/647
