Grimal, Francis and Pollard, Michael, "Embodied AI" and the Direct Participation in Hostilities: A Legal Analysis. Georgetown Journal of International Law, 51 (3). ISSN 1550-5200 (In Press)
Abstract
This Article questions whether, under International Humanitarian Law (IHL), the concept of a "civilian" should be limited to humans. Prevailing debate within IHL scholarship has largely focused on the lawfulness (or not) of recourse to autonomous weapons systems (AWS). However, the utilization of embodied artificial intelligence (EAI) in armed conflict has yet to feature with any degree of prominence within the literature. An EAI is an "intelligent" robot capable of independent decision-making and action, without any human supervision. Predominantly, the existing AWS/AI debate remains preoccupied with ascertaining whether the military "system" is capable of distinguishing between civilians and combatants. Furthermore, the built-in protection mechanisms within IHL are inherently "loaded" in favor of protecting humans from AWS, rather than vice versa. IHL makes a clear distinction between civilians and civilian objects; increasingly advanced EAIs will make such a distinction highly problematic. The novel approach of this Article is not only to address the "EAI lacuna" in the broader sense but also to consider the application of EAI within a specific area of IHL: "Direct Participation in Hostilities" (DPH). In short, can a robot "participate"? DPH is firmly grounded in the cardinal principle of distinction, and within proportionality assessments, in order to afford protection to the civilian population during hostilities. Fundamentally, this Article challenges the ICRC's influential guidance on DPH. The authors controversially submit that if that guidance continues to be followed, civilian objects will, in some circumstances, be afforded greater protection than human combatants. To highlight this deficiency, the authors challenge the ICRC's assertion that civilian status must be presumed where there is doubt, and instead subscribe to the prevailing alternative interpretation that DPH assessments must be made on a case-by-case basis. To address the deficiency, the authors propose the novel inclusion of a "Turing-like test" within the DPH assessment. A concrete example of EAI is a robot medic, whose Hippocratic duty is to protect its patient's life. In doing so (and given a suitable set of circumstances), the robot medic may wish to return fire against an attacker; here, the authors envisage a scenario during urbanized warfare. Would such an action constitute DPH, and what would the legal parameters look like in practice? Implicit within such a discussion is the removal of the emotional attachments that, for many, are innate in DPH assessments. Indeed, does the ICRC's tripartite test for "DPHing" contain an understandable bias in favor of humanitarian considerations? "These laws are sufficiently ambiguous so that I can write story after story in which something strange happens, in which robots didn't behave properly, in which the robots become positively dangerous…" (Isaac Asimov)
Item Type: | Article
---|---
Uncontrolled Keywords: | Embodied Artificial Intelligence; Direct Participation in Hostilities; International Humanitarian Law; Law of Armed Conflict; Civilian; Revolving Door Fighter; Continuous Combat Function
Subjects: | K Law > K Law (General)
Divisions: | School of Law
Depositing User: | Rachel Pollard
Date Deposited: | 10 Mar 2020 14:41
Last Modified: | 10 Mar 2020 14:41
URI: | http://bear.buckingham.ac.uk/id/eprint/465