These notes expand on a comment I made at a side event hosted by the Norwegian Red Cross at the Third Meeting of States Parties (3MSP) to the Convention on Cluster Munitions.

Whilst theoretical questions have been raised about the prospect of developing autonomous weapons capable of evaluating targeting options themselves in accordance with certain legal rules or principles, a more pressing concern is likely to be assertions that autonomous weapons can operate appropriately in armed conflict within certain ‘partitioned’ areas of space and time.

Under such an approach, the lawfulness of the weapon’s attacks would be asserted by bounding the geographic area within which the system can operate and limiting the period of time for which it will remain operational. For the purposes of this short essay we will describe this bounded geographic and temporal space as a ‘partition’. In this way it would be a human operator that ‘ensures’ the lawfulness of the operation by evaluating the potential effects of the autonomous weapon’s attacks within the designated partition.

Such an approach has clear forebears, for example:

*  in the designation of certain areas as ‘free-fire zones’ for ground troops or aircraft;

*  in assertions that landmines can be used appropriately in fenced or marked areas, or where their active life is subject to limitations; and

*  in the operation of sensor-fused weapon systems, where the final determination of a target is made by sensors and computer algorithms but where the search area of those sensors is positioned by the commander launching the attack.

These past and ongoing practices raise concerns for the future management of autonomous weapon systems. Rather than requiring assertions that autonomous weapons have the capacity to determine for themselves the military or civilian status of individuals and objects, and to undertake complex analysis of the possible future effects of their actions (capacities that are likely impossible), these approaches provide precedents for the management of much simpler systems within the existing legal framework.

A weapon system that simply identifies human shapes and targets them could be used within such a framework of management so long as a military commander was prepared to assert that, within the partition of operations, such targeting would be lawful. Clearly such an assertion will be easier to make where information on the population within the partition is deemed authoritative, and this in turn is more likely where the partition is kept small (in both geography and period of operation) and flexible (i.e. the area of operation can be reduced, or the operation ended, in response to new information).

If it is known for a ‘certainty’ that the only inhabitants of a specific building are all enemy combatants, then it is hard to argue that sending an autonomous weapon into the building, programmed to attack only within the confines of the building and to cease operations after a short period of time (or on command), would be a direct breach of existing legal rules. Indeed, it would almost certainly be argued that such an approach would be preferable to risking soldiers’ lives through a ground assault, or to risking increased civilian harm through the use of explosive weapons. However, once such autonomous weapon systems become accepted within relatively small partitions, there are no clear grounds for halting the expansion of such partitions in the future.

One such basis could be that such attacks should not be undertaken where there is any risk of harm to civilians. However, the partition-based management of landmines under the Convention on Certain Conventional Weapons (CCW) accepted that excluding civilians from any risk is not a requirement (though this approach was effectively rejected by the 1997 Mine Ban Treaty). Along the same lines, to the author’s knowledge no states have asserted that sensor-fused weapons should not be used in areas containing civilian objects capable of triggering an attack. That some civilian risk (indeed, some foreseeable civilian harm) is legally acceptable in armed conflict is clearly supported by the principle of proportionality in the general rules governing attacks under international humanitarian law. The ongoing pattern of drone strikes internationally clearly shows a readiness to incur civilian casualties in the prosecution of relatively focused attacks, and also a readiness to blur the categorisation of enemies and civilians in the analysis of such attacks. None of these precedents bodes well for the effective management of autonomous systems within current frameworks of law, policy and practice.

If any risk to civilians is not going to be a cut-off point for the partitions within which autonomous weapons might operate, then it is hard to see how the size of these partitions in space and time will be regulated other than by assertions that, ‘on a case-by-case basis’, the risks to civilians and the anticipated military advantages are being ‘carefully weighed’. Such an approach provides very limited scope for ensuring accountability and so for strengthening civilian protection. Indeed, for many years such assertions provided the legal and rhetorical shield against concerns about cluster munitions as a problematic military technology (despite an ongoing pattern of civilian harm on the ground that those using the weapons did nothing to investigate or assess). Efforts to prevent the use of autonomous weapons need to be wary of the introduction of ‘partitioned management’ as a slippery slope to their wider deployment. These risks further support arguments that autonomous weapons should be explicitly prohibited.
