With states gathering in Geneva (and online) this week for meetings on autonomous weapons under the Convention on Certain Conventional Weapons, this analysis was first published in the CCW Report of Reaching Critical Will, WILPF’s disarmament programme.

For states considering the issue of “emerging technology in the area of lethal autonomous weapon systems (LAWS)” there are two key problems to solve: first, whether some of the real or hypothetical weapon systems or configurations within the scope of these discussions are fundamentally unacceptable and must be ruled out of states’ arsenals and practices; and second, how human control can be meaningfully maintained over the remaining systems within this discussion’s scope, in order to uphold both legal obligations and deeper moral and ethical principles.

Moving towards clear common approaches and answers to these questions—which would provide building blocks for effective international regulation—now requires the elaboration of detailed positions and proposals from the states and others engaged in this debate. The exercise set by outgoing chair Ambassador Janis Karklins—for states to elaborate their national positions on the meaning of the Guiding Principles on LAWS agreed in 2019[1]—provided a means this year for countries to start doing this. It allowed states to build on their understandings of the subject and give more form and content to concepts such as how human control over weapons can be maintained, and it has provided much useful material in this regard.

An effective structure for the regulation of sensor-based weapon systems

Though states conceptualise the subject matter under discussion in significantly different ways (from ideas of loops and automation to “AI weapons”), for Article 36[2] and the Campaign to Stop Killer Robots,[3] a scope broad enough to incorporate all states’ definitions would encompass systems that apply force through a particular process: matching sensor inputs to a “target profile” of characteristics following a system’s activation, emplacement, or deployment. This means that with such systems the exact time, place, and object of the application of force will not be known in advance. It is from this uncertainty that most concerns arise, from control to moral acceptability.

In our opinion, the most productive way forward will be to consider applying legal obligations to this broad scope of systems, centring prohibitions and regulations on human action and control regarding their use as well as the value of human dignity. Within this scope, certain systems should be prohibited as straightforwardly unacceptable, and the others should be subject to positive obligations on their design and use to ensure they remain under meaningful human control when used.

It is our position that the targeting of people through systems within this scope should be prohibited because this violates human dignity.[4] Systems that cannot be meaningfully controlled by their users must also be prohibited: for example, because the complexity of their functioning means that the range of outcomes they produce would not be sufficiently understood. A structure of components to ensure meaningful human control is needed for the remaining systems within this scope, to be applied on a case-by-case basis within individual attacks and operations. As some states have already noted, principles and practices of control might draw substantially on how states already manage uncertainty with less advanced sensor-based weapon systems.

Substantial content in the commentaries for moving forward

Across the 30 state commentaries submitted so far this year that we have seen, there is already substance to support different elements of this approach, which could be developed and brought together into a strong framework along the lines that we would consider effective:

Prohibiting anti-personnel use of systems: A number of countries in their commentaries expressed opposition to machines carrying out human life-and-death “decisions” and/or suggested that restrictions could be placed on the types of targets to which systems may apply force. Such positions should be further explored and developed with a view to a prohibition on targeting people.

Prohibiting systems that cannot be meaningfully controlled: Several commentaries emphasised the need for the users of weapon systems to understand how these will function in practice, with some linking this explicitly to legal compliance. Some expressed concern about systems that might “evolve”, or highlighted that system design should make sufficient human understanding of functions possible. These suggestions can be linked to the need to prohibit systems whose complexity of functioning means that their effects cannot be sufficiently predicted, foreseen, or understood by their operators. This would be one element of prohibiting systems that cannot be meaningfully controlled.

Building the elements of human control: As Amb. Karklins notes in his paper, it is a significant commonality within states’ commentaries that further collective work is needed to “determine the type and extent of human involvement or control necessary” to ensure compliance with international law and to respond to ethical concerns.[5] It is useful that, in general, states consider this to be the key area for agreement—as is the consensus that human involvement is implicated in legal compliance.[6]

For Article 36, the time, place, and target to which force will be applied in an individual attack using a sensor-based system should be the key building blocks for constructing regulation for human control, addressing the core issue of uncertainty about the point of application of force.

Within the commentaries, there is much useful material elaborating what the key elements of control could be. For example, several mentioned applying temporal and spatial limits to the use of systems, including (for some) to ensure sufficient proximity between the application of force and legal judgements, and/or controls on the contexts in which systems could be used. Points raised regarding understanding what systems might apply force to in practice, and limiting types of targets, included one proposal to place limits on systems’ target profiles depending on the operational environment. Several commentaries also recognised that the exact requirements for an adequate level of control might vary depending on the tool and the context.

We therefore believe that framing discussions around individual attacks (as at least one commentary did) is helpful, in order to focus on human action rather than the generalities and technicalities of systems. In this regard, though it is a popular point, we would caution against giving undue significance to technical elements such as contact, recall, supervision, or self-destruction: these are undoubtedly important, but ultimately insufficient. Our focus must be broadly on human actions rather than (sometimes merely imagined) technical fixes.

This brief analysis suggests that there is much material that can be used, interrogated, and built on by states towards constructing an effective international framework for regulation to address the systems and concerns under discussion at the LAWS Group of Governmental Experts (GGE). It is now up to all involved to take the opportunity to push these building blocks forward, towards a common solution.


[1] See “Annex III, Guiding Principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System,” Revised Draft Final Report, CCW/MSP/2019/CRP.2/Rev.1, 15 November 2019, https://www.unog.ch/80256EDD006B8954/(httpAssets)/815F8EE33B64DADDC12584B7004CF3A4/$file/CCW+MSP+2019+CRP.2+Rev+1.pdf, p. 10.

[2] See Richard Moyes, “Autonomy in weapons systems: mapping a structure for regulation through specific policy questions,” Article 36, 2019, http://www.article36.org/wp-content/uploads/2019/11/regulation-structure.pdf, and Richard Moyes, “Target profiles as a basis for rule-making in discussions on autonomy in weapons systems,” Article 36, 2019, http://www.article36.org/wp-content/uploads/2019/08/Target-profiles.pdf.

[3] See Campaign to Stop Killer Robots, “Key elements of a treaty on fully autonomous weapons,” 2019, http://www.article36.org/wp-content/uploads/2019/11/regulation-structure.pdf.

[4] See Maya Brehm, “Targeting people,” Article 36, 2019, http://www.article36.org/wp-content/uploads/2019/11/targeting-people.pdf.

[5] See the Chair’s paper, “Commonalities in national commentaries on guiding principles.”

[6] Expressed in Guiding Principle (c).


Image: UN Palais des Nations © Article 36
