Article 36 statement on human control to the UN discussions on autonomous weapons
Statement delivered by Richard Moyes, Article 36 to the CCW Group of Governmental Experts on Emerging Technologies in the area of Lethal Autonomous Weapons Systems.
26 March 2019
Agenda item 5c “Further consideration of the human element”
Thank you Chair,
I am commenting on the issue of control, but these points will also touch on issues of “characterisation” and the law (which are the subject of other agenda items) because many of these issues are interlinked.
And I will note issues relating to “control by design” and “control in use” that other delegations referred to this morning. Control by design relates to how a system functions in the abstract, but control in use recognises that control is also contextual.
In terms of control by design, we are discussing here, in broad terms, systems that use sensors and compare the inputs from those sensors to an embedded characterisation of a target, or a ‘target profile’ as one delegation referred to it earlier. On the basis of that comparison, these systems have the capability to apply force to what they approximate to be the source of that sensor data.
So the human who activates such a system does not know specifically where and when an actual application of force will occur – and this necessarily produces some tension regarding control.
We have heard examples earlier today relating to anti-ship missiles, and previously relating to counter-mortar fire systems, where the embedded target characterisation might be based on an acoustic signature or a particular radar signature. Such profiles might be more or less complex, but they are necessarily a simplification of an object to be attacked into terms that are amenable to a system’s sensors.
So an initial concern that such systems raise is: how do they characterise? Is it possible that certain ways of describing a target in sensor terms are unacceptable? I feel uneasy about the idea of reducing a human to a pattern of sensor data in order to render them a target. And if we were to start to try to separate certain groups of people from others, still further concerns might come to the fore. So some ways of characterising might be unacceptable.
And it seems from the comments of other delegations that, for a commander, understanding the profiles a system uses is also important. Those profiles should be explicable – capable of being understood not only in relation to the objects they are intended to apply force to, but also in relation to objects they may apply force to but which we do not wish to attack. This issue of explicability could link back to concerns that have been raised about the role of machine learning in target recognition.
Linked to this, it would seem important that these profiles do not change after a system has been put into use – that a system cannot change or develop the parameters of what is to be attacked. This, too, is a theme that has been raised by other delegations, concerned with systems that set their own objectives.
So certain ways of characterising might be unacceptable per se, but for any such systems, control also needs to be exerted over use. Here, constraints on the duration and area of a system’s functioning are important – and a significant number of delegations today have highlighted these factors. They provide the basis for a commander to have an understanding of the context in which a system will be operating and to make an informed human judgement over the outcomes that are likely to occur.
For us, this is also linked to the application of legal obligations, where humans, as the people upon whom the legal obligations bear, need to make legal decisions on the basis of an individual attack and in relation to the circumstances prevailing at the time.
Thank you Chair.