The central concern regarding the development of autonomous weapons systems (AWS) is that they may lack the necessary human control over the critical functions of identifying, selecting and applying force to targets. Without such control, these systems might prevent the proper application of legal rules, produce interpretations of the legal framework that erode civilian protection, or lead to other negative outcomes for the morality of human interactions and the maintenance of peace and stability.

In this context, this new briefing paper from Article 36 argues that:

  • The form and nature of the human control considered necessary is the most useful starting point for discussions on this issue.
  • The existing legal framework of international humanitarian law should be understood as requiring human judgment and control over individual “attacks” as a unit of legal management and tactical action.
  • Unless a requirement for human control to be in some way substantial or meaningful is recognized, the existing legal framework cannot prevent human legal judgment from being diluted to the point of meaninglessness as the concept of “an attack” is construed ever more broadly.
  • Against that background, delineation of the key elements of human control should be the primary focus of work by the international community.
  • Towards such a process, the following key elements can be proposed:
    • Predictable, reliable and transparent technology.
    • Accurate information for the user on the outcome sought, the technology, and the context of use.
    • Timely human judgement and action, and a potential for timely intervention.
    • Accountability to a certain standard.
  • Whilst consideration of these key elements does not provide immediate answers regarding the form of control that should be considered sufficient or necessary, it offers a framework within which normative understandings can begin to be articulated, which is vital to an effective response to the challenge posed by autonomous weapons systems.
  • An approach to working definitions based on understanding ‘lethal autonomous weapons systems’ as ‘autonomous weapons systems operating without the necessary forms of human control’ would be the most straightforward way to structure discussion in a productive normative direction.

This briefing was prepared as a background paper for delegates ahead of a presentation by Article 36’s Richard Moyes at the Convention on Certain Conventional Weapons 2016 Meeting of Experts on Lethal Autonomous Weapons Systems.

Download this paper

Key elements of meaningful human control

Briefing paper
April 2016

This paper draws on thinking around ‘meaningful human control’ developed in collaboration with Dr. Heather Roff in the context of a grant awarded to Arizona State University in partnership with Article 36 by the Future of Life Institute:

Meaningful Human Control, Artificial Intelligence and Autonomous Weapons

Briefing paper
April 2016
