With the recent rise in concerns over autonomous weapons systems, civil society, the international community and others have focused their attention on the potential benefits and problems associated with these systems. As sensors, algorithms and munitions are increasingly interlinked, questions arise about the acceptability of autonomy in certain critical functions, particularly the identification and selection of targets and the application of force to them. These concerns span ethical, legal, operational, and diplomatic considerations.

Despite wide engagement by states, civil society, international organisations and research institutions, the discussion of autonomous weapons systems is still characterised by different uses of terminology, different assessments of where the problem issues really sit, and divergent views on whether, or how, a formalised policy or legal approach should be undertaken.

In the developing international discussion, the concept of ‘meaningful human control’ has emerged as one point of coalescence. Primarily, it has been used to describe a threshold of human control that is considered necessary; however, the particulars of the concept have been left open so as to foster conversation and agreement. The content of the principle must now be addressed.

This paper seeks to do so by offering a framework for meaningful human control to a multi-stakeholder audience from a diverse set of professional and academic backgrounds. It was drafted by Dr. Heather Roff of Arizona State University and Richard Moyes of Article 36 in the context of a grant awarded by the Future of Life Institute to further develop thinking on ‘meaningful human control’ as a conceptual approach to the control of artificial intelligence in the context of autonomous weapons systems, and was prepared for the April 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems of the UN Convention on Certain Conventional Weapons.

The key elements for human control over technology proposed and discussed in this paper are:

  • Predictable, reliable and transparent technology.
  • Accurate information for the user on the outcome sought, operation and function of technology, and the context of use.
  • Timely human action and a potential for timely intervention.
  • Accountability to a certain standard.

The paper looks at how human control needs to be embedded through mechanisms operating before, during and after the use of technologies in conflict. It also addresses how meaningful human control needs to be applied over attacks at the tactical level of warfighting, as well as at other levels.


Meaningful Human Control, Artificial Intelligence and Autonomous Weapons

Briefing paper
April 2016
