More than one thousand high-profile artificial intelligence (AI) experts, scientists and other professionals today called for autonomous weapons to be banned. They highlighted the urgency of a pre-emptive legal prohibition on weapons that “select and engage targets without human intervention”, operating “beyond meaningful human control”. Given that the development of such weapons is within reach, the risks of an “AI arms race” are grave, according to their open letter, presented at the International Joint Conference on Artificial Intelligence in Argentina. The call was endorsed by signatories including Stephen Hawking, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and Google DeepMind chief executive Demis Hassabis.

States must act now to develop an international legal prohibition on lethal autonomous weapons systems, before the speed of technological development overtakes diplomatic processes. The international community has been discussing autonomous weapons within the framework of the Convention on Certain Conventional Weapons at the UN in Geneva for the past two years. Most states present at these discussions reject the idea of weapons systems operating without human control. However, they have not yet seized the opportunity to address the issue of increasing autonomy in weapons systems. States must formally recognise the principle that meaningful human control over individual attacks must be maintained, and develop an international prohibition of lethal autonomous weapons systems on this basis.

The UK government has stated that weapons systems must remain under human control. However, it has so far failed to elaborate a detailed policy on how meaningful human control is, and can be, maintained over weapons systems that the UK uses or intends to use. States must take this step in order to inform international discussion as well as to evaluate their current conduct. The UK has also stated that new international law is unnecessary to respond to this threat, despite global developments.

With Professor Heather Roff of the University of Denver, Article 36 is undertaking a project to develop the concept of meaningful human control, funded by the Future of Life Institute (FLI). The FLI coordinated the open letter released today, which Professor Roff also assisted in drafting and spoke to the BBC about. Our project will build a dataset of existing and emerging semi-autonomous weapons, to examine how autonomous functions are already being deployed and how human control is maintained. The project will also bring together a range of actors, including computer scientists, roboticists, ethicists, lawyers, diplomats and others, to feed into international discussions in this area.

Today’s open letter has received wide media coverage, highlighting broad concern about the issue of autonomous weapons and a growing body of global opinion in favour of a pre-emptive ban.

Read more

Article 36 publications on autonomous weapons and meaningful human control


Future of Life Institute funding study of meaningful human control

Review of latest CCW talks in Geneva and interventions by Article 36

Future of Life Institute

UK position on autonomous weapons

Campaign to Stop Killer Robots


Photo: Flickr/Qinetiq

Posted in: Autonomous weapons