Article 36 is part of the Campaign to Stop Killer Robots, which calls for an international agreement to prohibit ‘fully autonomous weapons’ – systems that would be able to identify and attack targets without direct human control. With developments in computing and sensor technology, and the proliferation of drone warfare, the movement towards fully autonomous weapons is already under way. Such weapons would raise profound moral and legal concerns, and the Campaign is determined to put in place international legal barriers to stop them from reaching the battlefield. This short paper provides Article 36’s assessment of the likely next steps for international discussion on the issue.

Where is the issue going to be discussed?

Although this issue was first raised at the UN’s Human Rights Council, it now seems likely that discussions on autonomous weapons will take place in the UN Convention on Certain Conventional Weapons (CCW), a standing UN forum for conventional weapons regulation. States Parties to this 1980 treaty meet every year in Geneva to discuss conventional weapons that raise humanitarian concerns, and over the years they have developed five separate protocols addressing concerns ranging from weapons that leave fragments not detectable by X-ray in the human body to blinding laser weapons.

Already, some governments have suggested that the CCW would be the right place for international discussions on this matter. Other than the UN General Assembly itself, there is no obvious alternative UN forum in which governments could realistically be expected to address fully autonomous weapons. The CCW also has a relatively modest agenda at the moment, with no negotiations under way on other issues.

How long would it take?

Deliberations at the CCW have generally taken several years to reach a conclusion, and that conclusion has not always included the adoption of legal restrictions or prohibitions. That said, the last two CCW negotiations widely considered successful – the ban on blinding laser weapons adopted in 1995 and the protocol on explosive remnants of war adopted in 2003 – each produced an agreement within a couple of years of actual negotiations.

First the issue would need to be put on the table: a State Party would have to propose a mandate for discussions and secure support for it from other States Parties. After a period of discussions, a further mandate to negotiate a new protocol would have to be agreed. This would be very significant and would require agreement from virtually all of the states active in the discussions. Because the CCW generally works by consensus, in practice any member can block progress on an issue, and this has happened several times in the past (for example, proposed new rules to regulate anti-vehicle mines have been blocked).

Who would the active players be?

The CCW is not universal, but it has more than 100 States Parties and includes many of the most significant voices on military and humanitarian affairs. For this issue, the most interested CCW members would presumably be those with active research programmes on autonomous military robotics. Countries such as the US, UK, China, Russia and Israel are likely to be interested in discussions and would no doubt have military or technical experts able to speak authoritatively on the issues. Other CCW members have a tradition of active engagement on issues with a humanitarian and moral dimension, including Austria, Costa Rica, the Holy See, Ireland, Mexico, South Africa, Sweden and Switzerland. A benefit of raising this issue at the CCW is that the forum has a ready-made community of participants and funding to meet regularly. The CCW also allows international organisations, such as the International Committee of the Red Cross and UN agencies, to provide input into the debate, and it has traditionally been reasonably open to input from civil society.

What role would civil society play?

The CCW has been a good forum for civil society to engage with states on weapons issues. NGOs can intervene during the discussions, including at meetings of military experts, and can circulate working papers and host side events. Civil society has often driven the CCW’s agenda and played a key role in previous negotiations, such as those on the 2003 Protocol on Explosive Remnants of War.

Civil society would certainly be actively engaged in any CCW discussions on autonomous weapons systems and would help to raise expectations for an ambitious outcome as well as to provide technical expertise on the issues.

How might the discussion be framed?

Different parties might see advantages in framing this issue in different ways. The problem has so far been described mainly in terms of ‘fully autonomous weapons’, ‘lethal autonomous robotics’ or ‘killer robots’ – supporting an approach that seeks to define what is problematic about a type of technology and then to formulate legal restrictions on a design purpose, technical characteristic, or deliberate or inadvertent effect of that technology. Yet such an approach may be very difficult to apply across diverse technologies.[1]

An alternative approach might frame discussions in terms of the level of human control needed for an attack to be acceptable. Article 36 argues that defining and ensuring ‘meaningful human control over individual attacks’ is the critical task in keeping killer robots off future battlefields.

What are the tricky issues that might be faced?

Some of the key questions that would be contested in such discussions might include:

  • What level of human control is needed, and when? Is it sufficient for a human simply to ‘press a button’ when a computer identifies a target? How many targets might a weapon strike after a human has ‘pressed a button’?
  • What sorts of sensor data are sufficient for a computer or a human to reasonably identify an object as a valid military target?
  • Are there some systems, operating in narrowly defined roles or contexts, that should be allowed because their defensive value outweighs the risk to civilians?
  • How would an agreement be verified? If systems can operate fully autonomously but also under human control, how might we build confidence that the rules are not being side-stepped?

What sort of outcome could we expect?

An initial, informal outcome from discussions of this issue within the CCW would be strengthened international recognition that fully autonomous weapons are systems of particular concern – and a deeper understanding of the moral, legal and technological issues that need to be considered.

However, the CCW is not famous for ambitious, standard-setting results. It has often discussed issues at length without being able to proceed to actual negotiations. Both the Ottawa and Oslo processes, which led to prohibitions on anti-personnel landmines and cluster munitions respectively, were born out of the CCW’s failure to take adequate action to curb those weapons. The CCW’s consensus working method has generally meant that lowest-common-denominator outcomes emerge.

This is not to say that States Parties to the CCW could not decide to take ambitious action on fully autonomous weapons. They did so on blinding laser weapons, where the view prevailed that it was in no one’s interest to go down the road of blinding each other’s soldiers.

So the question would be whether states are willing to agree to a clear and effective outcome on fully autonomous weapons – one that would prohibit the use of weapons that can function without meaningful human control. If this does not prove possible within the CCW, then concerned states might well decide to embark on a freestanding process, with the support of UN actors and other international organisations. It is to be hoped, though, that all states will decide that this issue requires concerted and collective action. Hopefully we can all agree that we cannot let machines decide whom they are going to attack, and that we should not try to program them to do so.


[1] It might be very hard to describe the issues raised by fully autonomous weapons solely in terms of technological characteristics. Despite common media images, such systems are probably not best thought of as unitary, humanoid robots of the ‘Terminator’ type. They are more likely to be composed of a number of software and hardware components, which may not all be in one location (imagine a satellite, a computer and a weapon working together) but which together are capable of undertaking an armed attack without direct human control. Furthermore, such a system may be capable of operating fully autonomously at some times but under human control at others – making it difficult to assert that the technology is ‘in itself’ unacceptable.