Killer robots debate gets serious
The evolution of military technology and doctrine towards greater autonomy for machines and weapons on the battlefield has long been a concern for roboticists, ethicists and other thinkers and campaigners observing this trend. Opponents of autonomous weapons hold that the distinction between military and civilian is a determination for human beings to make, and that delegating decisions on the use of force to machines will put civilians in danger and risk undermining the frameworks that constrain armed violence. Over the course of 2012, though, the issue of autonomous weapons has begun to move into the mainstream of discussions around weapons, ethics, the protection of civilians and international humanitarian law.
In March 2012, Article 36 became the first NGO to call for an international ban on autonomous weapons, building on the work of a group of scientists and scholars who established the International Committee for Robot Arms Control in 2009 to push for a prohibition on armed autonomous unmanned systems.
In October 2012 a group of NGOs, including Article 36, met in New York and decided to work together to build an international civil society campaign calling for a comprehensive prohibition on fully autonomous weapons. Many of the people at this meeting had collaborated successfully in campaigns on other weapons issues, including landmines and cluster munitions.
In November 2012, Human Rights Watch, one of the organisations that initiated the campaign in October, issued a joint report with Harvard Law School entitled ‘Losing Humanity: the case against killer robots.’ The report is significant because it marks the first time that a civil society organisation has set out a detailed case against autonomous weapons. It is also significant because it comes from one of the most influential organisations in the field of human rights and humanitarian law.
Around the same time, the US Department of Defense issued a directive setting out US policy on semi-autonomous and autonomous weapon systems. The directive places some emphasis on maintaining ‘appropriate levels of human judgment over the use of force’. But a key problem with the directive is that it does not completely rule out the development and use of autonomous weapon systems in the future or under certain circumstances. It also says that autonomous weapon systems may be used to apply ‘non-lethal’ force, suggesting that the principle of machines making decisions about the use of force is acceptable. More broadly, the very existence of this Pentagon policy suggests that further development and use of autonomous weapons is a clear direction of travel for the US military. Other countries will no doubt be paying close attention. Noel Sharkey, the Sheffield University roboticist, has noted that China, Israel, Russia, the UK and the US are known to be currently working on autonomous weapons.
One of the most striking elements of the reactions to the HRW report and the US DOD directive was the general recognition of the serious ethical and legal problems associated with leaving decisions over the use of armed force to machines. One might have expected at least some commentators to dismiss concerns over “killer robots” as scaremongering by mad scientists and the well-meaning activists they have convinced. That was certainly not the reaction. The fairly broad media coverage focused instead on the case made by HRW against the development of fully autonomous weapons. Even reactions in the blogosphere that disagreed with HRW’s call for a prohibition indicated that there is space for a discussion of this issue among civil society, the media and policy-makers in government.
This discussion will be crucial as we prepare for a diplomatic process to address the dangers of autonomous weapons. Beyond its call for a ban on autonomous weapons, Article 36 offers the following reflections on how this issue should be taken forward.
1. A public debate on broad terms
Consideration should be given to the broadest possible set of concerns around autonomous weapons; these should not be limited to the use of lethal force, and the debate should avoid narrowing its focus to certain very sophisticated systems, recalling that simple systems may be equally problematic and closer on the horizon.
2. Moral and ethical scrutiny as well as legal debate
Whilst it will be important to consider the development of autonomous weapons against specific legal frameworks, in particular of international human rights and humanitarian law, it will also be important to engage with the moral and ethical arguments surrounding the broader role of machines in decisions to use violent force against or amongst people; it may be unwise to frame opposition to autonomous weapons too tightly around assertions about what is technically possible for robots to do or not do in the future.
3. The bigger picture of machines on the battlefield
Concerns about “killer robots” should be articulated in the context of the wider spectrum of autonomy on the battlefield; that is to say, at the same time as calling for a prohibition on fully autonomous weapons, it should be clear that separate but related concerns exist around the use of remotely-operated and semi-autonomous weapons, and that these also need to be addressed.
4. Prevention and the review of weapons technology before deployment
Ultimately, given the complex and unpredictable ways in which autonomy on the battlefield is developing and taking into account the capacity for autonomy to be a function of numerous systems operating together, it will be necessary to develop and implement more effective mechanisms for scrutinizing military technology before it finds its way onto the battlefield.