The following interview with Article 36 director Thomas Nash appeared in the September issue of Dazed and Confused Magazine.

The use of military drones has been the subject of debate among human rights commentators for the past two years. Is it acceptable for the perpetrator to be far removed from their target? Will it be possible to trace the culprit? How do we put limitations on their use to preserve our privacy? But compared to some of the new weapons being developed, drones are fairly straightforward. As military technology advances at pace, we earthlings find ourselves on the cusp of full-blown robotic war. Tech companies are developing autonomous weapons, or killer robots, that can be pre-programmed to enter a battlefield and select whom to exterminate, eliminating the need for human involvement. That’s why the UN called for a moratorium on the development of these weapons in May, and why parliament debated the issue in June. Thomas Nash is a member of the pressure group Article 36, which monitors and challenges the use of weapons technologies. He is also one of the co-ordinators of the Campaign to Stop Killer Robots.

Dazed Digital: What are the main differences between killer robots and drones? 

Thomas Nash: A drone, at the moment, requires a human being to make the decision to select a target before a weapon is released. An autonomous weapon would not require a human being to pull the trigger or push the button. So we’re trying to draw a line now: we should have an international treaty that prohibits such weapons and requires that in every attack a human being is looking at the target and making that decision.

DD: How close are we to killer robots becoming a reality?

Thomas Nash: There are at least five countries undertaking research in that direction: the US, UK, Russia, China and Israel. There are already anti-missile systems, such as Phalanx and Aegis, that come close to this line, but they can’t select between different potential targets.

[Image: the USA, the world’s biggest developer of unmanned military systems]

DD: Are these weapons capable of coming up with contingency plans? Are they able to adapt if circumstances change?

Thomas Nash: It’s impossible to tell, really. At the moment the only country that has a coherent policy on these weapons is the US. They released a Department of Defense directive in November last year. It has a lot of caveats and loopholes, but it essentially states that there must be appropriate levels of human judgment for any use of weapons. That’s a step in the right direction as far as we’re concerned, but it opens up the possibility of problematic developments in future. And it’s a directive that only lasts five years. At the moment, at least we have a commitment that they won’t go there with weapons that could envisage different scenarios on the battlefield and adapt to them. I guess if you’re going to make these weapons and make them worthwhile, you have to make them able to adapt to those scenarios. It wouldn’t be worth sending them out there otherwise.

DD: Why is this such a breach of our human rights? 

Thomas Nash: The agent who makes the decision about lethal force should be a moral agent. If human beings devolve the decision-making power about using force to machines, we cross a moral boundary. There’s also a concern about the ability of robots to adhere to international humanitarian law. These are rules we have placed on ourselves as human beings and societies to constrain our violent behaviour and the way we conduct wars. We’ve prohibited certain weapons and practices, and we generally try to adhere to these things. But they only work if they’re understood as decisions that human beings make on a daily basis. Every time a human being decides not to kill a civilian or not to torture somebody, they’re making a decision of their own will, and that’s what gives it power. Whereas if you say to a machine, ‘Right, implement this policy on an automatic, pre-programmed basis,’ it undermines the whole idea of human beings placing constraints on themselves. It divorces the decision from the freedom of action that gives it meaning.

AT THE MOMENT THE ONLY COUNTRY THAT HAS A COHERENT POLICY ON THESE WEAPONS IS THE US. THEY RELEASED A DEPARTMENT OF DEFENSE DIRECTIVE IN NOVEMBER LAST YEAR.

DD: So it is only a human-rights law if humans actively participate in upholding it?

Thomas Nash: Yes. But there’s another concern: the general slide towards autonomy on the battlefield, where it seems that war can be this clinical thing executed by a few political actors and their military advisers. There’s a woefully inadequate level of debate around the development of military technologies. In general they’re developed behind closed doors, with money from large contractors and acquiescence and support from government agencies. Then suddenly they’re there, and they’re being used. The arming of drones was not considered a great concern; it was not predicted when the technology was developed for surveillance and so on. But very quickly you saw an explosion of weaponised drones. If you don’t generate public debate around certain types of technologies while they’re being developed, it’s extremely difficult to hold them back once they’re there. Our hope with this campaign is to generate debate before we’ve gone down the road of fully autonomous weapons, before countries are competing with each other in a robotic arms race and we can’t stem the tide of mechanical slaughter on the battlefield.

[Image: Israel has the Harpy, a “fire and forget” autonomous weapon system]

DD: But if wars are fought by automated weapons on both sides, could it reduce the number of human fatalities?

Thomas Nash: They wouldn’t be targeting each other. They’d be targeting the enemy. If we look at the way wars have been fought throughout history, a lot of people have been involved and there have been a lot of checks and balances. A lot of people have died, a lot of people have suffered. War is an ugly thing, but it’s been in the public eye.

DD: Have you encountered anyone who’s pro-autonomous weapons?

Thomas Nash: We haven’t encountered any groups, but we have come across a few individuals who get wheeled out when we do media interviews on the BBC and CNN. I guess their line isn’t pro-autonomous weapons, but they’re sceptical, which may just be a kneejerk reaction. There are some commentators and pundits who just want to question everything that comes out of Human Rights Watch and Amnesty. There’s a roboticist called Ronald Arkin who is certainly not pro-killer robots, but he does think it’s worthwhile to consider whether there could be an ethical governor that would be able to control the actions of an autonomous robot. His view is that if we could develop an ethical governor that’s as good as a human being at implementing the rules of international humanitarian law, then we should. He’s holding out this dream of the perfect, friendly killer robot that would be able to fight wars in a clean and better-than-human way.

THE AGENT WHO MAKES THE DECISION ABOUT LETHAL FORCE SHOULD BE A MORAL AGENT. IF HUMAN BEINGS DEVOLVE THE DECISION-MAKING POWER ABOUT USING FORCE TO MACHINES, WE CROSS A MORAL BOUNDARY.

[Image: China is developing fully autonomous weapons, while cyber attacks are still its strongest weapon]

DD: Where does the world stand now on these weapons?

Thomas Nash: The big development in recent months was the debate at the Human Rights Council in Geneva in May when the UN put out a report on lethal autonomous robotics that called for a moratorium on their development. There were 26 countries that spoke and everyone recognised the need for more debate. We were disappointed that the UK was the only country to intervene in that debate and to actively reject the recommendation for a moratorium. It was not a very well-thought-through statement and they’ll probably forget it and move towards a policy that’s more in line with what we’re saying. I mean, even their biggest ally and the biggest military power in the world, the United States, gave a relatively sophisticated statement that appreciated the report and talked about the need for more debate.

DD: How do the military feel about these weapons?

Thomas Nash: In my experience of uniformed military people, they are pretty horrified that their jobs could suddenly be done by robots programmed in a lab. There is a sense that war should be fought by people who have honour and certain ways of behaving. So there’s a warrior ethos within the military forces of any country that is offended by the idea that soldiering can just be done by robots.

Follow Nathalie Olah on Twitter: @NROlah