Should robots fight our wars?
Imagine a world where robots, not soldiers, are sent onto the battlefield. Drones buzz above them, surgically taking them out when it is clear no humans would be harmed. It sounds like science fiction – but it also sounds quite positive. Countries can fight, sure, but without killing actual people.
Then there’s a darker scenario. Imagine high-tech weapons which can be programmed years in advance to seek out specific people, or groups of people. Facial recognition, gender, ethnicity – all of these could be used as criteria to create weapons designed to take out political opponents, unhelpful communities, irritating activists.
Semi-autonomous weapons already exist, and are in use. Think of the drone strikes the United States has used in Pakistan and Afghanistan. Targets are identified, the drones are programmed, and off they go. The targets of course are human – Washington says it prevented an imminent terrorist attack in Kabul by taking out one of the men planning it.
But these weapons – or more specifically, those programming them – are not infallible. Drone strikes have killed civilians; they have hit wedding parties. So while semi-autonomous weapons may protect the lives of the soldiers who, 20 years ago, would have had to carry out such attacks, they still kill people, and not always the people they were supposed to kill.
Do we need a treaty?
In our latest Inside Geneva podcast, we examine the complex negotiations underway here at the United Nations right now to decide whether the next generation of such weapons – the fully autonomous variety – should be regulated by treaty, or perhaps even banned. The International Committee of the Red Cross (ICRC), the guardian of the Geneva Conventions, is one of the groups urging at least some rules around the use of these new weapons.
“It’s about the risk of leaving life and death decisions to a machine process,” says ICRC senior policy adviser Neil Davison. “An algorithm shouldn’t decide who lives or dies.”
The ICRC’s position is that unpredictable autonomous weapons should be prohibited, as should those which target people. All others should remain under some sort of “meaningful human control.”
Unsurprisingly, UN member states that are well ahead in the development of autonomous weapons (the US, China, Russia, Israel) are hesitant about a treaty. They fear restrictions before the technology is even ready for widespread use.
Technology racing ahead of ethics
But those campaigning for a ban on what they call “killer robots” say that is precisely the point. We all embraced social media before we understood just what it could do in terms of data harvesting. The technology around autonomous weapons is very advanced, and Mary Wareham of Human Rights Watch worries that, as with social media, we have not fully understood the implications.
If something does go wrong, she points out, it is going to be very difficult to apportion responsibility. A robot, after all, cannot be prosecuted for war crimes. “Do you hold the commander responsible, who activated the weapons system?” she asks. “There’s what we call an accountability gap when it comes to killer robots.”
And that term “meaningful human control” concerns Paola Gaeta, professor of international law at Geneva’s Graduate Institute, who also joins us on the podcast. “What does it mean, meaningful human control? What if a weapon is used and developed without meaningful human control, what are the consequences of it? How do you ascribe responsibility?”
Slow negotiations
These are questions the diplomats, humanitarian organisations and human rights groups currently negotiating in Geneva will have to try to answer by December, when the UN Conference on Certain Conventional Weapons (or CCW) will meet to decide whether or not to open formal negotiations on a treaty.
Such negotiations typically take years, and there is little hope that a draft treaty itself can be hammered out over the next few months. But campaigners warn that time is running out – after all, the discussions around whether or not killer robots needed regulation first began back in 2013.
And the CCW doesn’t have the best track record when it comes to reaching agreement. It tried to draft treaties on landmines and on cluster munitions. In the end, member states remained so far apart that those negotiations were taken out of the UN process and agreed separately by like-minded states, eventually becoming the Ottawa Convention on landmines and the Oslo Convention on cluster munitions.
Campaigners like Wareham already suspect the talks around lethal autonomous weapons will have to take the same route. Her colleague Frank Slijper, of the disarmament group Pax for Peace, has invested years in the negotiations, and is ready to give the process more time, but he too is not optimistic.
“If we don’t have a treaty within ten years we will be too late. Technology is progressing at a much faster pace than diplomacy is doing, and I fear the worst.”
Davison of the ICRC, however, is in the negotiating room in Geneva, and he believes there is some cause for optimism. “There is increasing agreement that certain types of autonomous weapons need to be prohibited, and others need to be regulated. Clearly there’s a lot of work to do…but there’s a real opportunity. So I’m staying positive.”