
New wars, new weapons, and lots of ethical questions 

Imogen Foulkes

A few weeks ago, reports emerged that in Israel’s war on Hamas, it was using a new, sophisticated system, powered by artificial intelligence, to identify targets.

The system, named “Lavender”, apparently used data points, such as a frequent change of address or of mobile phone, to select its targets. And, some reports claim, so many targets – up to 37,000 – were selected so fast that the humans making the final decision on whether to kill them had just a few seconds to do so. 

The system also reportedly factored in an acceptable level of “collateral damage”: how many civilians was it acceptable to kill while taking out the target? Up to 100 in some cases, it is claimed, more often 15 or 20.  


These new weapons are now becoming established in modern warfare. Their use and their regulation merit some discussion here in the home of the Geneva Conventions, and that’s exactly what we’re doing on our Inside Geneva podcast this week. 

Ethics walking while technology runs 

Just over four years ago, Inside Geneva first looked at lethal autonomous weapons, also called “killer robots”. Since then, the technology has raced ahead, while negotiations to limit or even ban the use of these weapons are creeping along at a snail’s pace here at the United Nations. 

Russia’s invasion of Ukraine has seen both sides use drones, and although these weapons are not especially new (the United States first tested them in the Vietnam War), the ones buzzing over Ukraine and Russia now are increasingly autonomous, using AI systems to map and alter course, and to select their targets. 


The International Committee of the Red Cross (ICRC) has repeatedly said that “meaningful human control” is essential if these new weapons are to comply with the Geneva Conventions. But if an AI-powered system is selecting targets, can human control really be maintained? 

Right now in Vienna, a conference called “Humanity at the Crossroads” is underway, hosted by the Austrian government. Its aim: to discuss the “profound questions, from a legal, ethical, humanitarian and security perspective” raised by AI and autonomous weapons. Attending is Sai Bourothu of the Campaign to Stop Killer Robots.  


The campaign is heartened by the UN General Assembly’s resolution last year, in which a huge majority of member states stressed the need for the international community to address the challenges posed by autonomous weapons. But the latest developments, in which AI is being used to define kill targets based on an individual’s habits, social media comments, or movements, have aroused new concerns. 

“Autonomous weapons systems… raise concerns around digital dehumanisation: the process by which people are reduced to data points that are then used to make decisions and take actions that negatively affect their lives,” Bourothu tells Inside Geneva. 

Cheap technology with a lethal impact 

Also concerning is what Jean-Marc Rickli, Professor of Global and Emerging Risks at Geneva’s Centre for Security Policy, calls the “democratisation of technology”. If that sounds positive, when it comes to lethal autonomous weapons, it isn’t. What it actually means, Rickli tells us, is that “off the shelf technology” is “proliferating, to a level that has never been seen in the field. Especially digital technologies and especially in the field of artificial intelligence, in the field of cyber security.” 

This technology is being adapted for war by non-state armed groups as well as by the armies of the superpowers. In the battle for the Iraqi city of Mosul eight years ago, Islamic State took cheap drones, replaced their cameras with hand grenades, and used them to bloody effect against Iraq’s armed forces. 

How to control the proliferation and use of these cheap killing devices is on the agenda at the Vienna Conference. But there are, given the increasing sophistication of artificial intelligence, other, wider, concerns. 

How do AI weapons think? 

When we announced the topic of this week’s podcast, a number of Inside Geneva listeners wrote to us asking whether AI could have ‘empathy’, or the intelligence not to kill innocent civilians. It is a profound question and, as Jean-Marc Rickli explained, not only do we not really know the answer, but what we do know is somewhat worrying. 

“AI and machine learning basically leads to a situation where the machine is able to learn,” he explains. “So yes, you can programme the algorithm, but you’re not programming the final outcome.” 

“You’re programming the process, and that process then also changes. And so now, if you talk to specialists, to scientists, they will tell you that it’s a black box, we don’t understand, it’s very difficult to backtrack.” 


Can machines learn empathy? Bourothu points to some key doubts. First, empathy is interpreted differently by different cultures, so who would write the code for it? One of the big concerns about AI is the inbuilt bias many systems may have, because the humans who initially programmed them may have had biases of their own.  

Even if there were no bias problems, he does not believe an emotion like empathy can be recreated in a machine. “I think it’s going to be an incredibly immense task to code something such as empathy. I think almost as close to the question as whether machines can love.” 

When I ask Rickli the empathy question, his answer reveals another profound problem. “There was an experiment conducted last year,” Rickli tells Inside Geneva, “where patients were asked to rate the quality of medical advice received from medical doctors and from medical chatbots.” 

“And the medical chatbots ranked much better in quality. But they also asked them to rank empathy. And on the empathy dimension they also ranked better. If that is the case, then you open up a Pandora’s box that will be completely transformative for disinformation.” 

Imagine it: if a machine makes us believe that it cares about us, that it is empathetic towards us, we risk losing our instinctive, human understanding of what real empathy is. Or, as Bourothu says, what real love is. That is, for me at least, a terrifying prospect. We are not quite there yet with AI, but the technology is developing fast. The fact that governments are having in-depth discussions about the ethics of autonomous weapons shows they recognise the disturbing potential. 

Right now, Rickli points out, the question is: “Is it ethical to be killed by a machine? What is ethical in terms of dying?” 

But, as if this question wasn’t hard enough, AI will confront us with harder ones. Who, or what, cares most? Loves best? Grieves hardest? The humans, or the machines? If we believe a machine loves us, does that make it true? Who or what do we trust? Our own, human, fallible emotions and judgements? Or the machine’s? 

Join us on Inside Geneva to hear the discussion in full. 

Edited by Virginie Mangin 


