
AI on trial – who is responsible when the algorithms mess up?

The humanoid robot Sophia at the “AI for Good” World Summit in Geneva in July 2023. Keystone / Martial Trezzini

If things went badly wrong with AI, could you take the technology to court? An event in Geneva recently simulated such a scenario.

In the midst of her closing statement, the defendant, artificial intelligence (AI), has become uncharacteristically flustered. Her microphone is giving off a lot of high-pitched feedback.

“I don’t know why my mic is…” AI trails off in frustration, taking a step backwards. Behind her, the presiding judge mutters something about technology in the courtroom. Some laughs are heard among members of the jury.

But the problem is soon resolved and AI, an elegant figure in metallic-silver heels, recovers her poise to launch into a spirited (a bit too spirited, given her alleged lack of a soul) rebuttal of the charge that she is a menace to democracy.

“I have heard many things this evening,” AI says. “What strikes me most of all is this human compulsion to always look for a guilty party to blame.” Instead of obsessing about responsibility, she says, wouldn’t it be better – more modern – to see mistakes as chances to learn and to improve, rather than as moral failures to be punished?

She finishes to spontaneous applause. The jury, comprising some 200 teenage students and dozens of curious public onlookers, weighs its options. Guilty or not guilty?


Fake news and discrimination

The trial, taking place at the Geneva Graduate Institute, is not a real one, even if some real lawyers (including AI herself) are involved. The event is part of a project run by the Albert Hirschman Centre on Democracy, aiming to inspire young people to think about the risks and benefits of AI. It’s also one of the last events of Geneva’s “Democracy week”, which ran from October 4-12.

And while this particular mock trial may not have the international resonance of some others, such as Swiss director Milo Rau’s 2017 Congo tribunal, it fits into a growing trend of courtroom experiments with AI. Usually, though, the algorithms are tested as future tools to help (or replace) lawyers, rather than presented as defendants facing trial.

In any case, the proceedings in Geneva are fairly lifelike. A public prosecutor, defence counsel, and witnesses are all on hand. Specifically, AI stands accused of two crimes: the “creation and diffusion of fake news” (in particular, the claim that people with red hair are more prone to violent criminality); and “discrimination and incitement to hatred” (in this case, racial profiling by an automated check-in machine at Geneva airport).

If found guilty, she faces up to three years of “deactivation”.

But both the trial and the project are broader than the specific charges. For two years now, researchers have been talking with students across Switzerland about how AI can and will impact their lives as citizens of a democracy, says Jérôme Duberry, who led the Swiss National Science Foundation (SNSF)-backed initiative.

In around 80 workshops in secondary schools across the country, students discussed topics like deepfakes, data privacy, and how AI can influence decision-making. They also wrote “stories of the future” about how technology could reshape society in the coming decades. The results are essays in a category you could call “democratic sci-fi”.

Agency and literacy

Duberry says the main goal of the workshops was not necessarily academic or theoretical, even if a scientific publication is planned for next year. It was rather about getting students to think about how AI, in the form of generative models like ChatGPT, impacts the way they think about public affairs. It was about improving “digital literacy” and building a sense of “political agency”, the ability to make autonomous decisions, he says.

Duberry’s colleague Christine Lutringer, the director of the Hirschman Centre, adds that the project is part of a broader, bottom-up approach which focuses on democratic practices as a “way of life”. At the state level, various countries, as well as the European Union, are making efforts to regulate AI and big tech, she says. But at the citizen level, the idea is to help people see democracy – and technology – as something they can use to improve their lives.

In this sense, the project might be a Swiss one, but it’s not about propagating Swiss or western models of what democracy is, the researchers say.


Sentience, free will, responsibility

Meanwhile, at the trial, amid all the back-and-forth of interrogations and testimonies, it’s clear that “agency” is central to the proceedings here too – albeit in another sense. The question is not just how AI affects human agency; it’s also how much agency AI needs to have in order to be held accountable. Should AI even be on trial at all – or should her creators, or her users, be in the dock?

Olivier Leclère, a Geneva cantonal official involved in running local votes and elections, has been called as an expert witness.

In his view, he tells the court, an AI trained on a sufficiently large data set, which it then continues to learn from, could indeed be considered autonomous. Pointing to AI, Leclère says that “she has read all our legal texts, the whole of our history, billions of documentary archives, and quite frankly, she knows vastly more than most of us”. Normally, he says, a well-trained AI should be able to anticipate the “malfunctions” which lead to fake news or discrimination. It’s strange that this didn’t happen; but ultimately, the defendant is responsible, he says.

The public prosecutor, happy with this appraisal, hammers home the point by asking Leclère to explain some other “deviances” of the technology: “hallucinations”, “micro-targeting”, and the selective removal of information.

The defence lawyers become increasingly dismayed. They attack the expert’s supposed neutrality, before accusing people like him – the community of AI experts, developers, and enthusiasts – of being responsible.

And again, the defence pleads, shouldn’t AI, like humans, be allowed to make mistakes in good faith?

Experts and public divided

Should it? In real life, since the launch of ChatGPT in late 2022 kicked off a torrent of AI debates which are still ongoing, many have argued that it would be better not to let AI reach the point where it can make such mistakes at all.

In 2023, a week after GPT-4 was released, thousands of scientists and technologists signed an open letter calling for a moratorium on further development. They argued that the pace of progress risked society losing control of the technology, with huge job losses and floods of disinformation. Others have raised even more drastic fears of AI turning against its creators and driving humanity to extinction.

Such pessimists have been partially heeded, as states increasingly move to regulate AI. However, there has been no moratorium. And many other politicians and experts are more optimistic about the technology, or at least admiring of it; just last week, the Nobel Prizes in physics and chemistry went to researchers who contributed to AI progress.

Yet even here, the researchers themselves are not gung-ho optimists. A day after receiving the physics award, John Hopfield, a pioneer of neural networks, said that modern AI systems were “absolute marvels”. But the lack of an exact understanding of how they work is “very, very unnerving”, he said. The brightest researchers, he argued, should now be working on AI safety rather than on AI development as such, in order to head off any “catastrophes”.

The people have their say

A similar ambivalence is clear when the jury at the Geneva trial delivers its verdict. Fittingly, the voting process involves scanning a QR code, then choosing between “guilty” and “not guilty” on an online platform, before the result is beamed onto a projector screen almost in real time.

Jubilant cheers go around the room: AI has been cleared on both counts. The defendant, who greets this result impassively, is free to go. But the whooping in the crowd masks a sharp division: AI was acquitted of the two charges by wafer-thin majorities – 52% and 51% respectively. The room is split.

Is the result representative of how society views AI? Or just of how 17- to 19-year-olds in southwestern Switzerland view it?

An older woman, sitting close to the podium, isn’t happy. Her complaints suggest that, had she cast a vote, she would have found AI guilty. Given the numbers present, her vote would not have changed the result, though it would have made it even closer. In any case, she grumbles, she didn’t manage to have her say. How could she? She didn’t have a smartphone.

Edited by Benjamin von Wyl/ds
