
Is AI a challenge to democracy?

Imogen Foulkes

If it looks like a duck, walks like a duck, and quacks like a duck…then it’s probably a duck. The logic behind that statement seems indisputable. But what if you’re a prospective voter – a Democrat – in the United States, and you get a call from Joe Biden telling you not to vote “because it will only help the Republicans?”

It sounds like Joe Biden…but why would a Democrat president be advising Democrats not to vote? Or what if you’re in the United Kingdom, and you come across a social media recording of Sir Keir Starmer, tipped to be the next Prime Minister, verbally abusing his staff? It sounds awful…but he comes across as such a mild-mannered (whisper it, boring) man that it doesn’t seem in character.

In 2024, four billion of us have the chance to vote in elections, from the United States and the UK to the European Union, India, Russia, South Africa and Rwanda. This year is being talked about as the one in which artificial intelligence, or AI, could play a big and dangerous role.

That’s what we’re discussing on Inside Geneva this week, and that’s why, if you do come across something that looks authentic, but its message is odd (like Joe Biden telling Democrats not to vote), be suspicious…very suspicious.


Propaganda has always been with us

We all know politicians will use whatever tools they have at their disposal to get our votes. Gabriela Ramos, Assistant Director-General for Social & Human Sciences & AI Ethics at UNESCO, tells Inside Geneva that “Propaganda has always been there, since the Romans. Manipulation has always been there, or plain lies by not very ethical politicians have always been there.”

“The problem now,” Ramos cautions, “is that with the power of these technologies, the capacity for harm can be massive.”

Can the UN, in this case UNESCO, do anything to protect the democratic process from AI manipulation? Ramos is working on it: she gets unlikely partners (the US, China) in the same room to talk about regulation, and about the measures governments should be taking.

She firmly believes the risks of AI have to be tackled “upstream”. The tech companies need to be regulated, there need to be sanctions for those giving a platform to misinformation, and governments should have the power themselves to take misinformation down.

Sounds good? Well, it could be, but there’s no way any of that legislation is coming this year. In the meantime, we will have to watch out ourselves for that duck that may not actually be a duck.

“I’m worried about who’s going to win,” says analyst Daniel Warner, who will vote in the US presidential election this year. “But I’m also worried about whether my vote will count, and I’m worried about all kinds of disinformation that we see out there now. More than I’ve ever seen before.”

What’s the biggest danger?

Ways to manipulate voters are becoming ever more sophisticated, and harder to spot. Alberto Fernandez Gibaja is head of Digitalisation and Democracy at the International Institute for Democracy and Electoral Assistance (International IDEA). He’s identified some of the threats to this year’s elections.

Foreign interference, he suggests, could be significant, but at least we might hope our governments and institutions are watching out for it. But what if the manipulation, the deepfakes, the micro-targeting of voters based on their tastes, habits, or assumed opinions, is all coming from our own country, from the very people who want our votes?

Fernandez Gibaja sees the biggest danger in what he calls “the liar’s dividend”: that misinformation, false narratives, and fake pictures and videos will become so widespread that voters will, in the end, trust no one and nothing.

“We kept thinking that we can fight that with trying to correct falsehoods,” he explains, “but that’s not the problem. The problem is not a particular false statement, or semi-false statement. The problem is that once you make people question the people, institutions and processes they can trust, then there’s no way back.”

Who to trust?

That lack of trust, and the rise of the most implausible conspiracy theories, is already being seen in the US, and even, to a lesser extent, in the UK. Steve Bannon, when he was chief strategist for Donald Trump, was open about his desire to create mistrust, to “flood the zone with shit” so that voters no longer had faith in the very institutions that upheld their democratic system.

It was during the Trump era (which could soon be back with us) that “mainstream media” became a term of disparagement for traditional broadcasters and newspapers, those with journalists trained to be objective and professional.

Ironically, in this big election year, Fernandez Gibaja advises voters that mainstream media, despite the insults from disruptors like Bannon, may be our best protection against manipulation.

“Trust trustable media,” he says, suggesting outlets like the New York Times, the BBC, or the Financial Times. “Trust doesn’t mean you need to agree with the media. But you can trust their ethics; they are not going to lie, they’re not going to manipulate you with false information just to try to convince you to vote one way or another.”

The influencer you might follow on YouTube, however, who might suggest something a bit…unusual…is, Fernandez Gibaja suggests, not the best place to invest your trust.

Are we ready?

So, are we ready for this big election year? Are we armed against manipulation and disinformation? Our guests on Inside Geneva agree we have a lot of work to do, at the multilateral, national, and individual level. It’s a fascinating discussion, so do join us to hear it in full.

And when Fernandez Gibaja talks about trusting “trustable” media, there are already heartening signs that mainstream outlets are acting to take down misinformation. The “kill” notice that the world’s major news picture agencies put on Kate Middleton’s family photo is telling. It was a harmless photo; the British royal family wasn’t trying to influence anyone’s vote, just trying to damp down fevered speculation about the Princess of Wales’ health.

But the photograph had been manipulated, and the news agencies have their rules. If it’s not the real picture, it doesn’t get used. You can’t have one rule for the politicians, and another for the royals, and so the photo was killed. Similarly, the deepfake robocall of Joe Biden was spotted quickly, as were multiple pictures of Donald Trump surrounded by apparently adoring young African Americans who, it turned out, had never been anywhere near Trump, let alone promised to vote for him.

So perhaps, in the absence of any regulation of all this malignant fakery, we do have to trust the mainstream media, and trust also our “common sense”, Fernandez Gibaja advises. If it looks like a duck, quacks like a duck…but the quacking sounds a bit weird…maybe it’s not a duck after all.


If you want to start a conversation about a topic raised in this article or want to report factual errors, email us at english@swissinfo.ch.

SWI swissinfo.ch - a branch of Swiss Broadcasting Corporation SRG SSR