
How deepfakes are impacting our vision of reality

Image of digital faces. Brain Light / Alamy Stock Photo

Apps that produce highly convincing fake videos have democratised the manipulation of visual content, making it easier to influence public opinion and spread disinformation. Two of Switzerland's leading deepfake experts explain why it’s becoming easier to fool the human eye.

Abraham Lincoln used it to enhance his presidential aura; Joseph Stalin and Mao Zedong were known to erase political opponents from photographs. Image manipulation is at least as old as photography.

But whereas once only the most experienced could masterfully deceive the human eye, today it has become child’s play. All it takes is software downloaded from the internet and a few images gathered from search engines or social media, and anyone can create fake videos that spread like wildfire on the Web. Take, for instance, the viral fake video of Tom Cruise playing golf or of Queen Elizabeth dancing during her annual Christmas speech. “Nowadays, a photo is enough to create a good deepfake,” says Touradj Ebrahimi, who heads the Multimedia Signal Processing Laboratory at the Swiss Federal Institute of Technology Lausanne (EPFL).

The term “deepfake” was coined in 2017. It is a contraction of “deep learning” (machine learning based on deep neural networks) and “fake”.

For several years, Ebrahimi’s team has been focusing on deepfakes, developing state-of-the-art systems to verify the integrity of photos, videos and images that circulate on the Web. Deepfakes use artificial intelligence to generate synthetic images so realistic that they fool not only our eyes but also the algorithms designed to recognise them. They have proven capable of superimposing the faces of two different people to create a false profile or a false identity.
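
To make this concrete, the sketch below shows the shared-encoder, two-decoder autoencoder behind early face-swap deepfakes: a single encoder learns pose and expression, one decoder per person learns that person’s appearance, and swapping decoders swaps faces. This is a minimal illustration in PyTorch under assumed names and sizes, not EPFL’s actual systems.

```python
# Minimal sketch of the classic face-swap autoencoder.
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder captures pose and expression; one decoder per
# identity captures appearance. Swapping decoders swaps the face.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real face crop of person A
swapped = decoder_b(encoder(face_a))   # person A's expression, person B's face
```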

For Ebrahimi and his team, it’s a race against time and technology; with the rise of social media, information manipulation has exploded into a national security issue in many parts of the world. Millions of people, as well as companies and governments, can freely create, access and manipulate content. According to Ebrahimi, countries such as Russia, China, Iran and North Korea are considered very active in spreading fake news, including through deepfakes, both within and beyond their borders. A recent example of the growing impact of deepfakes: a European Member of Parliament was tricked by deepfake video calls that imitated Russian opposition figures in an effort to discredit Alexei Navalny’s team.

A video produced by EPFL shows how the institute is using AI and deep learning to detect manipulated videos more effectively.

Seeing is believing

A study from MIT has shown that fake news spreads as much as six times faster than truthful news content on Twitter. This makes the deepfake phenomenon particularly worrying, according to Ebrahimi. “Deepfakes are a very powerful means of misinformation because people still tend to believe what they see,” he says.

Video quality also continues to improve, making it harder to distinguish the real from the fake.

“A state with unlimited or almost unlimited resources can already create fake videos that are so real that even the most experienced eyes can be fooled,” Ebrahimi says.

Sophisticated software can still recognise the manipulations, but the EPFL professor estimates that within two to five years not even machines will be able to tell the difference between real and fake content.
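
How might such detection software work? A common recipe, sketched below in PyTorch, is to fine-tune a standard image classifier to label face crops as real or fake. This is a generic illustration of the approach, under assumed names and shapes, not the specific detectors built at EPFL.

```python
# Generic real-vs-fake detector: fine-tune an off-the-shelf CNN.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)          # pretrained weights optional
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(faces: torch.Tensor, labels: torch.Tensor) -> float:
    """faces: (N, 3, 224, 224) crops; labels: 0 = real, 1 = fake."""
    optimizer.zero_grad()
    loss = criterion(model(faces), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# The arms race the article describes: generators keep improving until
# classifiers like this one can no longer separate the two classes.
```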

For 20 years, Touradj Ebrahimi’s lab has worked on media security problems involving images, videos, audio and speech, and on verifying their integrity. Initially, manipulation was mainly a copyright issue. Later, the focus shifted to privacy and video surveillance, until the advent of social media, which contributed to a massive spread of manipulated content.

Deepfakes can bypass the detectors used to identify fakes. For this reason, Ebrahimi’s lab works on an approach called provenance technology, which anonymously determines how a piece of content was created and what manipulations were applied. “But for provenance technology to work, it has to be used by a large number of actors on the Web: from Google to Mozilla, Adobe and Microsoft to all social media, to name but a few,” says the expert. “The goal is to agree on a JPEG [image and video file] standard to be applied globally.”
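
The principle is easier to see in miniature: bind the pixels and their edit history to a cryptographic signature at creation time, so that any unrecorded change breaks verification. The Python sketch below illustrates the idea with an Ed25519 signature; the actual JPEG standardisation effort is far richer, and every name here is hypothetical.

```python
# Toy illustration of content provenance: sign the image bytes plus a
# manifest of recorded edits. Any unrecorded change invalidates the
# signature. Simplified for illustration; not the JPEG standard itself.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # e.g. held by the capture device
public_key = private_key.public_key()       # distributed for verification

def sign_asset(image_bytes: bytes, edits: list[str]) -> bytes:
    """Bind the pixels and their edit history into one signature."""
    manifest = json.dumps({"edits": edits}).encode()
    return private_key.sign(image_bytes + manifest)

def verify_asset(image_bytes: bytes, edits: list[str], signature: bytes) -> bool:
    """True only if neither pixels nor edit history were tampered with."""
    manifest = json.dumps({"edits": edits}).encode()
    try:
        public_key.verify(signature, image_bytes + manifest)
        return True
    except InvalidSignature:
        return False

photo = b"\xff\xd8...jpeg bytes..."  # stand-in for real image data
sig = sign_asset(photo, edits=["capture", "crop"])
assert verify_asset(photo, ["capture", "crop"], sig)                   # intact
assert not verify_asset(photo + b"tamper", ["capture", "crop"], sig)   # manipulated
```

This only works at scale if browsers, platforms and editing tools all produce and check such signatures, which is why Ebrahimi stresses the need for a globally agreed standard.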

More and more manipulations

At first, fake videos were mainly used to create funny clips of actors and other well-known personalities, or for video games. Some can even have positive impacts, Ebrahimi points out.

“Deepfakes have already been used in psychotherapy to alleviate the suffering of people who have lost a loved one,” he says. 

He cites a case in the Netherlands, where a grieving parent created a deepfake of his prematurely deceased daughter in order to say goodbye to her. The genealogy site MyHeritage can do something similar, “resurrecting” deceased relatives through its DeepNostalgia tool by animating their faces in photographs.

But as technology improved, deepfakes soon became an effective denigration tool, especially against women, or a way to extort money and manipulate public opinion.

They have also been used by cyber criminals to trick companies into sending them money by impersonating the CEO asking for an urgent wire transfer.

“At the moment there are only a few such manipulations, but as the technology matures we will see more and more of them,” predicts Sébastien Marcel, a senior researcher at Swiss research institute Idiap.

He explains that current deepfake technology only allows visual content to be manipulated, not audio. Voices, when not taken from other recordings, are impersonated by a professional.

“Audio fakes are still a challenge, but in the future we will see ultra-realistic deepfakes, capable of faithfully reproducing anyone’s image and voice in real time,” he says.

At that point, manipulations such as staging a fake scandal about a rival or business competitor will become easy.

Sébastien Marcel heads the Biometrics Security and Privacy group at the Swiss Idiap Research Institute, one of the few labs in Switzerland focused on biometrics research that assesses the vulnerabilities of fingerprint and facial recognition systems and strengthens them against attack. “Research on facial recognition and biometrics in general is still rather scarce in Switzerland,” says Marcel.

Denying reality

As awareness of deepfakes increases, the uncertainty over what’s real and what’s fake can have an unintended effect and create a culture of “plausible deniability” where no one is willing to take responsibility because anything could be falsified, argues researcher Nina Schick in her book Deepfakes: The Coming Infocalypse.

Real videos can also be mistaken for falsified content. In Gabon, for instance, a video of President Ali Bongo, who had been absent from the public scene for weeks due to illness, was dismissed as a deepfake and helped provoke an attempted coup by a handful of military officers.

“Deepfakes could give anyone the power to falsify anything, and if everything can be falsified then anyone can claim plausible deniability,” Schick argues. She considers this among the biggest dangers to society posed by deepfakes. 


How to fight the ‘fake news’ culture

The European Union is not taking the problem lightly. Funding initiatives such as Horizon Europe encourage research into fake videos.

“We expect to see more EU calls on deepfakes in the coming years,” says Marcel.

On a technical level, tackling deepfakes means being proactive and focusing on the vulnerabilities of systems.

“But it is not always that simple,” the Idiap researcher argues. “The academic processes for obtaining funding are slow.”

Meanwhile, the technologies behind deepfakes continue to develop at a rapid pace.

Ebrahimi and Marcel agree that to combat fake news, it is essential to raise awareness and educate people to think critically and develop a deeper sense of civic responsibility.

“We need to teach our children to question what they see on the internet and not to spread any content indiscriminately,” says Ebrahimi.
