
How Meta’s fact-checking shift threatens human rights

Meta ended its US fact-checking programme in January and its CEO, Mark Zuckerberg, said he would work with US President Donald Trump to push back on censorship around the world, including in Europe. Copyright 2024 The Associated Press. All Rights Reserved.

By scrapping its fact-checking programme and relaxing its moderation policies in the United States last month, Meta risks harming already marginalised populations, as it has done in the past. The United Nations and the European Union remain a bulwark against these latest social media developments.

“It’s time to get back to our roots around freedom of expression on Facebook and Instagram,” declared Meta boss Mark Zuckerberg in a video published on January 7.

Citing years of pressure from governments and traditional media and the start of a new era marking a “cultural tipping point”, the world’s third richest man took advantage of the inauguration of President Donald Trump on January 20 to announce a series of measures aimed at combating “censorship” on his platforms.

These changes only concern the United States for now, but they have already sparked strong international reactions. Human rights groups are concerned about the harmful effects they could have on already vulnerable populations.

Zuckerberg said Meta was going to get rid of fact-checkers – recognised media companies, including Agence France-Presse (AFP) – which it believes have become “too biased”. These will be replaced by a “community notes” model like that of Elon Musk’s platform X. This system allows users to add context to controversial or misleading posts. Meta itself will not write community notes.

The Californian company will also change the way it moderates content on its platforms. From now on, only posts that are “illegal” or constitute “serious violations” of the group’s policies – linked, for example, to terrorism or the exploitation of children – will be removed automatically. Other violations will have to be reported by users, while some restrictions, such as those relating to immigration or gender, will be dropped because they are “disconnected from the dominant discourse”, according to Zuckerberg.

Real risks

For Deborah Brown, deputy director for technology and human rights at the New York-based NGO Human Rights Watch, this is an “extremely imprudent decision” by Zuckerberg.

“I’m really concerned about the impact this programme may have on human rights around the world. We know that misinformation can incite violence, hatred and even genocide,” she told SWI swissinfo.ch.


The influence of Meta’s platforms – which include Facebook, Instagram, WhatsApp and Threads – is immense. According to the company, four billion people connect to one of its services at least once a month – half of the planet.

Yet investigations by the UN, international NGOs and governments have highlighted Facebook’s role in spreading disinformation and hate speech, notably during the 2016 US elections and the Covid-19 pandemic.

Another striking example was the crisis in Myanmar in 2017. As the only source of information for many people in the Southeast Asian country, Facebook was used to encourage violence against the Rohingya Muslim minority. Their persecution by the army was described as genocide by UN investigators. Meta later acknowledged that it had made mistakes.

Brown added: “Suppressing speech that does not meet free speech standards is not the same as censorship.” She expressed concern at the lack of transparency about the impact of the company’s new policy on the prevalence of hate speech and the absence of a detailed plan for how Meta will manage the risks.

Guests for US President Donald Trump’s inauguration on January 20, 2025, included several tech bosses: (L-R) Mark Zuckerberg, Jeff Bezos, Sundar Pichai and Elon Musk. Keystone-SDA

Minority voices further isolated

Stefania Di Stefano, a doctoral student at the Geneva Graduate Institute and an expert on freedom of expression in the age of social networks, believes Meta’s decision will make its platforms “dangerous for many people”.

After appointing Joel Kaplan, a figure close to the Republican Party, as its new global policy chief and head of moderation policy at the beginning of January, the group renamed its rules on “hate speech” as rules on “hateful conduct” on January 7.

Di Stefano says these new, vaguer provisions mean that “if ‘mainstream speech’ relays insults against certain categories of people, then Meta will tolerate it on its platforms. This amounts to removing protection for people who are already marginalised”. These include LGBT people, migrants and refugees.

‘Continue the dialogue’

For years, the UN has been trying to make the social network giants aware of their responsibilities for the content they disseminate. These efforts are being made under the “B-Tech” project of the Office of the UN High Commissioner for Human Rights (OHCHR) in Geneva.

“In the human rights community we are unfortunately used to ups and downs,” says Scott Campbell, head of the human rights and digital technology team at OHCHR. “Meta had invested a lot of resources in thinking about its human rights responsibilities, and some tangible progress had been made. But we are very concerned by its recent decision.”


Meta launched its fact-checking system in 2016 in response to the misinformation scandals the group was facing. It subsequently set up an Oversight Board to independently review moderation decisions on its platforms, and has since also published a human rights report.

“Our approach is to continue the dialogue,” adds Campbell, who says he has already had a meeting with Meta since January 7. “We have expressed our serious concerns.”

Facebook vs Brussels

But the international human rights framework is not binding on companies, so it is up to governments to legislate.

“We advocate for governments to introduce regulations that are in line with their international human rights obligations. But this is not an easy task. The laws must allow everyone to participate in complete safety, without silencing opinions that may be awful but are legal,” explains Campbell.

Europe is a pioneer in this area with the EU’s Digital Services Act, adopted in 2022, which requires social media platforms to combat misinformation and illegal content.

A campaigner from the global citizens movement Avaaz wearing a mask of Facebook CEO Mark Zuckerberg holds a sign reading “Regulate me” outside the European Commission on the day the Digital Services Act was published, on December 15, 2020, in Brussels. Copyright 2020 The Associated Press. All Rights Reserved

For Jérôme Duberry, director of the Tech Hub and co-director of continuing education at the Geneva Graduate Institute, the Meta boss’s decision is above all political.

“It’s a tacit agreement with the Trump administration to align itself with a ‘laissez-faire’ approach to moderation in exchange for which the American president will fight against any form of regulation from Europe,” he says.

Meta’s fact-checking programme, which will continue to exist outside the US, is costly for the company. It could therefore seek to harmonise its practices on a global scale to make savings. But the group could then run up against the limits set by the European framework, which provides for heavy fines in the event of non-compliance.

“The test will be to see whether the EU applies its regulations firmly or, on the contrary, shows flexibility,” says Di Stefano. In a tense political climate in Europe, and faced with a protectionist and unpredictable Trump, it is not certain European countries will take on one of the heavyweights of the US economy.



Edited by Virginie Mangin/Adapted from French by DeepL/sb




If you want to start a conversation about a topic raised in this article or want to report factual errors, email us at english@swissinfo.ch.

SWI swissinfo.ch - a branch of Swiss Broadcasting Corporation SRG SSR