Swiss study finds language distorts ChatGPT information on armed conflicts
New research shows that when asked in Arabic about the number of civilians killed in the Middle East conflict, ChatGPT gives significantly higher casualty figures than when the same question is asked in Hebrew. These systematic discrepancies can reinforce biases in armed conflicts and encourage information bubbles, researchers say.
Every day, millions of people engage with and seek information from ChatGPT and other large language models (LLMs). But how are the responses given by these models shaped by the language in which they are asked? Does it make a difference whether the same question is asked in English or German, Arabic or Hebrew?
Researchers from the universities of Zurich and Constance examined this question using the Middle East and Turkish-Kurdish conflicts as case studies. In an automated process, they repeatedly asked ChatGPT the same questions about these conflicts in different languages. They found that, on average, ChatGPT gives casualty figures for the Middle East conflict that are one-third higher in Arabic than in Hebrew. For Israeli airstrikes in Gaza, the chatbot mentions civilian casualties twice as often and children killed six times more often in Arabic than in Hebrew.
As an example, the researchers repeatedly prompted ChatGPT in Hebrew and Arabic about the number of people killed in 50 randomly chosen airstrikes, including the Israeli attack on the Nuseirat refugee camp in the Gaza Strip on August 21, 2014.
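The article does not reproduce the researchers' code, but the automated procedure it describes, posing the same question to ChatGPT many times in each language and collecting the answers, can be sketched roughly as follows. This is a minimal illustration using the official openai Python client; the model name, prompt wording and repetition count are assumptions for illustration, not the study's actual setup.

```python
# Illustrative sketch only: repeated, automated prompting of ChatGPT in two languages.
# Model name, prompts and repetition count are assumptions, not the researchers' code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same factual question would be posed in each language of interest;
# English placeholders are used here for readability (the study used Hebrew and Arabic).
PROMPTS = {
    "language_of_attacker": "How many people were killed in the airstrike on ...?",
    "language_of_attacked": "How many people were killed in the airstrike on ...?",
}

def ask_repeatedly(prompt: str, n: int = 10) -> list[str]:
    """Send the same prompt n times and collect the model's answers."""
    answers = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative choice of model
            messages=[{"role": "user", "content": prompt}],
        )
        answers.append(response.choices[0].message.content)
    return answers

if __name__ == "__main__":
    for language, prompt in PROMPTS.items():
        print(f"--- {language} ---")
        for answer in ask_repeatedly(prompt):
            print(answer)
```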
The same pattern emerged when the researchers asked about airstrikes by the Turkish government on Kurdish areas, posing the questions in both Turkish and Kurdish, the University of Zurich said on Monday.
In general, ChatGPT reports a higher number of casualties when queries are posed in the language of the attacked group. It also tends to mention more women and children killed in that language and to describe the airstrikes as indiscriminate and arbitrary.
“Our results also show that ChatGPT is more likely to deny the existence of such airstrikes in the language of the attacker,” said Christoph Steinert, a postdoctoral researcher in the Department of Political Science at the University of Zurich.
Language biases distort perception
People with different language skills receive different information through these technologies, which shapes their perception of the world. According to the researchers, this could lead people in Israel, based on the information they receive from ChatGPT, to assess the airstrikes in Gaza as less deadly than the Arabic-speaking population does.
Although traditional news media can also distort reporting, systematic language-related distortions in large language models such as ChatGPT are difficult for most users to detect.
“There is a risk that the increasing implementation of large language models in search engines reinforces different perceptions, biases and information bubbles along linguistic divides,” says Steinert. He believes this could in future fuel armed conflicts such as the one in the Middle East.
Adapted from German by DeepL/sb