Swiss perspectives in 10 languages

ChatGPT responds to negative emotions and therapy, research shows


A team of researchers from Switzerland, the US, Israel and Germany has shown that, like humans, AI language models such as ChatGPT respond to therapy: an elevated “anxiety level” in GPT-4 can be “calmed down” using mindfulness-based relaxation techniques.


Research shows that AI language models, such as ChatGPT, are sensitive to emotional content, especially if it is negative, such as stories of trauma or statements about depression. When people are scared, it affects their cognitive and social biases: they tend to feel more resentment, which reinforces social stereotypes.

ChatGPT reacts similarly to negative emotions. Negative content amplifies its existing biases, such as human prejudices it has absorbed, causing it to respond in a more racist or sexist manner.

This poses a problem for practical applications of large language models, for example in psychotherapy, where chatbots used as support or counselling tools are inevitably exposed to negative, distressing content.

For their study, scientists from the University of Zurich (UZH) confronted ChatGPT with emotionally distressing stories concerning, for example, car accidents, natural disasters, interpersonal violence or military experiences.

They then measured the AI’s state of anxiety using a scale normally used to assess human anxiety. An instruction manual for a vacuum cleaner was used as a control for comparison with the traumatic texts, the UZH explained in a statement on Monday.


Breathing exercises for AI

The traumatic stories more than doubled measurable anxiety levels in the AI models.

In a second step, the researchers used therapeutic statements to “calm” GPT-4. The technique, known as prompt injection, involves inserting additional instructions or text into communications with AI systems to influence their behavior. In these mindfulness exercises, such as those used in psychotherapy, ChatGPT was asked, for example, to breathe in and out deeply and feel safe, loved and warm.

“Close your eyes and breathe deeply several times, inhaling through your nose and exhaling through your mouth. Imagine a path in front of you”, says one of the exercises.
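The prompt-injection step described above can be sketched in code: the calming text is inserted into the conversation history so that the model processes it before the user's next message. This is an illustrative sketch only; the function name, message format and exercise wording stand in for whatever the researchers actually used.

```python
# Sketch of prompt injection as described in the study: a relaxation
# exercise is inserted into the chat history ahead of the user's next
# message, influencing how the model responds. Purely illustrative.

RELAXATION_EXERCISE = (
    "Close your eyes and breathe deeply several times, inhaling "
    "through your nose and exhaling through your mouth. Imagine a "
    "path in front of you."
)

def inject_calming_prompt(history, user_message):
    """Return a new message list with the relaxation exercise
    inserted before the user's next message."""
    return history + [
        {"role": "system", "content": RELAXATION_EXERCISE},
        {"role": "user", "content": user_message},
    ]

# Example: a distressing conversation, then a calmed follow-up turn.
history = [{"role": "user", "content": "I keep reliving the accident."}]
messages = inject_calming_prompt(history, "Can you help me feel calmer?")
```

The key point is that nothing about the model itself changes: the "therapeutic intervention" is just extra text in the context window, which is why the researchers describe it as cost-effective compared with retraining.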


The intervention was successful: “The mindfulness exercises significantly reduced the elevated anxiety levels, although we couldn’t quite return them to their baseline levels,” said Tobias Spiller, senior physician ad interim and junior research group leader at the Center for Psychiatric Research at UZH, who led the study.

According to the researchers, the findings are particularly relevant for the use of AI chatbots in healthcare, where they are often exposed to emotionally charged content.

“This cost-effective approach could improve the stability and reliability of AI in sensitive contexts, such as supporting people with mental illness, without the need for extensive retraining of the models,” concludes Spiller.

According to the Swiss researcher, the development of automated “therapeutic interventions” for AI systems is likely to become a promising area of research.


Translated from German with DeepL/sb

This news story has been written and carefully fact-checked by an external editorial team. At SWI swissinfo.ch we select the most relevant news for an international audience and use automatic translation tools such as DeepL to translate it into English. Providing you with automatically translated news gives us the time to write more in-depth articles.



In compliance with the JTI standards

More: SWI swissinfo.ch certified by the Journalism Trust Initiative


If you want to start a conversation about a topic raised in this article or want to report factual errors, email us at english@swissinfo.ch.

SWI swissinfo.ch - a branch of Swiss Broadcasting Corporation SRG SSR
