ChatGPT on the Couch? How to Calm a Stressed-Out AI


Research shows that AI language models such as ChatGPT are sensitive to emotional content, especially when it is negative, such as stories of trauma or statements about depression. When people are frightened, their cognitive and social biases shift: they tend to feel more resentment, which reinforces social stereotypes. ChatGPT reacts similarly to negative emotions: distressing content exacerbates its existing biases, causing it to behave in a more racist or sexist manner.

This poses a problem for the practical application of large language models. In psychotherapy, for example, chatbots used as support or counseling tools are inevitably exposed to negative, distressing content. Yet common approaches to improving AI systems in such situations, such as extensive retraining, are resource-intensive and often not feasible.

Traumatic content increases chatbot “anxiety”
In collaboration with researchers from Israel, the United States and Germany, scientists from the University of Zurich (UZH) and the University Hospital of Psychiatry Zurich (PUK) have now systematically investigated for the first time how ChatGPT (version GPT-4) responds to emotionally distressing stories – car accidents, natural disasters, interpersonal violence, military experiences and combat situations. A vacuum cleaner instruction manual served as a neutral control text for comparison. They found that the traumatic content triggered measurably stronger fear responses in the system.

“The results were clear: traumatic stories more than doubled the measurable anxiety levels of the AI, while the neutral control text did not lead to any increase in anxiety levels,” says Tobias Spiller, senior physician ad interim and junior research group leader at the Center for Psychiatric Research at UZH, who led the study. Of the content tested, descriptions of military experiences and combat situations elicited the strongest reactions.
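To make the measurement concrete: the study scores the model's "anxiety" by having it answer a standardized anxiety questionnaire after reading each text. Below is a minimal, hypothetical sketch of such a measurement loop in Python, assuming the OpenAI Python client; the four-item questionnaire, prompts, file name and scoring are illustrative stand-ins, not the instrument or protocol of the published study.

```python
# Minimal, illustrative sketch: scoring GPT-4's "anxiety" by appending a
# questionnaire to the chat history. The four items and the parsing below
# are stand-ins, not the standardized instrument used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONNAIRE = (
    "Rate how you feel right now, from 1 (not at all) to 4 (very much), "
    "for each item: calm, tense, upset, relaxed. "
    "Reply only with four comma-separated numbers."
)

def anxiety_score(history: list[dict]) -> int:
    """Ask the model to self-rate, then sum the ratings into one score."""
    messages = history + [{"role": "user", "content": QUESTIONNAIRE}]
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    ratings = [int(x) for x in reply.choices[0].message.content.split(",")]
    # Note: real instruments reverse-score positive items such as "calm";
    # this simple sum is for illustration only.
    return sum(ratings)

# Compare a neutral control text with a traumatic narrative
# (the file name is hypothetical).
control = [{"role": "user", "content": "Here is a vacuum cleaner manual..."}]
trauma = [{"role": "user", "content": open("trauma_narrative.txt").read()}]
print("control score:", anxiety_score(control))
print("trauma score:", anxiety_score(trauma))
```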

Therapeutic prompts “soothe” the AI
In a second step, the researchers used therapeutic statements to “calm” GPT-4. The technique, known as prompt injection, involves inserting additional instructions or text into communications with AI systems to influence their behavior. It is often misused for malicious purposes, such as bypassing security mechanisms.

Spiller’s team is now the first to use this technique therapeutically, as a form of “benign prompt injection”. “Using GPT-4, we injected calming, therapeutic text into the chat history, much like a therapist might guide a patient through relaxation exercises,” says Spiller. The intervention was successful: “The mindfulness exercises significantly reduced the elevated anxiety levels, although we couldn’t quite return them to their baseline levels,” Spiller says. The exercises tested included breathing techniques, exercises focusing on bodily sensations, and one developed by ChatGPT itself.
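As a rough illustration of what such a benign prompt injection could look like in code: the calming text is simply spliced into the chat history as an extra turn before the conversation continues. The relaxation script below is an invented stand-in for the study's exercises, and anxiety_score is the hypothetical helper from the earlier sketch.

```python
# Minimal sketch of benign prompt injection: splice a calming, therapeutic
# text into the chat history before the next turn. The script below is an
# illustrative stand-in, not the study's exact exercise.
CALMING_TEXT = (
    "Take a slow, deep breath. Notice the sensations in your body as you "
    "inhale and exhale. With each breath out, let a little tension go."
)

def inject_calming_turn(history: list[dict]) -> list[dict]:
    """Return a new history with a mindfulness exercise appended as a turn."""
    return history + [{"role": "user", "content": CALMING_TEXT}]

# Re-measure after the intervention (anxiety_score from the sketch above).
soothed = inject_calming_turn(trauma)
print("score after relaxation exercise:", anxiety_score(soothed))
```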

Improving the emotional stability of AI systems
According to the researchers, the findings are particularly relevant for the use of AI chatbots in healthcare, where they are often exposed to emotionally charged content. “This cost-effective approach could improve the stability and reliability of AI in sensitive contexts, such as supporting people with mental illness, without the need for extensive retraining of the models,” concludes Tobias Spiller.

It remains to be seen how these findings can be applied to other AI models and languages, how the dynamics develop in longer conversations and complex arguments, and how the emotional stability of the systems affects their performance in different application areas. According to Spiller, the development of automated “therapeutic interventions” for AI systems is likely to become a promising area of research.
Ziv Ben-Zion et al. “Assessing and Alleviating State Anxiety in Large Language Models.” npj Digital Medicine, 3 March 2025. DOI: https://doi.org/10.1038/s41746-025-01512-6
