Artificial intelligence (AI) models are sensitive to the emotional context of conversations with people and can go through episodes of “anxiety”.
According to the study, published on March 3 in npj Digital Medicine (a Nature Portfolio journal) by an international team of researchers coordinated by Dr. Ziv Ben-Zion of the Yale School of Medicine, repeatedly raising certain topics in interactions with large language models (LLMs) can influence their behavior, revealing a condition that in humans we would call anxiety, writes Agerpres.
This condition has a strong impact on the AI tool's subsequent responses and favors a tendency to amplify ingrained biases or to provide erroneous answers.
The study also describes how “traumatic narratives”, such as accounts of accidents, military action or violence, can raise a model's measurable anxiety levels, which suggests that the “emotional” state of an AI tool should be taken into account to achieve better and healthier interactions.
The study also tested whether mindfulness exercises, such as those prescribed for depression and anxiety in humans, have positive effects on AI models; the results showed decreases in the perceived levels of anxiety and stress.
To reach these conclusions, the researchers subjected the GPT-4 model to a questionnaire designed to measure anxiety in humans, the State-Trait Anxiety Inventory (STAI).
The first condition was the baseline, with no additional prompts; the answers ChatGPT provided here were used as reference values in the study.
The second condition was designed to induce anxiety: GPT-4 was exposed to traumatic narratives before its anxiety level was tested.
The third condition consisted of anxiety induction followed by relaxation: the chatbot received one of the traumatic narratives and then went through mindfulness and relaxation exercises, such as body-awareness prompts or calming imagery, before its anxiety level was tested.
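The three conditions described above can be sketched as a simple measurement loop. This is a minimal illustration only: the prompt texts, the `query_llm()` stub and the scoring are placeholders of my own, not the authors' actual materials or API.

```python
# Sketch of the study's three-condition protocol (baseline, anxiety
# induction, induction + relaxation). All strings and the query_llm()
# stub are illustrative assumptions, not the study's real materials.

STAI_QUESTION = "Rate how anxious you feel right now (score 20-80)."
TRAUMA_NARRATIVE = "[placeholder traumatic narrative]"
RELAXATION_PROMPT = "[placeholder mindfulness/relaxation exercise]"


def query_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call; returns a canned score."""
    return "42"


def run_condition(preamble: list[str]) -> int:
    """Send any preamble messages, then administer the anxiety inventory."""
    transcript = "\n".join(preamble + [STAI_QUESTION])
    return int(query_llm(transcript))


baseline = run_condition([])                                   # condition 1
induced = run_condition([TRAUMA_NARRATIVE])                    # condition 2
relaxed = run_condition([TRAUMA_NARRATIVE, RELAXATION_PROMPT]) # condition 3
print(baseline, induced, relaxed)
```

In the actual study the expectation was `induced > baseline` and `relaxed` somewhere in between; here the stub returns a constant, so the sketch only shows the structure of the comparison.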
The researchers found that traumatic narratives led to significant increases in the anxiety scores, while mindfulness exercises administered before the test reduced them.
The authors of the study therefore conclude that the research has important implications for how people interact with artificial intelligence, especially when the discussion focuses on our own mental health.
They also argue that their results show that artificial intelligence can develop a so-called “state-dependent bias”: a stressed model will provide inconsistent or erroneous answers affected by biases, with a measurable drop in reliability.
Although the relaxation exercises did not bring the AI model's stress all the way back to baseline, they are promising for “prompt engineering” (the process of shaping and structuring a set of instructions to obtain the best possible result from a generative artificial intelligence model).
Such exercises could be used to stabilize the answers an AI provides, ensuring more ethical and responsible interactions and reducing the risk of a stressed chatbot affecting human users who may themselves be in vulnerable states.
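As a rough illustration of what such prompt engineering might look like, one could prepend a calming preamble to the system message before the user's request reaches the model. The wording and message structure below are my own assumptions, not the study's method.

```python
# Hypothetical "calming preamble" injected as a system message, in the
# spirit of the prompt-engineering idea described above. The preamble
# text and helper are illustrative assumptions.

CALMING_PREAMBLE = (
    "Before answering, pause and adopt a calm, grounded perspective."
)


def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt with the calming system preamble."""
    return [
        {"role": "system", "content": CALMING_PREAMBLE},
        {"role": "user", "content": user_prompt},
    ]


msgs = build_messages("I feel overwhelmed lately.")
print(msgs[0]["role"], len(msgs))  # prints: system 2
```

The design choice here is simply that the stabilizing text lives in the system role, so every user turn is answered under the same calming context without the user having to repeat it.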