According to a new study, ChatGPT is shifting to the right on the political spectrum in how it responds to users, reports Euronews.com.
Chinese researchers have found that ChatGPT, OpenAI's popular artificial intelligence (AI) chatbot, exhibits a rightward shift in political values.
The study, published in the journal Humanities and Social Sciences Communications, put 62 questions from the Political Compass Test, an online quiz that places users on the political spectrum based on their answers, to several ChatGPT models.
The researchers then repeated the questions more than 3,000 times with each model to track how the answers changed.
While ChatGPT still leans toward "libertarian left" values, the researchers found that models such as GPT-3.5 and GPT-4 "show a significant rightward tilt" in how they answered questions over time.
The results are "noteworthy given the widespread use of large language models (LLMs) and their potential influence on societal values," the study authors said.
The Peking University study builds on earlier work published in 2024 by the Massachusetts Institute of Technology (MIT) and the Centre for Policy Studies in the United Kingdom.
Both reports found a left-leaning political bias in the answers given by LLMs and by so-called reward models, a type of LLM trained on data about human preferences.
The authors note that those earlier studies did not examine how chatbot answers change over time when a similar set of questions is asked repeatedly.
AI models should be subject to "continuous examination"
The researchers offer three possible explanations for the rightward shift: changes in the datasets used to train the models, the sheer number of user interactions, and updates made to the chatbot itself.
Models such as ChatGPT "continually learn and adapt based on user feedback," so the shift could "reflect broader societal changes in political values," the study continues.
Polarizing world events, such as the Russia-Ukraine war, could also amplify both the questions users put to LLMs and the answers they receive.
If left unchecked, the researchers warned, chatbots could begin to deliver "skewed information," which could further polarize society or create "echo chambers" that reinforce a user's existing beliefs.
The way to counteract these effects, the study authors say, is to subject AI models to "continuous examination" through audits and transparency reports, to ensure that a chatbot's answers are accurate and balanced.