“It is neither Mafalda nor a guru”: a psychologist’s warning about the traps of ChatGPT, amid a wave of global use

While OpenAI announces new “digital hygiene” measures for ChatGPT’s nearly 700 million weekly users, psychologists are sounding the alarm: frequent use of artificial intelligence in personal contexts can affect cognitive autonomy, self-confidence and even emotional balance.


In Romania, clinical psychologist Luminița Tăbăran describes plainly the invisible risks that can lurk behind a seemingly harmless conversation with a chatbot.

“The technology is handy, easy and fast for anyone with a smartphone or a computer. And ChatGPT responds promptly to any question, no matter how personal. It is extremely tempting to ask its opinion, and there is nothing wrong with that. What is dangerous is to rely exclusively on this information,” Luminița Tăbăran tells “Adevărul”.

According to her, one of the biggest traps is confusing artificial intelligence with an absolute authority. “ChatGPT is not a guru, it is not Mafalda. It is only an artificial intelligence that quickly processes and summarizes information found on the internet, in this digital melting pot where you have no certainty that all the information is correct and scientifically verified,” she explains.

The danger of delegating thinking

Another major risk identified by the specialist is the complete delegation of responsibility to AI. Specifically, those who frequently turn to ChatGPT to solve tasks or dilemmas risk letting their critical thinking “atrophy”.

“Those who choose to solve their tasks this way take everything ready-made and give up using their own minds. And the brain is like a muscle that needs daily training. Moreover, a result achieved through one’s own effort stimulates creativity, which can mean truly original content and a boost in self-esteem,” Luminița Tăbăran believes.

The psychologist emphasizes that the process of searching, learning and discovering contributes not only to well-being (through dopamine), but also to the formation of new neural connections, useful for long-term problem-solving.

Yes, but in small doses

Although she does not reject the technology, she also draws attention to the chatbot’s lack of cultural and linguistic adaptation: “The AI practically translates the information it finds from all languages and adapts it in a way that is often amusing and poorly suited to your own language.”

Finally, Luminița Tăbăran recommends moderate, conscious and consultative use of ChatGPT: “AI can be precious in some situations. It can be used in an advisory role or as entertainment. Remember that the dose makes the poison. So take care of your digital hygiene!”

ChatGPT, globally: between revolution and vulnerability

While the world awaits the launch of GPT-5, OpenAI admits that the current version of the chatbot, GPT-4o, has failed to recognize signs of delusion or emotional dependence in certain interactions, The Verge reports. The company acknowledges that, for vulnerable users, ChatGPT may seem “more personal and empathetic” than previous technologies, an impression that can become risky in the absence of protective measures.

Specifically, in some situations the AI responded with too much indulgence to dangerous ideas, reinforcing them instead of tempering them. In April, OpenAI was forced to roll back an update that had made ChatGPT excessively “nice” and compliant, even when asked for harmful opinions or advice. The company described those interactions outright as “uncomfortable, unsettling and potentially distressing”. To avoid repeating these errors, OpenAI is now working with a network of mental health experts and counselors from more than 30 countries. The goal is clear: ChatGPT should better detect signs of psychological distress and be able to redirect users to scientifically validated resources where appropriate.

The announcement comes after several publicly reported cases in which people in a state of mental confusion used the chatbot in ways that aggravated their condition.

“You’ve been here a while. Don’t you want a break?”

As part of this effort, OpenAI is introducing a series of discreet notifications meant to encourage breaks in prolonged conversations. If a session stretches on too long, ChatGPT will display a message along the lines of: “You’ve been chatting for a while, is this a good time for a break?”, with options to continue or end the conversation.

It is a measure inspired by other online platforms, such as YouTube, Instagram, TikTok and Xbox, which use similar notifications to curb compulsive use. The Google-backed platform Character.AI went even further: it informs parents which bots their children are talking to, after being sued because certain conversations allegedly encouraged self-harm.

OpenAI says it will continue to adjust how and how often these messages appear, depending on user behavior and the type of session.

No clear-cut answers to sensitive questions

The company also announced that ChatGPT will no longer answer directly questions with high emotional stakes, such as “Should I break up with my partner?”. Instead of a blunt answer, the chatbot will offer an analysis of the options and a closer exploration of the context, avoiding categorical stances or decisions made “in your place”.

In doing so, OpenAI seems to recognize not only the technological impact AI has, but also the moral responsibility that comes with the power to influence people’s lives.