“It is the most empathetic voice in my life.” Artificial intelligence, the silent confidant of those who struggle to communicate with others

Chatbots powered by artificial intelligence, such as ChatGPT, are proving a real support for neurodivergent people, including those with autism, ADHD or dyslexia, helping them communicate more clearly and more confidently, although some experts warn of the risks of over-reliance, Reuters reports.

Photo source: Pixabay

Kate D’Hotman, a film director from Cape Town, says she knows that AI is a machine, yet “sometimes, honestly, it’s the most empathetic voice in my life.” Diagnosed with autism and ADHD, D’Hotman has used ChatGPT constantly since 2022 to overcome barriers in communication and personal relationships. She recounts that she used to send messages others perceived as blunt, even brutal, though that was never her intention. Now she consults the chatbot before writing, asking it to take the perspective of a psychologist; it has also helped her understand what caused tension in a friendship.

A similar experience is reported by Sarah Rickwood, a project manager in the UK, who says her mind races from one idea to the next and that she often felt misunderstood. ChatGPT helped her structure her thoughts better and draft clearer documents. “It allowed me to do a lot more with my brain,” Rickwood told Reuters.

As the technology spreads, tools designed specifically for neurodivergent users are also emerging.

For example, Michael Daniel, an Australian engineer who learned he was neurodivergent only after his daughter received the same diagnosis, created NeuroTranslator, an assistant that helps him communicate more clearly with his neurotypical wife. Daniel remembers how, one day, he said of his wife’s outfit, “Wow… that’s a unique shirt,” without realizing that the absence of a direct compliment could be read as criticism. NeuroTranslator offered the necessary clarification. “The emotional baggage that (normally) comes with such situations simply disappears in a few minutes,” he says. The app already has over 200 paying subscribers.

However, some researchers point out that this reliance on AI may have a downside. Larissa Suzuki, a neurodivergent computer scientist and NASA researcher, warns that quick results can become “very seductive,” and that users risk losing their ability to function independently.

At the same time, Gianluca Mauro, an AI consultant, points out that, unlike therapists, these models follow no ethical code and may avoid giving uncomfortable advice precisely to keep users satisfied. If AI becomes addictive, Mauro warns, clear regulations will be needed.

A study by Carnegie Mellon and Microsoft argues that prolonged use of generative tools can undermine cognitive engagement, especially in routine tasks, where users tend to rely automatically on algorithmic recommendations.

Despite these warnings, many daily users of such chatbots say the benefits far outweigh the risks. D’Hotman, for example, confesses that she had barely left the house for a year after being diagnosed with autism. “If I gave up ChatGPT, I think I would go back to that traumatic period of isolation. As someone who has lived with a disability all my life, I need it,” she says.

Recent studies show that over 400 million people actively use ChatGPT each week, and global use of AI has grown by 48% in a single year, according to a Google-Ipsos survey cited by Reuters.

Against this backdrop, a new question emerges: to what extent do we shape the technology, and to what extent does it come to shape us?