What artificial intelligence can do for us and against us: “It can create the illusion of real competence”

OpenAI has just launched GPT-5, the artificial intelligence model it describes as “the most intelligent, fastest and most useful so far.” But as the tools become more capable, the challenge shifts from their skills to our competence: how do we use them, what questions do we know how to ask and, above all, how aware are we of the limits and risks of these instruments?


From failed recruiting systems and automated errors with devastating impact on families, to students and decision-makers who confuse fluency with intelligence, the risks of AI are becoming ever clearer.

The real risks of hyper-automation can be seen clearly in concrete examples. “Amazon tested a recruitment system that, based on historical data, learned to discriminate against women, because the models were trained on CVs from a technology environment dominated by men. The result: the algorithm perpetuated and amplified the existing bias,” explains Alexandru Dan, an artificial intelligence expert, CEO of TVL Tech and lecturer at the Academy of Economic Studies (ASE) in Bucharest, for Adevărul.

Another case, he adds, is the one in the Netherlands, where an automated system for granting benefits and detecting fraud systematically discriminated against families of foreign origin. This led to thousands of false accusations and dramatic consequences for those affected, including the loss of homes and jobs. “Such situations have accelerated the need for a clear legislative framework. The AI Act, the European regulation on artificial intelligence, will impose strict requirements for transparency, auditing and risk assessment, precisely to prevent such cases from recurring and to protect citizens’ fundamental rights,” the specialist warns.

In his opinion, there is a sufficiently high risk that teachers, students, parents or decision-makers will confuse the linguistic fluency of GPT-5 with authentic understanding, discernment or consciousness. “It is a significant risk, because the linguistic fluency of GPT-5 can create the illusion of real competence. The model does not think, has no experience and does not understand concepts the way a human does. It predicts words based on patterns in data, without an authentic understanding of the subject. This is linked to the phenomenon called model hallucination, when the AI generates false information but presents it with maximum confidence. These hallucinations can look perfectly credible and can go unnoticed, especially among users without training in checking sources,” says Alexandru Dan.

He draws attention to the consequences, which can be extremely serious. People can integrate these errors into projects, official documents or critical decisions. “Without AI literacy and critical thinking, the public risks taking fabricated content as absolute truth, with a potential negative impact on education, the economy and governance,” the lecturer warns.

At the same time, Alexandru Dan believes that Romania is not ready to manage these risks, either institutionally or educationally. “At the legislative level there is the European AI Act framework, but clear local procedures for applying it in administration, education and justice are lacking. Most public institutions do not have specialists capable of auditing or checking AI systems before implementing them. In education, digital literacy and critical thinking are not yet systematically integrated into school curricula. Most teachers have no specific training in using AI in the classroom or in identifying hallucinations and problematic patterns. Thus, students and teachers risk becoming passive users without verification mechanisms,” he believes.

In the legal and administrative spheres, the expert adds, delegating critical decisions to AI without human evaluation can generate errors with serious effects. The lack of mandatory protocols for validating automatically generated content amplifies these risks, especially in contexts where speed is prioritized over accuracy.

The problem is aggravated because AI evolves in months, with exponential jumps in capability, while institutions adapt at their classic rhythm of years. This difference in speed creates a dangerous gap and requires a fundamental change of approach. “Without rapid and coordinated adaptation, Romania and Europe will be strongly impacted economically, socially and politically,” he adds.

However, in the right hands, GPT-5 can be a powerful tool. “Three years ago, AI was at a level comparable to secondary school. Today, GPT-5 reaches performance equivalent to an undergraduate or even a doctoral level in many fields,” says Alexandru Dan.

The model can quickly synthesize data, draft business documents, develop complete strategies (with SWOT analyses, financial projections and scenarios) and automate professional communication. And not only that.

“It can also automate professional communication, generating negotiation emails, customized responses and messages for partners, with tone and structure adapted to each situation. It can likewise simulate risk and opportunity scenarios, offering proactive recommendations for reducing vulnerabilities and maximizing the company’s competitive advantages. And it can build business training programs, adapting the materials to each learner’s level of knowledge, objectives and learning pace,” argues Alexandru Dan.

The prompt that changes everything

Few know this: ChatGPT can be taught to write its own prompt before answering. If you ask it to do this, the quality of the answers improves visibly: they become clearer, more precise and better adapted to your needs, according to the specialist, who points out that most of the time people simply type their question directly. “But if instead you first ask it to create the ‘ideal prompt’ for itself, it will give you a much better structured answer.”

For example, you can write: “ChatGPT, create a perfect prompt using this framework: role, objective, audience, tone, task, format, constraints, humanization.

Context: I want a marketing strategy for my company. My role is B2B strategist, the objective is to increase engagement by 50%, the audience is made up of tech founders, the tone is bold, and the constraints are: no generic tips, only actionable ideas.”

The result? “A set of instructions ready to use in a new conversation, which allows it to deliver much more relevant ideas and solutions. The secret is simple: offer as much context as possible about what you want to obtain. The more it knows about you, your goals and your constraints, the better it can answer,” concludes Alexandru Dan.
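For readers who use language models through code rather than the chat window, the framework the expert describes can be assembled programmatically. The sketch below is a minimal illustration, not anything from the article: the helper name `build_meta_prompt` and the field list are our own assumptions, and the resulting text would be sent to any chat model of your choice.

```python
# Hypothetical helper (not from the article): assemble a meta-prompt that asks
# the model to write its own "ideal prompt", using the expert's framework:
# role, objective, audience, tone, task, format, constraints, humanization.

FRAMEWORK_FIELDS = ["role", "objective", "audience", "tone",
                    "task", "format", "constraints", "humanization"]

def build_meta_prompt(context: str, **fields: str) -> str:
    """Build a meta-prompt asking the model to draft its own ideal prompt."""
    unknown = set(fields) - set(FRAMEWORK_FIELDS)
    if unknown:
        raise ValueError(f"unknown framework fields: {sorted(unknown)}")
    lines = [
        "Create the ideal prompt for yourself using this framework: "
        + ", ".join(FRAMEWORK_FIELDS) + ".",
        f"Context: {context}",
    ]
    # List only the fields the user actually filled in, in framework order.
    lines += [f"{name.capitalize()}: {fields[name]}"
              for name in FRAMEWORK_FIELDS if name in fields]
    return "\n".join(lines)

# The article's marketing example, expressed with the helper:
meta = build_meta_prompt(
    context="I want a marketing strategy for my company",
    role="B2B strategist",
    objective="increase engagement by 50%",
    audience="tech founders",
    tone="bold",
    constraints="no generic tips, only actionable ideas",
)
print(meta)
```

The output is exactly the kind of text the expert suggests pasting into a new conversation; fields left unspecified (here, task, format and humanization) are simply omitted, leaving the model free to propose them itself.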

Personally, I have been a Linux fan for as long as I can remember. When I write prompts for GPT-5, I can’t help but think of the classic terminal and the “sudo” command in CrunchBang, Ubuntu or Manjaro. An old joke circulates in the open-source world: “sudo make me a sandwich.” It seems that everything comes down to the right command. But to give a good command, you must first understand how the system works.

The same goes for AI. If you do not know how to use it, it is not a tool, it is a trap. In the absence of a legal framework, digital literacy and critical thinking, technology becomes a double-edged blade: it can write flawless reports... or it can destroy lives.

Romania, and in fact humanity, has a choice: either learn to control artificial intelligence, or be controlled by it.