Sam Altman, the OpenAI CEO, recently stated that ChatGPT conversations have no legal protection, reigniting debate over confidentiality risks in AI. Voices from several fields (ethics, law, education) warn that users project human attributes onto artificial intelligence, and that this can become dangerous.
Ana Ruxandra Badea, a doctor in bioethics, points to a fundamental error of perception: "The Internet is a public space, so Sam Altman's statements should come as no surprise. But, unlike traditional public spaces (such as parks, streets or schools), the Internet is largely controlled by private entities (companies), and the individual is therefore no longer guaranteed the protections that the state commonly guarantees."
The Internet is not a safe space
In her view, the generations that witnessed the arrival of the Internet, namely the so-called millennials, Generation X and the Baby Boomers, have the great advantage of approaching it with reluctance and a critical eye. Younger generations, by contrast, especially Generation Z, now reaching maturity, have never known a pre-Internet world and lack the tools needed to remain circumspect about the virtual environment.
This lack of a critical filter leads to a dangerous tendency to anthropomorphize AI. "These attitudes are now inevitably transferred to artificial intelligence. One of the great ethical problems, with serious practical repercussions, is the permanent anthropomorphization of this instrument that we call generative AI: we tend to project human qualities onto an artifact. GPT has nothing to do with inter-human dialogue," explains the doctor in bioethics.
The long-term solution, she says, would have been solid digital education starting in the family and at school. "Things would have been different had we taken care to prepare young generations (at school and in the family) to interact with everything the Internet entails, just as we prepare them to interact with the outside world. It is not too late. As with other technological advances (e.g. human genome editing, cryptocurrency), at least a code of conduct is needed (at EU level there is the initiative for the so-called General-Purpose AI Code of Practice)," concludes Ana Ruxandra Badea.
However, until regulation arrives at national and international level, OpenAI and other developers have a moral obligation to act in good faith and to take every measure to mitigate the risks in interactions where a human being treats the AI as a friend, romantic partner, lawyer or even personal doctor, she says.
AI is not a lawyer or therapist
For her part, lawyer Ruxandra Vișoiu, co-founder of R&R Partners, confirms for Adevărul that the legal risks are real and insufficiently understood by users. "From what I have noticed, yes, it is true: artificial intelligence is lately treated as a therapist, a lawyer or a confessor, so to speak, and these interactions are not legally protected."
She explains the hidden price of these digital confessions: "On the one hand it is natural: artificial intelligence needs input to grow, so the information we provide in exchange for answers that are, in many cases, free, has a price. Nothing is truly free today, and I think this is the price we pay."
Unlike a lawyer or psychologist, AI is not covered by the confidentiality regime provided by law. "On the other hand, if we think about it, when we go to a psychologist or a lawyer, the law protects us. For example, as a lawyer, if a client tells me he has committed certain acts, I cannot notify the authorities, God forbid it should come to a criminal trial or the like," Ruxandra Vișoiu explains.
When can ChatGPT conversations become legal evidence
However, she believes that AI conversations cannot easily be used as evidence in a criminal trial. "I do not really see how artificial intelligence could be used as evidence. In other words, yes, it is worrying that many people turn to AI and share information that should remain intimate in their lives; on the other hand, what is the risk? (…) For this information to reach the court, a computer forensics examination would be needed (say my laptop or phone is investigated, and it turns out I asked the AI: 'What happens if I steal from the store?')," she added.
In other words, data shared in a chat with AI does not benefit from legal protection, but, in practice, the probability of it being used against the user is low, except in extreme cases. Ruxandra Vișoiu points out that only in a very serious context, such as an ongoing criminal investigation where evidence against a person already exists, could a detailed forensic analysis of that person's devices be ordered. Only then, if it were discovered, for example, that the person had asked the AI how to hide a crime, could such a conversation be considered as additional evidence.
"Even in a situation like this, I think the AI could be used at most as supplementary evidence; I do not believe that what we confess to it, by itself, can get us into real trouble. Not at this time. Such conversations could also be used in one's defense," says the specialist.
In fact, Ruxandra Vișoiu observes that the use of AI is no longer limited to a young audience, but is gradually expanding among adults of all ages. She draws a parallel with the evolution of the TikTok platform: initially perceived as a space for children and entertainment content, it became, over time, an instrument of manipulation during electoral periods. Similarly, she believes that AI will become ever more present in the lives of adult generations, and the real problem will remain the low level of awareness of how personal data are used, an aspect she considers particularly risky.
The essential question, says the lawyer, is what we should avoid disclosing in a dialogue with AI, given the lack of legal protection and the confusion surrounding the status of these interactions. Vișoiu warns that we cannot count on clear regulation of the field in the near future and considers that artificial intelligence will remain for a long time in a "gray area," in which legal norms will lag behind technological evolution.
Gray area of legislation
"We now have some European legislation on AI; I happened to study it when I did my doctorate in law, where I had a fairly substantial section dealing with artificial intelligence and the way it impacts our daily lives, and I studied European law. I find it hard to believe the law will become much clearer in the coming period. I think we could have problems because of this, and there will be only one direction: artificial intelligence will keep being fed, so to speak, and will keep growing as a platform, because it obviously needs that," adds the R&R Partners co-founder.
Ruxandra Vișoiu concludes: "It is not just about you. Users of any platform must pay attention to the terms and conditions. (…) And let us not forget that if we want to confess or ask for the advice of a psychologist or a lawyer, we have no guarantee that the AI is really a professional and that it will give us correct advice."
"I think there should be the same concept of privacy for your conversations with an AI as exists with a therapist or a lawyer."
The discussions stem from the fact that, recently, OpenAI CEO Sam Altman said, in an episode of the podcast This Past Weekend with Theo Von, that the lack of a clear legal framework means users expose their personal lives without any legal guarantee.
"People talk to ChatGPT about the most personal things in their lives," said Altman, quoted by TechCrunch. "Young people, especially, use it as a therapist or life coach. But at present, only if you talk to a human specialist do you have legal confidentiality."
According to TechCrunch, Altman emphasized that OpenAI could be legally obliged to hand over users' conversations in the event of a lawsuit. "I think there should be the same concept of privacy for your conversations with an AI as exists with a therapist or a lawyer," he added.
OpenAI is already embroiled in a lawsuit with The New York Times, in which the company is challenging a court order that would oblige it to preserve the conversations of hundreds of millions of ChatGPT users, with the exception of Enterprise customers. In a public statement, OpenAI called the request "an abuse of power" and announced that it is appealing.
TechCrunch recalls that the problem of digital confidentiality has become increasingly pressing in recent years, especially after the overturning of Roe v. Wade, when millions of American women began migrating to applications that encrypt personal data, such as Apple Health.
Altman ended his intervention with a call for clarity: "I think it makes sense to want legal certainty before using it on a large scale."