Nobel laureates met last month with nuclear weapons experts to discuss AI and the end of the world.
Photo: nuclear weapons. EPA-EFE
According to Wired, the experts at the meeting seemed to broadly agree that it is only a matter of time before an artificial intelligence gets its hands on the nuclear codes. It is difficult to say exactly why this must be true, but the sense of inevitability – and anxiety – is palpable in the magazine’s report, Futurism writes.
“It’s like electricity,” Bob Latiff, a retired US Air Force major general and a member of the Bulletin of the Atomic Scientists, told Wired. “It will find its way into every area.”
It is a bizarre situation. AI has already been shown to have plenty of troubling tendencies, resorting to blackmailing users at a startling rate when threatened with shutdown.
In the context of an AI embedded in the networks that safeguard stockpiles of nuclear weapons, these poorly understood risks become enormous. And that is without even getting into the real concern of some experts, which happens to be the plot of the film “The Terminator”: a hypothetical superhuman AI that goes rogue and turns humanity’s nuclear weapons against it.
Earlier this year, former Google CEO Eric Schmidt warned that a human-level AI might no longer be motivated to “listen to us,” arguing that “people do not understand what happens when you have an intelligence at this level.”
This kind of AI pessimism has been on the minds of technology leaders for years, as reality plays out in slow motion. In their current form, the risks would probably be more mundane, because today’s best models suffer from rampant hallucinations that largely undermine the usefulness of their output.
Then there is the threat of a faulty technology that leaves gaps in cybersecurity, allowing adversaries to access the systems that control nuclear weapons.
It was difficult to reach a consensus on such a loaded subject among the members of last month’s unusual meeting, with Jon Wolfsthal, director of global risk at the Federation of American Scientists, admitting to the publication that “no one really knows what it is.”
Still, they found at least one point of agreement.
“In this area, almost everyone says we want effective human control over decision-making regarding nuclear weapons,” Wolfsthal added. Latiff agreed: “You have to be able to assure the people you work for that there is someone responsible.”
If all this sounds like a clown show, make no mistake. Under Donald Trump’s presidency, the federal government has been aggressively pushing the introduction of AI into every possible area, even though experts warn that the technology is not yet – and may never be – up to the task. In a show of bravado, the Department of Energy declared this year that AI is “the next Manhattan Project,” referring to the World War II-era project that led to the creation of the world’s first nuclear bombs.
Signaling the seriousness of the threat, ChatGPT maker OpenAI struck a deal earlier this year with the US national laboratories to use its technology for nuclear weapons purposes.
Last year, Air Force General Anthony Cotton, who is effectively responsible for the US nuclear missile stockpile, boasted at a defense conference that the Pentagon is relying more and more on AI, arguing that it “will improve our decision-making capabilities.”
Fortunately, Cotton stopped short of declaring that we should let the technology take total control.
“But we must never allow artificial intelligence to make those decisions for us,” he added at the time.