The “godfather” of artificial intelligence has estimated the chances that the technology will destroy humanity within the next 30 years

The British-Canadian computer scientist, often presented as the “godfather” of artificial intelligence, has estimated the chances of AI destroying humanity in the next three decades.

Professor Geoffrey Hinton, who this year won the Nobel Prize in Physics for his work on AI, said there is a chance of “10% to 20%” that artificial intelligence will lead to human extinction in the next three decades, writes The Guardian.

Previously, Hinton had said there was a 10 percent chance the technology would trigger a catastrophic outcome for humanity.

Asked on BBC Radio 4’s Today program whether he had changed his assessment of a potential AI apocalypse and the one-in-ten chance of it happening, he said: “Not really, 10% to 20%. (…) You see, we’ve never dealt with things smarter than us before.”

He added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There is a mother and a child. Evolution has worked hard to allow the child to control the mother, but this is the only example I know of.”

London-born Hinton, professor emeritus at the University of Toronto, said humans would be like small children compared to the intelligence of very powerful AI systems.

“I like to think of it as: imagine yourself and a three-year-old. We [humans] will be the three-year-olds,” he said.

AI can be loosely defined as computer systems that perform tasks that typically require human intelligence.

Last year, Hinton made headlines after he resigned from his job at Google to speak more openly about the risks of unfettered AI development, citing concerns that “bad actors” would use technology to harm others. A key concern of AI safety activists is that the creation of artificial general intelligence, or systems that are smarter than humans, could lead to the technology posing an existential threat by evading human control.

Reflecting on where he thought the development of artificial intelligence would be when he began his work on the technology, Hinton said: “I didn’t think it would be where we are now. I thought at some point in the future we would get here.”

He added: “Because the situation we are in now is that most experts in the field believe that sometime, probably in the next 20 years, we will develop AIs that are smarter than humans. And that is a very scary thought.”

Hinton said the pace of development was “very, very fast, much faster than I expected” and called for government regulation of the technology.

“My concern is that the invisible hand will not keep us safe. So just leaving it to the profit motive of big companies won’t be enough to make sure they develop it safely,” he said. “The only thing that can force those big companies to do more safety research is government regulation,” The Guardian notes.

Hinton is one of three “godfathers of artificial intelligence” who won the ACM A.M. Turing Award – the computer science equivalent of the Nobel Prize – for their work. However, one of the trio, Yann LeCun, chief AI scientist at Mark Zuckerberg’s Meta, has played down the existential threat, saying that AI “could save humanity from extinction.”