If AI Gets To Human-Level Intellect, There Is A '50/50 Chance Of Catastrophe' For Humanity

Paul Christiano
Former researcher at OpenAI and research associate at the Future of Humanity Institute

For years, AI was a dull, quiet and rather boring subject, and far from the hype, at least to most people who don't work in the field.

But when OpenAI introduced ChatGPT, the product quickly captivated the tech world and made other tech companies scramble to seek their own solutions.

Thanks to generative AI's ability to do a wide range of tasks, from writing poetry and technical papers to novels and essays, AI-powered chatbots built on large language models have since become a viral sensation on the web and beyond.

Some experts fear, and others excitedly hope, that the rise of generative AI can pave the road towards AGI (artificial general intelligence), where computers can accomplish tasks at a human level of intellect. A former key researcher at OpenAI believes this as well.

Paul Christiano, who ran the language model alignment team at OpenAI, said that he believes there is a chance that AI will take control of humanity.

Or worse, maybe destroy it along the way.

Paul Christiano.

In a Bankless podcast, Christiano said:

"I think maybe there's something like a 10-20% chance of AI takeover, [with] many [or] most humans dead [...] "

"I take it quite seriously."

Christiano, who went on to lead the Alignment Research Center, a non-profit aimed at aligning AI and machine-learning systems with "human interests," said that he's particularly worried about what will happen when AI reaches the logical and creative capacity of a human being.

"Overall, maybe we're talking about a 50/50 chance of catastrophe shortly after we have systems at the human level."

Christiano is just one of the many scientists and researchers who signed an open letter urging OpenAI and other companies racing to build faster, smarter AI to hit the pause button on the technology's development.

They all worry that the breakneck development of AI by overly ambitious companies, led by the likes of OpenAI and Microsoft, which seek to benefit from the hype, could present an existential danger to humanity.

"I sympathize with people in AI that who are skeptical about that."

"I think it's going to be, like I think we can get to the point where there's kind of consensus that we do need to slow the risk is unacceptable that the benefits of going faster are not that large compared to what's at stake."

Read: Paving The Roads To Artificial Intelligence: It's Either Us, Or Them

"My best guess for if there is something like an AI takeover [...] my best guess is that an AI catastrophe occurs in a world where AI systems are deployed extremely broadly, and where it's kind of obvious to humans that we're putting our fate in the hands of AI systems."

The possible fate of humanity stems from the fact that the very thing that makes AI smart is its training material.

Data sets to train AI on are plentiful. But unfortunately, many of them are sourced from the internet, a public space abundant with opinion, hatred, racism, and other traits that reflect humanity's biases.

Like a baby, an AI is fed information without really knowing what to do with it. It learns by trying to achieve certain goals through trial and error, gradually working out what "correct" means. A minimal sketch of that trial-and-error loop is shown below.
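To make that concrete, here is a loose, hypothetical illustration of trial-and-error learning, not any specific lab's method: a tiny "bandit" learner in Python that starts with random guesses and converges on the "correct" action purely from reward. All names and numbers (TRUE_REWARDS, EPSILON, the step count) are made up for this sketch.

```python
# A minimal, hypothetical sketch of trial-and-error learning:
# an epsilon-greedy agent that slowly learns which action is
# "correct" from reward alone, never seeing the answer directly.
import random

TRUE_REWARDS = [0.2, 0.5, 0.8]          # hidden payoff of each action (unknown to the agent)
estimates = [0.0] * len(TRUE_REWARDS)   # the agent's running estimate per action
counts = [0] * len(TRUE_REWARDS)        # how often each action has been tried
EPSILON = 0.1                           # how often the agent tries a random action

for step in range(10_000):
    # Explore with probability EPSILON, otherwise exploit the best current estimate.
    if random.random() < EPSILON:
        action = random.randrange(len(TRUE_REWARDS))
    else:
        action = max(range(len(TRUE_REWARDS)), key=lambda a: estimates[a])

    # The environment returns a noisy reward; the agent never sees TRUE_REWARDS.
    reward = 1.0 if random.random() < TRUE_REWARDS[action] else 0.0

    # Update the running average reward for the chosen action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned estimates:", [round(e, 2) for e in estimates])  # converges toward TRUE_REWARDS
```

After enough trials, the agent's estimates approach the hidden payoffs and it mostly picks the best action, yet at no point was it told what "correct" meant. Real-world systems are vastly more complex, but the same reward-driven loop sits at the core.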

This kind of approach has allowed machine-learning technologies to make huge leaps. But at the same time, some scientists believe that sooner rather than later, the increasing processing power of computers will allow machines to become sentient.

And if that ever happens, machines could have a sense of self, just like humans.

Without proper knowledge of how to control AI, many researchers fear that by that time, machines could be smart enough to replace humans as the dominant beings on planet Earth.

Many voiced this fear way before ChatGPT.

Google CEO Sundar Pichai, whose company is among those developing generative AI to compete with ChatGPT, once said that AI is more profound than electricity or fire. Elon Musk, an early investor in OpenAI, has also said that AI doesn't have to be evil to destroy humanity.

Further reading: Transitioning To AGI Is Perhaps The Most Important, And 'Scary' Project In Human History