AI Is Soon Going To Be Smarter Than Humans. 'How Do We Survive That?'

Geoffrey Hinton
computer scientist, the 'Godfather of AI'

When early humans discovered fire, they were afraid of it. But once they found the benefits of using it, they could no longer live without it.

When humanity discovered how to generate electricity, people were scared at first. But once they realized how it could help them, they could not live without it either.

The same goes for AI.

The thing about AI is that it's only as good as the dataset it has been trained on.

Thanks to the internet and the ever-growing number of interactions made on it, information to train AI with is more than plentiful. However, datasets sourced from the internet include massive amounts of biased opinion, racism, sexism, and other distortions, including ideologies and political and cultural influences.

When this information becomes AI's training material, the AIs inherit those human-like traits.

For years, the AI field was relatively quiet. But when OpenAI introduced ChatGPT, people were awed by the abilities of this particular generative AI.

And Geoffrey Hinton is scared.

Geoffrey Hinton.
Geoffrey Hinton became a whistleblower as soon as he quit his job.

Speaking with MIT Technology Review, Hinton said:

"I have suddenly switched my views on whether these things are going to be more intelligent than us."

"I think they’re very close to it now and they will be much more intelligent than us in the future [...] How do we survive that?"

Hinton, a British-Canadian cognitive psychologist and computer scientist, is best known as the "Godfather of AI."

Working in the AI field since he was a graduate student at the University of Edinburgh in the 1970s, he has been credited for his work on artificial neural networks, and worked with Google Brain and the University of Toronto to help develop the technology.

It was back in 2012 that Hinton and two of his graduate students at the University of Toronto created a technology that became the intellectual foundation for the AI systems the tech industry's biggest companies believe is key to their future.

In 2018, Hinton received the Turing Award, together with Yoshua Bengio and Yann LeCun, for their work on deep learning.

But as the field he worked on developed too rapidly, Hinton started to worry.

Hinton warned that the progression seen since 2012 is astonishing, but it is likely just the tip of the iceberg.

Ultimately, Hinton quit Google in May 2023, citing concerns about the risks of AI.

Read: Yann LeCun, Geoffrey Hinton And Yoshua Bengio Received The Turing Award, The Nobel Prize Of Computing

It was only after leaving Google that Hinton started to pour his heart out.

He began describing his fear, and his concern about how the rapid development and deployment of generative AI could lead to very bad outcomes.

He said that large tech firms, upon realizing how they could profit from this technology, were moving too fast to deploy AI for public use. Part of the problem was that AI was achieving human-like capabilities faster than forecast.

"Look at how it was five years ago and how it is now," he said of the industry. "Take the difference and propagate it forwards. That’s scary."

Ever since computers could first be imagined as "smart," AI has, in many instances, been either a protagonist or an essential part of science fiction.

From the 1927 film Metropolis, where the AI is a robot, to the famous 2001: A Space Odyssey, where an AI called HAL 9000 cemented itself firmly in pop culture as one of the most memorable AI antagonists ever. And let's not forget the Star Wars franchise with R2-D2 and C-3PO, The Terminator with Skynet and the ever-popular T-800 cybernetic organism with an endoskeleton, The Matrix franchise with the Agents and the Sentinels, Ex Machina with Ava, and many, many more.

While ChatGPT is far from those science-fiction iterations of AI, it is an AI that shows the ability to "think."

“I’m just a scientist who suddenly realized that these things are getting smarter than us."

"I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us."

Hinton's fears echo those expressed by over 1,000 tech leaders earlier in 2023, in a public letter calling for a brief halt to AI development. Hinton did not sign the letter at the time, explaining that he did not want to criticize Google while he was still with the company.

Among the tech pioneers who signed the petition was Elon Musk.

In fact, the billionaire, who helped found OpenAI, has been vocal about the threats of AI for years.

Read: AI Is More 'Profound Than Electricity Or Fire': A 'Balance' Should Be Reached

What concerns Hinton is the way AI works.

The human brain can solve complex calculations and perform various other tasks thanks to its native talent for organizing and storing information and reasoning out solutions to problems.

This is made possible through the billions of neurons packed inside the skull, making more than one hundred trillion connections to forge the mind. By contrast, the technology underlying ChatGPT features between 500 billion and a trillion connections.

And what makes AI technology smart is the sheer volume of information that models like OpenAI's GPT-4 have access to. Hinton said that the AI knows "hundreds of times more" than a single human can, and it may have a "much better learning algorithm" than humans do, making it more efficient at cognitive tasks.

Hinton argues that GPT-4 has demonstrated an impressive ability to learn new things very quickly once trained by researchers. Whereas human beings need time to learn and share information with each other, AI systems can accomplish this instantaneously, which Hinton said creates the potential for these models to outsmart humans.

Not only can AI systems learn things faster, he noted, they can also share copies of their knowledge with each other almost instantly.
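The point Hinton is making can be sketched in a toy example. This is a hypothetical stand-in, not any real system: identical neural networks can share what they have learned by copying their weights directly, with no slow, lossy human-style communication in between.

```python
import copy

class ToyModel:
    """A stand-in for a neural network: its 'knowledge' is just its weights."""
    def __init__(self):
        self.weights = {"w1": 0.0, "w2": 0.0}

    def train(self, updates):
        # Pretend training: nudge weights toward learned values.
        for name, delta in updates.items():
            self.weights[name] += delta

model_a = ToyModel()
model_b = ToyModel()

# Only model A "learns" something.
model_a.train({"w1": 0.7, "w2": -0.3})

# Knowledge transfer is a single copy of the weights: model B now
# "knows" everything model A learned, instantly and without loss.
model_b.weights = copy.deepcopy(model_a.weights)
```

Humans, by contrast, have no equivalent of this weight copy: each person has to re-learn the material through language, slowly and imperfectly.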

In fact, AI systems might already be outsmarting humans; it's just that humans don't know it yet, because AIs work in sandboxes, restricted by protocols and bound by rules.

"It’s a completely different form of intelligence," he said.

"A new and better form of intelligence."

Speaking about the threats AI poses, Hinton said the danger of these technologies lies with bad actors who could use them to spread misinformation, sway elections, or even conduct wars. Individual criminals, terrorist groups, or even rogue nation-states could use AI technology.

"I want to sort of blow the whistle and say, ‘We should worry seriously about how we stop these things getting control over us,'" he said.

Hinton's departure came a while after Google decided to merge its Google Brain division with DeepMind, to create what it calls "Google DeepMind."