Background

The Future Of AI: Human Intelligence Isn't Going To Be The 'Upper Limit Of What's Possible'

Shane Legg
co-founder of Google DeepMind

The global technological landscape has been reshaped by the "LLM war" ignited by the public arrival of OpenAI's ChatGPT in November 2022.

This event was so transformative that it reportedly triggered a "code red" response within tech giants like Google, who suddenly faced a credible threat to their search dominance. This intense competitive pressure has spurred an unprecedented wave of innovation, with major players like OpenAI, Anthropic, Meta, and Google DeepMind racing to develop models like GPT-5, Claude 4.1, and Gemini 2.5 Pro.

The focus has rapidly shifted beyond simple chatbots to creating sophisticated AI "agents" that can reason, use tools, and autonomously complete multi-step tasks, accelerating the march toward Artificial General Intelligence, or AGI.

Scientists and industry leaders are now openly and frequently forecasting its imminent arrival.

And according to Shane Legg, co-founder of Google DeepMind, AI will surpass human intelligence, and he has no doubt about it.

Shane Legg

In a podcast with Hannah Fry, a British mathematician, author, and broadcaster known for making mathematics accessible to the public, Legg said:

"So is human intelligence going to be the upper limit of what's possible? I think absolutely not. And so I think we as our understanding of how to build intelligence systems develops, we're going to see these eyes go far beyond human intelligence."

Most CEOs in the field predict AGI within the next few years. Some researchers argue that current trendlines in compute scaling and algorithmic efficiency, including techniques like Chain of Thought (CoT) prompting and Reinforcement Learning from Human Feedback (RLHF), make AGI by 2027 or 2030 strikingly plausible, even though other experts remain skeptical that current Large Language Models alone can ever reach true general intelligence.
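For readers unfamiliar with the term, here is a rough sketch (not from the article) of what Chain of Thought prompting amounts to in practice, using the OpenAI Python client; the model name and prompt wording are placeholder assumptions.

```python
# A minimal sketch of Chain of Thought (CoT) prompting: the model is
# asked to reason step by step before committing to a final answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "A train travels 120 km in 90 minutes. What is its average speed in km/h?"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": "Think through the problem step by step, then state "
                       "the final answer on its own line.",
        },
        {"role": "user", "content": question},
    ],
)

# The reply contains the intermediate reasoning followed by the answer,
# e.g. "90 minutes = 1.5 hours; 120 / 1.5 = 80 km/h."
print(response.choices[0].message.content)
```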

However, the pace of advancement is undeniable, as highlighted by Legg.

Legg, who is also the Chief AGI Scientist at Google DeepMind, captured the industry's mood by explaining "minimal AGI," which he defined as a system "that can learn most things that humans can learn, but maybe a bit slower, maybe not quite as effectively. It’s about being generally intelligent and capable."

He then extended his thoughts, outlining how a full AGI, and beyond it an Artificial Superintelligence (ASI), is possible. There are plenty of real-world examples where human-made tools are more powerful than the humans that made them.

"In the same way that, you know, humans, you know, we can't outrun, Top Fuel dragster over 100m, right? We can't lift more than a crane, right? We can't see further than the Hubble telescope. I mean, it's we already see machines in particular areas that can, you know, fly fast and the fastest bird and all these sorts of things. Right. I think we'll see that in cognition as well."

"We've already seen in some aspects, you know, you don't know more than Google, right?"

"And so on on like information storage and stuff like that. We're really going beyond what the human brain is capable of."

"I think we're going to start seeing that and reasoning in all kinds of other domains. So yes, I think we are going to go towards superintelligence."

In other words, Legg wants people to look at AGI a bit differently than they have before.

Legg believes the current state of AI is "uneven": systems are already superhuman in areas like language translation and general knowledge, but still fall short in continual learning, visual reasoning, and complex reasoning.

He predicts that these weaknesses are not fundamental blockers and will be addressed over the next few years.

Legg emphasizes that the arrival of AGI will cause a massive transformation that will structurally change the economy and society.

And once AGI is reached, humanity will progress towards ASI, a technology far beyond human cognition. By comparing the limits of the human brain (low power consumption, slow signal propagation) to the enormous capacity of modern data centers (high power, massive speed and bandwidth), he concludes that human intelligence is absolutely not the upper limit of what's possible.

When Fry asked him what will happen to society when human intelligence is dwarfed by superintelligence, Legg responded:

"This is actually something which is going to structurally change the economy and society and all kinds of things."

"And we need to think about how do we structure this new world [...] ."

Shane Legg

Because of this, he argues for developing "System Two Safety," a concept based on Daniel Kahneman's dual-process model of thinking, where an AI engages in slow, deliberate reasoning to analyze ethical situations and their consequences, rather than relying on quick, instinctive, or simple rule-based responses.

This focus on reasoning about ethics is critical to ensuring that a superintelligence becomes "super ethical."
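Legg doesn't describe an implementation, but as a loose, hypothetical sketch of the contrast he is drawing, the code below routes a proposed action through a fast rule-based check (System One) and then a slow, deliberate reasoning pass (System Two) before approving it; every name, prompt, and rule here is illustrative.

```python
# Hypothetical sketch of "System Two Safety": no action is approved on
# reflex alone; anything that passes the fast check is escalated to a
# slow, explicit reasoning pass that weighs consequences.

BLOCKLIST = {"delete_all_user_data", "disable_safety_checks"}

def system_one_check(action: str) -> bool:
    """Fast, instinctive check: a simple blocklist lookup."""
    return action not in BLOCKLIST

def system_two_check(action: str, context: str, reason_fn) -> bool:
    """Slow, deliberate check: ask a reasoning model to analyze the
    action's consequences step by step, then decide."""
    analysis = reason_fn(
        f"Action proposed: {action}\n"
        f"Context: {context}\n"
        "Step by step, list who is affected, what could go wrong, and "
        "whether the action matches the user's intent. "
        "Conclude with APPROVE or REJECT."
    )
    return analysis.strip().endswith("APPROVE")

def approve(action: str, context: str, reason_fn) -> bool:
    # Cheap check first; escalate anything that passes it.
    return system_one_check(action) and system_two_check(action, context, reason_fn)

if __name__ == "__main__":
    # Stub "reasoner" standing in for a slow model call.
    def stub_reasoner(prompt: str) -> str:
        return "The action is reversible and matches the stated intent. APPROVE"

    print(approve("archive_old_reports", "routine cleanup", stub_reasoner))  # True
```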

He warns that the current system where people trade mental and physical labor for access to resources may no longer work as AI starts taking on a significant fraction of cognitive work.

He sees an "enormous opportunity" for a "golden age" where machines dramatically increase production and advance science, but this requires society to carefully navigate the transition.

He urges people in every domain, from law and education to economics and city planning, to seriously consider the implications of cheap, abundant, capable machine intelligence in their fields. He notes that jobs requiring in-person, non-cognitive labor, like plumbing, may be protected in the short-to-medium term, but purely cognitive, high-compensation work (like advanced finance or software engineering) is most susceptible to being impacted.