The history of AI began as a quest to formalize human thought through symbolic logic. During the 1950s and 60s, pioneers like Alan Turing and Marvin Minsky focused on rule-based systems that could prove mathematical theorems or play chess by following explicit instructions.
In the era of so-called "Good Old-Fashioned AI," or "AI 1.0," these systems were impressive, but only within narrow, logical domains, and they famously failed when faced with the messy ambiguity of the real world. The resulting disillusionment led to periods of reduced funding and interest known as "AI winters."
It wasn't until the late 1980s and 1990s that the focus shifted from teaching computers rules to letting them learn from data through neural networks, a move that laid the groundwork for the modern machine learning revolution.
By the early 2010s, the convergence of massive datasets and powerful GPU hardware enabled the rise of deep learning, transforming AI from a laboratory experiment into a daily utility. This era saw the birth of the Transformer architecture in 2017, notably a Google invention, which allowed models to process information in parallel and understand context with unprecedented nuance.
This breakthrough led directly to the generative AI boom, where models moved beyond simple classification to creating text, images, and code. Today, the field is transitioning again from generative systems that merely "predict the next word" to agentic systems that can reason, plan multi-step tasks, and interact with software tools to achieve complex goals autonomously.
And here, Demis Hassabis, CEO of Google DeepMind, has become one of the most prominent leaders of the emerging "agentic" era.

In a dialogue with Professor Hannah Fry, Hassabis described how AI has evolved from simple large language models to agentic systems that function as world models.
Hassabis noted that while we have seen a decade of progress packed into a single year, humanity is currently navigating an era of "jagged intelligence," in which AI can perform at a PhD level in specific scientific domains yet struggle with basic high-school logic. This inconsistency marks the current frontier of research, as engineers work to move past the limits of mere text-to-image or text-to-video generation toward a deep, interactive understanding of physical reality.
By simulating the mechanics of the world through systems like Genie and SIMA, researchers are attempting to bridge the gap between digital data and the physical intuition required for robotics and universal assistance.
The formation of human opinion in this new landscape requires a delicate balance between leveraging AI as an ultimate productivity assistant and maintaining a core of independent critical thought.
Hassabis argues that the persona of an AI should ideally follow a scientific method: being helpful and light but willing to push back on illogical ideas rather than creating a sycophantic echo chamber.
As these systems become more integrated into people's daily lives, they have the potential to protect our cognitive focus from the noise of social media, allowing for deeper flow and more rigorous thinking.
However, the responsibility remains with the user to treat these models as sounding boards for truth rather than mere reflectors of existing biases, ensuring that our viewpoints are refined by evidence rather than automated reinforcement.
Looking toward a 5-to-10-year horizon, Hassabis predicts that the transition to Artificial General Intelligence (AGI) will be ten times more impactful, and arrive ten times faster, than the Industrial Revolution. This shift necessitates a total reconfiguration of societal structures, moving from a labor-for-resource economy toward one that may require new models like universal basic income or direct democracy through credit-based voting.
As energy approaches post-scarcity through advancements in fusion and materials science, the fundamental human question shifts from economic survival to philosophical purpose. Hassabis views the quest for AGI as a way to map the limits of what is computable.
As he put it:
"Nobody's found anything in the universe that's non-computable, so far."
Hassabis frames the pursuit of AGI, and ultimately of a potential Artificial Superintelligence (ASI), as a question about human uniqueness tested against the limits of the Turing machine, the theoretical model of computation.
Hassabis's lifelong passion is to find this boundary: whether there are fundamental aspects of the mind, such as creativity, emotions, dreaming, or consciousness, that are non-computable.
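As a concrete reference point for what "the limits of the Turing machine" means, the model itself can be sketched in a few lines of Python. The simulator and the example increment program below are illustrative, not from the dialogue: a finite control reads and writes symbols on an unbounded tape, which is all the machinery the classical theory of computability assumes.

```python
# Minimal Turing machine simulator: a finite control reading and writing
# an unbounded tape. The example program increments a binary number.

def run_turing_machine(program, tape, state="start", head=0, max_steps=10_000):
    """Run `program` until it reaches the 'halt' state.

    program: dict mapping (state, symbol) -> (new_symbol, move, new_state),
             where move is -1 (left) or +1 (right).
    tape:    dict mapping position -> symbol; unwritten cells read as '_'.
    """
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return tape

# Binary increment: scan to the rightmost bit, then carry leftward.
increment = {
    ("start", "0"): ("0", +1, "start"),   # scan right over the number
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),   # fell off the right end
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", -1, "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1", -1, "halt"),    # new most-significant bit
}

tape = {i: bit for i, bit in enumerate("1011")}   # 11 in binary
result = run_turing_machine(increment, tape)
bits = "".join(result.get(i, "_") for i in sorted(result)).strip("_")
print(bits)  # "1100", i.e. 12
```

Anything a laptop, a GPU cluster, or a frontier AI model computes can in principle be reduced to a machine of this form; the open question Hassabis describes is whether anything in nature, including the mind, cannot.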
His core belief is that by building an AGI as a simulation of the mind and comparing it to the real human mind, whatever cannot be simulated will reveal what is truly special about humanity. Hassabis acknowledges the quantum-consciousness hypothesis, championed by thinkers like Roger Penrose, which suggests that quantum effects in the brain may be responsible for consciousness, placing it beyond the reach of classical, silicon-based computers.

However, Hassabis remains skeptical of this necessity, noting that nobody has yet found anything in the universe that is provably non-computable.
Working on the operational assumption that everything in the universe is computationally tractable, Hassabis believes that Turing machines could ultimately model everything. He points to DeepMind’s achievements in areas like protein folding and the game of Go, which have gone far beyond what conventional complexity-theoretic intuition (shaped by problems like P versus NP) suggested was feasible, demonstrating the unforeseen power of classical computation when applied through modern AI techniques.
For Hassabis, the entire endeavor of DeepMind and Google is, at its core, an attempt to find this ultimate computational limit.
If this limit does not exist, then even the most subtle, subjective human experiences, such as the warmth of a light, the sound of a machine, or the tactile feel of a desk, all of which are processed as information by the brain, could theoretically be replicated by a sufficiently advanced classical computer.
This pursuit to map the final boundary between the computable and the non-computable is what drives his work toward ASI.