
The internet continues to operate as usual, but behind the scenes, Google has been hard at work.
The launch of ChatGPT by OpenAI sparked intense competition in the tech industry. Companies of all sizes quickly entered a race to secure their position in the rapidly growing AI landscape.
Google, long a leader in web innovation, was caught off guard by the sudden surge in generative AI. Recognizing the disruptive potential of large language models (LLMs) in reshaping human-computer interaction, the company declared a “code red.”
This urgent call to action led Google to accelerate the development of its own AI chatbot, initially named Bard, and later rebranded as Gemini. To stay ahead in the AI race, the tech giant integrated Gemini into its core services, reinforcing its position in the market.
Meanwhile, OpenAI has continued to push boundaries, advancing its AI's reasoning capabilities. Determined not to lag behind, Google is investing heavily to ensure it remains a dominant force in the AI revolution.
After announcing 'Gemini 2.0 Flash Thinking Experimental' back in December 2024, Google is now making the reasoning AI model available to the masses.
Our latest update to our Gemini 2.0 Flash Thinking model (available here: https://t.co/Rr9DvqbUdO) scores 73.3% on AIME (math) & 74.2% on GPQA Diamond (science) benchmarks. Thanks for all your feedback, this represents super fast progress from our first release just this past… pic.twitter.com/cM1gNwBoTO
— Demis Hassabis (@demishassabis) January 21, 2025
No longer 'experimental,' this AI model “explicitly shows its thoughts” for better reasoning performance, giving it the ability to solve even more complex problems.
Inside this 'Gemini 2.0 Flash Thinking' is an improved Flash Thinking mode, which features enhanced reasoning capabilities across multiple modalities, including text, images, and code.
This advancement allows the model to seamlessly integrate diverse data sources with coherence and precision. With a 1-million-token context window, Gemini 2.0 can process and analyze vast datasets simultaneously, making it ideal for tasks such as legal research, scientific analysis, and large-scale content creation.
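For developers, that long context window is reached through the Gemini API. The snippet below is a minimal sketch, assuming the 'google-generativeai' Python SDK and the model ID 'gemini-2.0-flash-thinking-exp-01-21' (the model ID and file name are illustrative assumptions; check Google's documentation for the current names), of passing a lengthy document in a single request:

```python
import google.generativeai as genai

# Authenticate with a Gemini API key (placeholder value).
genai.configure(api_key="YOUR_API_KEY")

# Model ID is an assumption based on the January 2025 release naming.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp-01-21")

# Load a long document; the 1M-token window allows very large inputs.
with open("case_files.txt", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    ["Summarize the key findings and any contradictions in this material:", document]
)
print(response.text)
```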
One of the standout features of Gemini 2.0 is its ability to execute code directly within its framework.
This bridges the gap between theoretical reasoning and practical application, enabling users to perform complex computations effortlessly. The model also resolves a persistent challenge seen in earlier iterations by minimizing contradictions between its reasoning and responses. These enhancements ensure more consistent performance and greater adaptability across various applications.
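As a rough illustration of how that might be invoked, the sketch below assumes the 'google-generativeai' Python SDK's built-in code-execution tool together with the same assumed model ID; exact parameter names and tool support on this model may differ in practice:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Enabling the code-execution tool lets the model write and run Python
# to verify a calculation rather than reasoning about it purely in text.
# The model ID and tool availability on this model are assumptions here.
model = genai.GenerativeModel(
    "gemini-2.0-flash-thinking-exp-01-21",
    tools="code_execution",
)

response = model.generate_content(
    "What is the sum of the first 50 prime numbers? "
    "Generate and run code for the calculation, then report the result."
)
print(response.text)
```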
The advancements in this Flash Thinking mode can be seen in its benchmark results.
The model achieved impressive scores, including 73.3% on AIME (math), 74.2% on GPQA Diamond (science), and 75.4% on the MMMU multimodal understanding benchmark. These benchmarks highlight the model’s exceptional capabilities in complex reasoning and precise planning, particularly in tasks that demand high levels of accuracy and sophistication.
Early user feedback has been overwhelmingly positive, emphasizing the model’s improved speed and reliability over its predecessor.

DeepMind CEO Demis Hassabis said this "represents super fast progress from our first release just this past December."
"We’ve been pioneering these types of planning systems for over a decade, starting with programs like AlphaGo, and it is exciting to see the powerful combination of these ideas with the most capable foundation models."
Its ability to process extensive datasets while maintaining logical coherence positions it as an indispensable tool for sectors such as education, scientific research, and enterprise analytics.
For users, these upgrades mean faster and more accurate results for even the most complex queries, from advanced mathematics and research to long-form content generation, thanks to the model's ability to handle multimodal data and extensive context efficiently.
As previously mentioned, Google has been busy: this progress comes just one month after the release of the previous version, underscoring the company’s dedication to rapid innovation and user-centric improvements.
Gemini 2.0 continues to push boundaries, cementing its role as a leading solution in advanced AI applications.
The release, and the strong results it posts on mathematical and scientific tasks, reads like a jab at OpenAI.
OpenAI’s more powerful o3 model scored 87.7% on the GPQA Diamond benchmark, but the company reserves it for paying users.
By making Gemini 2.0 Flash Thinking available for free, Google is pressuring OpenAI to rethink its premium strategy.