
For years, AI was little more than a term romanticized in science fiction and cautiously explored by researchers.
But with the arrival of ChatGPT from OpenAI, the concept caught fire, sparking an arms race in Silicon Valley. Tech giants suddenly rushed to outdo one another, investing billions in generative AI with ambitions of achieving both AGI and market dominance.
China was never far behind, steadily pushing to craft its own answer to the disruption.
Amid this high-stakes battleground emerged DeepSeek, a bold underdog from Hangzhou—a startup that dared to challenge the incumbents with leaner resources but unshakable ambition.
It sent tremors through the West, and well beyond.
After releasing DeepSeek-V3 and then refreshing it with incremental improvements as DeepSeek-V3.1, DeepSeek has now introduced 'DeepSeek-V3.1-Terminus.'
DeepSeek-V3.1 → DeepSeek-V3.1-Terminus
The latest update builds on V3.1’s strengths while addressing key user feedback.
What’s improved?
Language consistency: fewer CN/EN mix-ups & no more random chars.
Agent upgrades: stronger Code Agent & Search Agent performance…
— DeepSeek (@deepseek_ai) September 22, 2025
According to its documentation page, V3.1-Terminus builds on V3.1's strengths "while addressing key user feedback."
Improvements include language consistency: the model produces fewer Chinese-English mix-ups and no longer emits random characters. V3.1-Terminus also ships updated agents that perform better in coding and search. The goal is to deliver more stable and reliable outputs across benchmarks compared to the previous version.
On its Hugging Face page, DeepSeek notes that the model structure of V3.1-Terminus is the same as V3's.
In other words, it retains its efficient agentic capabilities, as well as its hybrid Mixture of Experts (MoE) framework.
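The announcement describes that architecture only in passing, so for readers unfamiliar with Mixture of Experts, the core idea is that a lightweight router activates only a few expert sub-networks per token, keeping compute low while total capacity stays high. The sketch below is a generic, minimal illustration of top-k MoE routing in PyTorch; it is not DeepSeek's actual code (DeepSeek's MoE variant is considerably more elaborate), and every name and dimension in it is invented for the example.

```python
# Illustrative top-k Mixture-of-Experts routing (generic, not DeepSeek's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, dim=64, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, dim)
        # Keep only the k best-scoring experts for each token.
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e  # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Only two of the eight expert networks run for any given token, which is why an MoE model can hold far more parameters than it spends compute on per forward pass.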
Rather than taking a one-size-fits-all approach, DeepSeek-V3.1-Terminus, like its predecessor, operates in two distinct modes: a Non-Thinking Mode optimized for speed and standard conversational tasks, and a Thinking Mode where the model's real power on complex tasks shows.
Just like its predecessor, DeepSeek-V3.1-Terminus prioritizes structured, multi-step reasoning in Thinking Mode, making it ideal for agentic workflows that require sophisticated problem-solving, such as code generation and debugging.
This dual-mode approach allows the model to strike a balance between performance and cost.
For simple queries, it can operate in a fast, efficient manner, and for more complex challenges, it can "think" more deeply, achieving high accuracy without the unnecessary overhead of constant, deep reasoning.
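In practice, the two modes are exposed through DeepSeek's OpenAI-compatible API as separate model IDs. The snippet below is a minimal sketch assuming the endpoint https://api.deepseek.com and the model IDs deepseek-chat (Non-Thinking) and deepseek-reasoner (Thinking) from DeepSeek's API documentation; verify both against the current docs before relying on them.

```python
# Minimal sketch: choosing a mode via DeepSeek's OpenAI-compatible API.
# Endpoint and model IDs are taken from DeepSeek's docs and may change.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",  # OpenAI-compatible endpoint
)

# Non-Thinking Mode: fast, conversational answers.
chat = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Summarize Mixture of Experts in one sentence."}],
)
print(chat.choices[0].message.content)

# Thinking Mode: slower, structured multi-step reasoning for harder tasks.
reasoned = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Find the bug: a loop that increments its own index variable but never changes iteration count."}],
)
print(reasoned.choices[0].message.content)
```

The cost trade-off falls out naturally: route routine queries to the cheap conversational model and reserve the reasoning model for problems that justify the extra latency and tokens.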
DeepSeek-V3.1-Terminus delivers more stable & reliable outputs across benchmarks compared to the previous version.
Available now on: App / Web / API
Open-source weights here: https://t.co/Jh4RudofKm
Thanks to everyone for your feedback. It drives us to keep improving…
— DeepSeek (@deepseek_ai) September 22, 2025
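For anyone who wants the open weights rather than the hosted API, they can be pulled from Hugging Face. The following is a minimal sketch assuming the repo ID deepseek-ai/DeepSeek-V3.1-Terminus from the announcement; since the full checkpoint runs to hundreds of gigabytes, the example fetches only the small config and tokenizer files as a sanity check.

```python
# Minimal sketch: fetching files from the open-weights release on Hugging Face.
# Repo ID assumed from DeepSeek's announcement; confirm it before downloading.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3.1-Terminus",
    allow_patterns=["*.json", "tokenizer*"],  # skip the multi-hundred-GB weight shards
)
print(path)  # local directory containing the downloaded files
```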
The world of large language models (LLMs) is constantly evolving, with new iterations and architectures appearing at a rapid pace.
Among these, DeepSeek-V3.1-Terminus made a splash, not with a flashy launch, but with a "silent" release that let its performance speak for itself.
The quiet release of DeepSeek-V3.1-Terminus suggests a company confident enough in its performance to let the model, rather than the marketing, do the talking in the open-source AI landscape.
By providing an efficient, highly capable model with a generous 128K-token context window and strong agentic skills, DeepSeek is offering a compelling alternative to the leading closed-source models, particularly those from the West.