
Xiaomi, one of China’s leading consumer tech brands, is entering the large language model (LLM) race with its own product.
Known as one of the largest smartphone manufacturers in the world, and sometimes referred to as the "Apple of China," Xiaomi has introduced what it calls 'MiMo,' a model that aims to unlock the reasoning potential of LLMs from pre-training through post-training.
Developed in-house, MiMo underscores the company’s ambition to enter the LLM race, competing with the likes of OpenAI's ChatGPT and others, but also to integrate its home-grown AI with its array of hardware products.
According to its page on the GitHub platform:
"In this work, we present MiMo-7B, a series of models trained from scratch and born for reasoning tasks."
Today, Xiaomi releases MiMo, our first open-source reasoning model. At 7B parameters, it’s optimized for reasoning through pre-training and post-training, surpassing OpenAI’s o1-mini and QwQ-32B-Preview on AIME 2024-2025 and LiveCodeBench v5 benchmarks. #XiaomiMiMo #MiMo7B
— XiaomiMiMo (@XiaomiMiMo) April 30, 2025
Unlike traditional digital assistants that feel stiff or scripted, Xiaomi said that MiMo is built to be emotionally intelligent, learning your habits and preferences over time.
Think of it as a charming digital companion — helpful, responsive, and just a little bit clever.
Unlike many AI models that boast massive parameter counts, MiMo operates with a lean 7 billion parameters.
Despite its compact size, Xiaomi claims MiMo matches or even surpasses larger models like OpenAI's o1-mini and Alibaba's QwQ-32B-Preview in tasks involving mathematical reasoning and code generation. This efficiency is achieved through meticulous pre-training processes, including enhanced data preprocessing and multi-layered filtering, allowing MiMo to perform complex reasoning tasks typically reserved for larger models.
MiMo-7B-RL not only excels in code and algorithmic tasks but also outperforms both QwQ-32B-Preview and DeepSeek-R1-Distill-Qwen-7B across general tasks, even when the reinforcement learning evaluation is limited to mathematics and code problems.
— XiaomiMiMo (@XiaomiMiMo) April 30, 2025
Xiaomi developed MiMo-7B from the ground up, targeting reasoning tasks.
The pre-training phase involved enhancing text extraction and multi-dimensional data filtering to increase the presence of reasoning patterns in the training data. Approximately 200 billion tokens of synthetic reasoning data were generated to diversify the training corpus. A three-stage data mixture approach was employed, gradually introducing more complex data to the model. The model was trained on about 25 trillion tokens in total. Multiple-Token Prediction (MTP) was incorporated as an additional training objective to enhance performance and inference speed.
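The Multi-Token Prediction idea mentioned above can be illustrated with a toy sketch: at each position, the model predicts not just the next token but several tokens ahead, and the extra losses are averaged in. Everything below (the distributions, the head count, the function name) is an illustrative assumption, not Xiaomi's actual implementation:

```python
import math

def mtp_loss(probs, tokens, num_future=2):
    """Toy multi-token prediction loss (illustrative sketch only).

    probs[t][k] is the model's predicted distribution (a dict of
    token -> probability) for the token at position t + 1 + k,
    produced at position t. The loss averages the ordinary
    next-token cross-entropy with the auxiliary future-token losses.
    """
    total, count = 0.0, 0
    for t in range(len(tokens) - 1):
        for k in range(num_future):
            target_pos = t + 1 + k
            if target_pos >= len(tokens):
                break
            target = tokens[target_pos]
            p = probs[t][k].get(target, 1e-9)  # floor to avoid log(0)
            total += -math.log(p)
            count += 1
    return total / count

# A 3-token sequence with one extra prediction head (num_future=2):
tokens = ["the", "cat", "sat"]
probs = [
    [{"cat": 0.8}, {"sat": 0.5}],  # predictions made at position 0
    [{"sat": 0.9}, {}],            # predictions made at position 1
]
print(round(mtp_loss(probs, tokens), 4))  # -> 0.3406
```

Because the auxiliary heads already guess several tokens ahead, the same mechanism can be reused at inference time for speculative decoding, which is why MTP improves generation speed as well as quality.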
Post-training focused on refining the model's reasoning capabilities through reinforcement learning (RL).
A curated set of 130,000 mathematics and code problems was used, each verified by rule-based systems to ensure quality and appropriate difficulty levels. To address sparse reward issues in complex tasks, a test difficulty-driven reward system was implemented, assigning scores based on the difficulty of test cases. An easy data re-sampling approach was adopted to enhance rollout sampling efficiency and stabilize policy updates during later RL training phases.
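The test difficulty-driven reward described above can be sketched as follows: rather than a binary pass/fail signal, each test case contributes partial credit weighted by its difficulty, which densifies the reward on hard problems. The specific weighting scheme below is an illustrative assumption, not the exact formula from Xiaomi's report:

```python
def difficulty_weighted_reward(results, difficulties):
    """Toy dense reward (illustrative sketch, not MiMo's formula).

    results:      list of booleans, whether each test case passed
    difficulties: per-test difficulty weights (harder => larger)

    Returns the fraction of total difficulty-weighted credit earned,
    so partially correct solutions on hard problems still get signal.
    """
    total = sum(difficulties)
    if total == 0:
        return 0.0
    earned = sum(d for passed, d in zip(results, difficulties) if passed)
    return earned / total

# A solution passing the two easy tests but failing the hard one
# earns partial credit instead of a sparse all-or-nothing zero:
print(difficulty_weighted_reward([True, True, False], [1.0, 1.0, 3.0]))  # -> 0.4
```

Under a plain pass/fail reward, the example above would score 0.0 and provide no gradient signal; the difficulty-weighted version rewards incremental progress, which is the point of densifying sparse rewards.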
To optimize the RL training process, Xiaomi developed a Seamless Rollout Engine, integrating continuous rollout, asynchronous reward computation, and early termination to minimize GPU idle time.
This infrastructure led to a 2.29× increase in training speed and a 1.96× improvement in validation efficiency. The engine also supports Multiple-Token Prediction in vLLM, a popular open-source LLM inference engine, enhancing the robustness and scalability of the inference system.
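The core idea of the rollout engine can be illustrated with a toy asyncio sketch: reward computation for one completion runs concurrently while the next rollout proceeds, so the generator is never blocked waiting for verification. Everything below (function names, the dummy reward, the sleep timings) is an illustrative assumption rather than Xiaomi's implementation, and early termination is omitted for brevity:

```python
import asyncio

async def generate(prompt):
    """Stand-in for a model rollout (the real engine batches on GPUs)."""
    await asyncio.sleep(0.01)
    return f"completion-for-{prompt}"

async def score(completion):
    """Stand-in for rule-based reward verification, run asynchronously
    so rollouts do not sit idle while rewards are computed."""
    await asyncio.sleep(0.01)
    return len(completion) % 5  # dummy reward for demonstration

async def rollout_engine(prompts):
    """Toy 'seamless rollout' loop: as soon as each completion is
    generated, its reward task is launched concurrently while the
    next rollout starts (illustrative sketch, not Xiaomi's engine)."""
    pending = []
    for prompt in prompts:
        completion = await generate(prompt)
        # Kick off reward computation without waiting for it:
        pending.append(asyncio.create_task(score(completion)))
    return list(await asyncio.gather(*pending))

rewards = asyncio.run(rollout_engine(["p1", "p2", "p3"]))
print(rewards)  # -> [2, 2, 2]
```

The overlap between generation and scoring is what reduces GPU idle time; in the toy version above, the three reward computations run concurrently instead of serializing after each rollout.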
With the release of MiMo, Xiaomi’s AI ambitions are evident.
Following the huge popularity of LLMs from the West, and the subsequent emergence of DeepSeek, which rattled Silicon Valley, Xiaomi’s AI model arrives at a time when other Chinese tech companies are diving head-first into the competition.
Both the MiMo-7B-Base, SFT, RL-Zero and RL model checkpoints are now open-sourced and available at: https://t.co/Odu43jXTJp
For further details, please refer to the technical report: https://t.co/QAwZx5tBeD
— XiaomiMiMo (@XiaomiMiMo) April 30, 2025
With China's tech companies starting to show their strength in developing foundational models, driven by the commercial potential of having such AIs at their disposal, Xiaomi signals its serious commitment to AI development, joining the ranks of tech giants investing heavily in artificial intelligence. By focusing on efficient reasoning capabilities and broad accessibility, Xiaomi positions MiMo as a competitive player in the global AI landscape.
But unlike others, Xiaomi has its own plans: to develop its own LLMs and soon incorporate them into its hardware products.
The company showcased this when it unveiled MiMo alongside the launch of Xiaomi’s SU7 electric vehicle.
With MiMo part of Xiaomi's own HyperOS interface inside the car, the company is making its EV offering feel more like a smartphone on wheels: consumers can talk to the car, plan routes, control the cabin environment, or even sync with their smartphone and home, all powered by MiMo's AI.
The company showed that MiMo is not just an AI voice tucked inside a smartphone. It’s a full-fledged assistant spanning Xiaomi’s growing hardware ecosystem.
While Xiaomi is releasing a reasoning model, the company said there is still a long way to go before reaching artificial general intelligence (AGI), the theoretical point at which AI equals or surpasses human intelligence.
“The year 2025 seemed to be the second half of the AI model competition, but we firmly believe the road to AGI is still very long,” the company said.