
Xiaomi releases open-weight MiMo-V2.5 AI model, claims “frontier-level agentic capability”

Xiaomi is the latest company to release an open-weight AI model. The company describes MiMo-V2.5 as a “major step forward in agentic capability and multimodal understanding.”

Xiaomi has shared various benchmark results that compare MiMo-V2.5 against the likes of the recently released DeepSeek-V4, Kimi K2.6, Claude Opus 4.6, Gemini 3.1 Pro and Xiaomi’s older MiMo-V2-Pro.

The company claims that MiMo-V2.5 achieved best-in-class performance on its in-house agentic-tasks benchmark. On the internal MiMo Coding Bench, the smaller V2.5 matched the larger V2.5-Pro at half the cost. And in benchmarks that test image and video understanding, Xiaomi says V2.5 is on par with closed-source models.

MiMo-V2.5 evaluated on coding and agentic tasks

The model was trained on 48 trillion tokens and is natively multimodal, with support for text, image, and video data. Xiaomi has published two versions: MiMo-V2.5 with 310B total parameters (15B active) and MiMo-V2.5-Pro with 1.02T total parameters (42B active). Both versions support a 1-million-token context window.

MiMo-V2.5 evaluated on image and video understanding

You can download the model from Hugging Face and run it yourself, but you will need something like a kitted-out Mac Studio to do it – consumer GPUs don’t have enough VRAM (no, not even the Nvidia RTX 5090).
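To put that in perspective, here is a rough back-of-the-envelope estimate of the memory needed just to hold the weights. Even though only 15B parameters are active per token, a mixture-of-experts model still needs all of its weights resident in memory. The quantization levels below are common conventions, not anything Xiaomi has specified, and real-world usage adds KV-cache and activation overhead on top:

```python
# Rough estimate of the memory needed to hold MiMo-V2.5's weights.
# The parameter count is from Xiaomi's announcement; the quantization
# levels and the omission of runtime overhead are simplifying assumptions.

TOTAL_PARAMS = 310e9  # MiMo-V2.5: 310B total parameters (15B active)

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

for fmt, nbytes in BYTES_PER_PARAM.items():
    gb = TOTAL_PARAMS * nbytes / 1e9
    print(f"{fmt:>10}: ~{gb:,.0f} GB for weights alone")

# Output:
#  fp16/bf16: ~620 GB for weights alone
#       int8: ~310 GB for weights alone
#       int4: ~155 GB for weights alone
#
# An RTX 5090 has 32 GB of VRAM, so even the 4-bit figure is roughly
# 5x over budget. A Mac Studio configured with 256-512 GB of unified
# memory is in the right ballpark for a quantized build.
```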

You can try out Xiaomi MiMo-V2.5 in the company’s AI Studio (which doesn’t load at the time of writing) or use it via the official API. Or, as mentioned above, you can download the model and run it locally if you have the means to do so.
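Xiaomi’s announcement doesn’t spell out the API details, but most open-weight model providers expose an OpenAI-compatible chat-completions endpoint, so a minimal sketch under that assumption would look like the following. The base URL, model identifier, and environment variable are placeholders, not confirmed values:

```python
# Hypothetical sketch of calling MiMo-V2.5 through an OpenAI-compatible
# chat-completions endpoint. The base_url and model name below are
# placeholders; check Xiaomi's API documentation for the real values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-xiaomi-mimo.com/v1",  # placeholder URL
    api_key=os.environ["MIMO_API_KEY"],                 # assumed env var
)

response = client.chat.completions.create(
    model="mimo-v2.5",  # placeholder model identifier
    messages=[
        {"role": "user", "content": "Plan a three-step web research task."},
    ],
)
print(response.choices[0].message.content)
```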

