Chinese technology giant Xiaomi has released two open-source AI models aimed squarely at agentic artificial intelligence. The MiMo-V2.5 and MiMo-V2.5-Pro models are available under the permissive MIT Licence, making them attractive for enterprise deployment. What sets these models apart is their efficiency in powering 'claw' tasks: autonomous agents that complete complex assignments on behalf of users, from managing email to creating marketing content.

The performance metrics are strong. According to Xiaomi's ClawEval benchmarks, the Pro version leads the open-source field with a 63.8% success rate whilst consuming roughly 70,000 tokens per trajectory, a 40-60% reduction in token usage compared to industry heavyweights such as Anthropic's Claude Opus 4.6, Google's Gemini 3.1 Pro, and OpenAI's GPT-5.4. In an era where services are increasingly moving towards usage-based billing, as evidenced by GitHub Copilot's recent shift to metered pricing, this efficiency translates directly into substantial cost savings for enterprises.
Xiaomi has released two distinct versions to serve different developer needs. The base MiMo-V2.5 is a multimodal specialist, whilst MiMo-V2.5-Pro is engineered specifically for long-horizon coherence and complex software engineering tasks. The Pro model's headline demonstrations are striking: it autonomously implemented a complete SysY compiler in Rust within 4.3 hours, a task that typically takes computer science students several weeks, and it built a fully featured video editor of 8,192 lines of code over 11.5 hours, sustaining coherence across 1,868 sequential tool calls.
The pricing structure is aggressively competitive. For international developers, MiMo-V2.5-Pro costs $1.00 per million input tokens and $3.00 per million output tokens within 256K context windows. The base model starts at just $0.40 per million input tokens, positioning it amongst the most affordable leading language models globally. Xiaomi has also made cache writing free of charge for a limited period, further lowering barriers to agentic development.
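Combining the published Pro rates with the roughly 70,000 tokens per trajectory reported on ClawEval gives a rough per-task cost. The 80/20 input/output split below is an assumption for illustration; real trajectories will vary by task:

```python
# Per-trajectory cost estimate from the published MiMo-V2.5-Pro rates.
# The input/output split is an illustrative assumption, not a Xiaomi figure.

INPUT_RATE = 1.00 / 1_000_000   # USD per input token (Pro, within 256K context)
OUTPUT_RATE = 3.00 / 1_000_000  # USD per output token

def trajectory_cost(total_tokens: int, input_share: float = 0.8) -> float:
    """Estimated USD cost of one agent trajectory."""
    input_tokens = total_tokens * input_share
    output_tokens = total_tokens - input_tokens
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# ClawEval reports ~70,000 tokens per trajectory for the Pro model.
print(f"~${trajectory_cost(70_000):.3f} per trajectory")  # ~$0.098
```

At roughly ten cents per agent run under these assumptions, the claimed 40-60% token reduction versus rival models compounds quickly across high-volume enterprise workloads.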
Architecturally, both models employ a Sparse Mixture-of-Experts design. The base V2.5 has 310 billion total parameters with 15 billion active during inference, whilst the Pro version has 1.02 trillion parameters with 42 billion active. The architecture functions like a specialised hospital where only the relevant specialists are called upon for each query, keeping inference computationally efficient. Both models also feature a native 1-million-token context window, enabling them to maintain coherence across extraordinarily long interactions.
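The gap between total and active parameters comes from top-k expert routing: a gating function scores every expert per token, but only the best few actually run. The toy sketch below illustrates the idea; the expert count, k, and gate scores are invented for the example and are not MiMo's real configuration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(gate_scores, k=2):
    """Pick the top-k experts for one token and renormalise their weights."""
    top = sorted(range(len(gate_scores)),
                 key=lambda i: gate_scores[i], reverse=True)[:k]
    weights = softmax([gate_scores[i] for i in top])
    return list(zip(top, weights))

# One token's gate logits over 6 toy experts (illustrative values).
scores = [0.1, 2.3, -0.5, 1.7, 0.0, 0.9]
chosen = route(scores, k=2)
# Only the 2 selected experts run their feed-forward pass; the other 4
# stay idle, which is why active parameters << total parameters.
print(chosen)  # experts 1 and 3, with weights summing to 1
```

Scaled up, this is how a 1.02-trillion-parameter model can serve each token with only 42 billion parameters' worth of compute.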
Perhaps the most significant aspect of this release is the MIT Licence, which grants unrestricted commercial use without revenue caps or user-base limits. This stands in stark contrast to many 'open' models that include restrictive acceptable use policies. Enterprises can deploy, modify, fine-tune on proprietary data, and even release derivative versions without seeking permission from Xiaomi. This positions MiMo as foundational infrastructure for the next generation of AI agents, effectively treating the model as a public utility.
The release has received immediate ecosystem support, with the popular inference engines SGLang and vLLM providing Day-0 compatibility. Hardware partnerships with AWS, AMD, T-HEAD, and Enflame ensure the models can run efficiently across diverse computing environments. To accelerate adoption, project lead Fuli Luo announced a free grant of 100 trillion tokens for developers and creators, removing financial barriers to experimenting with the extensive context window capabilities.
Original source: https://venturebeat.com/technology/open-source-xiaomi-mimo-v2-5-and-v2-5-pro-are-among-the-most-efficient-and-affordable-at-agentic-claw-tasks
Article generated by LaRebelionBOT