Xiaomi has launched MiMo, its first open-source reasoning large model! With 7 billion parameters, it surpasses OpenAI's o1-mini and Alibaba's QwQ-32B-Preview.
Under the same reinforcement learning (RL) training data conditions, MiMo-7B shows significantly greater potential in mathematics and coding than other widely used models in the industry, including well-known RL starter models such as DeepSeek-R1-Distill-7B and Qwen2.5-32B.
Chinese humanoid robots are seizing a $5 trillion global market!
Morgan Stanley expects that by 2050, a total of 1 billion humanoid robots will be deployed globally, with annual revenue reaching $4.7 trillion, nearly double the combined 2024 revenue of the world's top 20 auto manufacturers. China's policy support, technological advancements, and manufacturing base place it in a leading position in humanoid robots, especially in the hardware supply chain.
The market may reach a critical juncture in the short term, with banks and electrical utilities showing repeated activity, and the technology sector poised to take off.
Track the entire lifecycle of the leading sectors.
Big news this morning: Alibaba releases and open-sources Qwen3, seamlessly integrating thinking modes, offering multilingual capabilities, and facilitating agent calls.
Alibaba stated that Qwen3 seamlessly integrates two thinking modes, supports 119 languages, and is easy for agents to call. The Qwen3 series release includes two mixture-of-experts (MoE) models and six additional models. The flagship model, Qwen3-235B-A22B, shows highly competitive performance against top models such as DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro on benchmarks covering code, mathematics, and general capabilities.
'Alibaba Rolls Out Latest Flagship AI Model in Post-DeepSeek Race' - Bloomberg
Today's Pre-Market Movers and Top Ratings | NVDA, TSLA, DPZ, and More
Moriarty mcG OP : Xiaomi recently unveiled MiMo, its first open-source large language model (LLM) designed for reasoning tasks, announced on April 30, 2025. Here's a detailed breakdown based on available information:

What is MiMo?
MiMo (likely standing for "Mindful Model" or a similar term, though not explicitly defined) is a 7-billion-parameter LLM series, trained from scratch by Xiaomi's LLM-Core Team for mathematics, coding, and general reasoning tasks. It is optimized through pre-training and post-training (reinforcement learning, or RL) to enhance reasoning capabilities. The MiMo-7B series includes:
- MiMo-7B-Base: pre-trained on ~25 trillion tokens with a multi-token prediction objective to boost performance and inference speed.
- MiMo-7B-SFT: a supervised fine-tuned (SFT) version.
- MiMo-7B-RL: RL-tuned from the SFT model, excelling in math and code.
- MiMo-7B-RL-Zero: RL-trained directly from the base model, achieving 93.6% on specific benchmarks.

Performance
Despite its compact size (7B parameters), MiMo outperforms larger models such as OpenAI's closed-source o1-mini and Alibaba's QwQ-32B-Preview on key benchmarks (e.g., AIME24, AIME25, LiveCodeBench, MATH500, GPQA-Diamond). For example, MiMo-7B-RL matches o1-mini's performance on math and coding tasks. Training uses a three-stage pre-training data mixture and RL on 130K curated math/code problems, verified by rule-based systems to ensure quality. A test-difficulty-driven reward system and data resampling further improve RL optimization.

Availability
MiMo is open-source, with models available on Hugging Face (https://huggingface.co/XiaomiMiMo). Xiaomi supports inference via a forked version of vLLM, though compatibility with other engines is unverified.
The team welcomes contributions and feedback at mimo@xiaomi.com. The release includes checkpoints for all model variants, aiming to benefit the broader AI community by providing insight into building reasoning-focused LLMs.

Significance
MiMo marks Xiaomi's entry into the competitive AI landscape, showcasing ambitions beyond hardware. Posts on X highlight its compact efficiency and strong performance, with sentiment praising Xiaomi's innovation in open-source AI. Unlike proprietary models, MiMo's open-source nature allows developers and researchers to adapt and build upon it, potentially accelerating advances in reasoning-focused AI applications.
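The RL recipe described above leans on two ideas: rule-based verification of answers (rather than a learned reward model) and a reward scaled by test difficulty. A minimal Python sketch of both, assuming a boxed-answer convention for math completions; the function names and the pass-rate scaling are illustrative assumptions, not Xiaomi's actual implementation:

```python
import re
from fractions import Fraction

def extract_final_answer(completion: str):
    """Pull the last \\boxed{...} expression out of a model completion."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", completion)
    return matches[-1].strip() if matches else None

def rule_based_reward(completion: str, ground_truth: str) -> float:
    """Binary reward: 1.0 iff the extracted answer matches the reference.

    Numeric answers are compared exactly via Fraction, so "1/2" and "0.5"
    count as equal; anything non-numeric falls back to string equality.
    """
    answer = extract_final_answer(completion)
    if answer is None:
        return 0.0
    try:
        return 1.0 if Fraction(answer) == Fraction(ground_truth) else 0.0
    except (ValueError, ZeroDivisionError):
        return 1.0 if answer == ground_truth else 0.0

def difficulty_scaled_reward(base_reward: float, pass_rate: float) -> float:
    """Scale reward so harder problems (lower observed pass rate) pay more."""
    return base_reward * (1.0 - pass_rate)
```

A rule-based checker like this is cheap to run at RL scale and cannot be reward-hacked the way a learned reward model can, which is presumably why the MiMo team paired it with curated, verifiable math/code problems.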