Trending Stocks Today: XIAOMI-W up 4.74%
According to TrendForce, global TV shipment volume is expected to decrease by 0.7% year-on-year in 2025.
Global TV shipments are estimated to fall 0.7% in 2025 to 96.44 million units, just under the 100 million mark.
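As a quick sanity check on the figures above, the implied 2024 shipment base can be back-calculated from the 2025 forecast (a minimal sketch; the 0.7% decline and 96.44 million figure come from the TrendForce numbers cited above):

```python
# Back-calculate the implied 2024 TV shipment base from the 2025 forecast.
forecast_2025_m = 96.44   # million units, TrendForce 2025 forecast
yoy_decline = 0.007       # 0.7% year-on-year decline

implied_2024_m = forecast_2025_m / (1 - yoy_decline)
drop_units_m = implied_2024_m - forecast_2025_m

print(f"Implied 2024 shipments: {implied_2024_m:.2f} million units")
print(f"Year-on-year drop: {drop_units_m:.2f} million units")
```

This puts the implied 2024 base at roughly 97.12 million units, so the forecast decline amounts to about 0.68 million units.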
Xiaomi has launched MiMo, its first open-source reasoning-focused large model! With 7 billion parameters, it surpasses OpenAI's o1-mini and Alibaba's QwQ-32B-Preview.
Trained on the same reinforcement learning (RL) data, MiMo-7B shows significantly greater potential in mathematics and coding than other widely used models in the industry, including well-known RL starter models such as DeepSeek-R1-Distill-7B and Qwen2.5-32B.
Canalys: The global smartphone market grew only 0.2% in the first quarter, with shipments reaching 296.9 million units.
The latest research from Canalys (now part of Omdia) shows that in the first quarter of 2025 the global smartphone market grew by only 0.2%, with shipments reaching 296.9 million units.
Chinese humanoid robots are racing into a potential $5 trillion global market!
Morgan Stanley expects that by 2050 a total of 1 billion humanoid robots will be deployed globally, generating annual revenue of $4.7 trillion, nearly double the combined 2024 revenue of the world's top 20 auto manufacturers. China's policy support, technological advancements, and manufacturing base place it in a leading position in humanoid robotics, especially in the hardware supply chain.
Market Chatter: Xiaomi Increases Q1 Market Share in China
Moriarty mcG OP : Xiaomi recently unveiled MiMo, its first open-source large language model (LLM) designed for reasoning tasks, announced on April 30, 2025. Here's a detailed breakdown based on available information:

What is MiMo?
MiMo (likely standing for "Mindful Model" or a similar term, though not explicitly defined) is a 7-billion-parameter LLM series, trained from scratch by Xiaomi's LLM-Core Team for mathematics, coding, and general reasoning tasks. It is optimized through pre-training and post-training (reinforcement learning, or RL) to enhance reasoning capabilities. The MiMo-7B series includes:
- MiMo-7B-Base: pre-trained on ~25 trillion tokens with a multi-token prediction objective to boost performance and inference speed.
- MiMo-7B-SFT: a supervised fine-tuned version.
- MiMo-7B-RL: RL-tuned from the cold-start supervised fine-tuned model, excelling in math and code.
- MiMo-7B-RL-Zero: RL-trained directly from the base model, achieving 93.6% on specific benchmarks.

Performance
Despite its compact size (7B parameters), MiMo outperforms larger models like OpenAI's closed-source o1-mini and Alibaba's QwQ-32B-Preview on key benchmarks (e.g., AIME24, AIME25, LiveCodeBench, MATH500, GPQA-Diamond). For example, MiMo-7B-RL matches o1-mini's performance in math and coding tasks. It uses a three-stage pre-training data mixture and RL on 130K curated math/code problems, verified by rule-based systems to ensure quality. A test difficulty-driven reward system and data resampling further enhance its optimization.

Availability
MiMo is open-source, with models available on Hugging Face (https://huggingface.co/XiaomiMiMo). Xiaomi supports inference via a forked version of vLLM, though compatibility with other engines is unverified.
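The post does not detail the "test difficulty-driven reward system," but the underlying idea, weighting a coding problem's reward by how hard each test case is to pass, can be sketched roughly as follows (function name and weighting scheme are illustrative assumptions, not Xiaomi's actual implementation):

```python
def difficulty_weighted_reward(passed, difficulties):
    """Toy reward for a code-RL rollout (illustrative, not Xiaomi's code).

    passed:       list of bools, one per test case, True if the test passed
    difficulties: list of floats in (0, 1], higher = harder test
                  (e.g. 1 minus the historical pass rate of that test)

    Harder tests contribute more reward, so partial progress on hard
    problems is still rewarded instead of collapsing to a sparse 0/1 signal.
    """
    total = sum(difficulties)
    if total == 0:
        return 0.0
    earned = sum(d for ok, d in zip(passed, difficulties) if ok)
    return earned / total

# Passing only the hard test earns more than passing only the easy one.
easy_only = difficulty_weighted_reward([True, False], [0.2, 0.8])
hard_only = difficulty_weighted_reward([False, True], [0.2, 0.8])
```

A dense, difficulty-aware signal like this is one common way to stabilize RL on verifiable math/code tasks, compared with an all-or-nothing pass/fail reward.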
The team welcomes contributions and feedback at mimo@xiaomi.com. The release includes checkpoints for all model variants, aiming to benefit the broader AI community by providing insights into building reasoning-focused LLMs.

Significance
MiMo marks Xiaomi's entry into the competitive AI landscape, showcasing its ambition beyond hardware. Posts on X highlight its compact efficiency and superior performance, with sentiments praising Xiaomi's innovation in open-source AI. Unlike proprietary models, MiMo's open-source nature allows developers and researchers to adapt and build upon it, potentially accelerating advancements in reasoning-focused AI applications.
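The "multi-token prediction objective" mentioned above, training the model to predict several future tokens at each position (which also enables faster speculative decoding at inference time), can be illustrated with a toy target-construction sketch (names and shapes are illustrative; this is not Xiaomi's training code):

```python
def multi_token_targets(tokens, k):
    """For each position i, build the next-k-token targets (i+1 .. i+k).

    Standard next-token prediction is the k = 1 special case; a
    multi-token objective supervises k future tokens per position,
    densifying the training signal. Positions whose k-token window
    runs past the end of the sequence are dropped.
    """
    return [tokens[i + 1 : i + 1 + k] for i in range(len(tokens) - k)]

seq = [10, 20, 30, 40, 50]
targets = multi_token_targets(seq, 2)  # [[20, 30], [30, 40], [40, 50]]
```

With k = 1 this reduces to the usual next-token targets, which is why multi-token prediction can be layered onto a standard decoder without changing the data pipeline.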