Xia Lixue, Co-founder and CEO of Infinigence

AsianFin — Infinigence, an AI infrastructure startup backed by Tsinghua University, introduced a sweeping portfolio of performance-optimized computing platforms targeting the full spectrum of AI deployment at this year’s World Artificial Intelligence Conference (WAIC 2025).

The company officially launched three flagship products under its integrated solution suite: InfiniCloud, a global-scale AI cloud platform for clusters of up to 100,000 GPUs; InfiniCore, a high-performance intelligent computing platform designed for multi-thousand-GPU clusters; and InfiniEdge, a lean edge-computing solution optimized for terminal deployments with as few as one GPU.

Together, the platforms represent what CEO Xia Lixue calls a “software-hardware co-designed infrastructure system for the AI 2.0 era.” Built for compatibility across heterogeneous computing environments, the Infinigence stack offers full lifecycle support—from model scheduling and performance optimization to large-scale application deployment.

“We’re addressing a core bottleneck in China’s AI industry: fragmentation in compute infrastructure,” Xia said. “With InfiniCloud, InfiniCore, and InfiniEdge, we’re enabling AI developers to move seamlessly between different chips, architectures, and workloads—unlocking intelligent performance at scale.”

In a fast-evolving AI landscape dominated by open-source large language models such as DeepSeek, GLM-4.5, and MiniMax M1, Chinese infra startups are racing to build the backbone that powers model deployment and inference.

Early on July 29, Infinigence announced that InfiniCloud now supports Zhipu AI’s latest GLM-4.5 and GLM-4.5-Air models, which currently rank third globally in performance. The move signals Infinigence’s ambition to anchor the growing synergy between Chinese model developers and domestic chipmakers.

Xia likened the trio of newly launched platforms to “three bundled boxes” that can be matched to AI workloads of any scale. “From a single smartphone to clusters of 100,000 GPUs—our system is designed to ensure resource efficiency and intelligent elasticity,” he said.

Infinigence’s platforms are already powering Shanghai ModelSpeed Space, the world’s largest AI incubator. The facility handles more than 10 billion token calls daily, supports over 100 AI use cases, and reaches tens of millions of monthly active users across its applications.

A key challenge for China’s AI infrastructure sector is hardware heterogeneity. With dozens of domestic chip vendors and proprietary architectures, developers often struggle to port models across systems.

Xia emphasized that Infinigence has developed a “universal compute language” that bridges chips with disparate instruction sets. “We treat computing resources like supermarket goods—plug-and-play, interoperable, and composable,” he said.
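Infinigence has not published the interface behind this "universal compute language," but the plug-and-play idea Xia describes can be illustrated with a minimal sketch: a vendor-neutral backend interface that each chip family implements, so deployment code never touches vendor specifics. All names below (ComputeBackend, CudaBackend, DomesticNpuBackend, deploy) are hypothetical illustrations, not Infinigence APIs.

```python
# Hypothetical sketch of a unified compute abstraction over heterogeneous
# accelerators. Illustrative only; not Infinigence's actual interface.
from abc import ABC, abstractmethod


class ComputeBackend(ABC):
    """Common interface every chip vendor's backend must implement."""

    @abstractmethod
    def compile(self, model_graph: str) -> str:
        """Lower a vendor-neutral model graph to device-specific kernels."""

    @abstractmethod
    def run(self, compiled_artifact: str, batch: list) -> list:
        """Execute the compiled artifact on this device."""


class CudaBackend(ComputeBackend):
    def compile(self, model_graph: str) -> str:
        return f"cuda-kernels({model_graph})"

    def run(self, compiled_artifact: str, batch: list) -> list:
        return [f"{compiled_artifact} <- {x}" for x in batch]


class DomesticNpuBackend(ComputeBackend):
    def compile(self, model_graph: str) -> str:
        return f"npu-kernels({model_graph})"

    def run(self, compiled_artifact: str, batch: list) -> list:
        return [f"{compiled_artifact} <- {x}" for x in batch]


def deploy(model_graph: str, backend: ComputeBackend, batch: list) -> list:
    """The caller never sees vendor details: swap backends, keep the code."""
    artifact = backend.compile(model_graph)
    return backend.run(artifact, batch)


if __name__ == "__main__":
    # Same model on two different chips: "plug-and-play" in miniature.
    for backend in (CudaBackend(), DomesticNpuBackend()):
        print(deploy("llm-7b", backend, ["prompt-1"]))
```

The point of the pattern is that moving a model to a different chip changes one constructor call, not the deployment pipeline.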

The company’s infrastructure has already achieved full-stack adaptation for more than a dozen domestic chips, delivering 50%–200% performance gains through algorithm and compiler optimization. It also supports unified scheduling and mixed-precision computing, enabling cost-performance ratios that beat many international offerings.
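The article does not specify how Infinigence applies mixed precision, but the core technique can be shown with a generic PyTorch example (an assumed framework here, not necessarily Infinigence's stack): eligible operations run in a lower-precision dtype, trading a little numeric range for memory and throughput.

```python
# Generic mixed-precision illustration using PyTorch's autocast.
# Not Infinigence's implementation; just the underlying technique.
import torch

model = torch.nn.Linear(1024, 1024)
x = torch.randn(8, 1024)

# Inside autocast, eligible ops (matmul/linear) execute in bfloat16,
# while precision-sensitive ops stay in float32.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16 for the autocast-eligible linear output
```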

“What’s missing in China’s ecosystem is a feedback loop,” Xia said. “In the U.S., NVIDIA and OpenAI form a tight cycle: model developers know what chips are coming, and chipmakers know what models are being built. We’re building that loop domestically.”

Infinigence is also targeting AI democratization with a first-of-its-kind cross-regional federated reinforcement learning system. It links idle GPU resources from regional AI data centers (AIDCs) into a unified compute cluster, allowing SMEs to build and fine-tune domain-specific inference models using consumer-grade cards.
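Infinigence has not detailed its algorithm, but federated averaging is the common baseline for this kind of setup: each regional cluster trains on its own data and ships only parameter updates, which a coordinator merges by weighted average. The sketch below is illustrative, with hypothetical names and toy numbers.

```python
# Federated-averaging sketch: combining updates from GPUs in different
# regional data centers without pooling raw data. Illustrative only;
# Infinigence has not published its actual algorithm.
from typing import Dict, List


def federated_average(updates: List[Dict[str, float]],
                      weights: List[float]) -> Dict[str, float]:
    """Weighted average of per-region parameter updates (FedAvg-style)."""
    total = sum(weights)
    merged = {key: 0.0 for key in updates[0]}
    for update, w in zip(updates, weights):
        for key, value in update.items():
            merged[key] += value * (w / total)
    return merged


if __name__ == "__main__":
    # Two regional clusters report updates; weights reflect sample counts.
    region_a = {"layer.w": 0.20, "layer.b": -0.05}
    region_b = {"layer.w": 0.10, "layer.b": 0.15}
    print(federated_average([region_a, region_b], weights=[3.0, 1.0]))
    # -> {'layer.w': 0.175, 'layer.b': 0.0}
```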

To support this federated system, Infinigence launched the “AIDC Joint Operations Innovation Ecosystem Initiative” in partnership with China’s three major telecom providers and more than 20 AIDC institutions.

Xia noted that while training still depends heavily on NVIDIA hardware, inference workloads are rapidly migrating to domestic accelerators. “Users often start with international chips on our platform, but we help them transition to Chinese cards—many of which now deliver strong commercial value,” he said.

Infinigence has also rolled out a series of on-device and edge inference engines under its Infini-Ask line. These include:

  • Infini-Megrez2.0, co-developed with the Shanghai Institute of Creative Intelligence and billed as the world’s first on-device intrinsic model.

  • Infini-Mizar2.0, built with Lenovo, which enables heterogeneous computing on AI PCs, boosting local model capacity from 7B to 30B parameters.

  • A low-cost FPGA-based large model inference engine, jointly developed with Suzhou Yige Technology.

Founded in May 2023, Infinigence has raised more than RMB 1 billion in just two years, including a record-setting RMB 500 million Series A round in 2024—the largest to date in China’s AI infrastructure sector.

Its product portfolio now spans everything from model hosting and cloud management to edge optimization and model migration—serving clients across intelligent computing centers, model providers, and industrial sectors.

The company’s broader mission, Xia said, is to balance scale, performance, and resource availability. “Our vision is to deliver ‘boundless intelligence and flawless computing’—wherever there’s compute, we want Infinigence to be the intelligence that flows through it.”

IEEE Fellow and Tsinghua professor Wang Yu, also a co-founder of Infinigence, argued that the future of China’s AI economy depends on interdisciplinary collaboration. “We need people who understand chips, models, commercialization, and investment,” Wang said. “Only then can we solve the ‘last mile’ problem—connecting AI research with real-world deployment.”

As China looks to reduce its dependence on foreign hardware while competing globally in next-gen AI, Infinigence is positioning itself as a vital enabler—fusing chip-level control with cloud-scale ambition.

“Every AI system runs on two forces: models and compute,” Xia said. “They cannot evolve in silos—they must move forward in sync.”
