Factbox-Key products in Huawei’s AI chips and computing power roadmap
Huawei has laid out its most aggressive roadmap yet for AI chips and supercomputing systems, signaling a direct challenge to Nvidia’s dominance in global AI infrastructure.
The announcement, made at Huawei Connect in Shanghai, includes new chip models, proprietary memory, and large-scale computing clusters, all designed to reduce reliance on foreign suppliers and scale China’s AI capabilities.
This roadmap is not just technical; it’s political, strategic, and deeply consequential.
Ascend AI chips
Huawei’s Ascend series is central to its AI ambitions. The current chip, Ascend 910C, powers the Atlas 900 A3 SuperPoD. But the company has announced three new chips that will roll out over the next three years:
- Ascend 950 (Q1 2026): Comes in two variants, 950PR for recommendation engines and 950DT for training and inference. Huawei claims it will outperform Nvidia’s GB200 NVL72 in compute and memory.
- Ascend 960 (2027): Promises double the computing power and memory of the 950.
- Ascend 970 (2028): No specs released yet, but positioned as a leap beyond the 960.
Manufacturing details remain undisclosed
Huawei has not named its manufacturing partner, and U.S. sanctions bar it from using TSMC. Industry analysts believe Huawei is working with SMIC, which recently demonstrated limited 7nm capability, but this has not been confirmed.
High-bandwidth memory
Huawei has entered the high-bandwidth memory (HBM) market with its own chip, aiming to replace Samsung and SK Hynix in its supply chain.
- Current HBM chip: 128 GB memory, 1.6 TB/s bandwidth.
- HiZQ 2.0 (used in Ascend 950DT): 144 GB memory, 4 TB/s bandwidth.
Why this move is critical
Memory bandwidth is a bottleneck in AI training, and Huawei’s claim of 4 TB/s exceeds current HBM3E standards. However, no independent benchmarks have been released yet.
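To make the bandwidth figures concrete, here is a rough back-of-envelope sketch: the time to stream a model’s weights once from memory, a common proxy for memory-bound inference speed per token. The model size is an illustrative assumption; only the two bandwidth numbers come from Huawei’s announcement, and neither has been independently verified.

```python
# Back-of-envelope: milliseconds to read a model's weights once from HBM.
# The model size is assumed (~70B parameters in FP16, ~2 bytes/param);
# the bandwidth figures are Huawei's own, unverified claims.

def stream_time_ms(model_size_gb: float, bandwidth_tb_s: float) -> float:
    """Milliseconds to read model_size_gb at bandwidth_tb_s (decimal units)."""
    return model_size_gb / (bandwidth_tb_s * 1000) * 1000

model_gb = 140  # assumed model footprint in GB

for name, bw in [("current HBM chip", 1.6), ("HiZQ 2.0 (claimed)", 4.0)]:
    print(f"{name}: {stream_time_ms(model_gb, bw):.1f} ms per full weight pass")
```

Under these assumptions, the claimed 4 TB/s part would cut a full weight pass from roughly 87.5 ms to 35 ms, which is why memory bandwidth, not raw compute, often sets the ceiling for inference throughput.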
Atlas SuperPoD systems
Huawei’s SuperPoDs are large-scale computing clusters designed to rival Nvidia’s upcoming NVL72 and NVL144 platforms.
- Atlas 900 A3 SuperPoD: Uses 384 Ascend 910C chips. Publicly demonstrated in June 2025 with Huawei’s CloudMatrix 384 service.
- Atlas 950 SuperPoD (Q4 2026): 8,192 Ascend 950DT chips, 160 cabinets, 1,000 m² deployment space.
- Atlas 960 SuperPoD (Q4 2027): 15,488 Ascend 960 chips, 220 cabinets, 2,200 m² deployment space.
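Dividing the announced figures gives a sense of the physical scale involved. The densities below are derived purely from the numbers in this factbox, not from any Huawei disclosure:

```python
# Chip density of the two announced SuperPoDs, derived from the
# factbox figures above (chips, cabinets, floor area).
pods = {
    "Atlas 950": {"chips": 8192, "cabinets": 160, "area_m2": 1000},
    "Atlas 960": {"chips": 15488, "cabinets": 220, "area_m2": 2200},
}

for name, p in pods.items():
    per_cabinet = p["chips"] / p["cabinets"]
    per_m2 = p["chips"] / p["area_m2"]
    print(f"{name}: {per_cabinet:.1f} chips/cabinet, {per_m2:.1f} chips/m²")
```

Notably, the Atlas 960 packs more chips per cabinet (about 70 versus 51) but spreads them over more floor space per chip, which is consistent with higher-power parts needing more room for cooling.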
Kunpeng general-purpose chips
Kunpeng chips target general-purpose computing rather than specific AI tasks. They will power the TaiShan 950 SuperPoD, but no public performance data is available yet.
- Kunpeng 920: Released in 2019.
- Kunpeng 950: Scheduled for 2026.
- Kunpeng 960: Expected in 2028.
Deployment scale and infrastructure
Huawei is preparing for hyperscale AI workloads, similar to those run by OpenAI, Google DeepMind, and Meta. The company also claims its systems are more energy-efficient and cost-effective than Nvidia’s. Its roadmap includes massive physical deployments:
- Atlas 950: 1,000 m² footprint.
- Atlas 960: 2,200 m² footprint.
An ambitious but risky roadmap
Huawei is betting on vertical integration: owning the chip, memory, interconnect, and software stack. The strategy mirrors Nvidia’s, but Huawei must prove it can execute at scale. Key risks remain:
- Manufacturing maturity: SMIC’s 7nm process is still limited, and high-yield production is a significant challenge.
- Supply chain fragility: U.S. sanctions could tighten further, restricting access to equipment and materials.
- Performance validation: No independent benchmarks have been published; all performance claims come from Huawei itself.
- Software ecosystem: Huawei’s AI chips rely on its own CANN and MindSpore frameworks. Adoption outside China is limited.
What Eric Xu claims
Huawei’s Vice Chairman Eric Xu stated:
“This innovation represents a vital step forward, addressing previous dependencies on foreign suppliers for this crucial component.”
“Atlas 950 SuperPod will have 6.7 times more computing power and 15 times more memory capacity than the NVL144 system that Nvidia intends to launch in 2026, and would continue to beat a successor system Nvidia is planning for in 2027.”
Huawei’s AI chip and computing roadmap is, in effect, a declaration of strategic independence. With proprietary memory, massive supercomputing systems, and a clear timeline, Huawei is positioning itself as a serious contender in global AI infrastructure.
Eric Xu’s claims are bold. If accurate, Huawei’s systems would leapfrog Nvidia’s roadmap, but no third-party validation has been provided.