An advantage in computing power no longer automatically translates into an advantage in efficiency. 算力优势,不再自动等同于效率优势。
Hu Jiaqi (胡嘉琦), “Nvidia’s Next Battle: Not Chips, but the Computing Power Ecosystem” (英伟达的下一战:不是芯片,是算力体系), Business Management Review (商学院), April 19, 2026
The global AI industry, from chip makers to think tanks, is beginning to converge on a systems-level understanding of computing power, one that Beijing has already elevated to the level of national strategy. What is unfolding is not merely parallel development, but the gradual alignment of industry practice with a model of system organization that China has spent the past decade constructing. Industry is discovering what China has already systematized.
In a recent Chinese-language analysis of Jensen Huang and NVIDIA’s evolving strategy, the focus is not on chips, but on something far more consequential: the emergence of computing power as a system. The article’s central claim is straightforward but significant. In the age of artificial intelligence, advantages in raw computing power no longer translate automatically into advantages in efficiency or performance. The decisive question is no longer who builds the most powerful chips, but who can most effectively organize, integrate, and deploy computing power as infrastructure. What is being described is not a product shift, but a structural transformation in how the digital economy is understood.
This analysis is based on Hu Jiaqi’s recent essay in Business Management Review: “Nvidia’s Next Battle: Not Chips, but the Computing Power Ecosystem.” The essay examines NVIDIA’s evolving approach to AI chips and computing power. Although Digital China is not specifically mentioned in the essay, the implications and parallels would be self-evident to any Party member familiar with China’s national digital strategies.
AI is no longer a market for point solutions, but a systemic competition centered on infrastructure. (Hu Jiaqi, 2026)
This shift is visible in NVIDIA’s expanding scope. GPUs remain foundational, but they are no longer the center of gravity. Surrounding them is a rapidly growing architecture that includes high-bandwidth memory, hyperscale data centers, advanced cooling systems, high-speed interconnects, and, increasingly, optical networking. These components are not independent. They are being integrated into what Huang has repeatedly described as “AI factories”: large-scale, continuously operating systems designed not simply to process information, but to produce intelligence at industrial scale. In this formulation, computing power is no longer a discrete capability. It is a coordinated system of energy, hardware, networks, and software, operating as a unified whole.
What makes this development analytically significant is not the technology itself, but the logic behind it. As AI moves from training to inference, and from experimentation to large-scale deployment, efficiency, cost, and system coordination become the binding constraints. The bottlenecks are no longer located within individual chips, but in the relationships between them: bandwidth, latency, energy consumption, and system orchestration. Competition is shifting away from isolated performance metrics toward system-level capability: the ability to define, build, and operate integrated infrastructure at scale.
This is precisely the level at which China has already been operating. Under the framework of Digital China, computing power has been reconceptualized as a form of New Type Infrastructure, to be planned, coordinated, and optimized across regions and sectors. Initiatives such as the National Unified Computing Power Network reflect a view of computing not as a market commodity, but as a strategic resource embedded within a larger system. The emphasis is not on maximizing individual components, but on achieving overall system efficiency through coordination, scheduling, and integration.
The key to this competition is no longer “who is stronger,” but “who can more efficiently define infrastructure.” (Hu Jiaqi, 2026)
From this perspective, NVIDIA’s evolution does not represent a divergence from China’s approach, but a convergence toward it. The layered architecture described in NVIDIA’s public statements, spanning energy, compute, infrastructure, software, and applications, mirrors, in industrial form, the system-of-systems logic that underpins Digital China. What appears as a response to engineering constraints and market pressures reflects a deeper structural reality: that in the AI era, power resides not in components, but in systems.
The implications are significant. To define infrastructure is to shape cost structures, determine technical standards, and organize entire ecosystems of dependency and innovation. It is, in effect, to establish the operating environment within which all other actors must function. In this sense, the emerging competition over computing power is not simply technological or commercial. It is structural.
Has the AI industry, then, studied Digital China and begun to externalize it? There is no evidence to support such a claim. A more grounded interpretation is that both sides are responding to the same underlying constraints of large-scale AI systems. Huang approaches the problem from the perspective of engineering and industrial optimization; Beijing approaches it through state planning and system governance. Yet both are converging on the same conclusion: that computing power must be organized as infrastructure, and that advantage lies in system-level integration rather than component-level performance.
