Rack Power Density Is Outpacing AC Infrastructure
As AI accelerates the evolution of data center workloads, the physical intensity of computing is scaling at an unprecedented rate. Modern high-performance computing (HPC) and AI training clusters demand rack-level power densities that far exceed the design limits of legacy electrical infrastructure, and the pace of change shows no sign of slowing.
Just a few years ago, 20–30 kW per rack was considered high density. Today, 100–120 kW racks are becoming standard for GPU-heavy deployments, and roadmaps from leading AI infrastructure providers suggest 600 kW to 1 MW per rack within the next 3–5 years. These power levels are being driven by:
Dense GPU/TPU clusters for training massive language and multimodal models
Liquid-cooled and immersion-cooled systems that allow tighter server packing
Higher interconnect bandwidths with more active components per rack
These compute-dense environments pose serious challenges for traditional AC-based power architectures, both in terms of thermal management and electrical efficiency.
Most current data centers rely on a 480V AC distribution backbone, which is then stepped down and rectified to ~48V DC at the rack level to power IT equipment. While workable at moderate loads, this approach becomes problematic at extreme densities for several reasons:
Low-voltage, high-current distribution (e.g., supplying 600 kW at 48V) demands massive conductors to handle currents on the order of 12,500 amps (see the worked example after this list). This inflates the size, cost, and complexity of bus bars, PDUs, and cable trays.
I²R (resistive) losses grow with the square of the current, leading to significant efficiency loss and excess heat generation.
On-rack rectification and conversion generate localized heat loads, putting pressure on cooling systems and degrading power quality under transient load conditions.
UPS systems sized for AC delivery often become bottlenecks or require costly and inefficient scaling strategies.
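To put rough numbers on the first two points: for a 600 kW rack fed at 48V, the current follows directly from I = P/V, and even a small feed resistance (the 0.1 mΩ used here is an assumed, illustrative value, not a measured figure) turns into kilowatts of waste heat:

$$
I = \frac{P}{V} = \frac{600{,}000\ \mathrm{W}}{48\ \mathrm{V}} = 12{,}500\ \mathrm{A},
\qquad
P_{\mathrm{loss}} = I^{2}R = (12{,}500\ \mathrm{A})^{2} \times 0.1\ \mathrm{m\Omega} \approx 15.6\ \mathrm{kW}
$$

At that current, every additional tenth of a milliohm of bus bar, connector, and cable resistance adds roughly another 15 kW of loss, which is why conductor sizing dominates the design at 48V.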
In short, the AC-based model becomes physically and economically inefficient at the scale and density required for AI workloads.
High-voltage direct current (HVDC), typically in the 800V DC range, is emerging as a superior alternative for intra-facility power distribution in AI-ready data centers. The advantages are substantial:
Reduced Current and Conductor Size: At 800V, the current required to deliver the same power drops by a factor of more than 16 compared with 48V distribution. This reduces conductor cross-section and copper usage, improving spatial efficiency in cable trays and bus ducts.
Lower Resistive Losses: Because I²R losses fall with the square of the current, end-to-end electrical efficiency improves substantially (see the comparison after this list), which matters more as PUE targets tighten and energy costs rise.
Centralized Conversion: Moving rectification and power conversion out of the rack and into centralized power blocks confines those thermal loads to locations where cooling is more efficient, enabling simpler airflow management and better cooling economics.
Improved Load Response: HVDC systems support faster dynamic response to load transients—an increasingly important factor in AI workloads that spike power consumption unpredictably across GPU clusters.
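To make the first two advantages concrete, here is a minimal sketch in Python comparing feed current and I²R loss for the same 600 kW rack at 48V and 800V. The 0.1 mΩ feed resistance is an assumed, illustrative value (matching the earlier example), not data from any particular product.

```python
# Compare rack feed current and resistive (I^2 R) loss at 48 V vs 800 V DC
# for the same rack power and the same assumed conductor resistance.

RACK_POWER_W = 600_000        # 600 kW rack, as in the example above
FEED_RESISTANCE_OHM = 1e-4    # assumed 0.1 milliohm feed resistance (illustrative only)

def feed_current(power_w: float, voltage_v: float) -> float:
    """Current needed to deliver a given power at a given distribution voltage."""
    return power_w / voltage_v

def resistive_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss in the feed for a fixed conductor resistance."""
    current = feed_current(power_w, voltage_v)
    return current * current * resistance_ohm

for voltage in (48, 800):
    current = feed_current(RACK_POWER_W, voltage)
    loss_kw = resistive_loss(RACK_POWER_W, voltage, FEED_RESISTANCE_OHM) / 1000
    print(f"{voltage:>4} V: {current:>8,.0f} A, I^2R loss ~ {loss_kw:.2f} kW")

# Approximate output:
#   48 V:   12,500 A, I^2R loss ~ 15.62 kW
#  800 V:      750 A, I^2R loss ~ 0.06 kW
```

Because loss scales with the square of current, raising the distribution voltage from 48V to 800V cuts resistive loss in a given conductor by a factor of roughly (800/48)² ≈ 278, or, equivalently, lets the same loss budget be met with far less copper.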
Hyperscalers and AI-native compute providers are already deploying ultra-dense AI racks. As these densities scale toward the megawatt-per-rack range, AC systems simply will not keep up, economically or technically.
HVDC doesn’t just make high density feasible; it also enables better modular scaling, improved thermal efficiency, and easier integration with renewable energy and battery storage systems, most of which operate natively in DC.
At Ennovria, we design high-voltage DC distribution architectures tailored to perform efficiently in the next generation of compute infrastructure. Contact us to learn more.