AI Servers Dominate Energy Use in Hyperscale Data Centers

Servers are the primary drivers of electricity consumption in modern hyperscale data centers, especially those designed to support artificial intelligence workloads. According to the International Energy Agency (IEA), servers accounted for up to 76% of total electricity consumption in hyperscale data centers in 2024, underscoring how computing hardware now dominates the energy profile of these facilities.

Hyperscale data centers, operated by companies such as Google, Microsoft, Amazon, and Meta, are built around massive fleets of servers running continuously at high utilization. In AI-focused facilities, these servers are increasingly equipped with power-hungry accelerators such as GPUs and custom AI chips. These accelerators are designed to perform extremely large volumes of parallel computation, and the servers built around them draw significantly more power than traditional CPU-based servers.

The IEA’s estimate highlights a structural shift in data center energy use. Historically, energy consumption was more evenly split between computing, cooling, and supporting infrastructure. In hyperscale AI data centers, however, the compute layer itself now overwhelmingly dominates electricity demand.

While servers consume most of the energy, the remaining share is distributed across several essential systems:

  • Storage systems, which retain and retrieve massive datasets used for AI training and inference

  • Networking equipment, enabling high-bandwidth, low-latency communication between thousands of servers

  • Cooling systems, required to remove the intense heat generated by densely packed, high-power hardware

  • Other infrastructure, including power conversion, backup systems, and facility operations

Even though these components are critical to data center operation, their combined energy use is now significantly smaller than that of servers alone in hyperscale environments.
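
To put the split in concrete terms, the short sketch below allocates the power budget of a hypothetical 100 MW facility. The 76% server share is the IEA figure cited above; the division of the remaining share across the systems listed is an illustrative assumption, not an IEA breakdown.

```python
# Illustrative power budget for a hypothetical 100 MW hyperscale AI facility.
# The 76% server share is the IEA figure cited above; the split of the
# remaining 24% across the other systems is an assumption for illustration.

FACILITY_MW = 100.0

shares = {
    "servers": 0.76,               # IEA estimate for hyperscale facilities (2024)
    "cooling": 0.10,               # assumed
    "networking": 0.06,            # assumed
    "storage": 0.04,               # assumed
    "other_infrastructure": 0.04,  # assumed (power conversion, backup, ops)
}

# Sanity check: shares must cover the whole facility budget.
assert abs(sum(shares.values()) - 1.0) < 1e-9

for system, share in shares.items():
    print(f"{system:>22}: {share * FACILITY_MW:5.1f} MW ({share:.0%})")
```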

The dominance of servers in energy consumption is expected to increase further. AI models are growing in size and complexity, requiring more compute per task. At the same time, hyperscalers are deploying servers with higher power densities and running them closer to peak utilization for longer periods. As a result, total electricity demand rises even when gains are made in cooling efficiency or power distribution.
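
One way to see this dynamic is through power usage effectiveness (PUE), the ratio of total facility power to IT (server) power. The sketch below, using purely hypothetical numbers, shows how a growing server fleet can outpace even a substantial cooling-efficiency improvement.

```python
# A minimal sketch of why total demand can rise even as cooling efficiency
# improves. Total facility power = IT (server) load x PUE, where PUE is
# power usage effectiveness. All numbers here are hypothetical.

def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility power given server (IT) load and PUE."""
    return it_load_mw * pue

# Year 1: 50 MW of servers in a facility with a PUE of 1.4.
before = facility_power_mw(it_load_mw=50.0, pue=1.4)

# Year 2: cooling improves (PUE drops to 1.2), but the server fleet doubles.
after = facility_power_mw(it_load_mw=100.0, pue=1.2)

print(f"before: {before:.0f} MW, after: {after:.0f} MW")
# before: 70 MW, after: 120 MW -- total demand rises despite the better PUE.
```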

The IEA’s finding has important implications. It means that any meaningful effort to manage the energy footprint of AI data centers must focus first on server efficiency, including chip design, system architecture, software optimization, and workload management. While improvements in cooling and infrastructure remain important, the compute layer is now the central factor shaping energy demand.

In addition to improving server efficiency itself, efficient power delivery is becoming a critical focus, and one of the leading approaches is the use of high-voltage direct current (HVDC) within data centers. HVDC reduces energy losses by minimizing the number of power conversions between the grid and the servers. Traditional AC architectures require multiple AC-to-DC and DC-to-AC conversions, each of which wastes energy as heat. By delivering power closer to the form that servers actually use, HVDC systems improve overall efficiency, reduce heat generation, and simplify power distribution at scale.

As server loads continue to dominate data center energy use, optimizing how electricity is delivered to those servers is increasingly seen as a key lever for reducing total power consumption and supporting the continued growth of hyperscale AI infrastructure.
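
The intuition can be captured with simple arithmetic: end-to-end delivery efficiency is the product of each conversion stage's efficiency, so removing stages removes compounding losses. The sketch below compares a notional AC path against a notional HVDC path; the stage counts and per-stage efficiencies are illustrative assumptions, not measurements of any particular architecture.

```python
# A minimal sketch comparing end-to-end power delivery efficiency as the
# product of per-stage conversion efficiencies. Stage counts and efficiency
# values are illustrative assumptions, not vendor or facility measurements.

from math import prod

# Notional AC distribution: double-conversion UPS (AC->DC->AC), then the
# server PSU rectifies again (AC->DC), then an internal DC-DC stage.
ac_chain = [0.97, 0.96, 0.95, 0.97]

# Notional HVDC distribution: one facility-level rectification stage,
# then a DC-DC stage at the rack or server.
hvdc_chain = [0.98, 0.97]

ac_eff = prod(ac_chain)
hvdc_eff = prod(hvdc_chain)

print(f"AC path efficiency:   {ac_eff:.1%}")    # ~85.8%
print(f"HVDC path efficiency: {hvdc_eff:.1%}")  # ~95.1%

# For 76 MW of server load, the difference in delivery losses:
load_mw = 76.0
print(f"AC losses:   {load_mw / ac_eff - load_mw:.1f} MW")    # ~12.6 MW
print(f"HVDC losses: {load_mw / hvdc_eff - load_mw:.1f} MW")  # ~4.0 MW
```

Even a few percentage points of delivery efficiency translate into megawatts at hyperscale, which is why power architecture is drawing attention alongside the chips themselves.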

As hyperscale AI data centers continue to expand, the electricity consumed by servers will play an increasingly significant role in shaping power grid planning, energy markets, and the environmental impact of the digital economy.

Contact us to learn how Ennovria can help you bring HVDC to your data center.
