Power Usage Effectiveness (PUE) has long been the primary yardstick for datacentre efficiency, providing a simple, standardized way to track overhead energy against IT load. As the industry shifts toward AI workloads, extreme rack densities, and urgent sustainability goals, however, PUE is being re-evaluated: does it remain the definitive metric, or is it just one part of a larger puzzle? To answer that, we must look at what PUE actually calculates, where it falls short, and how it fits into the future of liquid-cooled infrastructure.

Why the Datacentre Industry Has Outgrown PUE

PUE remains a useful benchmark, with lower values indicating less overhead energy relative to IT load, but datacentre technology and operating conditions have evolved more rapidly than the metric itself. Today’s facilities support AI clusters drawing hundreds of kilowatts per rack, increasingly rely on liquid and immersion cooling, operate under tightening water constraints, and face new regulatory requirements that demand greater transparency than a single efficiency number can provide.

At the same time, global PUE benchmarks have shown little improvement. Uptime Institute’s most recent global surveys place the weighted average PUE just above 1.5 and indicate that it has remained largely unchanged for approximately six years. As a result, the industry faces a disconnect where technical capabilities continue to advance, yet the headline PUE figure remains largely static. Understanding this gap requires a clear view of what PUE measures and what it does not.

Defining and Calculating PUE

The formal definition of PUE is provided by The Green Grid and comes down to a single calculation:

PUE = Total Facility Power ÷ IT Equipment Power

This PUE calculation addresses a single question: How much additional energy does the facility consume to support the IT load?

For example, a PUE of 1.5 indicates that for every 1 kW delivered to IT equipment, an additional 0.5 kW is consumed by supporting infrastructure.
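
For readers who prefer code, a minimal sketch of the same calculation follows; the kilowatt figures are illustrative, not drawn from any real facility.

```python
# A minimal sketch of the PUE calculation defined above.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Total Facility Power divided by IT Equipment Power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# 1,500 kW total site draw while delivering 1,000 kW to IT equipment:
print(pue(1500.0, 1000.0))  # 1.5 -> 0.5 kW of overhead per 1 kW of IT load
```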

| Calculation Component | Includes |
| --- | --- |
| Total Facility Power | All power consumed by the datacentre facility, including cooling systems (chillers, CRACs, pumps, fans), power delivery components (UPS, switchgear, generators, PDUs), lighting, building management systems, security, and general facility loads. |
| IT Equipment Power | The energy actually delivered to compute equipment such as servers, storage arrays, network switches, routers, and monitoring workstations. |

This clarity is what made PUE so influential. It exposed inefficiencies in mechanical and electrical systems and helped drive significant improvements in datacentre design and operation. As those inefficiencies have been reduced, however, PUE has become less informative as a proxy for overall sustainability or operational effectiveness.

Design PUE vs. Measured PUE

It is also important to distinguish between design PUE and measured or reported PUE, as the difference between the two is often significant in practice.

Design PUE represents a theoretical value calculated under assumed conditions: full IT load, steady-state operation, and optimized system performance. In effect, it describes how efficient a datacentre could be when operating exactly as intended. Measured PUE, by contrast, reflects how the facility actually performs over time, across varying load levels, partial occupancy, maintenance events, and real-world operating constraints. During early stages of operation—when customer adoption is gradual and IT utilization is low—fixed facility overheads can dominate, resulting in a measured PUE that is materially higher than the original design value.
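
One way to make the distinction concrete: measured PUE is typically computed from energy accumulated over a reporting period rather than from a single design-point power ratio. The sketch below uses invented interval readings.

```python
# Measured (period-based) PUE from interval meter readings, assuming
# matched metering intervals. The kWh values are invented.
facility_kwh = [1450, 1380, 1520, 1610]  # total facility energy per interval
it_kwh = [900, 870, 940, 980]            # IT equipment energy per interval

measured_pue = sum(facility_kwh) / sum(it_kwh)
print(f"Measured PUE over the period: {measured_pue:.2f}")  # ~1.62
```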

The Global Average PUE

According to Uptime Institute’s recent Global Data Center Surveys, the industry’s weighted average PUE remains just over 1.5 and has changed little since roughly 2019. While newer, larger facilities often perform better—frequently achieving average PUE values in the mid-1.4 range—the large installed base of older and smaller facilities keeps the global average effectively flat. This is one reason regulators, investors, and operators increasingly regard PUE as a necessary but incomplete indicator.

Why a Datacentre in Sweden Does Not Look Like One in Singapore

PUE itself is not the issue when comparing datacentres across regions. The limitation arises when a PUE value achieved under one set of environmental conditions is used as a prescriptive design target in a fundamentally different climate. A design that performs well in Sweden is not inherently efficient everywhere, and differences of 0.3 or more in PUE can arise purely from climate, independent of design quality.

The following comparison illustrates why reference designs cannot simply be copy-pasted across different geographies.

| Regional Factor | Nordic Climate (e.g., Sweden) | Tropical Climate (e.g., Singapore) |
| --- | --- | --- |
| Ambient Temperature | Low average temperatures with long periods of cold air. | High average temperatures and consistent heat year-round. |
| Cooling Strategy | Heavy reliance on free cooling using outside air. | Requires active mechanical cooling or high-energy chillers. |
| PUE Impact | Naturally lower PUE due to minimal compressor use. | Naturally higher PUE due to high humidity and heat load. |
| Design Transferability | Highly efficient in cold regions but poor if copied directly to the tropics. | Systems must be specifically optimized for the local environment. |

When a facility is designed to replicate a Nordic reference PUE in a tropical climate, the result is often a less efficient datacentre in absolute terms. Systems optimized for free cooling may operate far from their optimal range in hot climates, increasing energy use and mechanical complexity.
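
A toy calculation shows how strongly climate alone can move the numbers: estimate the share of hours in which outside air is cool enough for compressor-free operation. The 18 °C threshold and the temperature samples below are assumptions for illustration, not design guidance.

```python
# Share of hours eligible for free cooling under an assumed supply-air
# threshold. Temperatures are stand-in samples, not real climate data.
FREE_COOLING_MAX_C = 18.0

stockholm_hourly_c = [-2, 4, 9, 15, 21, 12]
singapore_hourly_c = [27, 28, 30, 31, 29, 28]

def free_cooling_share(hourly_temps_c: list[float]) -> float:
    eligible = sum(1 for t in hourly_temps_c if t <= FREE_COOLING_MAX_C)
    return eligible / len(hourly_temps_c)

print(f"Stockholm-like climate: {free_cooling_share(stockholm_hourly_c):.0%}")  # 83%
print(f"Singapore-like climate: {free_cooling_share(singapore_hourly_c):.0%}")  # 0%
```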

Why PUE Alone Is No Longer Sufficient

There are several structural reasons why PUE is no longer adequate as a stand-alone metric.

First, PUE does not account for water consumption. Many air-cooled facilities achieve low PUE values through evaporative cooling, often at the cost of significantly higher water use—an increasingly important consideration in water-stressed regions.

Second, PUE does not reflect IT-side efficiency improvements. Technologies such as immersion cooling can reduce server power by eliminating fans and improving thermal stability. These gains may increase compute output while leaving PUE unchanged, or even making it worse, because IT power appears in the denominator of the calculation; the worked example after these points makes the arithmetic concrete.

Third, PUE provides no insight into productivity. Two facilities with identical PUE values can deliver very different amounts of useful computation depending on utilization, workload mix, and power density.

Finally, PUE is sensitive to measurement boundaries and reporting practices. Changes in metering location or IT load definition can alter the reported value without changing underlying efficiency.
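
The denominator effect mentioned above is easiest to see with numbers. In the sketch below (invented figures), removing server fans cuts IT power while facility overhead stays fixed, so reported PUE worsens even though the site draws less power overall.

```python
overhead_kw = 400.0    # cooling and power-delivery overhead, held constant here

it_before_kw = 1000.0  # IT load including internal server fans
it_after_kw = 900.0    # same servers with ~100 kW of fan power removed

pue_before = (it_before_kw + overhead_kw) / it_before_kw  # 1.40
pue_after = (it_after_kw + overhead_kw) / it_after_kw     # ~1.44

print(f"PUE: {pue_before:.2f} -> {pue_after:.2f}")        # ratio worsens
print(f"Total: {it_before_kw + overhead_kw:.0f} kW -> "
      f"{it_after_kw + overhead_kw:.0f} kW")              # site draws less
```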

IT, Cooling, and Facility

To address the limitations of a single metric, advanced operators increasingly assess efficiency across three interconnected layers. This multi-layered approach provides a more accurate representation of actual performance.

| Efficiency Layer | Focus Area |
| --- | --- |
| IT Efficiency | Focuses on hardware optimization, the removal of server fans, and maximizing server utilization to reduce waste at the source. |
| Cooling Efficiency | Evaluates the performance of heat rejection systems, the transition from air to liquid cooling, and the energy required for pumps. |
| Facility Efficiency | Measures electrical distribution losses, lighting, and general auxiliary building loads that support the datacentre. |

PUE and WUE

Because PUE does not track water use, The Green Grid introduced Water Usage Effectiveness (WUE) as a complementary metric. A "good" PUE result should never be considered in isolation. Air-cooled facilities using aggressive evaporative cooling can achieve attractive PUE values while exhibiting very high water consumption. Liquid and immersion-cooled designs often reverse this trade-off, where PUE improvements may be modest, but water use is significantly reduced or eliminated.
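
As a rough sketch of how the two metrics are read together, the snippet below computes WUE as annual site water use in litres per kilowatt-hour of IT energy, following The Green Grid's definition; all figures are hypothetical.

```python
def wue(annual_site_water_litres: float, annual_it_kwh: float) -> float:
    """Water Usage Effectiveness in litres per kWh of IT energy."""
    return annual_site_water_litres / annual_it_kwh

annual_it_kwh = 87_600_000  # a hypothetical 10 MW IT load running all year

# Evaporatively cooled site: attractive PUE, heavy water use.
print(f"{wue(150_000_000, annual_it_kwh):.2f} L/kWh")  # ~1.71
# Immersion-cooled site: similar IT energy, near-zero water use.
print(f"{wue(2_000_000, annual_it_kwh):.2f} L/kWh")    # ~0.02
```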

Digital Production per Unit Energy

Increasingly, regulators, customers, and investors are less focused on PUE alone and more concerned with what a datacentre actually delivers. This has led to growing interest in metrics that describe digital production per unit of energy, such as FLOPs per watt, inferences per kilowatt-hour, or jobs per megawatt-hour. These metrics focus on how efficiently energy is converted into useful digital work.
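
A productivity metric of this kind is simple to compute once the workload counter and the energy meter cover the same window; the figures below are invented.

```python
# Inferences served per kilowatt-hour over a common reporting window.
inferences_served = 4.2e9   # completed inference requests in a month
site_energy_kwh = 1.4e6     # total facility energy over the same month

print(f"{inferences_served / site_energy_kwh:,.0f} inferences/kWh")  # 3,000
```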

European regulatory initiatives are moving in this direction, with proposed datacentre reporting frameworks that combine PUE, WUE, heat reuse, renewable energy share, and indicators of workload efficiency. From this perspective, a slightly higher PUE in a dense, highly utilized, immersion-cooled AI facility may represent greater real-world efficiency than a very low PUE in a lightly loaded, water-intensive site.

How OCP Reframes Power Usage Effectiveness

The Open Compute Project (OCP), through its Advanced Cooling Solutions (ACS) work, formalizes a broader, multi-metric approach. OCP recognizes that while the classic definition of PUE remains useful, it is insufficient for environments using direct-to-chip or immersion cooling.

| OCP Measurement Gap | Description |
| --- | --- |
| Server Fan Elimination | Captures energy savings from removing internal IT chassis fans, which standard PUE often ignores. |
| Pump and CDU Efficiency | Accounts for the power required to move liquid through the cooling loop and heat exchangers. |
| Thermal Performance | Measures efficiency gains from the superior heat transfer properties of liquid compared to air. |
| Energy Productivity | Emphasizes how efficiently energy is converted into usable compute within the full system context. |
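
A hedged sketch of the accounting these gaps point at: server fans sit inside the PUE denominator, while pump and CDU power sits in the overhead, so comparing designs on PUE alone misses where the energy actually goes. All numbers below are invented.

```python
# Air-cooled rack: 50 kW of IT draw, of which ~8% is internal server fans.
air_it_kw, air_fan_fraction, air_overhead_kw = 50.0, 0.08, 20.0

# Immersion rack: fans removed; a small pump/CDU load moves into overhead.
imm_it_kw, imm_pump_kw, imm_other_overhead_kw = 46.0, 1.5, 8.0

air_pue = (air_it_kw + air_overhead_kw) / air_it_kw                      # 1.40
imm_pue = (imm_it_kw + imm_pump_kw + imm_other_overhead_kw) / imm_it_kw  # ~1.21

# Power that actually reaches compute (strip fans out of the IT figure):
air_compute_kw = air_it_kw * (1 - air_fan_fraction)  # 46 kW
imm_compute_kw = imm_it_kw                           # 46 kW

print(f"Air: PUE {air_pue:.2f}, {air_it_kw + air_overhead_kw:.1f} kW total "
      f"for {air_compute_kw:.0f} kW of compute")
print(f"Immersion: PUE {imm_pue:.2f}, "
      f"{imm_it_kw + imm_pump_kw + imm_other_overhead_kw:.1f} kW total "
      f"for {imm_compute_kw:.0f} kW of compute")
```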

Where Immersion Cooling Fits

Immersion cooling influences datacentre efficiency across multiple dimensions, explaining both its potential benefits and why its impact is not always visible in traditional metrics such as PUE. At the IT level, immersion cooling can reduce server power by eliminating internal fans, improving thermal stability, and reducing throttling. These changes can increase usable compute per unit of energy, even when they do not produce an immediate improvement in the PUE ratio.

| Feature | Traditional Air Cooling | Immersion Cooling |
| --- | --- | --- |
| Heat Transfer Medium | Air, which is a poor thermal conductor compared to liquid. | Dielectric liquid with a high thermal capacity for efficient cooling. |
| Server Fans | Required for airflow, resulting in significant internal energy draw. | Completely eliminated, leading to direct IT energy savings. |
| Heat Reuse Potential | Limited, as it produces low-grade waste heat that is hard to capture. | High, providing high-grade heat suitable for district heating networks. |
| Typical PUE Range | Usually between 1.4 and 1.6, or higher in warmer climates. | Consistently low, typically ranging from 1.05 to 1.2. |

At the cooling-system and facility levels, immersion cooling shifts heat removal away from air handling and humidity control toward liquid-based heat transfer. In purpose-built facilities, this can simplify the thermal chain, reduce mechanical overhead, and significantly reduce or eliminate water consumption. In such designs, operators have reported PUE values in the range of approximately 1.07 to 1.2, even in warmer climates where air-cooled facilities face inherent constraints.

However, immersion cooling does not automatically improve PUE in all scenarios. In retrofit or mixed-use environments, immersion systems are often deployed alongside existing air-cooling infrastructure, meaning chillers and pumps may continue operating even though the IT equipment no longer heats the room air. Additionally, because immersion cooling reduces IT power (the denominator in the PUE calculation), the PUE ratio may remain unchanged or even increase if facility energy use does not decline proportionally—despite actual reductions in total energy and water use.

This effect is particularly evident in colocation facilities where immersion represents only a small share of total load. These dynamics underscore why immersion cooling must be evaluated using additional metrics alongside PUE, including total energy use, water consumption, workload efficiency, and achievable power density.

What Industry Bodies Recommend

Across major industry organizations, a consistent direction is emerging: the industry is moving away from PUE as the defining metric toward PUE as one input among several. The Green Grid continues to anchor efficiency discussions around PUE and WUE, emphasizing that they must be considered together. Simultaneously, the Uptime Institute highlights the stagnation of global average PUE and encourages broader performance assessments. ASHRAE increasingly treats liquid cooling as a central part of mainstream datacentre design guidance, while the European Commission is formalizing multi-metric reporting frameworks that combine energy, water, heat reuse, and digital productivity.

TCO and LCA

For practical decision-making—particularly when comparing air cooling, direct-to-chip, and immersion—organizations increasingly rely on broader frameworks. This includes Total Cost of Ownership (TCO), which captures energy, water, hardware, space, operations, maintenance, and system lifetime. Additionally, Life Cycle Assessment (LCA) is used to capture embodied carbon, construction impacts, fluid lifecycle, and end-of-life considerations. While PUE and WUE inform these analyses, final decisions are rarely based on a single efficiency metric alone.
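
As a rough illustration of how such a comparison might be framed, the sketch below folds capital cost and discounted annual operating costs into a single figure. The cost categories follow the paragraph above; the numbers and the flat discounting are assumptions, not a standard industry model.

```python
def simple_tco(capex: float, annual_energy: float, annual_water: float,
               annual_maintenance: float, years: int,
               discount_rate: float = 0.05) -> float:
    """Capex plus present value of annual operating costs (all in dollars)."""
    annual_opex = annual_energy + annual_water + annual_maintenance
    return capex + sum(annual_opex / (1 + discount_rate) ** y
                       for y in range(1, years + 1))

air = simple_tco(capex=10e6, annual_energy=2.0e6, annual_water=0.3e6,
                 annual_maintenance=0.4e6, years=10)
immersion = simple_tco(capex=12e6, annual_energy=1.5e6, annual_water=0.0,
                       annual_maintenance=0.3e6, years=10)
print(f"Air-cooled TCO: ${air / 1e6:.1f}M")        # ~$30.8M
print(f"Immersion TCO:  ${immersion / 1e6:.1f}M")  # ~$25.9M
```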

PUE Matters, but It Is Only the Starting Point

PUE remains a clear and useful indicator of how much overhead energy a datacentre consumes to support its IT load. As a concept, it played a critical role in improving infrastructure efficiency across the industry. As a stand-alone measure of performance, however, it has reached its practical limits.

In an environment defined by AI workloads, high-density compute, water constraints, and regional regulation, meaningful assessment requires a broader perspective—one that combines PUE, WUE, subsystem efficiency, digital productivity, and holistic frameworks such as TCO and LCA. PUE is not disappearing. It is becoming one component of a more complete and accurate picture of datacentre performance.