Power Usage Effectiveness remains a useful metric, but it is no longer sufficient on its own to describe modern datacentre performance.

Why the Datacentre Industry Has Outgrown PUE

For many years, Power Usage Effectiveness (PUE) has been the primary efficiency metric used in datacentres. It is simple, easy to benchmark, and widely understood: lower values indicate less overhead energy relative to IT load.

However, datacentre technology and operating conditions have evolved more rapidly than the metric itself.

Today’s facilities support AI clusters drawing hundreds of kilowatts per rack, increasingly rely on liquid and immersion cooling, operate under tightening water constraints, and face new regulatory requirements—particularly in Europe—that demand greater transparency than a single efficiency number can provide. At the same time, global PUE benchmarks have shown little improvement. Uptime Institute’s most recent global surveys place the weighted average PUE just above 1.5 and indicate that it has remained largely unchanged—around 1.54 to 1.56—for approximately six years.

As a result, the industry faces a disconnect: technical capabilities continue to advance, while the headline PUE figure remains largely static.

Understanding this gap requires a clear view of what PUE measures—and what it does not.

PUE Definition: What It Means and How It Is Calculated

The formal definition of PUE is provided by The Green Grid and comes down to a single calculation:

PUE = Total Facility Power ÷ IT Equipment Power

This PUE calculation addresses a single question:

How much additional energy does the facility consume to support the IT load?

A PUE of 1.5 indicates that for every 1 kW delivered to IT equipment, an additional 0.5 kW is consumed by cooling, power conversion, lighting, and other supporting infrastructure.
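The ratio above is simple enough to express directly. The sketch below is purely illustrative; the function name and figures are not from any standard implementation, only a restatement of the definition:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE = total facility power / IT equipment power (dimensionless, >= 1.0)."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_kw

# Worked example from the text: 1 kW delivered to IT equipment plus
# 0.5 kW of cooling, power conversion, and lighting overhead.
print(pue(1.5, 1.0))  # 1.5
```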

This clarity is what made PUE so influential. It exposed inefficiencies in mechanical and electrical systems and helped drive significant improvements in datacentre design and operation. As those inefficiencies have been reduced, however, PUE has become less informative as a proxy for overall sustainability or operational effectiveness.

Design PUE vs. Measured PUE

It is also important to distinguish between design PUE and measured (or reported) PUE, as the difference between the two is often significant in practice. Design PUE is a theoretical value calculated under assumed conditions: full IT load, steady-state operation, and optimized system performance. In effect, it describes how efficient a datacentre could be when operating exactly as intended.

Measured PUE, by contrast, reflects how the facility actually performs over time, across varying load levels, partial occupancy, maintenance events, and real-world operating constraints. During the early stages of operation—when customer adoption is gradual and IT utilization is low—fixed facility overheads can dominate, resulting in a measured PUE that is materially higher than the original design value. As the datacentre evolves through its lifecycle, changes in density, cooling configuration, and operational practices can widen this gap further.

This distinction matters because a datacentre that appears highly efficient “on paper” may perform very differently in practice for many years. It also highlights why efficiency metrics must be interpreted in the context of utilization and deployment strategy, and why approaches that allow capacity and infrastructure to scale in step with demand—such as modular deployment—are increasingly relevant.
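The fixed-overhead effect described above can be sketched numerically. The model and all figures below are hypothetical assumptions for illustration, splitting facility overhead into a fixed part and a part that scales with IT load:

```python
def measured_pue(it_load_kw: float, fixed_overhead_kw: float,
                 variable_overhead_ratio: float) -> float:
    # Overhead modeled as a fixed component (always-on cooling plant,
    # UPS losses, lighting) plus a component proportional to IT load.
    overhead_kw = fixed_overhead_kw + variable_overhead_ratio * it_load_kw
    return (it_load_kw + overhead_kw) / it_load_kw

DESIGN_IT_KW = 1000.0  # assumed full design load
FIXED_KW = 200.0       # assumed fixed overhead
VAR_RATIO = 0.2        # assumed load-proportional overhead

print(measured_pue(DESIGN_IT_KW, FIXED_KW, VAR_RATIO))         # 1.4 at full load
print(measured_pue(0.25 * DESIGN_IT_KW, FIXED_KW, VAR_RATIO))  # 2.0 at 25% load
```

At full load the facility hits its design-like figure; at 25% utilization the same fixed overhead pushes measured PUE far above it, which is exactly why early-lifecycle reported values diverge from the design sheet.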

Where We Are Today: The Global Average PUE

According to Uptime Institute’s 2024 and 2025 Global Data Center Surveys, the industry’s weighted average PUE remains just over 1.5 and has changed little since around 2019.

Newer, larger facilities often perform better. Uptime Institute’s detailed analyses show that modern sites frequently achieve average PUE values in the mid-1.4 range, while individual flagship facilities—particularly in cooler climates with heat reuse—have reported design PUE values near 1.2.

However, the large installed base of older and smaller facilities keeps the global average effectively flat. This is one reason regulators, investors, and operators increasingly regard PUE as a necessary but incomplete indicator.

Why a Datacentre in Sweden Does Not Look Like One in Singapore

PUE itself is not the issue when comparing datacentres across regions. The limitation arises when a PUE value achieved under one set of environmental conditions is used as a prescriptive design target in a fundamentally different climate.

Facilities in Sweden and other Nordic regions benefit from structural advantages largely outside the control of the designer:

• Cold ambient temperatures for much of the year

• Extensive opportunities for free cooling

• Highly efficient dry coolers

• The ability to integrate large-scale heat reuse into district heating networks

Under these conditions, achieving very low annualized PUE values is relatively straightforward, and some Nordic facilities report PUEs near 1.2 using conventional air-based cooling architectures.

Challenges emerge when the same reference designs—or the same headline PUE targets—are applied directly to hot, humid locations such as Singapore.

Datacentres in tropical regions face materially different constraints:

• High ambient temperatures throughout the year

• Continuous, energy-intensive humidity control

• Minimal opportunities for free cooling

• Higher energy requirements for heat rejection

Even with strong engineering and high-quality equipment, achievable PUE values in such environments will generally be higher than in cold climates. Differences of 0.3 or more can arise purely from climate, independent of design quality or operational discipline.

When a facility is designed to replicate a “1.2 PUE Nordic reference” in a tropical climate, the result is often a less efficient datacentre in absolute terms. Systems optimized for free cooling or low-temperature heat rejection may operate far from their optimal range, increasing energy use, mechanical complexity, and operational stress.

In this context, PUE remains a valid metric for comparing performance within similar climates and operating conditions. The error lies in using PUE alone to guide design choices without accounting for geography. A design that performs well in Sweden is not inherently efficient everywhere.

This is why modern datacentre design increasingly emphasizes not only the PUE outcome, but also the design path used to achieve it—selecting cooling architectures that are appropriate for local conditions and less sensitive to climate, rather than attempting to replicate results achieved under very different environmental assumptions.

For this reason, the industry increasingly needs both to look beyond PUE and to consider cooling technologies that reduce geographic dependence.

Why PUE Alone Is No Longer Sufficient

There are several structural reasons why PUE is no longer adequate as a stand-alone metric.

First, PUE does not account for water consumption. Many air-cooled facilities achieve low PUE values through evaporative cooling, often at the cost of significantly higher water use—an increasingly important consideration in water-stressed regions.

Second, PUE does not reflect IT-side efficiency improvements. Technologies such as immersion cooling can reduce server power by eliminating fans and improving thermal stability. These gains may increase compute output while having little effect—or even a negative effect—on PUE, because IT power appears in the denominator of the calculation.

Third, PUE provides no insight into productivity. Two facilities with identical PUE values can deliver very different amounts of useful computation depending on utilization, workload mix, and power density.

Finally, PUE is sensitive to measurement boundaries and reporting practices. Changes in metering location or IT load definition can alter the reported value without changing underlying efficiency.

For these reasons, PUE remains useful—but only as a partial indicator.

Three Levels of Efficiency: IT, Cooling, and Facility

More advanced operators increasingly assess efficiency across three interconnected layers:

• IT efficiency, focusing on how effectively servers convert power into useful work, including fan energy, thermal throttling, temperature margins, and utilization. This is an area where immersion cooling can deliver benefits that PUE does not capture.

• Cooling subsystem efficiency, examining the energy used by pumps, CDUs, heat exchangers, and fluid handling. This is sometimes expressed through more granular metrics such as cooling-specific PUE or pPUE.

• Facility-level efficiency, measured using traditional PUE, but interpreted alongside water use, power density, workload characteristics, and climate.

Together, these layers provide a more accurate representation of actual performance.
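The partial PUE (pPUE) mentioned for the cooling subsystem follows the same ratio logic as facility PUE, but scoped to one subsystem. A minimal sketch, with hypothetical energy figures:

```python
def partial_pue(subsystem_energy_kwh: float, it_energy_kwh: float) -> float:
    """pPUE for one subsystem: (subsystem overhead + IT energy) / IT energy."""
    return (subsystem_energy_kwh + it_energy_kwh) / it_energy_kwh

# Hypothetical liquid-cooling loop: pumps, CDUs, and heat exchangers
# consume 60 kWh while supporting 1000 kWh of IT energy.
print(partial_pue(60.0, 1000.0))  # 1.06
```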

Adding Water: PUE and WUE

Because PUE does not track water use, The Green Grid introduced Water Usage Effectiveness (WUE) as a complementary metric.

In practice, this means that a “good” PUE result should never be considered in isolation. Key questions include:

• How much water does the design consume per unit of IT energy?

• Is that level of consumption appropriate for the local climate and regulatory context?

Air-cooled facilities using aggressive evaporative cooling can achieve attractive PUE values while exhibiting very high water consumption. Liquid and immersion-cooled designs often reverse this trade-off: PUE improvements may be modest, but water use can be reduced significantly or eliminated.
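The PUE/WUE trade-off described above can be made concrete. WUE is site water use per unit of IT energy, in litres per kilowatt-hour; the comparison below uses assumed annual figures for two hypothetical sites:

```python
def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: site water use per unit of IT energy (L/kWh)."""
    return water_liters / it_energy_kwh

# Assumed comparison: both sites deliver 10 GWh of IT energy per year.
IT_KWH = 10_000_000
print(wue(18_000_000, IT_KWH))  # 1.8 L/kWh: evaporative cooling, low PUE
print(wue(1_000_000, IT_KWH))   # 0.1 L/kWh: liquid cooling, modest PUE gain
```

Reading the two metrics together makes the trade-off visible: the evaporative design wins on PUE but consumes an order of magnitude more water in this sketch.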

Digital Production per Unit Energy

Increasingly, regulators, customers, and investors are less focused on PUE alone and more concerned with what a datacentre actually delivers.

This has led to growing interest in metrics that describe digital production per unit of energy, such as FLOPs per watt, inferences per kilowatt-hour, or jobs per megawatt-hour. These metrics focus on how efficiently energy is converted into useful digital work.

European regulatory initiatives are moving in this direction, with proposed datacentre reporting frameworks that combine PUE, WUE, heat reuse, renewable energy share, and indicators of workload efficiency.

From this perspective, a slightly higher PUE in a dense, highly utilized, immersion-cooled AI facility may represent greater real-world efficiency than a very low PUE in a lightly loaded, water-intensive site.
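That comparison can be expressed as a productivity metric: useful output normalized by total facility energy rather than IT energy alone. The sketch below uses hypothetical inference counts and PUE values purely to illustrate the point:

```python
def inferences_per_facility_kwh(inferences: int, it_energy_kwh: float,
                                pue: float) -> float:
    # Normalize digital output by total facility energy (IT energy x PUE),
    # so both utilization and infrastructure overhead are reflected.
    facility_kwh = it_energy_kwh * pue
    return inferences / facility_kwh

# Assumed: a dense, highly utilized site with a slightly higher PUE...
print(inferences_per_facility_kwh(5_000_000, 1000.0, 1.25))  # 4000.0
# ...versus a lightly loaded site with a better PUE but far less output.
print(inferences_per_facility_kwh(2_000_000, 1000.0, 1.15))  # lower
```

In this sketch the higher-PUE facility delivers more useful work per facility kilowatt-hour, which is precisely what a PUE-only comparison would obscure.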

How OCP Reframes Power Usage Effectiveness

The Open Compute Project (OCP), through its Advanced Cooling Solutions (ACS) work, formalizes this broader, multi-metric approach.

OCP recognizes that while the classic definition of PUE remains useful, it is insufficient for environments using direct-to-chip or immersion cooling. PUE does not capture factors such as:

• Server fan elimination

• Pump and CDU efficiency

• Thermal performance gains

• Water consumption

• Workload-level energy productivity

ACS specifications therefore introduce standardized testing and reporting methods for these elements. Rather than focusing solely on “what is the PUE,” OCP emphasizes how efficiently energy is converted into usable compute within the full system context.

This approach aligns closely with emerging regulatory frameworks and reflects the direction in which the industry is moving.

Where Immersion Cooling Fits

Immersion cooling influences datacentre efficiency across multiple dimensions, which helps explain both its potential benefits and why its impact is not always visible in traditional metrics such as PUE.

At the IT level, immersion cooling can reduce server power by eliminating internal fans, improving thermal stability, and reducing throttling. These changes can increase usable compute per unit of energy, even when they do not produce an immediate improvement in PUE.

At the cooling-system and facility levels, immersion cooling shifts heat removal away from air handling and humidity control toward liquid-based heat transfer. In purpose-built facilities, this can simplify the thermal chain, reduce mechanical overhead, and significantly reduce or eliminate water consumption. In such designs, operators have reported PUE values in the range of approximately 1.07 to 1.2, including in warmer climates where air-cooled facilities face inherent constraints.

However, immersion cooling does not automatically improve PUE in all scenarios. In retrofit or mixed-use environments, immersion systems are often deployed alongside existing air-cooling infrastructure. In these cases, chillers, CRAHs, pumps, and humidity control systems may continue operating even though the IT equipment no longer heats the room air, while additional pumps and heat exchangers are introduced for the immersion system.

At the same time, immersion cooling can reduce IT power, which appears in the denominator of the PUE calculation. If facility energy use does not decline proportionally, PUE may remain unchanged or even increase—despite reductions in total energy consumption and water use, or gains in compute output.
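This denominator effect is easy to show with numbers. The figures below are assumptions (an illustrative ~8% IT-power reduction from fan removal, with shared overhead falling only slightly in a mixed-use hall), not measured data:

```python
def pue(total_kw: float, it_kw: float) -> float:
    return total_kw / it_kw

# Before: air-cooled IT load, including server fan power.
it_before, overhead_before = 1000.0, 400.0
# After (assumed): immersion removes ~8% of IT power, but most of the
# shared facility overhead keeps running in a mixed-use environment.
it_after, overhead_after = 920.0, 380.0

print(pue(it_before + overhead_before, it_before))            # 1.4
print(round(pue(it_after + overhead_after, it_after), 3))     # 1.413

# Total site power fell from 1400 kW to 1300 kW, yet PUE went up,
# because the denominator shrank faster than the numerator.
```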

This effect is particularly evident in colocation facilities, where immersion-cooled deployments often represent only a small share of total load. Until immersion enables a meaningful increase in overall IT density or replaces a significant portion of legacy air-cooled infrastructure, its benefits may not be reflected in site-level PUE.

These dynamics underscore why immersion cooling, like other modern cooling approaches, must be evaluated using additional metrics alongside PUE, including total energy use, water consumption, workload efficiency, and achievable power density.

What Industry Bodies Recommend

Across major industry organizations, a consistent direction is emerging:

• The Green Grid continues to anchor efficiency discussions around PUE and WUE, emphasizing that they must be considered together.

• Uptime Institute highlights the stagnation of global average PUE and encourages better measurement and broader performance assessment.

• ASHRAE increasingly treats liquid cooling as part of mainstream datacentre design guidance.

• OCP advances subsystem- and workload-level metrics through its ACS work.

• The European Commission is formalizing multi-metric reporting requirements covering energy, water, heat reuse, renewables, and digital productivity.

The industry is moving away from PUE as the defining metric toward PUE as one input among several.

The Holistic View: TCO and LCA

For practical decision-making—particularly when comparing air cooling, direct-to-chip, and immersion—organizations increasingly rely on broader frameworks:

• Total Cost of Ownership (TCO), capturing energy, water, hardware, space, operations, maintenance, and system lifetime

• Life Cycle Assessment (LCA), capturing embodied carbon, construction impacts, fluid lifecycle, and end-of-life considerations

PUE and WUE inform these analyses, but final decisions are rarely based on a single efficiency metric.

Conclusion: PUE Matters, but It Is Only the Starting Point

PUE remains a clear and useful indicator of how much overhead energy a datacentre consumes to support its IT load. As a concept, it played a critical role in improving infrastructure efficiency across the industry.

As a stand-alone measure of performance, however, it has reached its practical limits.

In an environment defined by AI workloads, high-density compute, water constraints, and regional regulation, meaningful assessment requires a broader perspective—one that combines PUE, WUE, subsystem efficiency, digital productivity, and holistic frameworks such as TCO and LCA.

PUE is not disappearing. It is becoming one component of a more complete and accurate picture of datacentre performance.