Cooling has always been one of the defining engineering constraints of datacentre design. For decades, air cooling set the limits of how servers were built, how facilities were laid out, and how much compute density operators could realistically deploy. Server fans, CRAC and CRAH units, chillers, cooling towers, and—more recently—evaporative and adiabatic cooling systems formed the backbone of global datacentre infrastructure.

That model is now under sustained pressure. AI, high-performance computing, and GPU-dense workloads have pushed chip power well beyond the range air was designed to handle efficiently. As a result, operators are increasingly asking whether liquid cooling is not just an alternative to air cooling, but a better long-term foundation for modern datacentres.

Across the industry, the answer is becoming clearer. Liquid cooling is not a marginal improvement—it is fundamentally better aligned with today’s thermal realities, sustainability requirements, and high-density compute architectures.

Understanding why requires a closer look at how air cooling works, how it compares to direct-to-chip liquid cooling, and how both differ from single-phase immersion cooling, which is increasingly viewed as the next major step in cooling evolution.

Air Cooling: The Traditional Foundation of Datacentre Cooling

Air cooling has been the dominant datacentre cooling method for decades. In a typical air-cooled environment, server fans pull air across heat sinks attached to processors, memory, and voltage regulators. Heated air is exhausted into hot aisles, captured by CRAC or CRAH units, and routed to chillers, cooling towers, or evaporative systems depending on facility design and climate.

How Air-Cooled Datacentres Are Designed

Air cooling shapes nearly every aspect of the datacentre environment, including:

• Raised or slab floors engineered for airflow

• Hot- and cold-aisle containment to limit recirculation

• Precision cooling units (CRAC/CRAH)

• Chillers and cooling towers for heat rejection

• Free cooling in colder climates

• Adiabatic or evaporative cooling to reduce chiller load in warmer regions

For low- to moderate-density deployments, this model remains effective. But its limitations become increasingly apparent as rack power rises.

The Advantages and Limits of Air Cooling

Air cooling’s primary advantage is familiarity. Server designs, operational procedures, and facility layouts have been optimized around it for years. It is well understood, widely supported, and cost-effective for traditional enterprise workloads.

Thermodynamically, however, air is a poor heat transfer medium. As rack densities approach 20–30 kW, airflow requirements escalate rapidly. Fan power increases, CRAC units work harder, chillers run longer, and overall efficiency degrades. Power usage effectiveness (PUE) worsens, water consumption rises, and operating costs increase.
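To make the gap concrete, the short sketch below compares the volumetric flow needed to carry the same rack load with air versus water, using the basic relation Q = ṁ·c_p·ΔT. The 30 kW rack, the 12 K temperature rise, and the fluid properties are illustrative assumptions, not measurements from any particular facility.

```python
# Back-of-the-envelope comparison: volumetric flow needed to remove
# the same heat load with air versus water (Q = m_dot * c_p * dT).
# All figures below are illustrative assumptions.

RACK_POWER_W = 30_000        # assumed 30 kW rack
DELTA_T_K = 12.0             # assumed inlet-to-outlet temperature rise

# Approximate fluid properties near room temperature
AIR_DENSITY = 1.2            # kg/m^3
AIR_CP = 1_005.0             # J/(kg*K)
WATER_DENSITY = 997.0        # kg/m^3
WATER_CP = 4_180.0           # J/(kg*K)

def volumetric_flow_m3_per_s(power_w: float, density: float,
                             cp: float, delta_t: float) -> float:
    """Volumetric flow required to carry power_w at a given temperature rise."""
    mass_flow = power_w / (cp * delta_t)   # kg/s
    return mass_flow / density             # m^3/s

air_flow = volumetric_flow_m3_per_s(RACK_POWER_W, AIR_DENSITY, AIR_CP, DELTA_T_K)
water_flow = volumetric_flow_m3_per_s(RACK_POWER_W, WATER_DENSITY, WATER_CP, DELTA_T_K)

print(f"Air:   {air_flow:8.3f} m^3/s  (~{air_flow * 2118.88:,.0f} CFM)")
print(f"Water: {water_flow * 60_000:8.1f} L/min")
print(f"Volume ratio (air / water): ~{air_flow / water_flow:,.0f}x")
```

Even with a generous temperature rise, the air-cooled rack needs several thousand cubic feet of air per minute, while the same heat can leave in a few dozen litres of water per minute. That volume gap is why fan power, containment, and air-handling effort escalate so quickly as density rises.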

This is why comparisons between datacentre air cooling and liquid cooling increasingly reach the same conclusion: air cooling is no longer a viable long-term solution, not only for AI and HPC environments but also for environments where performance and sustainability must be kept in balance.

Direct Liquid Cooling: Extending the Limits of Traditional Architecture

To push beyond the practical limits of air cooling without abandoning familiar server and facility designs, many operators are adopting direct-to-chip liquid cooling (DLC). Instead of relying on air to remove heat from processors, DLC uses cold plates mounted directly on CPUs and GPUs. Liquid coolant—typically water or a water-glycol mixture—flows through these plates, removing heat far more efficiently than air.

This approach directly addresses the challenge of cooling processors in the 500–700 W range and has become a common entry point for liquid cooling adoption.

How Direct Liquid Cooling Works

DLC servers remain largely conventional in appearance and operation. Air-cooled heat sinks are replaced with cold plates on select components, and coolant loops connect servers to coolant distribution units (CDUs) located in-row or in the data hall. These CDUs regulate temperature, flow, and heat rejection.
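As a rough illustration of what a CDU has to regulate, the sketch below estimates the coolant flow needed to hold a hypothetical 700 W processor (the top of the range cited earlier) to an assumed 8 K temperature rise across its cold plate. The water-glycol properties and the rack configuration are illustrative assumptions, not a vendor design.

```python
# Rough sizing sketch for a direct-to-chip cold-plate loop.
# Assumes a water-glycol mix; properties are approximate and illustrative.

CHIP_POWER_W = 700.0       # hypothetical device at the top of the 500-700 W range
COOLANT_DELTA_T_K = 8.0    # assumed temperature rise across the cold plate
COOLANT_CP = 3_800.0       # J/(kg*K), approximate for a water-glycol mix
COOLANT_DENSITY = 1_030.0  # kg/m^3, approximate

mass_flow = CHIP_POWER_W / (COOLANT_CP * COOLANT_DELTA_T_K)    # kg/s
flow_l_per_min = mass_flow / COOLANT_DENSITY * 60_000          # L/min

print(f"Per cold plate: ~{flow_l_per_min:.2f} L/min at dT = {COOLANT_DELTA_T_K} K")

# A hypothetical rack with 8 such devices per server and 8 servers would
# need roughly 64x this flow, plus margin for other cold-plated components.
DEVICES = 8 * 8
print(f"Rack-level loop (hypothetical {DEVICES} devices): ~{flow_l_per_min * DEVICES:.0f} L/min")
```

Real CDU sizing also has to account for pressure drop, redundancy, and approach temperatures, but the basic flow arithmetic scales in the same way.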

Heat is typically rejected using:

• Dry coolers in mild climates

• Chillers in warmer regions

• Cooling towers at scale

While processors are liquid-cooled, airflow is still required for other components. Server fans remain in place, and the facility continues to depend on air handling and precision cooling infrastructure.

Benefits and Constraints

Direct liquid cooling significantly improves processor-level thermal performance and enables higher chip power. However, it remains a hybrid solution. Memory, storage, networking, and power delivery components continue to rely on air, which limits total system density and preserves much of the traditional cooling stack.

For this reason, DLC is widely viewed as a transitional technology—valuable for near-term deployments, but not a complete departure from air-based datacentre design.

How Industry Guidance Is Shaping Liquid Cooling Adoption

The shift toward liquid cooling is increasingly guided by standards bodies, operators, and market analysts rather than by vendor experimentation alone.

The Open Compute Project (OCP), originally focused on hyperscale efficiency, now treats liquid cooling as a first-class design consideration. Through initiatives such as its Advanced Cooling Solutions work, OCP specifications increasingly assume higher inlet temperatures, warm-water loops, and liquid-ready server designs. Rather than framing immersion or direct liquid cooling as niche options, OCP positions them as parallel, production-ready approaches for high-density compute.

Operational guidance has followed a similar trajectory. The Uptime Institute’s research reflects the reality that traditional air-based redundancy and resiliency models break down as power density increases. Its guidance increasingly emphasizes designing availability, maintenance, and fault tolerance around liquid-based systems, signalling that liquid cooling is becoming an operational necessity rather than an exception.

From a market perspective, Dell’Oro Group’s analysis consistently links liquid cooling adoption to long-term structural trends: accelerator proliferation, rising rack power, and the decoupling of compute density from room-level cooling constraints. Rather than predicting a wholesale replacement of air cooling, Dell’Oro describes a bifurcated market—air for legacy and low-density environments, liquid as the default for new AI and high-performance deployments.

Together, these perspectives reinforce the view that liquid cooling is not a temporary response to AI workloads, but a durable shift in datacentre design.

Single-Phase Immersion Cooling: A Liquid-Native Environment for High-Density Compute

Single-phase immersion cooling represents the most complete departure from air-based cooling models. Rather than cooling individual components or chips, immersion cooling submerges entire servers in a dielectric liquid that absorbs heat directly from every surface.

This approach fundamentally changes the thermal equation. Liquid transfers heat orders of magnitude more efficiently than air, enabling substantially higher power densities with greater thermal stability.

How Immersion Cooling Works

Servers are installed in closed tanks filled with non-conductive fluid. As components operate, heat is transferred directly into the surrounding liquid. Pumps circulate the warm fluid to a CDU, where heat is removed—often using dry coolers alone, without chillers or evaporative systems.
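For a sense of scale on the fluid side, the sketch below estimates the circulation rate for a hypothetical 100 kW tank at an assumed 10 K fluid temperature rise. The fluid properties approximate a hydrocarbon-type single-phase coolant and are illustrative, not vendor specifications.

```python
# Illustrative circulation sizing for a single-phase immersion tank.
# Fluid properties approximate a hydrocarbon-based dielectric coolant.

TANK_IT_LOAD_W = 100_000.0   # hypothetical 100 kW tank
FLUID_DELTA_T_K = 10.0       # assumed rise between tank and CDU return
FLUID_CP = 2_200.0           # J/(kg*K), approximate
FLUID_DENSITY = 830.0        # kg/m^3, approximate

mass_flow = TANK_IT_LOAD_W / (FLUID_CP * FLUID_DELTA_T_K)      # kg/s
volume_flow_l_min = mass_flow / FLUID_DENSITY * 60_000         # L/min

print(f"Mass flow:   {mass_flow:.1f} kg/s")
print(f"Volume flow: {volume_flow_l_min:.0f} L/min for a {TANK_IT_LOAD_W / 1000:.0f} kW tank")
```

Because the returning fluid is warm relative to ambient air, modest dry coolers are typically sufficient for heat rejection, which is what allows chillers and evaporative systems to be removed from the design.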

Because the environment is sealed and liquid-stabilised:

• Server fans are eliminated

• CRAC and CRAH units are unnecessary

• Cooling towers and chillers are removed

• Airflow engineering, hot/cold aisles, and filtration are no longer required

• Recirculation and hotspot risks are eliminated

The result is a quieter, denser, and significantly more efficient datacentre environment.

IT Operation in Immersion

Hardware remains largely standard, with targeted modifications such as the removal of fan assemblies and the use of liquid-compatible components. Crucially, immersion cooling addresses both chip-level and board-level heat, managing the entire thermal envelope rather than a subset of components.

How Datacentre Design Changes Across Cooling Models

Air-cooled datacentres are fundamentally designed to move and condition large volumes of air. Their complexity lies in airflow management, humidity control, filtration, and mechanical cooling systems.

DLC facilities retain much of this architecture while introducing liquid loops, CDUs, and additional plumbing. Processor cooling improves, but the broader facility design remains largely intact.

Immersion-cooled datacentres look fundamentally different. Racks are replaced by tanks. Mechanical cooling infrastructure is dramatically reduced or eliminated. Dry coolers replace chillers. Water consumption approaches zero. Power density increases without a proportional rise in cooling complexity. Increasingly, datacentre design consultants recognise the opportunity to simplify facility design and innovate drastically rather than optimise incrementally.
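A quick way to quantify the water point is WUE, the litres of water consumed per kWh of IT energy. The sketch below uses a hypothetical 1 MW IT load and illustrative WUE values chosen to be broadly in line with published industry ranges, not figures for any specific site.

```python
# Illustrative annual water use comparison driven by WUE (L per kWh of IT energy).
# All inputs are assumptions for the sake of the arithmetic.

IT_LOAD_KW = 1_000.0          # hypothetical 1 MW IT load
HOURS_PER_YEAR = 8_760
annual_it_kwh = IT_LOAD_KW * HOURS_PER_YEAR

wue_scenarios = {
    "Air with evaporative cooling": 1.8,   # L/kWh, illustrative
    "Direct liquid cooling":        0.6,   # L/kWh, illustrative
    "Single-phase immersion":       0.0,   # dry coolers only, negligible water
}

for name, wue in wue_scenarios.items():
    litres = annual_it_kwh * wue
    print(f"{name:32s} ~{litres / 1_000_000:6.1f} million litres/year")
```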

Why Single-Phase Immersion Cooling Represents the Next Evolution

As operators evaluate air cooling, direct liquid cooling, and immersion side by side, a clear progression emerges. Air cooling enabled the past. Direct-to-chip cooling supports the present. Immersion cooling is designed for the future.

Single-phase immersion cooling:

• Operates efficiently from moderate densities today through sustained ultra-high-density deployments

• Maintains performance as chip power and thermal output continue to rise

• Provides uniform thermal conditions and eliminates hotspots

• Removes the need for chillers, cooling towers, and precision air cooling

• Delivers consistently low PUE across climates and seasons

• Nearly eliminates water usage, a growing regulatory concern

• Improves performance by reducing thermal throttling

• Simplifies facility design and reduces mechanical risk

• Enables scalable, high-grade heat reuse

Where direct liquid cooling extends air-based architectures, immersion cooling represents a system-level transition.

| Category | Air Cooling | Direct Liquid Cooling (DLC) | Single-Phase Immersion Cooling |
| --- | --- | --- | --- |
| Primary Cooling Medium | Air | Liquid (water or water-glycol) + air | Dielectric liquid |
| What Is Cooled | Heat sinks on chips and components | CPUs and GPUs (via cold plates); remainder via air | Entire server (chips, boards, memory, power) |
| Typical Rack / System Density | Up to ~20–30 kW per rack | ~30–80 kW per rack (varies by design) | ~20 kW to 200+ kW per tank |
| Thermal Efficiency | Lowest | High at chip level, moderate system-wide | Highest system-wide |
| Heat Transfer Capability | Limited by air properties | Very efficient at chip level | Extremely efficient across all components |
| Server Fans Required | Yes | Yes (reduced) | No |
| CRAC / CRAH Units Required | Yes | Yes | No |
| Chillers / Cooling Towers | Common | Often required (climate-dependent) | Typically eliminated |
| Airflow Management | Critical (hot/cold aisles, containment) | Still required | Not required |
| Hotspot Risk | High at increased density | Reduced at CPUs; remains elsewhere | Eliminated if precision/targeted flow is applied |
| Power Usage Effectiveness (PUE) | Typically 1.2–1.8 (location-specific) | Typically 1.2–1.4 (still influenced by location) | Typically <1.04 |
| Water Usage Effectiveness (WUE) | Often high (evaporative systems) | Moderate | Near zero |
| Noise Levels | High (fans, air handlers) | Moderate | Very low |
| Facility Complexity | High | High (air + liquid systems) | Low (simplified mechanical design) |
| Hardware Modifications | None | Cold plates, plumbing | Fan removal, liquid-safe components |
| Operational Familiarity | Very high | High | Growing |
| Scalability for Future Chip Power | Poor | Moderate (air cooling of non-chip components limits the ceiling) | Excellent |
| Heat Reuse Potential | Limited | Moderate | High-grade, scalable |
| Best Suited For | Legacy and low-density environments | Transitional and workload-specific deployments | Long-term high-density/performance and mixed infrastructure |
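To translate the PUE rows above into energy, here is a minimal arithmetic sketch for a hypothetical 1 MW IT load; the specific PUE values are illustrative points within the ranges shown in the table.

```python
# Facility energy implied by different PUE values for the same IT load.
# PUE = total facility energy / IT energy, so overhead = IT * (PUE - 1).

IT_LOAD_KW = 1_000.0          # hypothetical 1 MW IT load
HOURS_PER_YEAR = 8_760
it_energy_mwh = IT_LOAD_KW * HOURS_PER_YEAR / 1_000   # MWh of IT energy per year

pue_examples = {
    "Air cooling (PUE 1.5)":    1.5,
    "Direct liquid (PUE 1.3)":  1.3,
    "Immersion (PUE 1.04)":     1.04,
}

for name, pue in pue_examples.items():
    total = it_energy_mwh * pue
    overhead = total - it_energy_mwh
    print(f"{name:28s} total ~{total:8.0f} MWh/yr, cooling/overhead ~{overhead:6.0f} MWh/yr")
```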

Conclusion: Liquid Cooling Is the New Standard and Immersion Is the Leap Forward

As compute density rises and sustainability pressures intensify, the direction of datacentre cooling is no longer ambiguous. Liquid cooling is moving from an alternative to a baseline expectation for modern infrastructure. Air cooling will persist in legacy and low-density environments, and direct liquid cooling will continue to support transitional architectures. But for facilities designed to meet both current and long-term high-density and performance requirements, liquid cooling is increasingly the standard.

Within that shift, single-phase immersion cooling represents a clear leap forward. Unlike incremental approaches that extend air-based designs, immersion operates efficiently across a wide range of densities—from moderate deployments today to the sustained, ultra-high-density and performance demands projected for AI and HPC systems over the coming decade.

By removing airflow constraints entirely, immersion cooling delivers stable thermal performance as chip power increases, without proportional growth in mechanical complexity, energy consumption, or water use. It simplifies facility design, enables consistent efficiency across climates, and produces high-grade heat suitable for reuse at scale.

Most importantly, immersion cooling aligns datacentre infrastructure with the long-term trajectory of compute. Rather than continuously retrofitting air-based systems or moving to the next available rack-based DLC solution to chase rising thermal loads, it establishes a liquid-native foundation designed to scale with performance demands that have yet to fully materialise.

In that sense, immersion cooling is not just an improvement in how datacentres are cooled—it is a structural shift in how they are designed to operate.