As datacentres continue to increase in density and complexity, cooling has become a primary architectural constraint rather than a secondary operational concern. Traditional air cooling is no longer sufficient for many modern workloads, leading operators to adopt liquid-based approaches. Among these, two technologies dominate current deployments: direct-to-chip liquid cooling (DLC) and single-phase immersion cooling.

Both represent a decisive improvement over air cooling. However, they are built on different assumptions about how heat is generated, how systems are integrated, and how infrastructure evolves over time. Understanding these differences is essential when evaluating cooling strategies for both current workloads and future expansion.

Part 1: DLC and Immersion in the Average Datacentre

Direct-to-Chip Liquid Cooling: Targeted and Incremental

Direct-to-chip liquid cooling removes heat directly from CPUs and GPUs using cold plates mounted on the processor package. Liquid coolant absorbs heat at the chip and transports it to a coolant distribution unit (CDU), where it is rejected to the facility cooling loop.
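
The sizing logic behind a cold-plate loop follows a simple energy balance: the coolant flow must carry the chip's heat away at an acceptable temperature rise. The sketch below works through that calculation in Python, using illustrative figures for chip power, temperature rise, and coolant properties rather than values from any specific product or CDU.

```python
# Minimal sketch: sizing the coolant loop for a single cold plate.
# All figures are illustrative assumptions, not values for any specific chip or CDU.

CHIP_POWER_W = 1000.0           # assumed heat load of one GPU package, in watts
DELTA_T_K = 10.0                # assumed coolant temperature rise across the cold plate, in kelvin
CP_COOLANT_J_PER_KG_K = 4180.0  # specific heat of water; water-glycol mixes are somewhat lower
DENSITY_KG_PER_L = 1.0          # approximate density of water, in kg per litre

# Energy balance: Q = m_dot * c_p * dT, so m_dot = Q / (c_p * dT)
mass_flow_kg_per_s = CHIP_POWER_W / (CP_COOLANT_J_PER_KG_K * DELTA_T_K)
volume_flow_l_per_min = mass_flow_kg_per_s / DENSITY_KG_PER_L * 60.0

print(f"Coolant flow for a {CHIP_POWER_W:.0f} W chip at a {DELTA_T_K:.0f} K rise: "
      f"{mass_flow_kg_per_s:.3f} kg/s (~{volume_flow_l_per_min:.1f} L/min)")
```

At a 10 K rise this works out to roughly 1.4 L/min per kilowatt of silicon; real loops depend on the coolant mix and the cold-plate design, but the flows involved are modest.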

This approach is precise and effective at addressing the highest heat-flux components in a server. For many datacentres, DLC represents an incremental evolution: it extends the viability of familiar server designs, preserves existing operational models, and integrates relatively smoothly into air-cooled facilities.

However, DLC is inherently a hybrid architecture. While processors are liquid-cooled, a substantial portion of the server remains dependent on airflow. Memory, voltage regulator modules (VRMs), networking, storage, and other board-level components continue to rely on fans and room-level thermal control.

As a result, DLC environments still require:

• server fans

• hot- and cold-aisle containment

• CRAC or CRAH units

• humidity-controlled whitespace

• airflow engineering and tuning

For generic workloads and moderate densities, this model can be effective. But as densities increase, the complexity of managing both air and liquid cooling domains grows.
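
A rough heat-budget sketch makes the point concrete. The figures below (rack power, the share of heat the cold plates capture, and the ceiling for a purely air-cooled rack) are assumptions chosen for illustration, not vendor data.

```python
# Minimal sketch of the split thermal domain in a DLC rack.
# Rack power, cold-plate capture fraction and the air-cooled ceiling are
# illustrative assumptions, not vendor specifications.

RACK_POWER_KW = 50.0             # assumed total IT load of a dense rack
COLD_PLATE_FRACTION = 0.75       # assumed share of heat captured by the cold plates
AIR_COOLED_RACK_LIMIT_KW = 15.0  # assumed practical ceiling for a purely air-cooled rack

liquid_side_kw = RACK_POWER_KW * COLD_PLATE_FRACTION
air_side_kw = RACK_POWER_KW - liquid_side_kw

print(f"Liquid domain: {liquid_side_kw:.1f} kW, residual air domain: {air_side_kw:.1f} kW")
print(f"Residual air load as a share of a full air-cooled rack: "
      f"{air_side_kw / AIR_COOLED_RACK_LIMIT_KW:.0%}")
```

Even with the cold plates doing most of the work under these assumptions, the residual air-side load of a dense DLC rack approaches what an entire air-cooled rack used to dissipate, so containment, fans and CRAH capacity still have to be engineered for it.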

Single-Phase Immersion Cooling: System-Level Thermal Control

Single-phase immersion cooling takes a system-level approach. Entire servers are submerged in a dielectric liquid that removes heat directly from every component. Heat is absorbed uniformly and transported to an external heat rejection loop through controlled circulation.

Because immersion cooling does away with airflow entirely, it removes many of the variables associated with air-based systems: server fans are no longer needed, hotspots are eliminated, and thermal behaviour becomes predictable across every component.
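
A simple property comparison shows why slow liquid circulation can replace high-volume airflow. The fluid properties below are generic assumptions for a hydrocarbon-based single-phase dielectric coolant, not the datasheet of any particular product.

```python
# Minimal sketch: volumetric flow needed to move the same heat with air vs. a
# single-phase dielectric liquid. Fluid properties are generic assumptions for a
# hydrocarbon-based coolant, not a datasheet for any particular product.

TANK_LOAD_W = 10_000.0   # assumed IT load submerged in one tank, in watts
DELTA_T_K = 5.0          # assumed temperature rise of the cooling medium, in kelvin

# volumetric heat capacity = density * specific heat, in J per cubic metre per kelvin
MEDIA = {
    "air":              1.2 * 1005.0,    # ~1.2 kJ/(m^3*K)
    "dielectric fluid": 800.0 * 2000.0,  # ~1.6 MJ/(m^3*K), assumed fluid properties
}

for name, volumetric_heat_capacity in MEDIA.items():
    flow_m3_per_s = TANK_LOAD_W / (volumetric_heat_capacity * DELTA_T_K)
    print(f"{name:>16}: {flow_m3_per_s * 1000:8.1f} L/s to move "
          f"{TANK_LOAD_W / 1000:.0f} kW at a {DELTA_T_K:.0f} K rise")
```

Moving the same heat at the same temperature rise takes roughly three orders of magnitude less liquid volume than air, which is why pumps circulating litres per second can replace fans and CRAH units moving cubic metres per second.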

For generic datacentre workloads, immersion cooling offers several architectural advantages:

• uniform thermal conditions across the entire server

• reduced mechanical complexity

• elimination of airflow-related failure modes

• improved component reliability

• simplified facility design

From a purely technical perspective, immersion cooling provides a more complete and robust solution for high-density, long-lived infrastructure.

Flexibility: Adapting to Rapidly Evolving Chip Roadmaps

One of the most significant architectural differences between DLC and immersion lies in how each responds to hardware change.

Direct-to-chip cooling depends on cold plates that must precisely match processor geometry. While effective in standardised environments, this creates long-term dependencies:

• every new CPU or GPU generation requires new cold plate designs

• new accelerator types force redesign and requalification work

• mixed hardware platforms become harder to support

• integration workflows and supply chains must continually adapt

Over time, this tight coupling can lock operators into relatively rigid hardware patterns.

Single-phase immersion cooling removes this dependency. Because heat is transferred from the entire system into the surrounding liquid, cooling performance is largely independent of processor package geometry or vendor-specific mechanical interfaces.

This mechanical agnosticism enables:

• deployment of new CPUs and GPUs without cooling redesign

• coexistence of mixed architectures (AI accelerators, CPUs, specialised silicon)

• accommodation of future power increases by transferring more heat to the fluid

• cluster design driven by performance and topology rather than cooling constraints

As chip lifecycles shorten and hardware diversity increases, immersion cooling aligns naturally with fast-moving roadmaps and heterogeneous compute.

Thermal Behaviour: Targeted Cooling vs. System-Level Uniformity and Precision

DLC significantly improves processor temperatures, but the rest of the server continues to operate in an air environment. This split thermal model introduces challenges as density rises:

• uneven thermal distribution

• hotspots in memory and VRMs

• continued reliance on fan control and airflow tuning

• residual throttling under certain load profiles

• increasing complexity when scaling rack density

Immersion cooling removes this asymmetry. By cooling every component directly, it delivers:

• complete thermal uniformity

• stable performance under sustained high load

• more consistent boost behaviour for CPUs and GPUs

• improved component reliability

• reduced thermal cycling and mechanical stress

The result is a predictable and stable thermal envelope, well suited to high-utilisation workloads such as AI and HPC.

Implications for Datacentre Design and Operations

Because DLC remains tied to airflow, datacentres that deploy it must continue to support much of the traditional air-cooling infrastructure, including:

• precision air cooling systems

• CRAC or CRAH units

• aisle containment

• humidity control

• raised floors or overhead airflow paths

• conventional whitespace layouts

• mechanical redundancy sized for both air and liquid domains

Immersion cooling simplifies this environment significantly. By eliminating airflow as a cooling mechanism, it enables:

• removal of aisle containment

• reduction or elimination of CRAC systems

• minimal reliance on conditioned air

• smaller mechanical plant footprints

• dense, modular layouts

• simpler maintenance workflows

• consistently low PUE and near-zero water consumption

Immersion cooling does not just improve server thermals—it reshapes the cooling architecture of the datacentre itself.
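
The PUE point in the list above is easy to make concrete: PUE is simply total facility power divided by IT power. The sketch below compares the three architectures using illustrative overhead fractions rather than measured figures.

```python
# Minimal sketch of the PUE comparison. PUE = total facility power / IT power.
# The overhead fractions below are illustrative assumptions, not measured figures.

IT_LOAD_KW = 1000.0  # assumed IT load of the facility

assumed_overhead = {
    "traditional air cooling":            0.50,
    "DLC hybrid (liquid + residual air)": 0.20,
    "single-phase immersion":             0.05,
}

for scenario, overhead in assumed_overhead.items():
    total_kw = IT_LOAD_KW * (1.0 + overhead)
    print(f"{scenario:<36} PUE = {total_kw / IT_LOAD_KW:.2f}")
```

Real facilities vary widely with climate and design; the point is only that removing the air-handling plant removes most of the overhead that PUE measures.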

Part 2: Why DLC Is the Standard Choice for AI Today

The rise of AI has temporarily shifted the balance in favour of DLC.

Modern AI deployments are dominated by large GPU clusters built around highly standardised platforms. These systems are closely aligned with OEM reference designs and vendor-qualified cold plates. Speed of deployment, ecosystem compatibility, and supply-chain availability often outweigh long-term architectural optimisation.

In this context, DLC fits well:

• GPU platforms are standardised and repeatable

• cold-plate designs are vendor-supported

• mechanical interfaces evolve predictably

• AI clusters can be deployed rapidly at scale

For organisations focused on bringing AI capacity online quickly, DLC is the pragmatic and widely adopted solution.

Part 3: Why AI Will Outgrow a DLC-Only Model

The assumption that a single GPU family from a single vendor can serve all AI workloads is already breaking down. New accelerator vendors are emerging with fundamentally different architectures, packaging approaches, memory hierarchies, and power profiles. In parallel, alternative accelerator types—AI ASICs, domain-specific processors, tightly coupled CPU-accelerator designs—are increasingly deployed alongside traditional GPUs.

This diversification introduces friction for cooling architectures tightly coupled to processor geometry.

Each new accelerator family in a DLC environment requires new cold plates, validation cycles, and supply-chain coordination. Over time, this erodes the simplicity that initially made DLC attractive.

Immersion cooling avoids this constraint entirely. By decoupling cooling from processor mechanics, it provides a stable thermal foundation for heterogeneous AI environments.

For AI builders seeking:

• competitive performance per watt

• operational excellence at scale

• freedom to adopt new accelerator architectures

• long-term infrastructure flexibility

immersion becomes not just viable, but strategically enabling.

Unlocking Compute Beyond the Traditional Datacentre

Because DLC depends on controlled airflow and humidity, it remains largely confined to conventional datacentre environments.

Immersion cooling creates a sealed, self-contained thermal environment, enabling deployments such as:

• containerised compute clusters

• edge AI systems

• industrial or telecom installations

• remote regions without HVAC

• sealed or dust-prone environments

• modular HPC systems deployable outside traditional facilities

This flexibility opens the door to entirely new architectural models for compute.

Conclusion: Incremental Today, Adaptive Tomorrow

Direct-to-chip liquid cooling is the right solution for standard AI deployments today. It aligns with current GPU platforms, enables rapid scaling, and integrates smoothly into existing datacentre designs.

But it is not a universal or permanent answer.

Single-phase immersion cooling provides a system-level, mechanically agnostic architecture that aligns with the long-term trajectory of compute: higher density, greater diversity, and faster hardware evolution.

For the average datacentre, immersion already represents the cleaner technical solution. For AI builders seeking competitive advantage, operational excellence, and a clear path forward as hardware diversifies, immersion increasingly becomes the foundation that enables what comes next.

DLC solves the immediate problem. Immersion solves the architectural one.