As datacentres continue to scale in both density and complexity, cooling has moved from an operational concern to a defining architectural constraint. The rapid rise of AI, accelerated computing, and high-performance workloads has pushed traditional air cooling beyond its practical limits. Liquid cooling has emerged as the answer, but not all liquid cooling is the same.

Today, two approaches dominate production environments: Direct-to-Chip Liquid Cooling (DLC) and Single-Phase Immersion Cooling. Both represent a decisive step beyond air cooling, yet they are built on fundamentally different assumptions about how infrastructure should evolve. Understanding that difference is critical for anyone designing facilities intended to last longer than a single hardware generation.

Direct-to-Chip: Precision Cooling, Incremental Change

Direct-to-chip liquid cooling targets heat at its source. Cold plates mounted directly on CPUs and GPUs transfer thermal energy into a liquid loop, which then carries it to a heat exchanger or facility cooling system.
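The sizing of such a loop follows from the basic heat balance Q = ṁ·c_p·ΔT: the coolant flow must be high enough to carry away the chip's heat load within the allowed temperature rise. The sketch below is purely illustrative; the 700 W heat load and 10 K coolant rise are assumed example figures, not vendor specifications.

```python
# Illustrative cold-plate loop sizing from the heat balance Q = m_dot * c_p * dT.
# All constants here are assumptions for illustration only.

CP_WATER = 4186.0  # specific heat of water near 25 degC, J/(kg*K)

def required_flow_kg_s(heat_load_w: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) needed to absorb heat_load_w with a delta_t_k coolant rise."""
    return heat_load_w / (CP_WATER * delta_t_k)

# Hypothetical example: a 700 W accelerator with a 10 K allowable coolant rise.
flow = required_flow_kg_s(700.0, 10.0)
print(f"{flow:.4f} kg/s  (~{flow * 60:.1f} L/min, since 1 kg of water is ~1 L)")
```

The same relation explains why liquid loops displace air at high density: water's specific heat and density let a thin tube do the work of a large volume of moving air.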

For most datacentres, this approach feels familiar. It preserves conventional server design, aligns with existing operational practices, and extends the usable life of air-cooled facilities. In that sense, DLC is best understood as an evolutionary step rather than a structural shift.

However, that familiarity is also its limitation. Although processors are liquid-cooled, components like memory, power delivery, storage, and networking often still rely on airflow. This means DLC environments continue to require raised floors, air handlers, and aisle containment.

Immersion Cooling: A System-Level Approach

Single-phase immersion cooling takes a fundamentally different path. Rather than targeting individual components, the entire server is submerged in a dielectric fluid that absorbs heat evenly from all surfaces.

There is no airflow. No fans. No hotspots.

Immersion cooling does not merely optimize cooling; it removes cooling variability altogether. Because every component operates within the same thermal environment, performance becomes more predictable, mechanical complexity drops, and thermal behaviour stabilizes.
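One way to see why a submerged system avoids hotspots is to compare the volumetric heat capacity (density times specific heat) of the two media. The property values below are rough, order-of-magnitude assumptions; real dielectric fluids vary by product.

```python
# Rough comparison of volumetric heat capacity (rho * c_p) for air versus a
# generic single-phase dielectric fluid. Values are order-of-magnitude
# assumptions, not measured properties of any specific product.

MEDIA = {
    "air (20 degC)":    {"rho": 1.2,   "cp": 1005.0},  # density kg/m^3, c_p J/(kg*K)
    "dielectric fluid": {"rho": 800.0, "cp": 2000.0},
}

def volumetric_heat_capacity(rho: float, cp: float) -> float:
    """Heat absorbed per cubic metre per kelvin, J/(m^3*K)."""
    return rho * cp

for name, props in MEDIA.items():
    vhc = volumetric_heat_capacity(props["rho"], props["cp"])
    print(f"{name}: {vhc:,.0f} J/(m^3*K)")

ratio = volumetric_heat_capacity(800.0, 2000.0) / volumetric_heat_capacity(1.2, 1005.0)
print(f"fluid absorbs roughly {ratio:,.0f}x more heat per unit volume than air")
```

With three orders of magnitude more thermal mass per unit volume surrounding every surface, local temperature excursions are damped before they can become hotspots.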

A Structural Difference: The Comparison

The distinction between DLC and immersion is not primarily about efficiency. It is architectural.

Dimension             | Direct-to-Chip (DLC) | Immersion Cooling
----------------------|----------------------|------------------
Cooling Scope         | CPUs and GPUs        | Entire system
Airflow Dependency    | Yes (hybrid)         | None
Mechanical Complexity | Moderate             | Low
Thermal Uniformity    | Variable             | Consistent
Fan Dependency        | Required             | Eliminated
Density Ceiling       | Medium–High          | Very high

Hardware Evolution and Long-Term Flexibility

One of the most important differences between DLC and immersion only becomes visible over time.

Direct-to-Chip cooling depends on cold plates precisely matched to processor geometry. This creates a tight coupling between cooling infrastructure and silicon design. Each new processor generation requires new cold plates, validation cycles, and integration work.

Immersion cooling removes that dependency entirely. Because heat is removed from the whole system rather than a specific component, cooling performance becomes largely independent of processor shape, socket design, or vendor-specific packaging.

In practical terms, this means cooling no longer dictates hardware choice. Operators can mix CPUs, GPUs, and custom silicon without redesigning thermal infrastructure.

Thermal Stability & Reliability

While DLC dramatically improves processor thermals, the rest of the system continues to operate in an air-cooled environment, introducing variability under load. Immersion cooling creates a uniform thermal envelope across all components.

For workloads such as AI training and HPC, this stability translates directly into reliability:

Characteristic        | DLC      | Immersion
----------------------|----------|-----------
Hotspot Risk          | Moderate | Very low
Thermal Cycling       | Moderate | Minimal
Sustained Performance | Variable | Stable
Fan-related Failures  | Possible | Eliminated
Long-term Reliability | Good     | Excellent

Why Direct-to-Chip Appears to Dominate AI Today — and Why the Industry Is Quietly Looking Beyond It

In much of today’s AI infrastructure coverage, direct-to-chip (DLC) liquid cooling is often presented as the default—or even inevitable—choice. That perception is understandable. DLC aligns well with current server platforms, fits established deployment models, and can be implemented quickly using familiar design patterns. As a result, it features prominently in announcements, reference architectures, and near-term rollout plans.

From the outside, this creates the impression that DLC has “won” the cooling discussion. In reality, the picture is more nuanced. Many AI operators are adopting DLC because it is the most immediately accessible option, not necessarily because it represents the long-term end state. It integrates cleanly with today’s hardware ecosystems, supports near-term capacity expansion, and enables rapid deployment in a highly competitive environment. For organizations focused on speed and continuity, that choice makes sense.

At the same time, the limitations become more visible as systems scale. As power densities rise and workloads become more dynamic, DLC environments require increasingly precise thermal management. Localized hotspots, thermal cycling, and mechanical complexity become harder to mitigate, and long-term performance consistency depends on tightly tuned interactions between multiple subsystems. These challenges are manageable—but they grow with each generation of hardware.

This is where a quieter shift is underway. Organizations with longer planning horizons—particularly those designing their own silicon, operating at hyperscale, or planning multi-generation infrastructure roadmaps—are increasingly evaluating immersion cooling as a strategic foundation rather than an alternative. The appeal lies not only in thermal efficiency, but in architectural flexibility: uniform cooling, reduced mechanical dependency, and insulation from rapid changes in server design.

Immersion reframes the problem. Instead of optimizing cooling for each new platform, it provides a stable thermal layer that can accommodate higher densities, new form factors, and evolving compute architectures with far less friction.

A Broader Context: AI Is Only Part of the Picture

Even if all of the above trends hold true, it’s important to step back and consider the broader landscape.

AI infrastructure, despite its rapid growth, is still expected to account for only about half of global compute deployments by the end of the decade. The remainder will span enterprise workloads, edge computing, industrial systems, content delivery, and regional infrastructure—segments with very different constraints and priorities.

In many of these environments, especially at the edge, the advantages of immersion cooling become even more pronounced: simplified thermal management, reduced maintenance, higher reliability, and the ability to operate in constrained or harsh conditions without complex airflow engineering.

That opportunity deserves its own discussion.

Conclusion: What’s Deployed vs. What’s Being Designed For

Direct-to-chip cooling dominates today’s headlines because it aligns with how AI infrastructure is currently built. It supports rapid deployment, works within existing ecosystems, and performs well within known limits. But beneath that momentum, many of the organizations thinking longest-term are preparing for something different. Immersion cooling is increasingly viewed as a foundational technology—one that aligns not just with the future of AI, but with the broader evolution of compute itself.

DLC fits the present. Immersion aligns with what comes next.

And for edge deployments in particular, that distinction may matter more than anywhere else: a topic we'll explore in a forthcoming article.