For many organisations, immersion cooling represents a first-of-its-kind infrastructure decision. Unlike air cooling or direct-to-chip liquid cooling—both of which benefit from decades of standardisation and institutional experience—immersion cooling systems are still maturing as a category.
As a result, the selection and procurement process often looks different, particularly for first-time users.
Why the Selection Process — and the Stakeholder Process — Looks Different
One practical implication is that organisations should not assume that all internal stakeholders—facilities, IT, procurement, operations, sustainability, or risk—already have sufficient familiarity with immersion cooling to provide well-grounded guidance from the outset. In many cases, they do not yet have the context needed to evaluate system-level trade-offs or long-term implications.
A common and effective approach is therefore to involve an immersion cooling specialist early in the evaluation phase. This helps establish a technically sound baseline: what immersion cooling is (and is not), how systems differ, which design choices matter, and where real constraints lie. With this foundation in place, internal stakeholders can then be engaged more productively, asking the right questions and evaluating options in context.
Immersion cooling is not a single, uniform product. System architectures, tank geometries, fluid handling approaches, control philosophies, serviceability models, and integration depth vary meaningfully between vendors. On a datasheet, solutions may appear comparable, but their operational behaviour and long-term characteristics can differ substantially.
For this reason, organisations evaluating immersion cooling typically engage in:
• deeper technical and architectural discussions
• system-level reviews rather than component-only comparisons
• hands-on demonstrations and proof points
• visits to experience centres or reference installations
This additional diligence is not friction—it is common-sense preparation. Seeing how servers are serviced, how fluids are handled, how monitoring integrates, and how systems behave under load helps align stakeholders, build internal confidence, and avoid misinformed assumptions early in the process.
Once this initial phase is complete, immersion cooling deployments tend to follow familiar operational patterns. The novelty is concentrated at the decision stage—not in day-to-day operation.
What Does Not Change: Core Datacentre Disciplines Remain Intact
Despite the visual difference of immersion tanks replacing racks, the foundational disciplines of datacentre operations remain unchanged.
Availability, redundancy, and operational discipline
Immersion-cooled datacentres still follow established principles:
• redundant power feeds
• UPS and generator-backed supply
• defined failure domains
• Tier III and Tier IV design philosophies
Immersion cooling does not change availability objectives or governance models. Cooling becomes simpler, but uptime expectations remain the same.
Monitoring, alarms, and operational oversight
Operators still rely on:
• telemetry and trending
• threshold-based alarms
• predictive maintenance
• DCIM and BMS platforms
The operational mindset remains familiar: detect, analyse, respond.
What Changes Slightly: Different Interfaces, Familiar Outcomes
Several aspects of datacentre operation change—but in incremental, manageable ways.
Monitoring shifts from air to liquid
Instead of airflow, pressure differentials, and humidity, operators focus on:
• fluid temperatures
• flow rates
• pump status
• heat exchanger performance
The tooling remains the same; the signals change.
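The point that the tooling stays the same can be made concrete with a small sketch: the familiar threshold-alarm pattern, applied to liquid-side signals instead of air-side ones. All field names and limit values below are hypothetical placeholders, not vendor specifications.

```python
# Illustrative sketch: the same threshold-alarm pattern used for air-side
# monitoring, applied to liquid-side telemetry. Field names and limits
# are invented for illustration, not vendor values.

def evaluate_alarms(sample: dict) -> list[str]:
    """Return the alarms raised by one telemetry sample."""
    alarms = []
    if sample["fluid_outlet_c"] > 55.0:   # example high-temperature limit
        alarms.append("HIGH_FLUID_TEMP")
    if sample["flow_lpm"] < 30.0:         # example minimum acceptable flow
        alarms.append("LOW_FLOW")
    if not sample["pump_running"]:
        alarms.append("PUMP_STOPPED")
    return alarms

sample = {"fluid_outlet_c": 48.2, "flow_lpm": 52.0, "pump_running": True}
print(evaluate_alarms(sample))  # no limits exceeded -> []
```

The detect-analyse-respond loop is unchanged; only the inputs differ.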
KPIs and Monitoring: From Environmental Proxies to Direct Thermal Control
While the principles of monitoring, alarming, and trending remain unchanged, immersion cooling fundamentally alters what operators measure to understand system health.
In air-cooled datacentres, many KPIs are indirect proxies for heat removal: supply and return air temperature, room delta-T, humidity bands, pressure differentials, fan speeds, and airflow distribution. Operators infer IT thermal health by interpreting environmental signals.
Immersion cooling removes much of that indirection.
Because heat is transferred directly into a liquid medium, monitoring shifts toward system-level and component-level KPIs that directly represent heat generation and heat removal.
Typical immersion cooling KPIs include:
Fluid temperature metrics
• Fluid inlet temperature
• Fluid outlet temperature
• Average bath temperature (where applicable)
These replace supply and return air temperatures as primary thermal references and are typically far more stable.
ΔT across the immersion system
• Fluid delta-T (ΔT) is a first-class KPI
• It correlates directly with IT load, heat extraction efficiency, flow adequacy, and early signs of fouling or restriction
Unlike room-level delta-T, fluid ΔT reflects actual heat transfer.
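The link between fluid ΔT, flow, and IT load is the standard heat balance Q = ṁ · cp · ΔT, which is why ΔT can serve as a first-class KPI. The sketch below applies it with fluid properties that are representative assumptions for a single-phase dielectric coolant; actual values come from the fluid datasheet.

```python
# Heat balance for a circulating loop: Q = m_dot * cp * delta_T.
# Density and specific heat are representative of a single-phase
# dielectric coolant (assumed values; check the fluid datasheet).

FLUID_DENSITY = 800.0   # kg/m^3 (assumed)
FLUID_CP = 2000.0       # J/(kg*K) (assumed)

def heat_removed_kw(flow_l_per_s: float, delta_t_k: float) -> float:
    """Thermal power carried away by the fluid, in kW."""
    mass_flow = flow_l_per_s / 1000.0 * FLUID_DENSITY  # kg/s
    return mass_flow * FLUID_CP * delta_t_k / 1000.0   # W -> kW

# 1 L/s of circulation at an 8 K fluid delta-T:
print(heat_removed_kw(1.0, 8.0))  # 12.8 kW
```

Read in reverse, the same relationship explains why a rising ΔT at constant load is an early signal of reduced flow, fouling, or restriction.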
Component-level temperatures (where available)
• CPU and GPU temperatures
• Accelerator hotspot temperatures
• Correlation with TDP and workload intensity
In immersion environments, these values are typically more uniform and less prone to sudden excursions.
Flow and circulation KPIs
• Flow rate per system or loop
• Flow stability over time
• Pump operating points
Flow becomes a primary indicator of cooling health, not a secondary variable.
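One simple way to turn "flow stability over time" into a trendable number is the coefficient of variation over a sample window. The 2% band mentioned in the comment is an illustrative assumption, not a standard.

```python
# Flow stability expressed as a single KPI: the coefficient of
# variation (std dev / mean) over a sample window. The 2% band is an
# illustrative assumption, not a standard.

from statistics import mean, pstdev

def flow_cov(samples: list[float]) -> float:
    """Coefficient of variation of a flow-rate sample window."""
    return pstdev(samples) / mean(samples)

window = [51.8, 52.1, 52.0, 51.9, 52.2]  # L/min, hypothetical readings
print(f"{flow_cov(window):.4f}")          # well inside a 2% stability band
```

A slow upward drift in this value, at constant pump operating point, would prompt investigation long before any temperature alarm fires.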
Mechanical and redundancy state
• Pump status and redundancy state
• Valve positions (where applicable)
• Heat exchanger availability
Operators monitor a small number of industrial-grade components instead of hundreds of server fans.
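Because the component count is small, the redundancy state of a pump group can be classified with a few lines. The sketch below assumes an N+1 arrangement; the state names and field layout are illustrative.

```python
# Sketch of a redundancy-state check for an N+1 pump group. State names
# and the {pump_id: available} layout are illustrative assumptions.

def redundancy_state(pumps: dict[str, bool], required: int) -> str:
    """Classify a pump group given availability flags and the number of
    pumps required to carry the current load."""
    available = sum(pumps.values())
    if available > required:
        return "REDUNDANT"   # N+1 intact, concurrently maintainable
    if available == required:
        return "DEGRADED"    # load carried, but no spare remains
    return "AT_RISK"         # insufficient pumping capacity

print(redundancy_state({"P1": True, "P2": True}, required=1))   # REDUNDANT
print(redundancy_state({"P1": True, "P2": False}, required=1))  # DEGRADED
```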
Fluid integrity indicators
• Fluid level
• Leak detection
• Fluid condition or contamination indicators (where supported)
These metrics support long-term system integrity and lifecycle management.
Operational impact
A recurring theme in operator feedback is that dashboards become quieter and more predictable over time. Several teams describe immersion monitoring views as “almost boring”—only half jokingly. Once systems are commissioned and tuned, temperatures, flows, and ΔT values tend to remain within narrow bands, with fewer alarms and clearer root causes.
For operations teams accustomed to constant airflow tuning and hotspot management, this shift to stable, causal KPIs is often one of the most appreciated changes.
DCIM and BMS Integration: Seamless in Practice, Still Worth Qualifying
Immersion cooling systems can integrate seamlessly into existing DCIM and BMS environments when designed correctly. Modern systems expose operational data via industry-standard protocols such as Modbus, SNMP, BACnet, or REST APIs.
In practice, immersion cooling becomes part of the same operational fabric as power and cooling infrastructure. However, integration depth varies by vendor, and the following should be qualified explicitly during evaluation:
• sensor density and placement
• transparency of operational states
• fault and alarm behaviour
• supported protocols
When designed in from the outset, integration is straightforward and operationally clean.
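What "integration" means in practice is usually mapping raw protocol values into named engineering units for the DCIM or BMS. The sketch below shows that pattern for Modbus-style holding registers; the register map and scale factors are invented for illustration, and a real map comes from the vendor's integration documentation.

```python
# Hypothetical decoding of raw Modbus-style holding-register counts into
# engineering units. The register map and scale factors are invented for
# illustration; a real map comes from the vendor's documentation.

REGISTER_MAP = {
    0: ("fluid_inlet_c", 0.1),   # 0.1 degC per count (assumed)
    1: ("fluid_outlet_c", 0.1),
    2: ("flow_lpm", 0.5),        # 0.5 L/min per count (assumed)
    3: ("pump_running", 1),      # 0/1 status flag
}

def decode_registers(raw: list[int]) -> dict:
    """Convert a block of raw register counts into named KPI values."""
    return {name: raw[addr] * scale
            for addr, (name, scale) in REGISTER_MAP.items()}

print(decode_registers([412, 498, 104, 1]))
```

Once decoded into this form, the values feed the same trending, alarming, and dashboarding pipelines as any other facility equipment.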
Servicing Motions Change, Not the Outcome
Servers are still replaced, repaired, and maintained—but the motions differ:
• servers are lifted from fluid rather than slid from racks
• excess fluid is allowed to drain
• servicing occurs in designated areas
Technicians typically adapt quickly, and many find the process calmer and more deliberate than working in airflow-dominated environments.
What Changes Fundamentally: Cooling, Density, and Early Lifecycle Phases
A smaller number of changes are more structural—but also where many benefits originate.
Cooling becomes local and predictable
Immersion cooling removes:
• hot and cold aisles
• airflow balancing
• recirculation risk
• fan tuning and failures
Thermal management becomes a contained, local process.
Servers operate without fans and at higher temperatures
Removing fans eliminates one of the most failure-prone server components and reduces power consumption. Immersion-cooled servers can operate safely at higher, more stable temperatures, enabling:
• reduced reliance on chilled water
• greater use of dry coolers
• extended free-cooling hours
• simpler heat reuse pathways
Thermal stability improves and thermal cycling is reduced.
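The free-cooling benefit follows from a simple relationship: a dry cooler can return fluid at roughly ambient temperature plus an approach temperature, so whenever that sum stays at or below the allowable fluid supply temperature, no mechanical cooling is needed. The 5 K approach and 45 °C setpoint below are assumptions for illustration.

```python
# Why warmer fluid setpoints extend free cooling: a dry cooler returns
# fluid at roughly ambient + approach. If that is at or below the
# allowable supply temperature, no chiller is needed for that hour.
# The 5 K approach and 45 degC setpoint are illustrative assumptions.

APPROACH_K = 5.0            # dry cooler approach temperature (assumed)
FLUID_SUPPLY_MAX_C = 45.0   # allowable fluid supply temperature (assumed)

def free_cooling_ok(ambient_c: float) -> bool:
    """True if a dry cooler alone can meet the fluid supply setpoint."""
    return ambient_c + APPROACH_K <= FLUID_SUPPLY_MAX_C

# With these assumptions, free cooling holds up to a 40 degC ambient:
print(free_cooling_ok(35.0))  # True
print(free_cooling_ok(42.0))  # False
```

The same arithmetic applied against an hourly climate profile is how extended free-cooling hours are typically estimated for a given site.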
Density increases without operational fragility
Without airflow constraints, systems can be designed for component placement rather than air movement—allowing higher density without introducing fragile operating conditions.
Logistics and installation require structured handling
Fluid delivery, storage, filling, draining, and commissioning introduce steps that benefit from experience and defined procedures. Vendor involvement and trained partners make deployments repeatable and predictable, ensuring systems enter operation in a known-good state.
Alignment with Industry Standards: Evolving Within Established Frameworks
Because immersion cooling is still maturing, standards alignment becomes more important, not less.
Open Compute Project (OCP)
OCP provides immersion cooling reference architectures covering:
• material compatibility
• corrosion and reliability considerations
• base fluid characteristics
• safety and handling principles
These establish shared boundaries without prescribing a single implementation.
ASHRAE: Direct Liquid Cooling as the Design Reference
ASHRAE has explicitly expanded its guidance to include direct liquid cooling. Immersion cooling systems are designed and integrated using these ASHRAE liquid cooling guidelines as a reference framework, including:
• allowable and recommended liquid temperature ranges
• component-level thermal reliability envelopes
• flow and heat removal principles
• IT-to-facility interface definitions
Immersion cooling aligns with ASHRAE’s intent by delivering stable, predictable thermal behaviour with reduced thermal cycling.
Uptime Institute: Availability Outcomes Over Cooling Methods
The Uptime Institute does not prescribe cooling technologies. Its Tier framework focuses on:
• concurrent maintainability
• fault tolerance
• isolation of failure domains
Immersion cooling can meet Tier III and Tier IV objectives through appropriate redundancy, compartmentalisation, and maintainable system design.
Conclusion: A Different Beginning, a Familiar Destination
Immersion cooling changes the beginning of the datacentre lifecycle more than the end.
Selection requires deeper engagement and early specialist input. Installation introduces fluid logistics. Monitoring shifts from indirect environmental proxies to direct thermal KPIs.
But once operational, immersion-cooled datacentres behave in familiar, predictable ways—often with fewer variables to manage than air-cooled environments. Dashboards become quieter, systems more stable, and operations more deterministic.
For many operators, that predictability eventually becomes a little boring.
In datacentre operations, that is usually a sign that the system is working exactly as intended.
