When organisations first begin evaluating immersion cooling, one of the earliest and most practical questions they ask is:
“How do servers actually work in an immersion-cooled environment?”
The underlying assumption is often that immersion cooling requires exotic hardware, bespoke IT stacks, or a wholesale departure from existing server ecosystems. In practice, the opposite is true. Immersion cooling is designed to integrate with the same OEM and ODM server landscape that enterprises, cloud providers, and service operators already depend on today.
What has changed is how much freedom operators have once airflow constraints are removed.
Beyond basic compatibility, immersion cooling is increasingly creating a strategic opportunity—particularly for cloud builders, AI service providers, and infrastructure challengers that want greater control over efficiency, density, and hardware design. This mirrors a pattern long established by hyperscalers, who design past traditional constraints to gain operational and economic advantage.
To understand how this plays out in practice, it is useful to distinguish between the two main categories of servers used in immersion environments today: immersion-ready and immersion-born platforms.
Immersion-Ready Servers: Evolving Existing OEM Platforms for Liquid Environments
Immersion-ready servers typically start life as standard air-cooled or direct-liquid-cooled designs from major OEMs or ODMs. These platforms are then adapted specifically for operation in single-phase immersion cooling systems.
This adaptation process focuses on several well-defined areas:
• material compatibility and long-term corrosion safety
• chassis geometry optimised for fluid flow rather than airflow
• removal of unnecessary fans and airflow-specific components
• firmware and thermal-table adjustments for liquid environments
• validation in real immersion cooling systems
One of the most impactful changes occurs at the firmware level. Thermal tables originally designed for air-cooled operation are rewritten for immersion conditions, allowing components to operate safely at higher and more stable temperatures. In practice, this can enable operating setpoints several degrees higher than air-cooled equivalents, improving efficiency and thermal stability without increasing risk.
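To make the idea concrete, a thermal table can be thought of as a mapping from component temperatures to control actions. The sketch below is purely illustrative, not any vendor's actual firmware format; the trip points and the +5 °C immersion offset are assumed example values, chosen only to show how the same sensor reading can trigger different actions under the two tables:

```python
# Hypothetical illustration of a firmware thermal table, not a real
# vendor format. All trip points and the offset are assumed values.

AIR_COOLED_TABLE = {
    "warn_c": 85,       # raise a thermal warning
    "throttle_c": 95,   # begin clock throttling
    "shutdown_c": 105,  # emergency shutdown
}

# Stable liquid contact in single-phase immersion allows higher,
# steadier setpoints; here we assume a +5 C shift for illustration.
IMMERSION_TABLE = {k: v + 5 for k, v in AIR_COOLED_TABLE.items()}

def thermal_action(temp_c: float, table: dict) -> str:
    """Return the control action for a given component temperature."""
    if temp_c >= table["shutdown_c"]:
        return "shutdown"
    if temp_c >= table["throttle_c"]:
        return "throttle"
    if temp_c >= table["warn_c"]:
        return "warn"
    return "ok"

# The same 97 C reading throttles under the air-cooled table but
# only warns under the immersion table.
print(thermal_action(97, AIR_COOLED_TABLE))  # throttle
print(thermal_action(97, IMMERSION_TABLE))   # warn
```

The point of the sketch is that no hardware changes are needed for this particular gain: rewriting the table shifts the safe operating envelope upward while the control logic stays the same.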
Immersion-ready servers are delivered either directly by OEMs or through specialised integration partners. In these models, procurement, support contracts, warranties, and lifecycle management remain largely unchanged from a customer perspective. The primary difference is that the server is qualified to live permanently in a liquid environment rather than relying on airflow.
For organisations seeking a low-friction entry point into immersion cooling, immersion-ready platforms allow existing vendor relationships and operational models to remain intact. Unicom Engineering, a global leader as an OEM partner for Tier 1 server OEMs, offers a broad range of immersion-ready servers.
Immersion-Born Servers: Designing Around Liquid, Not Air
Immersion-born servers represent a more structural shift. These platforms are designed from the outset for immersion cooling, rather than adapted from air-based designs.
Vendors such as Hypertec and 2CRSI develop motherboards, chassis layouts, and component placement based on liquid thermal physics rather than airflow management. Without the need to move air, entire categories of mechanical and spatial constraints disappear.
As a result, immersion-born systems can offer:
• higher compute density per tank
• simplified mechanical design with fewer moving parts
• fluid-optimised thermal paths around heat-intensive components
• improved energy efficiency at the system level
• redesigned board and GPU layouts unconstrained by airflow
• configurations that are impractical or impossible in air-cooled form factors
These designs allow operators to fully exploit what immersion cooling enables rather than merely accommodating it. Many of the most unconventional and dense server architectures—multi-node systems, non-standard GPU orientations, or highly compact compute arrangements—are only feasible because immersion removes airflow as a governing constraint.
This approach closely mirrors how hyperscalers have historically operated internally. Immersion-born server platforms effectively make that level of design freedom accessible to a broader set of cloud providers, AI operators, and specialised infrastructure builders.
Designing Past Traditional Limits: From Compatibility to Differentiation
A key insight emerging from the market is that immersion-ready and immersion-born platforms are not simply about “making servers work in liquid.” They create an opportunity for certain operators to move beyond standardised infrastructure models.
For challenger clouds, edge providers, AI platforms, and specialised HPC environments, this enables:
• differentiation at the infrastructure level
• optimisation for specific workload characteristics
• higher compute density within fixed footprints
• lower environmental impact per unit of compute
• departure from rigid rack and airflow conventions
• hardware designs aligned with unique service models
This does not require every operator to become a hyperscaler. But it does allow organisations to selectively adopt the same principle: infrastructure should serve the workload and business model, not the other way around.
In that sense, immersion cooling turns server choice from a constraint into a strategic lever.
Validation Through Real Immersion Testing, Not Paper Certification
Reliability remains a primary concern for most operators, particularly outside hyperscale environments. For immersion cooling, validation extends well beyond datasheets or theoretical compatibility claims.
Server vendors active in this space typically validate platforms in real immersion cooling systems, not simulated environments. Many maintain dedicated laboratories with production-grade immersion tanks, enabling them to:
• observe fluid flow interactions at the system level
• validate long-term material compatibility
• optimise chassis geometry and component placement
• tune firmware and thermal control behaviour
• conduct extended soak tests and accelerated ageing
• evaluate behaviour under sustained real-world load
In parallel, these vendors often collaborate closely with immersion fluid manufacturers to test:
• material interactions over multi-year timeframes
• fluid ageing and oxidation behaviour
• long-term mechanical stability
• serviceability under real operating conditions
This combined testing approach ensures that immersion-ready and immersion-born servers are not only functional, but validated for long-term deployment—often to a degree that exceeds traditional air-cooled qualification processes.
Alignment with Open Reference Architectures from the Open Compute Project
In addition to vendor-led testing and qualification, immersion cooling architectures increasingly benefit from open, industry-aligned reference specifications developed through the Open Compute Project (OCP).
OCP has published a set of immersion-related requirements, material compatibility guidelines, and base fluid specifications that together form a practical reference architecture for immersion-cooled infrastructure. These documents address topics such as dielectric fluid characteristics, material selection, safety considerations, and system-level design principles.
For server vendors, these specifications provide a common foundation when adapting existing platforms or developing immersion-born designs. For operators, they offer an independent, community-driven framework to assess interoperability, reliability, and long-term suitability across servers, tanks, and fluids.
Rather than prescribing a single implementation, OCP’s work helps establish shared expectations and design boundaries—reducing fragmentation and supporting broader ecosystem maturity.
Immersion Cooling Is Broadening, Not Limiting, Server Options
A persistent misconception is that immersion cooling restricts hardware choice. While airflow-based servers still dominate the overall market, the range of validated and viable platforms for immersion cooling is expanding steadily.
Across immersion-ready and immersion-born platforms, the ecosystem now includes:
• dozens of validated server configurations
• increasing participation from major OEMs and ODMs
• support for both CPU- and GPU-centric workloads
• customisable designs for specific density or efficiency targets
• roadmaps aligned with AI, cloud, and enterprise requirements
Rather than limiting options, immersion cooling broadens the set of server architectures that can be deployed by removing airflow as the dominant design constraint.
From Server Design to System-Level Impact
Removing airflow as a design constraint has implications that extend beyond individual servers. When viewed at the system level, three effects become particularly significant: fan elimination, higher operating temperatures, and increased achievable density.
Eliminating server fans has an immediate and measurable impact. Fans are not only a source of energy consumption, but also a driver of mechanical complexity and failure. In immersion-cooled environments, fans are removed entirely, reducing server power draw while also eliminating one of the most failure-prone components in traditional designs. This contributes directly to improved energy efficiency and more predictable long-term operation.
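A back-of-envelope estimate shows why this matters at scale. The figures below are assumed for illustration only: the article does not quote a fan-power fraction, and a 10% share of server draw is simply a plausible working assumption:

```python
# Rough estimate of power saved by removing server fans.
# The 10% fan-power fraction and 800 W draw are assumed values.

server_power_w = 800        # assumed air-cooled server draw
fan_fraction = 0.10         # assumed share of power consumed by fans

fan_power_w = server_power_w * fan_fraction
immersion_power_w = server_power_w - fan_power_w

print(f"Fan power removed per server: {fan_power_w:.0f} W")    # 80 W
print(f"Immersion server draw:        {immersion_power_w:.0f} W")  # 720 W

# Across a tank of, say, 48 servers, the saving compounds:
servers_per_tank = 48
tank_saving_kw = fan_power_w * servers_per_tank / 1000
print(f"Per-tank saving: {tank_saving_kw:.1f} kW")             # 3.8 kW
```

Under these assumptions, every tank recovers several kilowatts of power that previously did no computing, before counting the reliability benefit of removing the fans themselves.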
Higher operating temperatures are another structural advantage. Immersion-cooled servers can safely operate at higher and more stable component temperatures than air-cooled equivalents. This shifts thermal management away from maintaining narrow ambient conditions and toward managing heat extraction at the source. At the facility level, this enables warmer coolant loops, simplified heat rejection, and improved compatibility with free cooling and heat reuse strategies. Importantly, higher operating temperatures do not imply higher thermal stress—thermal variability is reduced, not increased.
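The free-cooling benefit follows from simple temperature arithmetic. The sketch below uses assumed illustrative temperatures (a 5 °C heat-exchanger approach, a ~20 °C chilled-water loop versus a ~45 °C immersion loop); none of these figures come from the article:

```python
# Sketch: warmer coolant loops widen the free-cooling window.
# All temperatures are assumed illustrative values.

APPROACH_C = 5  # assumed dry-cooler approach temperature

def free_cooling_ok(ambient_c: float, loop_c: float) -> bool:
    """Heat can be rejected without chillers when ambient stays
    below the coolant loop temperature minus the approach."""
    return ambient_c < loop_c - APPROACH_C

# On a 30 C day, a conventional ~20 C chilled-water loop needs
# mechanical cooling, while a ~45 C immersion loop does not.
print(free_cooling_ok(30, loop_c=20))  # False
print(free_cooling_ok(30, loop_c=45))  # True
```

The warmer the loop can run, the larger the fraction of the year (and the range of climates) in which chillers stay off entirely, and the more useful the rejected heat becomes for reuse.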
System density increases naturally as a consequence of these changes. Without airflow paths, fan trays, or strict front-to-back layouts, server form factors can be optimised for component placement rather than air movement. This allows more compute to be packaged into a given footprint—whether measured per tank, per square metre, or per megawatt—without introducing the airflow bottlenecks that typically limit air- or DLC-cooled designs.
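The density effect can also be sketched numerically. Every figure below is an assumed example, not a vendor specification; the footprints include rough allowances for aisles or service clearance:

```python
# Illustrative kW-per-footprint comparison; all figures are
# assumed examples, not vendor specifications.

# Air-cooled rack: limited by airflow and per-rack power budget.
air_rack_kw = 15
air_rack_footprint_m2 = 1.2   # rack plus hot/cold-aisle allocation

# Immersion tank: no airflow paths, so more compute per footprint.
tank_kw = 100
tank_footprint_m2 = 2.0       # tank plus service clearance

air_density = air_rack_kw / air_rack_footprint_m2
tank_density = tank_kw / tank_footprint_m2

print(f"Air-cooled: {air_density:.1f} kW/m^2")        # 12.5
print(f"Immersion:  {tank_density:.1f} kW/m^2")       # 50.0
print(f"Density gain: {tank_density / air_density:.1f}x")  # 4.0x
```

The exact ratio depends heavily on the platforms compared, but the structural point holds: once airflow paths and aisle containment are no longer needed, floor area stops being the binding constraint long before power does.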
Taken together, these effects mean that immersion cooling does not simply improve thermal performance. It changes the relationship between power, space, and cooling at the system level—allowing operators to deploy more usable compute within the same physical and electrical constraints.
Conclusion: Server Design Is Where Immersion Cooling Delivers Its First Impact
The question of how servers operate in immersion cooling has a practical answer: most organisations can continue using the same server ecosystem they rely on today, with immersion-ready adaptations. For those willing to go further, immersion-born platforms offer an additional layer of optimisation and design freedom.
More importantly, the server layer is where immersion cooling moves from concept to measurable impact.
By enabling both compatibility and innovation, immersion cooling allows operators to choose their position on the spectrum, from conservative integration to architectural differentiation. And with vendors validating hardware in real immersion systems, supported by open reference architectures such as those developed within OCP, the server ecosystem is considerably more mature than is often assumed.
For organisations looking to move beyond inherited constraints—whether for efficiency, density, sustainability, or differentiation—this makes immersion cooling not just a thermal solution, but a strategic opportunity worth serious consideration.
