When organisations first begin evaluating immersion cooling, one of the earliest and most practical questions they ask is how servers actually work in an immersion-cooled environment. The underlying assumption is often that immersion cooling requires exotic hardware, bespoke IT stacks, or a wholesale departure from existing server ecosystems. In practice, the opposite is true. Immersion cooling is designed to integrate with the same OEM and ODM server landscape that enterprises, cloud providers, and service operators already depend on today.

What has changed is how much freedom operators have once airflow constraints are removed. Beyond basic compatibility, immersion cooling is increasingly creating a strategic opportunity—particularly for cloud builders, AI service providers, and infrastructure challengers that want greater control over efficiency, density, and hardware design. This mirrors a pattern long established by hyperscalers, who design past traditional constraints to gain operational and economic advantage. To understand how this plays out in practice, it is useful to distinguish between the two main categories of servers used in immersion environments today: immersion-ready and immersion-born platforms.

The Immersion-Ready Approach

Immersion-ready servers typically start life as standard air-cooled or direct-liquid-cooled designs from major OEMs or ODMs. These platforms are then adapted specifically for operation in single-phase immersion cooling systems. The adaptation focuses on a few well-defined areas: material compatibility checks for long-term corrosion safety, chassis geometry optimised for fluid flow rather than airflow, removal of fans and other airflow-specific components, and firmware adjusted for liquid environments.

One of the most impactful changes occurs at the firmware level. Thermal tables originally designed for air-cooled operation are rewritten for immersion conditions, allowing components to operate safely at higher and more stable temperatures. In practice, this can enable operating setpoints several degrees higher than air-cooled equivalents, improving efficiency and thermal stability without increasing risk. For organisations seeking a low-friction entry point, immersion-ready platforms allow existing vendor relationships and operational models to remain intact.
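
As a purely illustrative sketch, the snippet below contrasts a simplified air-cooled thermal policy with an immersion one. The structure and every setpoint are hypothetical assumptions chosen for illustration, not vendor firmware values; the point is simply that fan-curve logic disappears and throttle thresholds can sit a few degrees higher when coolant temperatures are stable.

```python
# Illustrative sketch only: setpoints are hypothetical, not vendor data.
# It shows the *kind* of change an immersion adaptation makes: fan-curve
# entries disappear and throttle/alarm setpoints shift upward because the
# fluid provides a more stable thermal environment.

AIR_COOLED_POLICY = {
    "fan_curve": [(40, 30), (60, 60), (75, 100)],  # (CPU temp C, fan duty %)
    "throttle_c": 85,   # begin clock throttling
    "alarm_c": 95,      # critical shutdown
}

IMMERSION_POLICY = {
    "fan_curve": None,  # no fans to drive in the tank
    "throttle_c": 92,   # hypothetical: a few degrees of extra headroom
    "alarm_c": 100,
}

def thermal_action(policy: dict, cpu_temp_c: float) -> str:
    """Return the action this simplified policy would take at a given temperature."""
    if cpu_temp_c >= policy["alarm_c"]:
        return "shutdown"
    if cpu_temp_c >= policy["throttle_c"]:
        return "throttle"
    return "normal"

print(thermal_action(AIR_COOLED_POLICY, 88))  # -> "throttle"
print(thermal_action(IMMERSION_POLICY, 88))   # -> "normal"
```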

The Immersion-Born Advantage

Immersion-born servers represent a more structural shift. These platforms are designed from the outset for immersion cooling, rather than adapted from air-based designs. Vendors such as Hypertec and 2CRSI develop motherboards, chassis layouts, and component placement based on liquid thermal physics rather than airflow management. Without the need to move air, entire categories of mechanical and spatial constraints disappear.

The table below summarises the key differences between adapting existing hardware and designing specifically for liquid.

| Feature | Adapted (Immersion-Ready) | Purpose-Built (Immersion-Born) |
|---|---|---|
| Design Origin | Standard air-cooled or direct-liquid-cooled designs from major OEMs/ODMs. | Designed from the outset for immersion cooling physics rather than air. |
| Chassis & Layout | Existing platforms modified for fluid flow, with fans and airflow components removed. | Motherboards and component placement optimised specifically for liquid thermal physics. |
| Constraints | Largely retains the spatial arrangements of traditional server chassis. | Entire categories of mechanical and spatial constraints from airflow management disappear. |
| Strategic Use | Provides a low-friction entry point, maintaining existing vendor relationships. | Ideal for cloud builders and AI providers seeking structural differentiation and density. |

Strategic Differentiation Beyond Air Cooling

A key insight emerging from the market is that immersion-ready and immersion-born platforms are not simply about “making servers work in liquid”. They create an opportunity for certain operators to move beyond standardised infrastructure models.

For challenger clouds, edge providers, AI platforms, and specialised HPC environments, this enables differentiation at the infrastructure level and optimisation for specific workload characteristics. By departing from rigid rack and airflow conventions, organisations can achieve higher compute density within fixed footprints and a lower environmental impact per unit of compute.

This shift allows hardware designs to align with unique service models. It does not require every operator to become a hyperscaler, but it does allow organisations to selectively adopt a vital principle: infrastructure should serve the workload and business model, not the other way around. In that sense, immersion cooling turns server choice from a constraint into a strategic lever.

Validation and Reliability

Reliability remains the primary concern for most operators, especially outside of hyperscale environments. In the immersion cooling ecosystem, validation extends far beyond theoretical compatibility or "paper" certification. To ensure long-term stability, server vendors active in this space operate dedicated laboratories equipped with production-grade immersion tanks.

By validating platforms in real immersion environments rather than simulated ones, engineers can observe fluid flow interactions at the system level and optimise chassis geometry and component placement accordingly. This hands-on approach allows precise tuning of firmware and thermal control behavior, alongside extended soak tests and accelerated aging protocols that evaluate hardware behavior under sustained, real-world loads.

This technical rigor is matched by close collaboration with immersion fluid manufacturers. Together, they test material interactions over multi-year timeframes, specifically focusing on fluid oxidation behavior and long-term mechanical stability under actual operating conditions. This comprehensive, combined testing approach ensures that both immersion-ready and immersion-born servers are validated for long-term deployment—often achieving a level of qualification that exceeds traditional air-cooled standards.

Alignment with Open Reference Architectures from the Open Compute Project

In addition to vendor-led testing and qualification, immersion cooling architectures increasingly benefit from open, industry-aligned reference specifications developed through the Open Compute Project (OCP).

OCP has published a set of immersion-related requirements, material compatibility guidelines, and base fluid specifications that together form a practical reference architecture for immersion-cooled infrastructure. These documents address topics such as dielectric fluid characteristics, material selection, safety considerations, and system-level design principles.

For server vendors, these specifications provide a common foundation when adapting existing platforms or developing immersion-born designs. For operators, they offer an independent, community-driven framework to assess interoperability, reliability, and long-term suitability across servers, tanks, and fluids.

Rather than prescribing a single implementation, OCP’s work helps establish shared expectations and design boundaries—reducing fragmentation and supporting broader ecosystem maturity.

Hardware Choice in Immersion

A persistent misconception is that immersion cooling restricts hardware choice. While airflow-based servers still dominate the overall market, the range of validated and viable platforms for immersion cooling is expanding steadily. By removing airflow as the dominant design constraint, immersion cooling actually broadens the set of server architectures available to operators.

| Ecosystem Category | Current Market Status & Availability |
|---|---|
| Platform Diversity | The ecosystem now includes dozens of validated server configurations. |
| Vendor Participation | Increasing participation from major OEMs and ODMs. |
| Workload Support | Comprehensive support for both CPU- and GPU-centric workloads. |
| Targeted Design | Customisable designs for specific density or efficiency targets, with roadmaps aligned with AI, cloud, and enterprise requirements. |

From Server Design to System-Level Impact

Removing airflow as a design constraint has implications that extend beyond individual servers. When viewed at the system level, three effects become particularly significant: fan elimination, higher operating temperatures, and increased achievable density.

Eliminating server fans has an immediate and measurable impact. Fans are not only a source of energy consumption, but also a driver of mechanical complexity and failure. In immersion-cooled environments, fans are removed entirely, reducing server power draw while also eliminating one of the most failure-prone components in traditional designs. This contributes directly to improved energy efficiency and more predictable long-term operation.
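
As a rough, back-of-the-envelope illustration, removing fans translates directly into avoided power draw across a fleet. All figures below are assumptions chosen for the example, not measurements from any specific platform.

```python
# Rough, illustrative arithmetic only. The fleet size, average server power,
# and fan power fraction are assumptions for the example, not measurements.

servers = 1000                 # hypothetical fleet size
server_power_w = 800           # assumed average draw per air-cooled server
fan_fraction = 0.10            # assume ~10% of server power goes to fans

fan_power_kw = servers * server_power_w * fan_fraction / 1000
annual_kwh = fan_power_kw * 24 * 365

print(f"Fan power removed: {fan_power_kw:.0f} kW")
print(f"Energy avoided per year: {annual_kwh:,.0f} kWh")
```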

Higher operating temperatures are another structural advantage. Immersion-cooled servers can safely operate at higher and more stable component temperatures than air-cooled equivalents. This shifts thermal management away from maintaining narrow ambient conditions and toward managing heat extraction at the source. At the facility level, this enables warmer coolant loops, simplified heat rejection, and improved compatibility with free cooling and heat reuse strategies. Importantly, higher operating temperatures do not imply higher thermal stress—thermal variability is reduced, not increased.
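
A simple way to see why warmer loops matter for free cooling: a dry cooler can reject heat passively only while outdoor air is cooler than the loop temperature it must achieve, minus an approach margin. The sketch below uses hypothetical temperatures purely for illustration, not design values.

```python
# Illustrative sketch with assumed temperatures, not design values. A dry
# cooler can reject heat "for free" only when ambient air is cooler than
# the loop temperature it must achieve, minus an approach margin.

def free_cooling_ceiling_c(loop_temp_c: float, approach_k: float) -> float:
    """Highest ambient temperature at which the loop can still be cooled passively."""
    return loop_temp_c - approach_k

# Assumed: warm single-phase immersion loop returning at ~45 C.
immersion_ceiling = free_cooling_ceiling_c(loop_temp_c=45.0, approach_k=5.0)

# Assumed: conventional chilled-water loop for air cooling at ~20 C.
chilled_ceiling = free_cooling_ceiling_c(loop_temp_c=20.0, approach_k=5.0)

print(f"Immersion loop: passive rejection viable up to ~{immersion_ceiling:.0f} C ambient")
print(f"Chilled loop:   passive rejection viable up to ~{chilled_ceiling:.0f} C ambient")
```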

System density increases naturally as a consequence of these changes. Without airflow paths, fan trays, or strict front-to-back layouts, server form factors can be optimised for component placement rather than air movement. This allows more compute to be packaged into a given footprint—whether measured per tank, per square metre, or per megawatt—without introducing the airflow bottlenecks that typically limit air- or DLC-cooled designs.
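
The density effect can be made concrete with a simple footprint comparison. The figures below are assumptions for illustration only; real numbers depend entirely on the specific tank, rack, and server designs being compared.

```python
# Illustrative density comparison with assumed numbers; actual figures
# depend on the specific tank, rack, and server designs.

air_rack = {"kw_per_rack": 15, "footprint_m2": 2.5}          # assumed, incl. aisle space
immersion_tank = {"kw_per_tank": 100, "footprint_m2": 4.0}   # assumed, incl. service space

air_density = air_rack["kw_per_rack"] / air_rack["footprint_m2"]
imm_density = immersion_tank["kw_per_tank"] / immersion_tank["footprint_m2"]

print(f"Air-cooled: ~{air_density:.1f} kW of IT load per m^2")
print(f"Immersion:  ~{imm_density:.1f} kW of IT load per m^2")
```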

Taken together, these effects mean that immersion cooling does not simply improve thermal performance. It changes the relationship between power, space, and cooling at the system level—allowing operators to deploy more usable compute within the same physical and electrical constraints.

Conclusion

The question of how servers operate in immersion cooling has a practical answer: most organisations can continue using the same server ecosystem they rely on today, with immersion-ready adaptations. For those willing to go further, immersion-born platforms offer an additional layer of optimisation and design freedom. More importantly, the server layer is where immersion cooling moves from concept to measurable impact.

By enabling both compatibility and innovation, immersion cooling allows operators to choose their position on the spectrum, from conservative integration to architectural differentiation. With vendors validating hardware in real immersion systems, supported by open reference architectures such as those developed within OCP, the server ecosystem is already more mature than it is often assumed to be.

For organisations looking to move beyond inherited constraints—whether for efficiency, density, sustainability, or differentiation—this makes immersion cooling not just a thermal solution, but a strategic opportunity worth serious consideration.