Water or air cooling for data centres?
Data centres can be cooled using either air-based or water-based systems. Deciding which method is best suited for the future raises important...
The demands placed on modern data centres are increasing rapidly – primarily due to the use of AI models, high-performance computing (HPC), and growing hardware packing density. At the same time, operators are expected to run their infrastructures in an energy-efficient, sustainable, and reliable manner.
This pushes traditional air cooling to its limits. As a cooling medium, air has physical limitations when handling high heat loads: its heat transfer capacity is low, and the energy required for fans and air conditioning increases. Despite optimisation efforts, energy use often remains inefficient, resulting in a high PUE (Power Usage Effectiveness).
This is exactly where liquid cooling comes in. It offers significantly better thermal properties, works more efficiently, and can be integrated more readily into sustainability strategies. The technology is developing rapidly and is becoming central to the next generation of data centres.
Liquid cooling is a method of removing heat from IT systems in which a liquid medium absorbs heat, directly or indirectly, and carries it out of the system. Unlike conventional air cooling, the heat is transferred not by ambient air but by a significantly more efficient liquid. The decisive physical advantage lies in the higher specific heat capacity and thermal conductivity of liquids compared to air.
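To make this physical advantage concrete, the short sketch below estimates the volumetric flow of air versus water needed to carry away the same heat load, using the steady-state heat balance Q = ṁ · cp · ΔT. The 10 kW load and 10 K temperature rise are illustrative assumptions, and the material properties are rounded textbook values.

```python
# Illustrative comparison: volumetric flow needed to remove a given heat load
# with air vs. water, using Q = m_dot * c_p * delta_T (steady-state heat balance).
# The load and temperature rise below are assumed example values.

HEAT_LOAD_W = 10_000   # assumed heat load of one high-density server group (10 kW)
DELTA_T_K = 10         # assumed allowed coolant temperature rise (K)

media = {
    # name: (specific heat J/(kg*K), density kg/m^3) -- approximate textbook values
    "air":   (1005, 1.2),
    "water": (4186, 998),
}

for name, (cp, rho) in media.items():
    mass_flow = HEAT_LOAD_W / (cp * DELTA_T_K)   # kg/s
    volume_flow_m3h = mass_flow / rho * 3600     # m^3/h
    print(f"{name:>5}: {mass_flow:6.3f} kg/s = {volume_flow_m3h:9.2f} m^3/h")

# Typical output: air needs roughly 3,000 m^3/h, water well under 1 m^3/h,
# which is the physical reason liquid cooling handles high heat densities better.
```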
Depending on the design, hardware contact, and cooling strategy, liquid cooling can be divided into different variants. The decisive factors are how close the cooling medium is to the heat source and what requirements this places on planning, maintenance, and integration.
Indirect liquid cooling
In this approach, air remains the cooling medium at the IT hardware itself. The heat generated is transferred to a liquid medium via air-to-water heat exchangers, such as rear door coolers or room coolers. The liquid is not used for direct component cooling but for conditioning the room air. The main advantage is simpler retrofitting, especially in existing data centres.
Direct liquid cooling (direct-to-chip/cold plate)
In direct cooling, the heat is transferred directly to the cooling fluid, typically via so-called cold plates. These sit directly on the processors or GPUs and contain fine channels through which the cooling medium circulates. Heat is absorbed very efficiently, without the need for fans or intermediate media. This method is particularly well suited for high-performance applications in HPC or AI clusters, but it can also be integrated into enterprise systems.
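As a rough illustration of why cold plates work so well, the following sketch estimates how far a chip's case temperature rises above the coolant inlet for a given thermal resistance. The 700 W device power, 0.02 K/W plate resistance, and 35 °C supply temperature are assumed example values, not figures from this article.

```python
# Illustrative cold-plate heat balance: how warm does a chip run above the
# coolant inlet for a given thermal resistance? All values are assumptions.

DEVICE_POWER_W = 700                # assumed GPU/accelerator power draw
PLATE_RESISTANCE_K_PER_W = 0.02     # assumed cold-plate thermal resistance to coolant
COOLANT_INLET_C = 35                # assumed coolant supply temperature

case_temp_c = COOLANT_INLET_C + DEVICE_POWER_W * PLATE_RESISTANCE_K_PER_W
print(f"Approx. case temperature: {case_temp_c:.0f} degC")   # ~49 degC

# Even at 700 W the device stays well below typical thermal limits, because the
# heat path into the liquid is short and has a low thermal resistance.
```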
Immersion Cooling
In immersion cooling, all IT hardware – individual components or entire servers – is immersed in an electrically non-conductive coolant. Heat is transferred directly to the surrounding medium. Fans are completely eliminated, and cooling performance is maximised. However, this approach requires special hardware and customised maintenance concepts. Immersion cooling is primarily used where maximum power density is required in a limited space, such as modular edge systems or AI data centres.
| System type | Heat transfer | Energy efficiency | Integration | Typical applications |
| --- | --- | --- | --- | --- |
| Indirect liquid cooling | Air-to-liquid | Medium | Simple (retrofit) | Existing data centres |
| Direct-to-chip (cold plate) | Liquid on components | High | Medium (semi-modular) | HPC, AI, modern enterprise systems |
| Immersion cooling | Completely immersed in liquid | Very high | High (specialised) | AI clusters, edge, hyperscalers |
Depending on the process, different media are used for liquid cooling in data centres. The choice influences not only thermal performance, but also material compatibility and maintenance.
Switching to liquid cooling offers data centre operators a range of technical and economic benefits. Beyond the cooling performance itself, the effects on infrastructure, operating costs, and long-term viability also weigh into the decision.
Significantly higher energy efficiency
Water and other liquids can absorb many times the heat that air can transport, at lower temperature differences and without high fan power. This significantly reduces the energy required for cooling. Liquid cooling can therefore achieve PUE (Power Usage Effectiveness) values of less than 1.1. This is a clear efficiency advantage over air-cooled systems, which often lie between 1.3 and 1.6.
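The following back-of-the-envelope calculation shows what this PUE difference means over a year. The constant 1 MW IT load is an illustrative assumption; the PUE values follow the ranges quoted above.

```python
# PUE comparison for a data centre with an assumed constant 1 MW IT load.
# PUE = total facility energy / IT equipment energy; 1.5 represents a mid-range
# air-cooled facility, 1.1 a well-implemented liquid-cooled one.

IT_LOAD_MW = 1.0
HOURS_PER_YEAR = 8760

def annual_energy_mwh(pue: float) -> float:
    """Total annual facility energy for the assumed IT load at a given PUE."""
    return IT_LOAD_MW * HOURS_PER_YEAR * pue

air_cooled = annual_energy_mwh(1.5)
liquid_cooled = annual_energy_mwh(1.1)

print(f"Air cooled:    {air_cooled:8.0f} MWh/year")
print(f"Liquid cooled: {liquid_cooled:8.0f} MWh/year")
print(f"Saving:        {air_cooled - liquid_cooled:8.0f} MWh/year")
# ~3,500 MWh/year less energy for the same IT work in this illustrative case.
```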
Higher power density per rack
Liquid cooling allows significantly higher power densities per rack; in some cases, more than 100 kW per rack. This frees up space in the data centre, reduces the number of racks, and shortens the paths for power and data distribution. This scalability is particularly crucial for AI and HPC loads.
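As a quick illustration of the footprint effect, the sketch below compares the number of racks needed for an assumed 2 MW AI deployment. The 100 kW per rack figure is taken from the text above; the 15 kW per rack value for air cooling and the 2 MW total load are illustrative assumptions.

```python
# Rough footprint comparison for an assumed 2 MW AI/HPC deployment.

import math

TOTAL_IT_LOAD_KW = 2_000

scenarios = {
    "air cooled (assumed 15 kW/rack)": 15,
    "liquid cooled (100 kW/rack)": 100,
}

for label, kw_per_rack in scenarios.items():
    racks = math.ceil(TOTAL_IT_LOAD_KW / kw_per_rack)
    print(f"{label:35s}: {racks:4d} racks")

# 134 racks vs. 20 racks: fewer racks mean shorter power and data paths
# and a much smaller white-space footprint.
```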
Reduced space requirements and lower noise levels
Fewer fans, smaller air conditioning systems, more compact room concepts: liquid cooling allows for a significantly denser design. At the same time, the noise level in the data centre is reduced considerably.
Longer component life
More stable thermal management has a direct impact on hardware reliability. Fewer temperature fluctuations, lower mechanical stress from fans, and targeted cooling at hotspots extend the service life of servers, switches, and storage systems.
Integration into sustainable strategies
Recooling with liquid cooling enables waste heat to be used, for example to heat buildings or supply local heating networks. This technology supports sustainability strategies and improves the carbon footprint of the entire data centre.
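A rough estimate of the reuse potential: the sketch below converts an assumed 1 MW average IT load, an assumed 80 % heat-recovery rate, and an assumed household heating demand into an annual heat quantity. All figures are illustrative assumptions, not values from the article.

```python
# Illustrative waste-heat potential of a liquid-cooled data centre.
# Assumptions: 1 MW average IT load, 80 % of that heat recoverable via the
# warm-water return, ~15 MWh/year heating demand per household.

IT_LOAD_MW = 1.0
RECOVERY_FRACTION = 0.8
HOURS_PER_YEAR = 8760
HOUSEHOLD_DEMAND_MWH = 15

recoverable_heat_mwh = IT_LOAD_MW * HOURS_PER_YEAR * RECOVERY_FRACTION
households = recoverable_heat_mwh / HOUSEHOLD_DEMAND_MWH

print(f"Recoverable heat: {recoverable_heat_mwh:,.0f} MWh/year")
print(f"Roughly equivalent to heating {households:,.0f} households")
```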
Liquid cooling is based on a closed-loop system that specifically absorbs, transports, and dissipates heat. The design is similar to classic heating or cooling systems. The central components are:
Cold plates
These sit directly on the chips or other thermally active components and transfer the heat to the cooling medium. Their design largely determines the thermal efficiency of the system.
Pumps
Pumps ensure the necessary circulation of the coolant in the circuit. In data centre liquid cooling systems, they are typically designed to be redundant and speed-controllable, allowing them to respond to changing loads.
Sensors
Temperature, flow, and pressure sensors continuously monitor the status of the cooling circuit. This data forms the basis for automatic control and predictive maintenance.
Heat exchanger
In a heat exchanger, the heat absorbed from the primary circuit is transferred to a secondary system, such as a recooling section, which can enable the heat to be reused for heating systems.
Piping system
The piping connects all components to form a closed circuit. It is important to select suitable materials that are permanently pressure-resistant, chemically resistant, and corrosion-free.
Introducing liquid cooling in a data centre requires more than just replacing individual cooling units. It is a far-reaching infrastructural decision that has implications for planning, construction, operation, and maintenance. Four key requirements must be taken into account to ensure that the system operates reliably, efficiently, and safely:
Infrastructure and material selection
The central elements of liquid cooling are the pipes through which the cooling medium circulates. Materials such as polypropylene (PP) are often used here, as they are characterised by pressure resistance, chemical resistance, corrosion resistance, and a long service life. It is important to have a continuously closed system with minimal risk of leakage, high integrity, and precise installation planning.
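To give a feel for what the hydraulic side of this planning involves, the sketch below sizes the supply line for an assumed 100 kW rack: the required water flow follows from the heat balance, and the pressure drop per metre is estimated with the Darcy-Weisbach equation and the Blasius friction correlation for smooth pipes. All values are illustrative assumptions, not aquatherm sizing data.

```python
# Rough pipe-sizing sketch for the cooling loop of one assumed 100 kW rack.

import math

HEAT_LOAD_W = 100_000              # assumed rack load
DELTA_T_K = 10                     # assumed coolant temperature rise
CP, RHO, MU = 4186, 998, 0.001     # water properties: J/(kg*K), kg/m^3, Pa*s

flow_m3s = HEAT_LOAD_W / (CP * DELTA_T_K) / RHO   # required volumetric flow

for d_inner_mm in (32, 40, 50):
    d = d_inner_mm / 1000
    area = math.pi * d**2 / 4
    v = flow_m3s / area                 # flow velocity in the pipe
    re = RHO * v * d / MU               # Reynolds number (turbulent here)
    f = 0.316 / re**0.25                # Blasius friction factor, smooth pipe
    dp_per_m = f / d * RHO * v**2 / 2   # Darcy-Weisbach pressure drop per metre
    print(f"inner diameter {d_inner_mm} mm: v = {v:4.2f} m/s, dp = {dp_per_m:6.0f} Pa/m")

# Larger diameters cut velocity and pressure drop sharply, which is why
# hydraulic calculation is part of the piping design.
```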
Integration into existing systems
In existing data centres, connecting to the current cooling infrastructure is particularly important. Transfer points to cold water systems, recoolers, or building cooling circuits need to be established. Indirect liquid cooling, such as via rear door heat exchangers, is relatively easy to integrate. Direct-to-chip systems, however, require adaptation of the server and rack configurations.
Redundancy and reliability
Liquid cooling must not represent a single point of failure. This applies to pumps, valves, sensors, and power supply. A typical redundancy structure (e.g. N+1 or 2N) includes duplicate pump lines, bypass lines, and automated failover mechanisms. In addition, emergency routines must be defined in case of leaks or pressure drops.
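The following minimal sketch shows the kind of failover decision such a redundancy scheme implies: keep the required number of healthy pumps running, bring a standby online when one fails, and escalate when redundancy is exhausted. The pump names and health-check interface are assumptions for illustration, not a real controller API.

```python
# Minimal sketch of an N+1 pump failover decision. Names and interfaces are assumed.

from dataclasses import dataclass

@dataclass
class Pump:
    name: str
    healthy: bool
    running: bool = False

def select_running_pumps(pumps: list[Pump], required: int) -> list[str]:
    """Keep `required` healthy pumps running; use standbys, else escalate."""
    healthy = [p for p in pumps if p.healthy]
    if len(healthy) < required:
        raise RuntimeError("Cooling degraded: not enough healthy pumps, start emergency routine")
    for p in pumps:
        p.running = False
    for p in healthy[:required]:
        p.running = True
    return [p.name for p in pumps if p.running]

# N+1 example: two pumps needed for full flow, one standby. Pump A has just failed.
pumps = [Pump("A", healthy=False), Pump("B", healthy=True), Pump("C", healthy=True)]
print(select_running_pumps(pumps, required=2))   # -> ['B', 'C']
```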
Monitoring and maintenance
The operation of liquid cooling requires precise monitoring systems. In addition to temperature and pressure sensors, flow sensors, leak detectors, and condition monitoring via IoT platforms are also integrated. Maintenance includes filter changes, functional tests of the pumps, inspection of seals, and analysis of the cooling medium for ageing or contamination.
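As an illustration of how such monitoring data can be used, the sketch below evaluates one set of sensor readings against simple thresholds and derives a pump speed from the return temperature. Sensor names, setpoints, and thresholds are illustrative assumptions, not a vendor implementation.

```python
# Minimal control sketch: supply/return temperatures and loop pressure drive a
# proportional pump-speed setting and a crude leak check. All values assumed.

from dataclasses import dataclass

@dataclass
class LoopReading:
    supply_temp_c: float   # coolant temperature leaving the heat exchanger
    return_temp_c: float   # coolant temperature returning from the cold plates
    pressure_bar: float    # static loop pressure
    flow_l_min: float      # measured flow rate

SETPOINT_RETURN_C = 45.0        # assumed target return temperature
MIN_PRESSURE_BAR = 1.5          # below this we suspect a leak or pump fault
PUMP_MIN, PUMP_MAX = 0.3, 1.0   # pump speed as a fraction of maximum

def control_step(reading: LoopReading, gain: float = 0.05) -> tuple[float, list[str]]:
    """Return the new pump speed fraction and any alarms for one control cycle."""
    alarms = []
    if reading.pressure_bar < MIN_PRESSURE_BAR:
        alarms.append("LOW PRESSURE: possible leak, trigger emergency routine")

    # Proportional control: run the pump faster the further the return
    # temperature sits above its setpoint.
    error = reading.return_temp_c - SETPOINT_RETURN_C
    speed = max(PUMP_MIN, min(PUMP_MAX, PUMP_MIN + gain * error))
    return speed, alarms

speed, alarms = control_step(LoopReading(35.0, 52.0, 2.1, 120.0))
print(f"Pump speed: {speed:.2f}", alarms)
```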
Read more about this in our blog: Building a data centre: planning, construction, and operation
The choice of piping system is a key success factor in the planning and implementation of liquid cooling in data centres. It is not just a matter of transporting the cooling medium, but also of long-term operational reliability, depth of integration, and material performance under continuous load.
Modern plastic piping systems offer clear advantages in cooling circuits: they are pressure-resistant, chemically resistant, corrosion-free, and designed for a long service life under continuous load.
This is exactly where aquatherm comes in: with innovative piping systems made of PP-RCT. aquatherm supports projects from the concept phase to implementation – including dimensioning, hydraulic calculation, and documentation. In addition, it provides BIM data, technical planning aids, and pre-assembly services for efficient project implementation. aquatherm pipe systems are used worldwide, including in data centres for high-performance AI applications and in energy-optimised building refurbishments.
aquatherm offers two specialised solutions for liquid cooling in data centres: aquatherm blue for closed cooling circuits and aquatherm black for surface cooling in office rooms. Both systems are designed for efficiency, durability, and complete leak-tightness.
The decision to use liquid cooling is a strategic factor in global digital infrastructure. Current forecasts show that the market for liquid cooling is on the verge of a structural growth phase: by 2034, the global market is projected to exceed USD 22 billion, with an annual growth rate of around 19%. The driver is likely not only the technological pressure from AI and HPC, but also a profound change in the investment decisions of hyperscalers, cloud providers, and operators of critical infrastructure.
Future scenarios clearly show that data centres with high thermal density – for example, due to NVIDIA GPUs or AMD Instinct clusters – will be designed with liquid cooling as standard in the near future. Today, 50 kW per rack is already considered the target, and in individual AI farms, 80 to 120 kW is realistic. Air cooling is only used as a supplement here. Major hardware manufacturers are now developing their own liquid cooling designs with a clear objective: greater integration depth, open standards, and lower total cost of ownership (TCO).
At the same time, new ecosystems are emerging. The Open Compute Project (OCP) community is working on standardised interfaces for direct-to-chip modules, rear-door heat exchangers, and immersion systems. The aim is to overcome market fragmentation and enable economies of scale – a signal to operators that investments in liquid cooling will rest on a more stable, interoperable foundation in the future.
Regulatory pressure will also increase. With the EPBD revision, the EU expects higher efficiency requirements for data centres – including proof of energy use. Liquid cooling with waste heat utilisation is becoming a key technology here, for example through integration into local heating networks or process heat supply. In countries such as Denmark and the Netherlands, this option is already included in energy audit requirements.
Hybrid cooling strategies are increasingly coming into focus: combinations of liquid and air cooling, depending on rack type, usage, or expansion stage. These "transitional architectures" allow for a smooth transition – for example, by integrating rear-door coolers in existing buildings and gradually expanding to direct-to-chip modules in new clusters. This makes data centres more technologically modular, regulatory compliant, and economically scalable.
For operators, planners, and investors, the picture is clear: liquid cooling is not only a response to technical limitations – it is becoming a strategic differentiator in the competition for energy efficiency, operational reliability, and sustainability. Investing in the right systems today ensures long-term operational advantages, regulatory compliance, and a significant efficiency advantage.
Ultimately, it is not just about cooling technology – it is about strategic infrastructure decisions with long-term effects. Anyone planning, operating, or modernising data centres today must answer the question of how they will deal with growing thermal density, rising energy costs, and regulatory requirements in the future. Liquid cooling provides reliable answers to these questions. But only if it is implemented systematically and with the right partners.
Now is the time to question existing concepts: where do air-based systems reach their limits? How can heat load, space requirements, and energy consumption be rethought without compromising availability or scalability? And which technical components truly contribute to the future-proofing of your data centre?
aquatherm supports this decision-making process with in-depth system knowledge and many years of application experience. Whether for new construction or retrofitting, and whether for edge, enterprise, or hyperscale environments, we will develop your piping system for a cooling solution that works – not only technically, but also economically and sustainably for the future.
Delve deeper into your planning with the white paper: Keeping AI cool – liquid cooling with AQUATHERM BLUE