Free cooling in data centres: efficient technology explained
With each new generation of servers, thermal loads increase, along with the demands on cooling concepts in data centres.
Data centres are growing in size and performance, and with them the demands on efficient refrigeration cycles. According to the International Energy Agency (IEA), cooling already accounts for between 7% and more than 30% of energy costs in many facilities. With the rise of AI servers, GPU clusters, and high-density workloads, this share is expected to grow further.
At the same time, regulations such as the EU Energy Efficiency Directive, eco-design requirements, the EU Code of Conduct for Data Centres, and national efficiency requirements are becoming increasingly relevant to the industry. Key figures such as PUE (Power Usage Effectiveness) and WUE (Water Usage Effectiveness) are becoming the standard benchmark for assessing the overall efficiency of a data centre. Against this backdrop, the refrigeration cycle is gaining strategic importance.
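To make these key figures concrete: PUE relates total facility energy to IT energy, and WUE relates site water usage to IT energy. The following Python sketch uses entirely hypothetical annual figures to show how both are derived:

```python
# Illustrative PUE/WUE calculation with made-up annual figures.
# PUE = total facility energy / IT equipment energy (dimensionless, >= 1.0)
# WUE = annual site water usage / IT equipment energy (litres per kWh)

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 would mean zero cooling/overhead energy."""
    return total_facility_kwh / it_equipment_kwh

def wue(site_water_litres: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness in litres per kWh of IT energy."""
    return site_water_litres / it_equipment_kwh

# Hypothetical annual values for a mid-sized facility:
it_energy = 8_000_000          # kWh consumed by IT equipment
facility_energy = 11_200_000   # kWh consumed by the whole site
water_used = 12_000_000        # litres of water consumed on site

print(f"PUE: {pue(facility_energy, it_energy):.2f}")   # 1.40
print(f"WUE: {wue(water_used, it_energy):.2f} L/kWh")  # 1.50
```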
A data centre produces enormous amounts of heat because every processor, memory bank, and network module converts electrical energy into heat. A refrigeration cycle ensures that this heat is absorbed, transported, and safely dissipated to the outside in a controlled manner – similar to a refrigerator, but at a significantly higher performance level. In essence, it functions as a transport system for thermal energy.
Whether based on water cooling or air cooling, the refrigeration cycle operates continuously in a data centre, around the clock and without interruption. Even small temperature deviations can lead to performance losses or place excessive thermal stress on IT hardware. A stable refrigeration cycle ensures that operating temperatures remain within safe limits at all times.
A refrigeration cycle consists of four components: the evaporator, compressor, condenser, and expansion valve. These components are found not only in data centres, but in almost every classic refrigeration system. Together, they implement the thermodynamic principle of cooling.
The evaporator is the point at which heat from the data centre enters the refrigeration cycle. Via air or water cooling coils, the refrigerant absorbs IT waste heat and gains sufficient energy to change from a liquid to a gaseous state. This evaporation marks the start of the actual refrigeration process.
The pressure and temperature of the gaseous refrigerant are deliberately increased in the compressor. Only at this higher pressure level can the heat be released again later. The compressor is particularly relevant for data centres, as its operating behaviour and load profile have a significant impact on the energy consumption of the refrigeration cycle and are directly reflected in the PUE calculation.
In the condenser, the refrigerant releases the heat it has absorbed to the environment or into a recooling circuit and condenses back into a liquid state. This component forms the interface to the ambient air or to systems that specifically reuse data-centre waste heat, for example by feeding it into local or district-heating networks.
The expansion valve reduces the pressure of the liquid refrigerant. This lowers its temperature and prepares it to absorb heat again in the evaporator. This closes the cycle.
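These four steps obey a simple first-law energy balance: the heat absorbed in the evaporator plus the compressor work must equal the heat rejected in the condenser, and the ratio of cooling delivered to compressor work is the coefficient of performance (COP). A minimal sketch with purely illustrative numbers:

```python
# Minimal energy balance for the vapour-compression cycle described above.
# Numbers are illustrative only; real chillers are rated under standard conditions.

q_evaporator_kw = 500.0   # heat absorbed from the IT water loop (cooling capacity)
w_compressor_kw = 125.0   # electrical power drawn by the compressor

# First law: everything absorbed plus the compressor work
# must be rejected in the condenser.
q_condenser_kw = q_evaporator_kw + w_compressor_kw

# Coefficient of performance: useful cooling per unit of compressor work.
cop = q_evaporator_kw / w_compressor_kw

print(f"Heat rejected at condenser: {q_condenser_kw:.0f} kW")  # 625 kW
print(f"COP: {cop:.1f}")                                       # 4.0
```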
In data centres, two circuits work together: the refrigerant-based refrigeration circuit within the chiller and a water-based secondary circuit that dissipates heat from the server rooms. These systems are supplemented by computer room air conditioners (CRAC), computer room air handlers (CRAH devices), and recoolers located in the outdoor area.
In the chiller, the refrigerant passes through the four components of the refrigeration circuit: evaporator, compressor, condenser, and expansion valve.
The pressure and temperature change in defined stages. At the same time, the key phase transitions take place: from liquid to gaseous in the evaporator, and from gaseous to liquid in the condenser.
The water-based secondary circuit works in tandem with the primary refrigerant circuit. It transports the IT waste heat from the server rooms to the chiller and returns chilled water to the cooling units.
The heated water flows to the chiller, where it releases its heat in the evaporator and is cooled back to the target flow temperature. The heat rejected in the condenser is transferred to the outdoor environment via recoolers or cooling towers, or fed into heating networks if waste-heat utilisation is planned.
In many systems, free cooling is additionally integrated into the data centre. When outside temperatures are low, outside air or dry coolers take over part of the cooling process and relieve the compressor.
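How such a changeover might be staged can be sketched as a simple mode selector. The thresholds and setpoint below are hypothetical; real controllers also account for humidity, load, and hysteresis:

```python
# Sketch of a simple free-cooling mode selector. Temperature thresholds
# are hypothetical assumptions, not controller defaults.

def cooling_mode(outdoor_temp_c: float, supply_setpoint_c: float) -> str:
    if outdoor_temp_c <= supply_setpoint_c - 5.0:
        # Outdoor air / dry coolers can meet the load alone; compressors off.
        return "full free cooling"
    if outdoor_temp_c <= supply_setpoint_c:
        # Dry coolers pre-cool the return water; compressors trim the rest.
        return "partial free cooling"
    # Too warm outside: the mechanical refrigeration cycle carries the load.
    return "mechanical cooling"

for t in (4.0, 14.0, 28.0):
    print(f"{t:5.1f} °C outdoors -> {cooling_mode(t, supply_setpoint_c=18.0)}")
```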
Refrigeration circuits in data centres are based on the same basic physical principles as conventional refrigeration systems, but they must meet significantly stricter requirements. Thermal loads are higher, tolerances tighter, operating times longer, and the response times shorter.
Modern server architectures and GPU clusters generate considerable amounts of heat. Direct-to-chip cooling and immersion cooling are increasingly becoming the standard for AI and HPC workloads because air cooling alone is often no longer sufficient. The refrigeration circuit must absorb these loads and dissipate them without delay. Classic cooling concepts used in data centres therefore range from air cooling and water-based cooling to free cooling.
In addition, liquid and immersion cooling solutions are also becoming established, particularly in AI clusters and HPC environments. They affect not only the equipment in the server room, but also the requirements for piping networks, operating temperature levels, and recooling systems. Edge and micro data centres are also bringing new cooling concepts into play: compact systems with direct outside air supply or integrated liquid cooling, often located close to production facilities or sites with low network latency.
The highest priority in the data centre is consistency. A refrigeration cycle must not "oscillate", respond sluggishly, or allow unintended temperature peaks. Sensors, hydraulics, pump performance, and control algorithms must therefore form a finely tuned system, because key figures such as PUE and WUE are determined by the optimised interaction of these factors. PUE and WUE then form the objective basis for reliable energy assessments by operators, customers, and regulatory authorities.
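As a toy illustration of why such tuning matters, the following PI loop shows cooling demand being adjusted until the supply temperature settles near its setpoint instead of oscillating. The gains, plant response, and setpoint are placeholder values, not vendor settings:

```python
# Toy PI controller for chilled-water supply temperature.
# All constants are illustrative placeholders.

def pi_step(measured, setpoint, integral, kp=0.5, ki=0.05, dt=1.0):
    error = measured - setpoint              # positive when too warm
    integral += error * dt
    # Normalised cooling demand, clamped to the actuator range 0..1.
    demand = max(0.0, min(1.0, kp * error + ki * integral))
    return demand, integral

temp, integral = 22.0, 0.0                   # start 4 K above an 18 °C setpoint
for step in range(6):
    demand, integral = pi_step(temp, 18.0, integral)
    temp -= 1.5 * demand                     # toy plant: cooling pulls temp down
    print(f"step {step}: demand {demand:.2f}, supply {temp:.1f} °C")
```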
Refrigerants today are caught between technical suitability, safety, life-cycle considerations, and regulation. Three main groups are used in data-centre refrigeration cycles: established fluorinated HFCs, low-GWP HFOs and their blends, and natural refrigerants such as CO₂ and propane.
Since the new F-Gas Regulation (EU) 2024/573 came into force, transition periods and quantity paths have become even more stringent. Many operators are therefore increasingly evaluating CO₂ (R744) or propane (R290) as long-term options.
Refrigerants differ in terms of pressure and flammability; they are classified according to ISO 817 or ASHRAE 34 (A1, A2L, A3). In data-centre refrigeration cycles, substances in classes A1 and A2L are predominantly used because they offer well-defined and manageable safety characteristics. At the same time, refrigerant emissions and life cycle analyses (LCA) are becoming increasingly important. Sustainability reporting now considers not only the GWP of the refrigerant itself, but also its entire life cycle – from production and operation to potential leakage over the system’s lifetime.
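A planning team might encode such a screening step roughly as follows. The GWP figures are approximate 100-year values and serve only to illustrate the shortlist logic, not as engineering data:

```python
# Indicative snapshot of common refrigerant options. Safety classes follow
# ISO 817 / ASHRAE 34; GWP values are approximate 100-year figures and
# are shown only to illustrate a screening step during planning.

REFRIGERANTS = {
    # name:    (safety_class, approx_gwp)
    "R134a":   ("A1",  1430),   # classic HFC, under phase-down pressure
    "R1234ze": ("A2L", 7),      # low-GWP HFO, mildly flammable
    "R513A":   ("A1",  631),    # HFO/HFC blend
    "R744":    ("A1",  1),      # CO2, high operating pressures
    "R290":    ("A3",  3),      # propane, flammable, strict charge limits
}

def screen(max_gwp: float, allowed_classes: set[str]) -> list[str]:
    """Shortlist refrigerants by GWP limit and acceptable safety class."""
    return [name for name, (cls, gwp) in REFRIGERANTS.items()
            if gwp <= max_gwp and cls in allowed_classes]

# Example: A1/A2L only (as is typical in data centres), GWP below 750.
print(screen(max_gwp=750, allowed_classes={"A1", "A2L"}))
# -> ['R1234ze', 'R513A', 'R744']
```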
The EU F-Gas Regulation, the AIM Act in the USA, and other international regulations provide for a gradual reduction in certain HFC refrigerants. For operators, this means that the choice of refrigerant must take the regulatory framework into account and therefore be incorporated at an early stage in the planning of a data centre.
The water-based secondary circuit transports the IT heat to the cooling units and returns chilled water to the server rooms. It is therefore a central component of the data centre's cooling concept, regardless of whether air cooling, direct-to-chip cooling, or immersion cooling is used. The necessary infrastructure is provided by pipe systems; they are essentially the arteries of the cooling circuit in the data centre.
Pipe systems must be able to withstand constant pressure fluctuations and temperature changes over many years without material fatigue. Corrosion-free materials reduce the risk of deposits, leaks, and wear. Low thermal conductivity reduces temperature losses between chillers and air conditioning units.
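The effect of material choice can be illustrated with the conductive resistance of the pipe wall alone (film and insulation effects are ignored here); the dimensions below are assumptions, not product data:

```python
import math

# Conductive resistance of a cylindrical pipe wall per metre of pipe:
# R = ln(r_outer/r_inner) / (2*pi*k). Only the material comparison matters
# here; surface films and insulation are deliberately left out.

def wall_resistance_per_m(k_w_mk: float, r_inner_m: float, r_outer_m: float) -> float:
    return math.log(r_outer_m / r_inner_m) / (2 * math.pi * k_w_mk)

r_in, r_out = 0.050, 0.056   # m; assumed 100 mm bore with a 6 mm wall

for name, k in (("carbon steel", 50.0), ("polypropylene", 0.24)):
    r = wall_resistance_per_m(k, r_in, r_out)
    print(f"{name:13s}: k = {k:5.2f} W/(m*K), wall resistance {r:.4f} (m*K)/W")

# The polypropylene wall resists heat flow roughly 200x more than the same
# steel wall, which is what "low thermal conductivity reduces temperature
# losses" means in practice.
```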
In complex, densely populated buildings such as data centres, pipe systems must be safe, space-saving, and quick to install. PP-R and PP-RCT piping systems such as aquatherm blue meet these requirements: they are corrosion-free, pressure- and temperature-resistant, lightweight, and permanently leak-proof thanks to fused, material-bonded connections.
As a result, they support the technical operational safety of the secondary circuit and offer a durable solution for closed cold water networks – including applications combined with free cooling or PV-supported cooling.
A refrigeration circuit can only perform its task reliably if it is continuously monitored and professionally maintained. In data centres, impending malfunctions must be detected early and deviations corrected immediately, ideally before a fault occurs, because availability has the highest priority in our digital world.
Important parameters include pressure, temperature, and flow in the cooling and water circuits, as well as the characteristic curves of the compressor. Modern data centres collect this data via sensors and consolidate it in building management systems (BMS) or DCIM systems. Digital twins and AI-based control systems ("predictive cooling") are increasingly being used to optimise cooling circuits in real time and respond proactively to changing loads or environmental conditions.
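A minimal sketch of the kind of threshold rule a BMS or DCIM layer might evaluate over consolidated readings; the tag names and limits here are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sensor points and operating windows; real systems use the
# point names and limits defined in their BMS/DCIM configuration.

@dataclass
class Reading:
    tag: str        # sensor point, e.g. "CHW_SUPPLY_TEMP"
    value: float
    unit: str

LIMITS = {
    "CHW_SUPPLY_TEMP": (14.0, 20.0),   # °C, chilled-water supply window
    "LOOP_PRESSURE":   (2.0, 6.0),     # bar
    "LOOP_FLOW":       (40.0, 120.0),  # m3/h
}

def check(readings: list[Reading]) -> list[str]:
    alerts = []
    for r in readings:
        low, high = LIMITS.get(r.tag, (float("-inf"), float("inf")))
        if not low <= r.value <= high:
            alerts.append(f"{r.tag} out of range: {r.value} {r.unit}")
    return alerts

sample = [Reading("CHW_SUPPLY_TEMP", 21.3, "°C"),
          Reading("LOOP_PRESSURE", 4.1, "bar")]
print(check(sample))   # -> ['CHW_SUPPLY_TEMP out of range: 21.3 °C']
```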
Regular leak testing, inspections of connections, cleaning of heat exchangers, and functional tests of pumps, valves, and sensors are integral parts of a structured maintenance concept. The goal is stable, reliable operation – not the pursuit of marginal, purely theoretical efficiency gains.
In addition to monitoring and maintenance, redundancy is essential in data centres, not only as a safeguard against technical failures but also as protection against sabotage or cyber attacks. This applies to racks as well as to critical infrastructure such as pumps, heat exchangers, and pipe systems.
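A back-of-the-envelope calculation shows why redundancy pays off: assuming independent failures and that a single unit can carry the load, the probability that all parallel pumps fail at once shrinks geometrically. The 99% per-unit availability below is an illustrative assumption:

```python
# Effect of parallel redundancy under the simplifying assumptions of
# independent failures and one unit being sufficient to carry the load.

def parallel_availability(unit_availability: float, units: int) -> float:
    """Probability that at least one of `units` independent units is running."""
    return 1.0 - (1.0 - unit_availability) ** units

for n in (1, 2, 3):
    a = parallel_availability(0.99, n)
    print(f"{n} pump(s): {a:.6f} available")   # 0.990000, 0.999900, 0.999999
```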
The refrigeration cycle forms the thermal foundation of modern data centres. It combines the physical principles of refrigeration with the technical infrastructure that keeps server rooms stable and operational. Evaporators, compressors, condensers, and expansion valves interact precisely with each other, enabling the heat generated to be absorbed in a controlled manner and dissipated safely.
The coordinated interaction of the refrigeration cycle, the water-based secondary network, suitable piping systems, modern control technology, and – where feasible – waste-heat utilisation in heating networks ensures that temperatures remain within the required range at all times. Monitoring, maintenance, and increasingly AI-supported operational management contribute to system stability and help meet regulatory requirements for efficiency, reporting, and heat reuse.
A well-coordinated refrigeration cycle is a key prerequisite for the availability, planning reliability, and future viability of data centres – from large hyperscale campuses to edge and micro data centres.
Talk to our experts.
aquatherm supports you from design and component selection through to commissioning, combining industry-specific know-how with international project experience.