Why Data Center Efficiency Gains Have Stalled Since 2018

Data center efficiency is measured by Power Usage Effectiveness (PUE): the total energy consumed by a facility (IT equipment plus overhead such as cooling, power distribution, and lighting) divided by the energy consumed by the IT equipment alone. A PUE of 1.0 means that, in essence, no energy is wasted; all of it goes to the IT equipment, with none lost to facility overhead. Data center operators typically strive for PUE levels as close to 1.0 as possible.
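
To make the math concrete, here is a minimal sketch of the calculation in Python, with hypothetical meter readings (the 1,000 kW and 500 kW figures are purely illustrative):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_kw

# Hypothetical facility: servers draw 1,000 kW, while cooling, power
# distribution, and lighting add another 500 kW of overhead.
it_load_kw = 1000.0
overhead_kw = 500.0

print(pue(it_load_kw + overhead_kw, it_load_kw))  # -> 1.5
```

In this hypothetical facility, every watt delivered to the servers costs another half-watt of overhead.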

For a decade, data centers made great progress reducing PUE levels. From 2007 to 2017, the average PUE in data centers fell from 2.5 to 1.58. Unfortunately, data center PUE has flatlined since. According to the Uptime Institute’s Global Data Center Survey 2021, the average data center PUE globally is now 1.57. This means facility functions add nearly 60% to the energy use of IT. 

Why did we make such great progress for so long, only to suddenly stall out?

What’s Stopping Our Progress 

The bulk of it has to do with cooling. Most data centers still use mechanical air cooling systems, which are essentially large air conditioning units that continuously blow cold air over the servers. Air cooling systems can represent up to 50% of a data center’s energy consumption.

Although air cooling systems are inherently inefficient and costly to operate, they have seen a lot of innovation and improvement over the last decade or so. Modern data center energy management systems can intelligently monitor and control cooling efficiency and power consumption in real time. Smart sensors can detect energy waste and inefficiency and trigger real-time adjustments, such as modulating variable-speed fans.

Air cooling systems also once relied on single-stage compressors and fans that ran at either 100% speed or 0%; they were either on or off. From 2008 to 2018, most data centers switched to inverter-driven compressors and variable-speed fans, which adjust cooling capacity and airflow based on need. If the servers can remain functional at only 30% fan speed, the fans will automatically throttle down accordingly. Smarter systems can also draw on historical data to predict future temperatures and pressures, optimizing energy efficiency.
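
The payoff from variable-speed fans is bigger than it might look, because fan power scales roughly with the cube of fan speed (the fan affinity laws). Here is an idealized sketch; real fans deviate from the pure cube law due to motor and drive losses:

```python
def fan_power_kw(rated_kw: float, speed_fraction: float) -> float:
    """Approximate fan power via the cube-law affinity relation (idealized)."""
    return rated_kw * speed_fraction ** 3

rated_kw = 10.0  # hypothetical fan wall rated at 10 kW at full speed
for speed in (1.0, 0.5, 0.3):
    print(f"{speed:.0%} speed -> {fan_power_kw(rated_kw, speed):.2f} kW")
# 100% speed -> 10.00 kW
# 50% speed -> 1.25 kW
# 30% speed -> 0.27 kW
```

Running at 30% speed draws under 3% of rated power, which is why variable-speed drives save so much energy.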

However, you can only improve air cooling systems so much. Even with optimized energy efficiency, the amount of power required to cool data centers with air cooling is still increasing.

There are a few reasons for this. First and foremost are rising transistor counts, coupled with the mainstream adoption of data-heavy technologies such as artificial intelligence (AI), machine learning, edge computing, the Internet of Things (IoT) and blockchain mining. The chips and processors needed to power these technologies are too energy dense and generate too much heat to be cooled efficiently with air.

For example, in April 2021 Cerebras released its new WSE-2 chip, which boasts 2.6 trillion transistors and 850,000 AI-optimized cores, and draws 23 kW of power. Most air cooling systems in data centers are designed for lower rack densities and can only handle about 8 kW to 12 kW per rack. These next-level chips are simply too powerful for air cooling to handle, no matter how advanced the air cooling system may be.
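
A quick back-of-the-envelope check with the numbers above makes the mismatch obvious (taking 12 kW as the generous end of the air-cooled rack budget):

```python
# Toy comparison using the figures cited above.
wse2_kw = 23.0             # reported power draw of a Cerebras WSE-2
air_cooled_rack_kw = 12.0  # upper end of a typical air-cooled rack budget

print(f"A single WSE-2 is {wse2_kw / air_cooled_rack_kw:.1f}x "
      f"the entire cooling budget of an air-cooled rack.")
# -> A single WSE-2 is 1.9x the entire cooling budget of an air-cooled rack.
```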

Air Cooling Is on Its Last Legs

Innovation stalls when you reach the limits of improvement. Gas engines are a good example: their lifespan, fuel efficiency and emissions all improved significantly over recent decades, but the dramatic gains have long since dried up, and several automakers are now phasing out internal combustion and pivoting to electric vehicles. Like gas engines, air cooling systems are reaching the limits of energy efficiency.

Over the last five years or so, many data centers have started using adiabatic assist to squeeze more capacity and efficiency out of their cooling systems. This essentially means spraying water onto the condenser coils of air cooling systems so that it evaporates and carries heat away. While this method improves chiller efficiency and capacity, it does so at the cost of high water consumption. Water Usage Effectiveness (WUE) in data centers is also a growing concern. It is estimated that data centers worldwide consumed over 660 billion liters of water in 2020 alone, and the average data center uses enough water to fill an Olympic-sized swimming pool every two days.
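
Like PUE, WUE is a simple ratio: liters of site water consumed per kilowatt-hour of IT energy. Here is a back-of-the-envelope sketch, assuming a hypothetical 30 MW facility and the standard 2.5-million-liter Olympic pool volume (both figures are assumptions for illustration):

```python
def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of site water per kWh of IT energy."""
    return water_liters / it_energy_kwh

OLYMPIC_POOL_LITERS = 2_500_000          # standard 50 m competition pool
daily_water_l = OLYMPIC_POOL_LITERS / 2  # "a pool every two days"
it_load_kw = 30_000                      # hypothetical 30 MW IT load

daily_it_kwh = it_load_kw * 24
print(f"WUE ~ {wue(daily_water_l, daily_it_kwh):.2f} L/kWh")
# -> WUE ~ 1.74 L/kWh for this hypothetical facility
```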

2-Phase Immersion Cooling: The Key to PUE

When it comes to cooling, there’s no free lunch. Smart systems and adiabatic assist can only improve air cooling systems so much. To break the stall and push PUE levels as close to 1.0 as possible across the globe, air cooling needs to be phased out and replaced. The most advanced cooling system that can deliver the lowest PUEs while lowering costs, shrinking footprint, and using as little water as possible is 2-phase immersion cooling (2PIC).

With 2PIC, servers are immersed in a dielectric (nonconductive) fluid. The fluid absorbs the heat generated by the servers until it reaches its boiling point. The transition from liquid to gas is called a phase change: “phase one” of the process.

The second phase change is the gas condensing back into a liquid. Vapor from the boiling fluid condenses on a cooling coil placed just above the fluid surface. The condensed fluid collects on the coil and falls back into the tank, where the cycle repeats continuously. The process is self-contained, and the fluid itself rarely has to be maintained or replaced. If IT gear is removed from the DataTanks™, the fluid evaporates quickly and cleanly, making the IT equipment very easy to service.
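
How much heat a tank can shed this way comes down to the fluid’s latent heat of vaporization: every kilogram that boils off carries a fixed amount of energy up to the condenser coil. A rough sketch, assuming a latent heat of about 90 kJ/kg, in the ballpark of common two-phase dielectric fluids (the exact value varies by fluid and is an assumption here):

```python
def boil_off_rate_kg_s(heat_load_kw: float, latent_heat_kj_kg: float) -> float:
    """Fluid mass vaporized per second to absorb a given heat load (kW = kJ/s)."""
    return heat_load_kw / latent_heat_kj_kg

LATENT_HEAT_KJ_KG = 90.0  # assumed; varies by dielectric fluid
heat_load_kw = 100.0      # hypothetical server heat load in one tank

rate = boil_off_rate_kg_s(heat_load_kw, LATENT_HEAT_KJ_KG)
print(f"{rate:.2f} kg/s of vapor carries away {heat_load_kw:.0f} kW")
# -> 1.11 kg/s; at steady state the coil condenses and returns the same mass
```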

2PIC allows data centers to run the highest density chips without large, bulky, expensive cooling systems. LiquidStack DataTanks™ can be equipped with high-performance condensers rated at up to 500 kW per flat rack. Entire racks of high-power chips, like discs in a jukebox, can be submerged and cooled, enabling a staggering amount of compute power at a significantly lower cost. Even at very high rack densities, 2PIC keeps temperatures stable because the boiling point of the fluid never changes at a given pressure.

2PIC also doesn’t require compressors, one of the biggest energy hogs, or any fans to move air around. It eliminates the need for adiabatic assist as well, because no water has to be consumed or evaporated to reject heat. In fact, 2PIC doesn’t consume water at all: its operating temperatures are much higher and closer to the temperature of the chip than air cooling’s, so the system can reject heat to the ambient air without water as an additional transport vehicle.

Data center efficiency gains have stalled since 2018, but 2PIC will ensure this slump is only temporary.
