With the rise of new technologies for data centers, and entirely new types of data centers in all manner of locations, it’s not surprising that systems and procedures change over time. When it comes to cooling efficiency, the question of liquid cooling vs air cooling in data centers looms large, and air cooling has long been the dominant approach.
But what’s the real difference and which one will have a greater, more positive effect on your business? This article will explore that difference as well as the specific situations that call for different cooling methods.
For a long time, air cooling has been the preferred method for electronics, especially as computer components have decreased in size over time. And for the most part, air cooling is still a viable option.
Given low equipment densities and affordable energy prices, blowing cold air across electronics generally works. At the design level, engineers have worked to improve heat sinks and heat exchangers for CPUs. But equipment densities keep being pushed higher, making better cooling methods imperative.
Particularly as energy prices have increased, the inefficiency of air cooling has come under scrutiny. The trick is to save energy without losing cooling power; everyone wants to be as energy efficient as possible.
Some solutions include more targeted air circulation guided by thermal imaging and computational fluid dynamics. Often it’s as simple as re-examining basic data center architecture. But despite its popularity, the flaws in air cooling are showing more and more.
Not Enough Energy to Go Around
Simply put, air is just not a very effective heat transfer medium, and that’s becoming more and more apparent across the industry. Air cooling usually relies on large fans, but most data centers are strapped for space as it is, with equipment continually increasing in density.
That leaves less room for fans, which in turn leaves data centers unable to move cool air through equipment at an efficient rate. Not to mention the extra need for fins on hot spots to help transfer heat away from its source and into the air.
Standard computer room air conditioning often isn’t enough anymore. Add in the rising cost of energy, and the amount of energy required to actually chill the air and push it through a data center, and the result is rather expensive.
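The scaling works against air here: by the fan affinity laws, a fan’s power draw grows with roughly the cube of its airflow. A minimal, idealized sketch (real fan curves and duct losses will differ):

```python
# Fan affinity laws (idealized): airflow scales linearly with fan speed,
# static pressure with speed squared, and power draw with speed cubed.

def relative_fan_power(flow_multiplier: float) -> float:
    """Power draw relative to baseline when airflow is scaled by `flow_multiplier`."""
    return flow_multiplier ** 3

# Doubling airflow to keep up with denser racks costs ~8x the fan energy,
# and even a 50% increase already costs ~3.4x:
print(relative_fan_power(2.0))
print(round(relative_fan_power(1.5), 1))
```

That cubic relationship is why simply spinning fans faster quickly becomes the most expensive way to buy extra cooling capacity.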
Utilizing water may be a more cost-effective approach moving forward. This realization alone has helped boost liquid cooling as a viable option for cooling needs in data centers. And it may very well be the future industry standard.
Though air cooling has long been more common, liquid cooling is on the rise. As technology becomes more advanced, certain situations require more power, and more power often requires more efficient cooling.
Generally speaking, however, general-purpose CPU performance hasn’t improved drastically in the last decade or so. At the data center level, improvements to individual systems instead come from accelerator processors like graphics processing units (GPUs), application-specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs).
Hardware accelerators are often used in machine learning, but industry trends suggest they’ll be used more often in IT services like:
- Data mining
- Video/live media
- Engineering simulation
- Load balancing
- Fraud detection
Hardware accelerators like GPUs need a much higher rate of cooling than standard-performance CPUs because of their significantly higher thermal design power (TDP). A demand for more powerful chips means a demand for better cooling methods.
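To put rough numbers on that, the steady-state heat balance Q = ṁ·c_p·ΔT tells you how much airflow is needed to carry a chip’s heat away. A back-of-the-envelope sketch, where the wattages and air properties are illustrative assumptions rather than vendor specs:

```python
# Estimate the airflow needed to remove a chip's heat with air alone.
# Constants are textbook values near room temperature, not measured data.

AIR_DENSITY = 1.2           # kg/m^3 at ~20 C, sea level (assumed)
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K) (assumed)

def airflow_m3_per_hr(tdp_watts: float, delta_t_kelvin: float) -> float:
    """Volumetric airflow required to carry away `tdp_watts` of heat,
    given an allowed inlet-to-outlet air temperature rise of `delta_t_kelvin`."""
    mass_flow = tdp_watts / (AIR_SPECIFIC_HEAT * delta_t_kelvin)  # kg/s
    return mass_flow / AIR_DENSITY * 3600                          # m^3/hr

# A hypothetical ~700 W accelerator vs a ~150 W CPU, both at a 10 K air rise:
print(round(airflow_m3_per_hr(700, 10), 1))
print(round(airflow_m3_per_hr(150, 10), 1))
```

The required airflow scales linearly with TDP, so an accelerator several times hotter than a CPU needs several times the air pushed past it, which is exactly where dense racks run out of room and fan power.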
Accelerators help cut down on problems like rack density in data centers. A GPU paired with an Intel processor can plausibly provide nearly three times the compute density of a CPU-only configuration. Previously, many racks simply ran at higher density for their workload rather than qualifying as true high-performance computing.
Immersion cooling works well with both GPUs and CPUs, making liquid cooling an effective method when using accelerators like GPUs to cut down on rack density.
Cooling the Edge
In recent years, the demand for better, faster performance from data centers has pushed for a new type of data center. These new facilities exist and operate on the network edge. They can sit in remote locations, engineered for machine learning and filled with high-density computer hardware.
Heavy workloads in confined, remote spaces without many onsite workers require better, more efficient cooling methods. Not all edge data centers will require liquid cooling, but it offers the opportunity for lower energy consumption.
That means more edge data centers could be deployed in areas with limited power sources. Plus, traditional air cooling simply won’t always be practical in more remote areas.
Did you know that Microsoft has an underwater data center? Learn more from our blog on Everything you need to know about the Microsoft Underwater Data Center.
High-Density Storage Cooling
Efficiently cooling storage can be a difficult task. But as storage density continues to increase, liquid cooling may offer a solution. Though non-sealed hard disk drives cannot be liquid cooled (and they make up a big part of installed storage in data centers), newer trends offer a way forward.
Most newer generations of storage hardware require sealed units since they’re helium-filled. Because they’re sealed, they’re safe for liquid cooling. Additionally, solid-state drives (SSDs) can be cooled with full-immersion solutions.
With that in mind, the need to separate air-cooled storage from liquid-cooled processing would essentially be eliminated. Plus the effects of heat and humidity on components can be reduced by immersing drives in cooling fluids.
Many approaches to liquid cooling are catchall solutions, which can make it difficult to find the right option or path for a full move to liquid cooling. Rather than adopting it on a case-by-case basis and risking potential shutdowns and setbacks, the transition needs to be a manageable, feasible operation.
But since water has somewhere between 50 and 1,000 times the heat-removal capacity of air, depending on the cooling method, the decision is close to a no-brainer. Every data center is different, but operators should at least look into the potential of a switch. Given a smooth transition, implementing liquid cooling should be cost effective and non-disruptive.
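The 50-to-1,000x range reflects practical systems; the raw thermodynamics are even more lopsided. A quick sketch comparing how much heat a unit volume of each fluid can absorb per degree of temperature rise, using assumed textbook properties near room temperature:

```python
# Compare the volumetric heat capacity (J per m^3 per K) of water and air.
# Property values are textbook approximations, not measured facility data.

WATER = {"density": 998.0, "specific_heat": 4186.0}  # kg/m^3, J/(kg*K)
AIR   = {"density": 1.2,   "specific_heat": 1005.0}

def volumetric_heat_capacity(fluid: dict) -> float:
    """Joules of heat absorbed per cubic meter of fluid per 1 K rise."""
    return fluid["density"] * fluid["specific_heat"]

ratio = volumetric_heat_capacity(WATER) / volumetric_heat_capacity(AIR)
print(f"water absorbs ~{ratio:,.0f}x more heat per unit volume than air")
```

Real systems never capture that full ratio, since pumps, plates, and fluid choice all impose losses, but it shows why even an imperfect liquid loop beats moving air.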
It might be easier to use forced air in the short term, especially because a transition to liquid cooling can be more expensive at first. But factor in the long-term effectiveness of liquid cooling, lower energy costs, and better data center efficiency, and the answer is clear.
The point is, every data center’s situation is unique. However, most data centers are facing the same obstacles when it comes to improving operational efficiency, particularly when it comes to liquid cooling vs air cooling. Taking a closer look at your data center’s current situation and future plans will help you decide on what cooling options are best for you moving forward.
And if you find that any cooling improvements will come with an upgrade or renovation to your data center, you may require a full-on data center decommission. In that case, be sure to enlist the help of a certified IT asset disposition company. At Exit Technologies, we offer full IT equipment services, including asset recovery, network equipment sales and recycling, data erasure, and complete data center decommission services.
Have something to add? Let us know your thoughts in the comments below!