While the technology is moving at a rapid pace, it is important to understand how the data center's components have evolved, particularly concepts like airflow management and data center cooling. Formal best practices for airflow management were only established relatively recently.
The conversation first surfaced in the 1990s, when the then-standard practice of organizing computer rooms with all racks facing the same direction proved impractical. Data center engineers introduced the hot-aisle/cold-aisle arrangement of server racks, and once the value of separating hot and cold air was recognized, a body of best practices emerged to maximize the benefits of that separation.
In 2005, Intel and Oracle reported on several case-study projects in which they deployed server cabinets using vertical exhaust ducts, or chimneys, connecting the cabinets to a suspended-ceiling return-air path. This completely separated the return air from the rest of the data center. While the studies highlighted the effectiveness of the cooling and the potential for higher rack density, the most notable finding was measured evidence of lower cooling energy costs.
Soon after the report, in June 2006, Lawrence Berkeley National Laboratory reported on a cold-aisle containment experiment at the National Energy Research Scientific Computing Center in Oakland, California. The experiment produced significant savings in cooling-unit fan energy, increased economizer hours, and reduced chiller plant energy at a higher set point. From that point, the conversation on data center airflow management shifted its focus from effectiveness to efficiency.
During the 1990s and 2000s, operators and designers worried whether air-cooling technologies could keep pace with increasingly power-hungry servers. With design densities exceeding five kilowatts per cabinet, some believed operators would have to turn to rear-door heat exchangers and in-row cooling to handle the rising densities.
For decades, computer rooms and data centers have used raised-floor systems to deliver cold air to servers. Cold air from a computer room air handler (CRAH) or computer room air conditioner (CRAC) pressurizes the space under the raised floor, and perforated tiles let that air leave the plenum and enter the main space. After passing through the servers, the heated air returns to the CRAH/CRAC to be cooled, often mixing with cold air along the way. This was the most common data center design for many years, and it is still used today.
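The physics behind this design can be sketched with a back-of-the-envelope calculation: the airflow a rack needs is fixed by its heat load and the temperature rise allowed across the servers (Q = m·cp·ΔT). The 5 kW load and 12 K rise below are illustrative values only, and the air properties are approximate room-temperature figures.

```python
# Rough sketch: how much airflow is needed to carry away a rack's heat load.
# Assumed properties: air density ~1.2 kg/m^3, specific heat ~1005 J/(kg*K).

AIR_DENSITY = 1.2         # kg/m^3, near room temperature
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def required_airflow_m3_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow so that air leaves the rack delta_t_k warmer
    than it entered (Q = m_dot * cp * dT, then divide by density)."""
    mass_flow = heat_load_w / (AIR_SPECIFIC_HEAT * delta_t_k)  # kg/s
    return mass_flow / AIR_DENSITY                             # m^3/s

flow = required_airflow_m3_per_s(5000, 12)  # a 5 kW cabinet, 12 K rise
print(f"{flow:.2f} m^3/s (~{flow * 2118.88:.0f} CFM)")
```

Doubling the cabinet density doubles the required airflow at the same temperature rise, which is why densities beyond a few kilowatts per cabinet began to strain raised-floor delivery.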
But will it still be effective for the next-generation workloads and server designs?
The Demand For Improved Data Center Cooling And Server Design
The concept of server cooling is straightforward: heat must be removed from the electrical components of servers and other IT equipment to keep them from overheating. If a server gets too hot, its onboard logic shuts it down to prevent damage. Along with heat, operators also have to worry about particle contamination.
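The onboard protection described above can be sketched as a simple control policy. The thresholds and function below are hypothetical illustrations, not any vendor's actual firmware behavior; real servers implement this in hardware and firmware.

```python
# Minimal sketch of thermal protection logic: throttle first, then shut
# down before the silicon is damaged. Thresholds are assumed values.

THROTTLE_C = 85   # assumed temperature at which clocks are reduced
SHUTDOWN_C = 95   # assumed emergency power-off temperature

def thermal_action(cpu_temp_c: float) -> str:
    """Return what a simple onboard protection loop might do."""
    if cpu_temp_c >= SHUTDOWN_C:
        return "shutdown"   # cut power to prevent permanent damage
    if cpu_temp_c >= THROTTLE_C:
        return "throttle"   # reduce clock speed to shed heat
    return "normal"

print(thermal_action(70), thermal_action(88), thermal_action(97))
```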
Some analytics and big data servers are extremely sensitive to such contamination. Beyond physical particulates, there are also threats from gaseous contamination: certain gases corrode electronic components. Conventional cooling systems will still have their place in the data center, but new workloads need a better way to cool the servers they run on.
Increasing Adoption of Liquid Cooling
There was a time when liquid cooling was viewed as a complication that made data centers harder to operate. With new design considerations and data center architectures, liquid cooling has taken a new form that is far more consumable than before. Solution providers like STL Tech now offer liquid cooling as holistic turnkey packages, including purpose-built liquid cooling platforms, software, and components.
Furthermore, administrators are adopting plug-and-play liquid cooling structures that fit efficiently into modern data center architectures. These designs are being used across areas including artificial intelligence, machine learning, edge and smart-city deployments, oil and gas, HPC, VDI, application delivery, research and education, financial services, modeling and rendering, CAD, and gaming.
Given modern data center designs and the systems they must integrate with, conventional computer room air conditioning is no longer sufficient, and rising energy costs make supporting advanced use cases expensive. Liquids conduct and absorb heat far better than air, which means that even at room temperature, a liquid can provide better cooling than cold air.
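The advantage of liquid over air can be made concrete by comparing how much heat a given volume of each can absorb per degree of temperature rise. The property values below are approximate room-temperature figures for dry air and water, used here purely as a back-of-the-envelope illustration.

```python
# Back-of-the-envelope comparison: heat absorbed per cubic metre per
# kelvin (volumetric heat capacity) for air vs. water.
# Approximate room-temperature properties are assumed.

def volumetric_heat_capacity(density_kg_m3: float,
                             specific_heat_j_kg_k: float) -> float:
    """J/(m^3*K): heat a cubic metre absorbs per kelvin of warming."""
    return density_kg_m3 * specific_heat_j_kg_k

air = volumetric_heat_capacity(1.2, 1005)    # ~1.2e3 J/(m^3*K)
water = volumetric_heat_capacity(998, 4186)  # ~4.2e6 J/(m^3*K)
print(f"water absorbs ~{water / air:.0f}x more heat per unit volume")
```

A ratio in the thousands is why a thin trickle of coolant can do the work of a large volume of chilled air, and why liquid cooling scales to densities that raised-floor air delivery cannot reach.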