Data Center Heat Energy Re-Use Part 1: Ship the Hot Air Next Door
Is it possible to recycle some data center energy to achieve a PUE less than 1.00? Over the years I have heard such claims in various sales presentations, coffee corner casual asides and even unretracted press releases. Such claims are invariably followed by wise nods, knowing winks and, “Well, somebody doesn’t understand basic PUE math.” They usually derive either from obtaining off-grid electricity or from producing some energy from the ICT ΔT to perform some other task. Neither case changes the basic math: total power into the data center divided by IT power consumed is always 1.00 or higher, unless the ICT thermal output were converted back into energy that supplies a portion of the power required to operate our ICT equipment, thereby subtracting that portion from total consumption. That result, ICT equipment energy use exceeding total data center energy use, gets us a PUE less than 1.00, and also gets us into the realm of physics-defying fantasy.
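To make that arithmetic concrete, here is a minimal sketch in Python; the kilowatt figures are illustrative assumptions, not measurements from any real facility:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Classic PUE: total facility power divided by IT power."""
    return total_facility_kw / it_kw

# Illustrative assumption: a 1,000 kW IT load plus 400 kW of cooling
# and power-conversion overhead.
it_load = 1000.0
overhead = 400.0
print(pue(it_load + overhead, it_load))   # 1.40 -- never below 1.00

# The quotient only drops below 1.00 if energy recovered from the ICT
# thermal output is netted against total consumption -- here a wildly
# hypothetical 500 kW recovered and fed back to the facility:
recovered = 500.0
print(pue(it_load + overhead - recovered, it_load))  # 0.90 -- fantasy land
```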
Does that mean there is no value in mining the heat energy produced by our ICT equipment? I assert that it definitely does not. After all, we have already recognized the great value in actually increasing PUE by dramatically reducing the divisor of our PUE equation through virtualization, turning off comatose servers, migrating from Windows to Linux, and so on (a quick back-of-the-envelope after the list below illustrates the effect). The PUE rock showing above the water level may be bigger, but the sea level itself is lower (hence the ongoing discussion about new and improved metrics). Such is the case with looking for ways to perform useful work with the thermal energy produced by the work performed by our ICT equipment. While we may not earn ourselves a bragging-rights PUE decimal fraction, perhaps we can improve our organization’s bottom line and contribute positively to the planet’s health. Back in October 2011, MIT Technology Review published an article by Neil Savage, “Greenhouse Effect: Five Ideas for Re-using Data Centers’ Waste Heat.” The five examples he cites actually represent five general strategies, and I find them to be a useful jumping-off point for exploring developments over the subsequent eight and a half years. The ideas were:
A Notre Dame University data center heated a greenhouse.
Oak Ridge National Laboratory developed a mechanism that affixed to a microprocessor and produced electricity.
A Syracuse University data center produced its own electricity and used excess cold water for air conditioning an adjacent office building in summer and excess hot water to heat it during the winter.
An IBM research data center in Zurich used warm water liquid cooling and used the warmer “return” water for heating an adjacent lab.
A Telecity data center in Paris provided heat for research experiments on the effects of climate change.
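Here is the back-of-the-envelope promised above, showing how shrinking the divisor raises PUE even as total consumption falls; the load and overhead figures are, again, assumptions chosen purely for illustration:

```python
# Illustrative assumption: virtualization cuts the IT load (the PUE
# divisor) from 1,000 kW to 600 kW, but some cooling/UPS overhead is
# fixed, so overhead only falls from 400 kW to 300 kW.
it_before, overhead_before = 1000.0, 400.0   # kW
it_after, overhead_after = 600.0, 300.0      # kW

print((it_before + overhead_before) / it_before)  # PUE 1.40
print((it_after + overhead_after) / it_after)     # PUE 1.50 -- higher!

# Total draw fell from 1,400 kW to 900 kW: the PUE "rock" looks bigger
# only because the "sea level" (total consumption) dropped.
```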
Each of these specific examples actually represents a different category of data center energy re-use, and I will explore each idea in a series of articles over the next several weeks. I’ll be starting today with the most straightforward – the Notre Dame practice of exploiting the elevated temperature of the data center return air to do some useful work – in this case maintaining an operational greenhouse temperature at the South Bend Greenhouse and Botanical Gardens through the sometimes brutal northern Indiana winter. We call this strategy, “Ship the hot air next door.”
Shipping the hot air next door may, on the surface, seem like the simplest and most straightforward application of data center waste heat re-use, but unfortunately it can get complicated pretty quickly when we try to compensate for its being what we call low grade heat energy. Nevertheless, there are opportunities in special circumstances. Besides the Notre Dame University greenhouse heater, I have read of data center return air being used to heat adjacent office spaces and auditoriums. I can imagine, in the Pacific Northwest where cannabis has been legalized, a potential symbiotic relationship between commercial greenhouse operators and data centers. While most site selection rules tell us to avoid agricultural areas, particularly if we are going to use some kind of free cooling, I’m thinking that rule may not apply to indoor farming. More seriously, John Sasser, Senior Vice President of Data Center Operations at Sabey, Inc., explained to me how they use the waste heat energy from their UPS rooms to significantly reduce the lift required of their block heaters for generator oil and coolant.
Another hypothetical use of data center return air comes from courses on airflow management I taught twenty years ago. One of the first objectives was to get clear on the difference between comfort cooling and data center cooling, since most data centers were still essentially using comfort cooling practices and principles – i.e., stick a thermostat on the wall and dial in a set point somewhere around 70˚F. I would show a CFD model of a hot aisle – cold aisle data center with no other attention to airflow management. There would be random blobs of blue, green and red in both the hot aisles and the cold aisles and I would tell a story about being at home under similar conditions. I would be upstairs in the man cave watching a game on TV maybe in shorts, maybe in boxers; my wife would be downstairs stylishly attired in sweat pants and her Bill Belichick hoodie. That was how most of us were cooling our data centers back then. I explained how I could take the return air from that data center and ship it next door to my data center where I employed all the best practices of airflow management. I could use that return air for cooling and easily be within the ASHRAE recommended server inlet guidelines, which, by the way, were a bit tighter than they are today. Chuckle if you must, but later in this series I’ll present a very viable scenario for a very similar re-use of the chilled water loop.
In general, there are a variety of practical obstacles to profitably re-using the heat energy in our data center waste air. Probably the biggest is that the waste air is typically going to be in the 80˚ to 95˚F range, which prevents it from being particularly useful except in comfort-temperature-range applications in close proximity to the data center. That requirement for proximity is another obstacle, as hot air does not travel well. Yet another is that anyone looking to squeeze out more energy efficiencies after reaching their 1.10 PUE, or whatever that big audacious target may have been, has most assuredly already implemented some form of free cooling and fine-tuned the fine tuning. The dilemma, however, is that the cold time of year, when our free cooling temperatures and associated waste air temperatures are at their lowest, is exactly when our customer (internal or external) is going to be looking to us for the most heat. Conversely, during the summer, when we are cranking out hot return air, our customer may be looking for some heat relief.
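To put a number on “low grade,” consider the Carnot limit on converting that exhaust into useful work; the 70˚F ambient in this sketch is my own assumption for illustration:

```python
def f_to_k(temp_f: float) -> float:
    """Convert degrees Fahrenheit to kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

# 95 F exhaust air working against an assumed 70 F ambient.
t_exhaust = f_to_k(95.0)
t_ambient = f_to_k(70.0)

# Carnot limit: the theoretical maximum fraction of that heat that
# could ever be converted to work.
print(1.0 - t_ambient / t_exhaust)   # ~0.045, i.e., at best ~4.5%
```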
In both these cases, however, there are ways to take advantage of our data center waste heat that may require adding to our facilities engineering tool kit. Heat pumps and absorption chillers may help us bridge the gap from low grade heat energy to at least somewhat useful grade energy. In both cases, heating or cooling, the folks using this design approach typically start by transferring the heat from air to water. Yandex, a large Russian data center operator with facilities around the world, is doing just that in Finland. In their data center in Mäntsälä, hot waste air is piped through heat pumps and air-to-liquid heat exchangers to produce 90˚ to 115˚F water that then requires a short lift to 130˚ to 140˚F, which is usable for local district heating. This particular site has been operational for several years, has reduced the carbon emissions of the city by 40% and has reduced local consumers’ heating prices by 12%. Similar strategies have been deployed by Facebook in Denmark and by numerous facilities in Sweden, including a Digiplex evaporative air-to-air economizer retrofit project in Stockholm projected to provide heating for 10,000 homes. Cooling, on the other hand, can be accomplished by using absorption chillers to convert the heat to cooling, though it can be a stretch to get good performance from an absorption chiller with the air temperatures we typically see. I will discuss absorption chillers in more detail in a later installment covering liquid cooling.
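The economics of that “short lift” come down to heat pump efficiency. Here is a rough sketch of the ideal (Carnot) coefficient of performance for the lift described above, using the article’s temperatures; the assumption that real machines reach about half of Carnot is mine, not a Yandex figure:

```python
def f_to_k(temp_f: float) -> float:
    """Convert degrees Fahrenheit to kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

# Temperatures from the article: ~115 F water off the heat exchangers,
# lifted to ~140 F for the district heating network.
t_source = f_to_k(115.0)   # heat pump cold side
t_sink = f_to_k(140.0)     # heat pump hot side

# Ideal (Carnot) heating COP -- an upper bound, not a machine rating.
cop_carnot = t_sink / (t_sink - t_source)
print(f"Carnot heating COP: {cop_carnot:.0f}")   # ~24

# Assumption: real machines deliver roughly half of Carnot.
cop_real = 0.5 * cop_carnot
print(f"Rough real-world COP: {cop_real:.0f}")   # ~12: each kW of
# electricity delivers on the order of 12 kW of district heat
```

A small temperature lift is the whole game here: the closer the source and sink temperatures, the higher the COP, which is why pre-warming the water to 115˚F with cheap heat exchangers before the heat pumps makes the scheme pay.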
Meanwhile, why is it that my good examples, at least in this section on shipping the hot air next door, all seem to come from Northern Europe? The cooler ambient temperatures, providing access to much larger blocks of free cooling time, have attracted data center companies serious about energy efficiency and pursuing carbon reduction and carbon neutrality goals. More important, however, is probably the prevalence of district heating. For my American readers who may not be familiar with the term, we are merely talking about a heating and cooling district where all the homes or buildings are connected to a single heating and cooling source, versus having individual boilers or furnaces. Each district will either buy heating and cooling from some other supplier (utility) or have its own production facility. The opportunity for the data center is to become that source, profitably, at a lower cost than either the existing vendor or a captive facility. The scale of this opportunity can be illustrated by the fact that data centers now account for over 10% of Sweden’s municipal heating supply. Taking it one step further, Finland actively markets itself as a great place to locate your data center because of the income opportunities associated with selling to these district heating networks. Note, though, that in many public utility jurisdictions, if you are going to sell this energy to your neighbor, you may need to get yourself registered as a utility.
So does that mean that localities and regions with limited district heating networks are relegated to heating greenhouses, generator heater blocks and internal office spaces, and perhaps melting some snow on the sidewalk between the parking lot and the front door, none of which will likely earn a separate profitability line on the company 10-K report? Perhaps, but while shipping the hot air next door may not be a profit center, it can still deliver an internal benefit, and some of these payback periods can be relatively short. Beyond that, there are significant opportunities associated with chilled water loops and various iterations of liquid cooling, which I will cover in the coming weeks.
Ian Seaton
Data Center Consultant