Airflow Management Investments Don’t Always Require Complicated ROI Justification
When the folks at Upsite Technologies asked me to write a piece for their blog library on how to sell airflow management upstream in an organization without having to develop and explain complicated return on investment (ROI) studies and projections, my first knee-jerk reaction (jerk reaction??) was to ask “Why?” After all, I have made a pretty good living developing those exact justifications, and there’s that whole bite-the-hand-that-feeds-you deal. Nevertheless, their explanation was that this very requirement for a detailed cost justification is where a lot of organizations get stuck and end up doing nothing. When “nothing” means omitting the basic elements of good airflow management, such as plugging open holes in the raised floor, filling unused rack-mount spaces, and separating hot aisles from cold aisles, it became readily apparent that sometimes you can just overthink the intuitively obvious. It has always seemed to me that the benefits of good airflow management in the data center are just too intuitively obvious to be ignored; however, there have always been just enough decision-makers with a somewhat less than keen sense of the obvious to keep some of us busy hammering out detailed ROI projections. For the rest of the world, there are the much simpler approaches of dreaming of summer in central Texas or succumbing to peer pressure.
Here in my home town in central Texas we have been experiencing daily highs from 95 to 104°F for the past six to eight weeks – maybe not as hot as Phoenix, but it’s a wet hot. So you just want the decision-maker whose fingers are clutched around the purse strings to relax, close his eyes, and imagine he is in a living room a couple of doors down the street from me. It is around 100° outside, the AC is running with the thermostat set at 70°, all the windows are open, the oven in the kitchen is on broil, and all the burners on the stove are on high. The clothes dryer is tumbling cotton towels. The door to the attic is open. Two cars have just been put in the garage after an all-day drive from Terlingua, and the door between the garage and the house has been left open. The showers are all running hot water and every light in the house is on. What’s going on? It could be that the kids just got home from school for the summer, or maybe we’re trying to imagine what our electric bill would be like without effective data center airflow management.
Cooling a data center without effective airflow management is a lot like having a house full of renegade teenagers in the middle of summer. The scope of wastefulness is similar, though the scale is a bit different – maybe 1,500 kWh for your residential cooling bill in July versus maybe 3.6 million kWh to cool a moderately sized data center in the same month. What are some of the elements of this waste? Without effective airflow management in the data center, we will inevitably have some bypass airflow, that is to say, we will have produced a volume of cold air that gets sent out into the data center and returns without having removed any heat load. Why would we do that? Think about that one hot rack over in the corner that is always giving us thermal alarms. Since we cannot deliver the precise volume of air at the precise temperature precisely where it is needed, we just have to pump up our cooling equipment fans and blast the room. In order to assure our problem rack is cool enough, the rest of the space is over-supplied. Short-cycling all that extra cooling air is akin to opening the window right next to your AC register and blowing that cold air out into the Texas summer, only worse. It is worse because in this case that cold air returns to our cooling equipment, and if it arrives below the set point on the unit’s thermostat, the unit will turn off its cooling and just send that air directly back into the data center. You know the old axiom that nobody ever got fired for over-cooling a data center? Not so fast. Oversupplying overly cold air can actually cause some computer equipment to overheat because of exactly this short-cycling. That short-cycling also reduces the difference between the temperature of the air being pumped out of our cooling equipment and the temperature of the air coming back to the return air intake. That reduction in ΔT effectively reduces the cooling capacity of our cooling units (Google sensible cooling or browse the Upsite blog library for the physics on that relationship), so that 60-ton computer room air handler (CRAH) may be functioning as a 20- or 30-ton air handler. We take a hit there on operating expense, capital expense, and the robustness of our redundancy.
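To see how that derating works, here is a back-of-napkin sketch using the standard sensible cooling approximation for air (roughly 1.08 × CFM × ΔT in BTU/hr); the rated ΔT and airflow figures are illustrative assumptions, not specs from any particular CRAH.

```python
# Back-of-napkin sensible cooling estimate (illustrative numbers only).
# Standard approximation for air near sea level:
#   Q [BTU/hr] ~= 1.08 * CFM * delta_T [deg F]

BTU_PER_TON = 12_000  # 1 ton of cooling = 12,000 BTU/hr

def sensible_tons(cfm: float, delta_t_f: float) -> float:
    """Sensible cooling delivered at a given airflow and return-to-supply delta-T."""
    return 1.08 * cfm * delta_t_f / BTU_PER_TON

# A nominal "60 ton" CRAH rated at an assumed 20 deg F delta-T moves roughly:
rated_cfm = 60 * BTU_PER_TON / (1.08 * 20)  # ~33,300 CFM

# With heavy bypass air, the return air comes back cold and delta-T collapses.
for dt in (20, 10, 7):
    print(f"delta-T {dt:>2} F -> {sensible_tons(rated_cfm, dt):4.1f} tons")
# delta-T 20 F -> 60.0 tons
# delta-T 10 F -> 30.0 tons
# delta-T  7 F -> 21.0 tons
```

Same fans, same coil, but with half to two-thirds of the ΔT eaten by bypass air, the nameplate 60 tons shows up as 20 or 30.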
Hot air re-circulation is another effect of having our kids back home for the summer – our cooling air does its job and removes some heat. Unfortunately, that air is not done. After it has cooled the dining room, it travels through the kitchen on its way back to our return air intake and removes some more heat from the oven baking a pizza and the stove top frying up some French fries. By the time that air gets to us in the family room where we are playing poker and watching the game on TV, it hits us at a refreshing 90°F. The same thing is happening in our data center when we fail to maintain good separation between the fronts of our server equipment and the rears – between the cold side and the hot side. Our response in the data center is pretty much the same as our response at home – we lower the thermostat setting. Even with an ASHRAE recommended maximum server inlet temperature of 80.6°F (Fig. 1), we reduce our set point to maybe 68°F, resulting in a supply air temperature of perhaps 50°F and requiring a chilled water temperature of around 40°F. Every degree on the chiller set point is worth anywhere from 1% to 4% of our total cooling budget, depending on who you are talking to and what kind of equipment you have. Therefore, the temperature-lowering strategy for combating hot spots caused by re-circulation – call it roughly five degrees of unnecessary lowering at 1% to 4% per degree – is going to cost you anywhere from 5% to 20% of the total electrical bill to run the data center. This expense would be completely unnecessary if good airflow management were in place. With this little bit of information, a thorough and complicated ROI case could be made; however, it should also be enough to drive a back-of-the-bar-napkin justification for all manner of data center airflow management – one version of that napkin is sketched out just below Fig. 1.
Fig. 1: ASHRAE server inlet temperature guidelines

Class | Inlet Temperature (°F)
Recommended (A1 to A4) | 64.4 to 80.6
Allowable A1 | 59 to 89.6
Allowable A2 | 50 to 95
Allowable A3 | 41 to 104
Allowable A4 | 41 to 113
Allowable B | 41 to 95
Allowable C | 41 to 104
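Picking up the napkin promised above: here is one way the set-point math might be penciled out. The five degrees of lowering, the 1% to 4% per-degree penalty, the 1 MW of cooling load, and the $0.10/kWh rate are all illustrative assumptions rather than figures from any particular facility.

```python
# Back-of-napkin cost of chasing hot spots with colder set points
# (all figures are illustrative assumptions, not measurements).

degrees_lowered = 5                  # e.g., dropping chilled water from ~45 F to ~40 F
penalty_per_degree = (0.01, 0.04)    # 1% to 4% of cooling energy per degree

low, high = (degrees_lowered * p for p in penalty_per_degree)
print(f"Extra cooling energy: {low:.0%} to {high:.0%}")
# Extra cooling energy: 5% to 20%

# Translate that into dollars for an assumed 1 MW of cooling load
# running year-round at an assumed $0.10/kWh:
cooling_kwh_per_year = 1_000 * 8_760
for share in (low, high):
    print(f"  ~${share * cooling_kwh_per_year * 0.10:,.0f} per year wasted")
#   ~$43,800 per year wasted
#   ~$175,200 per year wasted
```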
Besides the inherent logic of airflow management’s value, perhaps clarified for some by the metaphorical association with the home energy bill for a house full of teenagers in central Texas during the summer, peer pressure may constitute a compelling argument for some members of the specification and procurement decision chain of command. A data center with good airflow management typically represents a competitive advantage over one with ineffective or immature airflow management. Put the comparison in terms that may be meaningful to those decision-makers somewhat removed from the daily fray of BTUs and gigabytes: the company with a 5 MW data center would get the equivalent of a $30 million sales order that the other guy did not get, without having to pay a salesman to actually land that order. (Note: the “complicated” calculation is available on request.) For commercial data centers, that difference translates to either lower costs or higher profits. Other forms of peer pressure come from the government, industry standards bodies, and equipment vendors. The California energy code requires rigid barriers between cold aisles and hot aisles, that is to say, containment, which is the very definition of good airflow management. All data center standards and best practice guidelines either recommend or require maximum, or at least substantial, separation of hot air from cold air. ASHRAE requires that good airflow management be in place before taking any steps to follow its guidelines on using the server classes to optimize operating temperature. Dell has asserted for several years now that, with adequate airflow management, over 90% of the data centers in the US could deploy its servers without any air conditioning.
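The note above keeps the “complicated” calculation off the page, but the gist of that kind of equivalence can be sketched with assumed numbers: price the energy saved, then divide by an assumed profit margin to find the sales revenue that would deliver the same bottom-line dollars. Every input below is an illustrative assumption of mine, not the calculation referenced in the note.

```python
# Illustrative sketch of the "energy savings as equivalent sales" idea.
# All inputs are assumptions chosen for the sake of the example.

it_load_kw = 5_000                  # the 5 MW data center from the paragraph above
pue_before, pue_after = 2.0, 1.5    # assumed improvement from better airflow management
price_per_kwh = 0.10                # assumed utility rate, $/kWh
net_margin = 0.07                   # assumed corporate net profit margin

saved_kw = it_load_kw * (pue_before - pue_after)
annual_savings = saved_kw * 8_760 * price_per_kwh
equivalent_sales = annual_savings / net_margin

print(f"Annual energy savings:  ${annual_savings:,.0f}")
print(f"Equivalent sales order: ${equivalent_sales:,.0f}")
# Annual energy savings:  $2,190,000
# Equivalent sales order: $31,285,714
```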
Now that I have actually written it down, the case for airflow management is pretty compelling, even without the complicated ROI analysis. Fan energy rises with the cube of fan speed and airflow. Therefore, supplying 50% more airflow than is actually needed costs over three times as much fan energy as the needed airflow alone would cost. Unnecessarily cold set points can add unnecessary cost equal to anywhere from 5% to 20% of the total data center energy budget – and that budget includes the energy for servers, switches, storage, all elements of cooling, UPS and PDU losses, battery charging, lighting, coffee pots, and coffee cup warming pads. Every industry association, vendor, relevant government agency, and pompous pontificator either recommends or mandates sealing openings in the raised floor, sealing open spaces in cabinets, and separating hot and cold aisles with containment. I suppose that if you are paying $0.02 to $0.03 per kWh for electricity, your payback may stretch your accounting goals; however, for the rest of the world, paybacks of less than two years should be expected, and paybacks measured in weeks are not at all uncommon. Nevertheless, if you really fancy a complicated ROI or computational fluid dynamics (CFD) analysis and are keen on subtracting the cost of the analysis from your ROI, then let us know. Just be aware that the cost of a detailed ROI or CFD analysis can easily equal or exceed the cost of the airflow management improvements themselves.
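That cube relationship in the first point is just the standard fan affinity law, and it is easy to check; the baseline fan power and electricity rate in the sketch below are assumptions for illustration.

```python
# Fan affinity law: fan power scales roughly with the cube of airflow.

def fan_power_ratio(flow_ratio: float) -> float:
    """Relative fan power for a given ratio of delivered to required airflow."""
    return flow_ratio ** 3

# Supplying 50% more air than the IT load actually needs:
oversupply = fan_power_ratio(1.5)
print(f"1.5x the airflow takes ~{oversupply:.2f}x the fan power")
# 1.5x the airflow takes ~3.38x the fan power

# With an assumed 100 kW of baseline fan power, that oversupply wastes:
baseline_fan_kw = 100
wasted_kwh = (oversupply - 1) * baseline_fan_kw * 8_760
print(f"~{wasted_kwh:,.0f} kWh per year, ~${wasted_kwh * 0.10:,.0f} at $0.10/kWh")
# ~2,080,500 kWh per year, ~$208,050 at $0.10/kWh
```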
Ian Seaton
Data Center Consultant