The Importance of Mutual Understanding Between IT and Facilities
Decisions and actions typically under the jurisdiction of the IT side of data center management can have a profound impact on the mechanical plant and on the data center's resulting operating costs. By understanding these relationships, IT and facilities management can form a more cooperative approach to managing the data center, resulting in a more effective and efficient operation that better fulfills their often contradictory objectives.
Key Decisions and Actions
- Specifying High ΔT Servers vs. Low ΔT Servers: A higher temperature rise (ΔT) across the server means less airflow volume is needed per kW of heat load. This lowers the total conditioned airflow rate and fan energy, reducing operating cost (see the first sketch after this list). Historically, blade systems have had higher ΔTs than 1U-3U rack-mount servers, though some newer Energy Star servers are closing that gap.
- Specifying A1, A2, A3, or A4 Servers: The ASHRAE server classes permit different maximum inlet temperatures; the higher the allowable temperature, the more free cooling hours are available, which can significantly reduce operating cost (see the second sketch after this list).
- Specifying Equipment That Breathes Front-to-Back: Not doing so violates hot and cold aisle segregation, forcing lower set points and higher air volume delivery, which in turn reduces total and redundant cooling capacity and raises operating costs.
- Specifying Solid State Storage or Tape Storage: Maximum rate-of-temperature-change specifications for tape storage may limit access to free cooling hours in some climate zones.
- Specifying Cooling Unit Set Points: Best practice is to specify the maximum allowable IT equipment inlet temperature and let the mechanical plant find its own level.
- Specifying Cages That Are Compatible with Containment: If containment is compromised by cage design, extra airflow volume and lower cooling unit set points will be needed, resulting in higher operating costs.
- Employing Airflow Management Best Practices: Adhering to airflow management best practices can reduce required airflow volume and allow you to raise cooling unit set points, lowering fan energy costs, lowering chiller energy costs, and increasing free cooling hours (the cube-law fan effect appears in the first sketch below).
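To make the airflow-versus-ΔT relationship concrete, here is a minimal Python sketch. It uses the standard sea-level approximation q [BTU/hr] ≈ 1.08 × CFM × ΔT(°F) to compute required airflow, and the fan affinity law (fan power scales roughly with the cube of airflow) to estimate the fan energy impact. The 20 °F and 35 °F ΔT values are illustrative assumptions, not figures from the article.

```python
# Sea-level approximation: q [BTU/hr] = 1.08 * CFM * dT(F)
# 1 W = 3.412 BTU/hr, so CFM = watts * 3.412 / (1.08 * dT)

def required_cfm(load_watts: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove load_watts at a given server ΔT (°F)."""
    return load_watts * 3.412 / (1.08 * delta_t_f)

def fan_power_ratio(cfm_new: float, cfm_ref: float) -> float:
    """Fan affinity law: fan power scales roughly with the cube of airflow."""
    return (cfm_new / cfm_ref) ** 3

low_dt_cfm = required_cfm(1000, 20)   # hypothetical low-ΔT server, 1 kW load
high_dt_cfm = required_cfm(1000, 35)  # hypothetical high-ΔT server, 1 kW load

print(f"Low-ΔT server (20 °F rise):  {low_dt_cfm:.0f} CFM per kW")
print(f"High-ΔT server (35 °F rise): {high_dt_cfm:.0f} CFM per kW")
print(f"Ideal fan power ratio: {fan_power_ratio(high_dt_cfm, low_dt_cfm):.2f}")
```

Under these assumptions the high-ΔT server needs roughly 43% less air (about 90 CFM/kW versus 158 CFM/kW), and the ideal fan power to move it falls to about a fifth, which is why both the ΔT bullet and the airflow management bullet translate so directly into fan energy.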
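The server-class bullet can be sketched the same way. The maximum allowable inlet temperatures below are the published ASHRAE ceilings for classes A1-A4 (32/35/40/45 °C); the flat hourly temperature list and the 3 °C approach penalty are placeholder assumptions you would replace with a real weather file and your economizer's actual approach temperature.

```python
# ASHRAE allowable maximum inlet temperatures (°C) by server class.
CLASS_MAX_INLET_C = {"A1": 32, "A2": 35, "A3": 40, "A4": 45}

def free_cooling_hours(hourly_temps_c, class_name, approach_c=3.0):
    """Count hours when outdoor air, plus an assumed heat-exchanger approach
    penalty, stays at or below the class's allowable inlet temperature."""
    limit = CLASS_MAX_INLET_C[class_name]
    return sum(1 for t in hourly_temps_c if t + approach_c <= limit)

# Placeholder: substitute 8,760 hourly dry-bulb temperatures for your site,
# e.g. parsed from a TMY weather file. A flat profile is purely illustrative.
hourly_temps_c = [12.0] * 8760

for cls in CLASS_MAX_INLET_C:
    print(f"{cls}: {free_cooling_hours(hourly_temps_c, cls)} free cooling hours")
```

With a real temperature distribution, the hour counts diverge sharply by class and by climate zone, which is exactly the economizer-hours effect the bullet describes.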
Why These Are Important
Specifying server, storage, and telecommunications hardware, qualifying applications, maintaining virtual uptime, managing hardware and service providers, and forecasting transaction and storage capacity requirements beyond business strategic horizons ought to be job enough for IT management. However, most of these traditional and generally understood tasks bear heavily on the efficiency and design capacity of the mechanical plant, and they hold the key to either optimizing or sub-optimizing the efficacy of data center architectural plans. For these reasons, we recommend that many of these decisions and routine activities be conducted in concert with facilities management and/or architectural engineering resources for new spaces. Doing so decreases the likelihood of finding yourself, a year or two later, painted into a corner that prevents growth, responses to competitive situations, or delivery of a needed point to the organization's bottom line.
The end goal in discussing these considerations is to have an agreement between IT and facilities as to how these decisions impact the other. To ensure efficiency and optimize the data center, both teams need to work together and understand the impact that the aforementioned considerations will ultimately have on the data center environment.
Ian Seaton
Data Center Consultant
1 Comment
Interesting article, though it frames the savings as running less air rather than also noting that you do not have to design for a large temperature drop at low temperatures. Running higher air temperatures means many more free cooling hours and lower energy consumption for the same degree of drop: for example, cooling from 54 °F to 44 °F takes more energy than achieving the same 10-degree drop from 60 °F to 50 °F. We are now saving well over 30% energy compared with projects designed around the year 2000. At ASHRAE TC 9.9 we expanded the envelope of temperature and humidity ranges for running data centers, and this is true and in practice at a number of sites. And when, as a facilities engineer, I know the make and model of every piece of equipment that is in the rack or is going to be in the rack, and what present-day equipment will replace a seven-year-old server, I can help the IT engineer quite a bit. That is why so many engineers who ran data centers and worked with IT are now becoming CTOs and CFOs of data centers.