Choosing a Colo: Key Considerations Around Infrastructure & Growth
Your data center is a complex machine of interacting pieces, all working together to support your business and your users. That means organizations must look closely at their own facilities and understand the critical components that keep everything running effectively. Several decisions must be made when evaluating the right colocation data center for your organization, and the selection process should revolve around the following:
- Cooling. The goal of any efficient colocation provider is to meet the cooling needs of the computing equipment and the facility while driving down cooling costs and the power usage effectiveness rating, or PUE. PUE measures how much overhead is associated with delivering power to the rack: a measurement of 1.0 means no overhead, while 1.2 represents 20% overhead. In the most efficient scenario, customers pay for the power they use (metered) multiplied by a PUE factor that accounts for the additional power needed to cool the facility and keep the lights and other devices running (a worked example follows this list). Look for a colocation provider who is thinking ahead, utilizing natural resources such as free outside air, and managing to a low PUE.
- High-Density Metered Power. In data center colocation, power density and availability have emerged as critical requirements. The power densities found in data centers five years ago can no longer meet the needs of high-capacity servers and storage devices. In addition, the availability architecture has changed dramatically. Traditionally, many data centers focused on the number of backup or spare devices in the power delivery architecture: you would see N designs, where N is the number of devices needed to carry the load. Today there are 2N designs, which provide two active feeds, as well as N+1 designs with one spare and N+2 designs with two spares. The state of the art combines 2N feeds with N+2 infrastructure into a design known as 2N+2 (a sketch comparing these designs follows this list). When working with a colocation provider, it's important to understand that today's demands have outgrown many existing data centers. Metered power allows you to pay for what you use today and grow your power draw incrementally over time.
- Physical Security. Organizations looking for truly secure facilities should insist on in-house security teams. In evaluating a provider's security model, consider the following:
→ In-house security staff. Having an in-house security team (not outsourced) ensures that those employees have the data center’s security needs in mind. Armed guards and a full security staff should be a consideration in the decision process.
→ Multi-factor identification and authorization. Ensuring the safety of millions of dollars' worth of equipment will require ID checks, biometrics, and other identification measures.
→ Layered security zones. These ensure that there is redundancy in the security policy as well. Entry points, floors, and access to customer cages all represent layers of security. Some data centers have gone so far as to build a building within a building for maximum security.
→ Camera and security systems that monitor the full 360-degree picture. Truly secure environments prohibit any public access, and environments that are not secured 24x7x365 should be pushed down the consideration list. Look for advanced measures including state-of-the-art camera systems, bollards, fencing, and security coverage all the way from the roof to the parking lots.
- Data Center Infrastructure Management (DCIM). Don't be left in the dark: DCIM is much more than the latest buzz acronym to hit the data center industry. Data centers have always been purpose-built facilities full of complex technology, and managing those technologies has been problematic. At best, individual devices had their own management software, but those systems could not work together; at worst, blinking lights had to be monitored in person. The result was a highly tuned system that, when it breaks, breaks ugly. DCIM changes that. Look for a colocation provider with DCIM built into its infrastructure.
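To make the PUE math above concrete, here is a minimal sketch of how a metered-power bill might be estimated. The usage, PUE, and rate figures are illustrative assumptions, not any provider's actual pricing model.

```python
def estimate_monthly_power_cost(metered_kwh, pue, rate_per_kwh):
    """Estimate a colo power bill: metered IT usage times a PUE factor.

    A PUE of 1.0 means no facility overhead; 1.2 means 20% overhead
    for cooling, lighting, and other supporting systems.
    """
    billed_kwh = metered_kwh * pue          # IT load plus facility overhead
    return billed_kwh * rate_per_kwh

# Example: a 5 kW rack running flat out for a 30-day month (illustrative numbers)
it_usage_kwh = 5 * 24 * 30                  # 3,600 kWh of metered IT load
print(estimate_monthly_power_cost(it_usage_kwh, pue=1.2, rate_per_kwh=0.10))
# ~432 at PUE 1.2 versus ~360 at PUE 1.0 -- the overhead you pay to cool the facility
```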
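Similarly, the redundancy labels above (N, N+1, 2N, N+2, 2N+2) are easier to compare side by side. The sketch below is an illustrative model of spare capacity and feed redundancy, not any particular vendor's topology; the value of N is an assumption for the example.

```python
# Illustrative comparison of common power-redundancy designs.
# N is the number of devices needed to carry the full critical load.
N = 4  # e.g., four UPS modules required for the load (assumed for illustration)

designs = {
    "N":    {"feeds": 1, "devices_per_feed": N},      # no spares
    "N+1":  {"feeds": 1, "devices_per_feed": N + 1},  # one spare device
    "N+2":  {"feeds": 1, "devices_per_feed": N + 2},  # two spare devices
    "2N":   {"feeds": 2, "devices_per_feed": N},      # two independent, fully sized feeds
    "2N+2": {"feeds": 2, "devices_per_feed": N + 2},  # two feeds, each with two spares
}

for name, d in designs.items():
    spare_devices = d["devices_per_feed"] - N          # spares within a single feed
    survives_feed_loss = d["feeds"] >= 2                # can lose an entire power path
    print(f"{name}: {spare_devices} spare device(s) per feed, "
          f"survives loss of a whole feed: {survives_feed_loss}")
```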
The idea is to create an environment that aligns directly with your own business and supports user functions moving forward. When you look at the overall data center, take these four points as key takeaways when partnering, building, or updating your own data center architecture. That last point, DCIM, is the most critical, and a good implementation should include the following:
- Sense and respond software. Management software needs to do more than tell you what just happened. It needs to provide trend analysis and intelligence that can identify problems before they happen and solve them.
- Innovation through integration. Look for a data center provider that has in-house software development resources that can integrate disparate software management tools into a robust system.
- Secured portal and reporting. You need an online view of your critical infrastructure and the ability to produce reports and trend data.
- Everything is connected. Critical infrastructure must work as a system. Devices such as power and cooling delivery equipment need to be connected to a common network to allow for seamless monitoring. Furthermore, monitoring, management, and data center control all revolve around an interconnected framework of hardware and software resources. When you see that everything is connected, you'll quickly understand all of the components you'll need to monitor and control (a brief sketch of this sense-and-respond idea follows this list).
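As a rough illustration of what "sense and respond" and "everything is connected" mean in practice, the sketch below polls a set of networked power and cooling readings, fits a simple trend, and flags a device before it crosses its limit. The device names, readings, and thresholds are invented for illustration; real DCIM platforms expose their own integrations and APIs.

```python
# Minimal sense-and-respond sketch: take recent readings from connected devices,
# project a simple linear trend, and warn before a limit is breached.
# Device names, readings, and limits are illustrative assumptions.

def projected_breach(readings, limit, horizon=6):
    """Project the next `horizon` samples with a simple linear trend."""
    if len(readings) < 2:
        return False
    slope = (readings[-1] - readings[0]) / (len(readings) - 1)
    projected = readings[-1] + slope * horizon
    return projected >= limit

devices = {
    "crac-unit-3 supply temp (C)": ([18.0, 18.4, 18.9, 19.5, 20.2], 23.0),
    "pdu-a12 load (kW)":           ([3.1, 3.2, 3.2, 3.3, 3.3], 5.0),
}

for name, (history, limit) in devices.items():
    if projected_breach(history, limit):
        print(f"WARNING: {name} is trending toward its limit of {limit}")
    else:
        print(f"OK: {name} is within its expected range")
```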
1 Comment
I would argue the most important consideration is the quality of the facilities operations, maintenance, and management team. While the large national and international colocation firms have the scale to self-deliver operations and maintenance, many mid-sized and smaller colocation companies do not have the scale and resources to develop best-in-class facilities teams. Given that a high percentage of all outages and incidents in a data center are caused by human error, the quality of the facilities operations and maintenance team has a significantly greater effect on uptime than systems design.