How the Perception of DCIM Has Changed Over the Years – Part 1
Over the past few years, my view on DCIM has changed considerably. Several years ago, I was asked to lead an AFCOM half-day workshop on DCIM. I organized a five-hour workshop, created some fun coursework, and made sure the audience participated. I didn’t expect it to become so popular, with people asking me to repeat the workshop regularly. Our little get-together has since evolved from a one-off workshop into a regular presence at AFCOM Data Center World.
Each time we ran the workshop, we tried something different. At the latest AFCOM Data Center World event in Austin, we had a packed session for our DCIM class, and once again we changed things up. To illustrate how the perception of DCIM has changed over the years, we focused on the latest and most critical operational points in managing a data center, leveraging new and advanced technologies like AI, machine learning, and virtual reality.
In the image below, we had a volunteer, Linda, come up and demo a full VR experience of a live data hall.
She came up, put on the headset, and was navigating the virtual data hall with ease within five minutes.
The point here is that the evolution of digital twins, data modeling, and virtual reality has come a long way. And it’s being applied in DCIM. Before we dive into that, let’s look at what’s changed.
DCIM in the early days
In the early days of DCIM software tools, data centers tended to be centralized within a given organization, and there were fewer of them. This began to change as cloud computing and data center colocation providers emerged. Enterprises increasingly own and lease physical and virtual IT assets spread across many locations, an arrangement known as “hybrid IT” or a “hybrid computing environment.” This growing sophistication complicates operations and maintenance by making it harder to maintain visibility and control over all assets.
A significant change in controlling today’s most advanced cloud and data center environments revolves around visibility, monitoring, and management. New DCIM solutions, used by some of the most prominent vendors in the industry, are designed to add intelligence to the data center, using dynamic solutions to derive actionable information about equipment locations, connectivity points, power usage, and power capacity. With this information, organizations can identify areas for ongoing optimization of data center operations.
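To make this concrete, here is a minimal sketch of the kind of actionable information a DCIM tool can derive from raw power readings. The rack names, provisioned capacities, and the 80% alert threshold are illustrative assumptions of mine, not values from any particular DCIM product:

```python
# Minimal sketch: turn raw per-rack power readings into capacity alerts.
# Rack names, capacities, and the 80% threshold are illustrative assumptions.

RACK_CAPACITY_KW = {"rack-a1": 10.0, "rack-a2": 10.0, "rack-b1": 15.0}
ALERT_THRESHOLD = 0.80  # flag racks drawing more than 80% of provisioned power

def capacity_report(readings_kw):
    """readings_kw: dict mapping rack name -> measured draw in kW."""
    report = []
    for rack, draw in readings_kw.items():
        utilization = draw / RACK_CAPACITY_KW[rack]
        report.append({
            "rack": rack,
            "utilization_pct": round(utilization * 100, 1),
            "over_threshold": utilization > ALERT_THRESHOLD,
        })
    return report

readings = {"rack-a1": 8.6, "rack-a2": 4.1, "rack-b1": 12.9}
for row in capacity_report(readings):
    print(row)
```

Trivial as it looks, this is the shape of the value proposition: continuously collected measurements, compared against known capacity, surfaced as something an operator can act on before a rack is overcommitted.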
DCIM solutions serve as a bookend, working cohesively at each end of the physical infrastructure. At one end, DCIM solutions bridge the IT stack by delivering visibility into asset and connectivity management to help streamline capacity management efforts and accelerate work order management tasks. At the other end, DCIM bridges the facilities stack by monitoring power usage and cooling efficiencies to help drive operational effectiveness and data center uptime.
Fast forward to today. Much has changed since DCIM suites became commonly used toolsets. Data center owners and managers face new challenges and requirements, such as accelerating small, unstaffed IT deployments at the edge, meeting corporate sustainability goals, and defending against cybersecurity threats.
At the same time, recent technological developments offer new possibilities. Modern DCIM systems simplify deployment and management regardless of the number of assets and sites, and they let data center managers optimize operations and maintenance through analytics and new digital services. They provide the data and tools necessary to integrate with existing management apps and services such as third-party monitoring and management services, building management systems (BMS), and electrical power monitoring systems (EPMS).
But it’s not just about the tools; it’s very much about the people as well.
Outages and human error: How DCIM impacts modern management issues
As leaders work to improve operations, advance sustainability, and reduce staff fatigue, digital infrastructure faces several emerging challenges. These include:
- Distributed infrastructure. Today, organizations rely on larger, more distributed infrastructure portfolios to stay ahead in a diverse digital market. This strains visibility: edge ecosystems and smaller locations may not offer the observability you need to scale the business.
- Unclear root causes. Taking too long to determine an issue’s root cause wastes time and resources. You may be able to resolve a problem quickly, but if you don’t know what triggered it, you can’t be confident it won’t recur. Fixing a critical issue is essential; ensuring it doesn’t happen again is even more vital.
- Admin fatigue. Data center and IT administrators are already tasked with maintaining critical infrastructure. When too many dashboards, alerts, and sensors go off at once, you create the “swivel chair” problem: administrators miss key issues or don’t know which screen to examine. Without good observability, the human element quickly becomes stressed and fatigued.
- Lack of standardization. Disparate systems, too many non-integrated components, and siloed operations lead to a lack of standardization, which in turn creates problems with compliance and regulation. It also undermines observability across the entire portfolio. As environments grow more complex, standardization and visibility tools make those ecosystems far easier to manage.
- Challenges with scale. Without good visibility, there is no effective way to scale a business, especially as digital footprints become more distributed. Scaling at the pace of a digital market means having visibility into the entire ecosystem, and being able to scale predictively based on specific business requirements.
- Loss of productivity and efficiency. The human element can be either a savior or a detriment to the business. Your data center footprint is changing; whether you’re investing in edge, IoT, or new data center campuses, you need to support people, process, and technology. Without good tools in place, people quickly become lost and potentially disenfranchised, which leads to inefficiency and lost productivity.
- Sustainability issues. Without observability across the entire data center and infrastructure portfolio, there is no way to deliver effectively on ESG promises and green initiatives. Organizations are making new investments in greener, more sustainable solutions, but without suitable visibility they won’t know where to invest time and resources, and poorly performing systems will undermine sustainability goals. Leaders in the data center space know that observability across digital infrastructure requires a new approach focused on the business, users, the environment, and process.
As mentioned earlier, people are tasked with doing more, watching more screens, and performing more manual tasks. IDC estimates that human error costs organizations more than $62.4 million annually, and a significant share of those errors stems from tedious tasks and manual processes. Further, a recent Uptime Institute study found that more than 70% of all data center outages are caused by human error, not by faults in the infrastructure design. What does it cost when everything goes down? Quite a bit. Data center outages are expensive. Disasters and outages occur with and (often) without warning, leaving severe business disruption in their wake, and growing dependency on the data center means downtime is becoming costlier over time. According to a 2016 Ponemon study, the average cost of a data center outage rose steadily from $505,502 in 2010 to $740,357, which works out to roughly $9,000 per minute.
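The per-minute figure is simple arithmetic: the average outage cost divided by a typical outage duration. The 86-minute duration below is my own assumption for illustration (Ponemon’s studies have reported average durations in roughly that range); the cost figures are the ones from the 2016 Ponemon study cited above:

```python
# Back-of-envelope: relate average outage cost to a per-minute figure.
# The 86-minute average duration is an illustrative assumption;
# the cost figures come from the 2016 Ponemon study cited above.

avg_outage_cost_2010 = 505_502   # USD
avg_outage_cost_2016 = 740_357   # USD
assumed_avg_duration_min = 86    # minutes (assumption, not from the study)

cost_per_minute = avg_outage_cost_2016 / assumed_avg_duration_min
growth = (avg_outage_cost_2016 - avg_outage_cost_2010) / avg_outage_cost_2010

print(f"~${cost_per_minute:,.0f} per minute")
print(f"{growth:.0%} cost growth from 2010 to 2016")
```

With that assumed duration, the result lands in the $8,000–$9,000-per-minute range, consistent with the figure quoted above, and the cost growth over the six-year span comes out to roughly 46%.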
Outages, AI, and DCIM
What does modern DCIM architecture look like with all of this in mind? OK … I ran out of room on this blog and had to do a part 2. In our next blog, we’ll explore five critical attributes around DCIM. Specifically, we’ll focus on how DCIM has changed and which features and attributes are essential parts of a modern DCIM system. Don’t worry; you won’t have to wait too long. I already wrote the other part, and it should be posted soon!
Real-time monitoring, data-driven optimization.
Immersive software, innovative sensors and expert thermal services to monitor, manage, and maximize the power and cooling infrastructure for critical data center environments.
Bill Kleyman
Industry Analyst | Board Advisory Member | Writer/Blogger/Speaker | Contributing Editor | Executive | Millennial
Bill Kleyman is an award-winning data center, cloud, and digital infrastructure leader. He was ranked globally by an Onalytica Study as one of the leading executives in cloud computing and data security. He has spent more than 15 years specializing in the cybersecurity, virtualization, cloud, and data center industry. As an award-winning technologist, his most recent efforts with the Infrastructure Masons were recognized when he received the 2020 IM100 Award and the 2021 iMasons Education Champion Award for his work with numerous HBCUs and for helping diversify the digital infrastructure talent pool.
As an industry analyst, speaker, and author, Bill helps digital infrastructure teams develop new ways to impact data center design, cloud architecture, and security models (both physical and software), and to work with new and emerging technologies.