How the Internet of Things will Impact Data Centers and Capacity
Only Rip Van Winkle could have slept through last week’s story and video emanating through the ether from researchers at MIT and Harvard about flat sheets of paper autonomously folding themselves, origami-like, to create “legs” and then scuttle about. If you are of a mind to do this, a quick Google search will get you to the “easy” directions.
Oddly enough, the origami robots made complete sense to me, as I have recently been working on a project on behalf of Autodesk’s bio/nano/programmable matter research lab and its project known as Cyborg. The work I have been peripherally involved with brings bio-genetic design algorithms to the world of industrial design and engineering; in other words, biomimicry delivered, via cloud-based SaaS, to anyone with 3D modeling and printing capability at their disposal, from the hobbyist/aficionado “maker” market to high-end design engineers and architects.
Futon, make Thyself!
I have watched videos of programmable matter that Ikea will love: Imagine taking delivery of a block of matter that, when you add water (or whatever), folds itself into a pre-configured design. For anyone who has actually tried to follow Ikea directions, this would prove a boon.
Likewise, imagine liver cells printed as media into an organ that can eventually be transplanted into a patient. I have seen images of nanobots designed to identify, attract, attach to, capture and “eat” cancer cells in the bloodstream. And then there is “4D” matter printing, where the “product” organically forms itself over time; think sculpture that creates itself autonomously. And what if you could walk into any built environment, scan and map it using a mobile device, and then “print it out” from a remote location?
Cyborg is intended to be a cloud-based library and ecosystem of algorithms spanning every field of design and engineering, with tools that make them accessible to anyone with the interest. Data, anyone?
Internet of Things
Why, you ask, am I going on about this one area of science and engineering (3D modeling and printing)? Because, frankly, the statistics on how many internet-enabled units (sensors and smart devices) of all kinds are out there are simply mind-numbing. I thought it might be useful to create a context, one class of smart device and one domain of cloud services, to convey the breadth and scale of the possibilities. On another client project, I was exposed to applications for Google Glass in surgery; before that, in yet another case, to robotic surgery itself and cloud-based remote diagnostics…
Gartner says the installed base could reach 26 billion “units” by 2020 (and I take units to mean smart-sensor devices of all kinds), potentially worth $300 billion in new incremental revenue.
That’s what’s known as the Internet of Things (IoT), with the “T” being, predominantly, “smart-sensor” devices, some of which are the rough equivalent of a microserver you can hold in your hand, or will soon wear (Google Glass, sensors woven into fabric), or will have surrounding you in your environment (Google Nest).
And I, of course, still refer to my device euphemistically as a ’phone; rabbit ears and tin foil leap to mind. The Internet of Things is all about infrastructure: the sensors, the devices, and the computing and network technology (software and hardware) that enables, drives and supports them.
Internet of Everything
So, what’s the diff between IoT and IoE?
IoE is all of IoT PLUS all of the data generated, captured, processed, stored, mined, analyzed and applied. The Cisco Global Cloud Index says that “annual global data center IP traffic will reach 7.7 zettabytes by the end of 2017,” with 5.3 zettabytes of that being cloud-based; in other words, roughly two-thirds of that traffic will be handled in the cloud, reflecting a 35 percent CAGR over the five-year period ending in 2017.
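As a quick sanity check on those figures, here is a minimal sketch (assuming the 35 percent CAGR applies to cloud traffic over the 2012 to 2017 period; the year range is my reading, not stated above) that computes the cloud share of total traffic and the implied 2012 starting point:

```python
# Rough sanity check on the Cisco Global Cloud Index figures cited above.
# Assumption (mine): the 35% CAGR covers cloud traffic from 2012 through 2017.

total_traffic_2017_zb = 7.7   # annual global data center IP traffic, zettabytes
cloud_traffic_2017_zb = 5.3   # the cloud-based portion of that traffic, zettabytes
cagr = 0.35                   # compound annual growth rate
years = 5                     # 2012 -> 2017

cloud_share = cloud_traffic_2017_zb / total_traffic_2017_zb
implied_2012_zb = cloud_traffic_2017_zb / (1 + cagr) ** years

print(f"Cloud share of 2017 traffic: {cloud_share:.0%}")         # ~69%, i.e. about two-thirds
print(f"Implied 2012 cloud traffic:  {implied_2012_zb:.2f} ZB")  # ~1.2 ZB
```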
I Predict… errrr, let’s see, hmmm… Great Change?
If we look at data centers only in the rearview mirror, we are likely to miss the picture presented by Gartner in its IoT estimates and by Cisco in its IoE projections. The mirror tells us where we have been and when we got there. In short, Moore’s Law got us this far, but we seem to assume it is a good prognosticator of where we are going, and I maintain that it may no longer be sufficient on its own. Or, better said, it still holds, but the pace is accelerating: at one time it was generally taken to mean that microprocessor performance would double every 24 months at “about” the same cost; the figure was later trimmed to 18 months to reflect the acceleration of processor innovation.
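To put the doubling interval in perspective, here is a purely illustrative sketch (my own arithmetic, not from any of the reports cited) comparing how much performance compounds over a decade at 24-month versus 18-month doubling:

```python
# Illustrative only: compounded performance growth for two doubling periods.

def growth_multiple(years: float, doubling_period_years: float) -> float:
    """Return the performance multiple after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

decade = 10
print(f"24-month doubling over {decade} years: {growth_multiple(decade, 2.0):.0f}x")  # ~32x
print(f"18-month doubling over {decade} years: {growth_multiple(decade, 1.5):.0f}x")  # ~101x
```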
If this were the automotive industry, we would be somewhere between the 1939 DeSoto and the 1969 Camaro in terms of auto design and technology. What’s coming at us, by comparison, is a McLaren Formula 1 machine.
We are at the very beginning of accelerated growth in server rack power densities (and their implications for cooling). Most data centers today are still well under 10 kW per rack, but there are owners who are already at or above 20 kW per rack, and some who predict being at or over 50 kW by the end of the decade (with a few specialized instances even higher than that).
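For a sense of scale, here is a back-of-the-envelope sketch using illustrative numbers of my own (a hypothetical 200-rack data hall and an assumed PUE of 1.5, neither of which appears above) showing what those rack densities mean at the facility level:

```python
# Back-of-the-envelope: facility-level impact of rising rack power density.
# Assumed values (not from the article): a 200-rack hall and a PUE of 1.5.

racks = 200
pue = 1.5  # total facility power divided by IT power

for kw_per_rack in (10, 20, 50):
    it_load_mw = racks * kw_per_rack / 1000   # total IT load in MW
    facility_load_mw = it_load_mw * pue       # IT load plus cooling, power delivery, etc.
    print(f"{kw_per_rack:>2} kW/rack: IT load {it_load_mw:.1f} MW, "
          f"facility load ~{facility_load_mw:.1f} MW at PUE {pue}")
```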
Technological advances, convergence and consolidation at the chip, memory, network and storage levels are all conspiring to drive density up. And this is a good thing, on balance.
Alternatives such as raised inlet temperatures, outside-air economization, adiabatic cooling, liquid cooling and fluid immersion, and just plain better air management will all find a place in future designs. As power costs and carbon content continue to grow as issues, and as awareness of data center water consumption increases, automated real-time monitoring, measuring, analysis and management of both power and cooling (DCIM) for efficiency, effectiveness, eco-sustainability and cost will become standard data center management requirements. These capabilities will be baked into future designs.
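As one concrete example of the kind of metrics such automated monitoring would track, here is a minimal sketch with made-up readings (all input numbers are hypothetical) that computes PUE and WUE, two widely used efficiency and water-consumption measures:

```python
# Illustrative only: two common efficiency metrics a DCIM platform would track.
# All readings below are invented for the example.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: site water used (liters) per kWh of IT energy."""
    return water_liters / it_kwh

# Hypothetical readings for one day of operation:
it_energy_kwh = 48_000        # energy consumed by IT equipment
facility_energy_kwh = 72_000  # total energy, including cooling and power distribution
cooling_water_l = 90_000      # water consumed, mostly by evaporative cooling

print(f"PUE: {pue(facility_energy_kwh, it_energy_kwh):.2f}")     # 1.50
print(f"WUE: {wue(cooling_water_l, it_energy_kwh):.2f} L/kWh")   # 1.88
```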
Software-defined autonomous management of networks and data center capacity is still nascent, as is hyperscaling, but the demand for agility with reliable availability and resiliency will rapidly accelerate both. There is also growing demand for IT, data center and cloud capacity to become a stable, well-governed, standardized commodity (with classes of diversity to meet differing requirements) capable of being traded on open exchanges. Standards will evolve by which applications and service lines can be defined for the full stack. We may not be there yet, but the future is hard upon us, and coming fast enough to make our heads spin.
In short, we ain’t seen nuthin’ yet! The next decade will see major data center innovation and transformation. Advanced systems design and engineering knowledge and skills will become a requirement in order to properly advise CIOs, CTOs and CFOs on IT requirements, capabilities and capacity planning.
Autonomous paper origami robots, anyone?