Physical Layer Changes at Heart of Evolving Data Center Infrastructure

 
 
By Arthur Cole  |  Posted 2013-02-25
If you were to ask the typical CIO to list the top five changes in the data center, the answer would most likely include trends like cloud computing and virtualization. Mobility would also make the cut, as would Big Data, enterprise-grade flash storage, and power and cooling or some other form of green IT.

Less well known, of course, are the many small things taking place at the infrastructure level. Developers and designers have not given up their fondness for continual tweaking just because data environments have become more distributed and dynamic. In fact, many of these subtle changes are a direct response to the big events taking place, a recognition that even tried-and-true practices need updating from time to time.

Take storage. Almost always, the story is the addition of flash tiers on or near the server as a means to boost throughput and streamline infrastructure around more modular architectures. But did you know that the opposite is also happening? As Enterprise Storage Forum’s Paul Rubens notes, many storage firms are reserving some of the silicon in their controllers and other hardware for non-storage applications like pre- and post-processing, analysis and filtering, as well as metadata management. In this way, key Big Data functions can be applied directly in the storage array, rather than pushing large loads through already overworked networks.
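To make the idea concrete, here is a minimal, purely illustrative sketch of the difference between host-side filtering and pushing a filter down to the storage array. The names used (StorageArray, query_with_pushdown) are hypothetical stand-ins, not any vendor's actual API; the point is simply how much data has to cross the network in each case.

```python
# Illustrative sketch only: contrasting a traditional host-side filter with a
# hypothetical in-array "pushdown" filter. StorageArray and its methods are
# invented for illustration and do not reflect any real product's interface.

class StorageArray:
    """Stand-in for an array whose controller can run simple filters locally."""

    def __init__(self, records):
        self.records = records  # data resident on the array

    def read_all(self):
        # Traditional path: every record crosses the network to the host,
        # which then discards whatever it does not need.
        return list(self.records)

    def query_with_pushdown(self, predicate):
        # In-array path: the controller applies the filter on its own silicon,
        # so only matching records traverse the network.
        return [r for r in self.records if predicate(r)]


array = StorageArray([{"sensor": i, "value": i * 1.5} for i in range(100_000)])

# Host-side filtering moves the full data set before throwing most of it away.
hot_host_side = [r for r in array.read_all() if r["value"] > 149_990]

# Pushdown filtering moves only the handful of matching records.
hot_in_array = array.query_with_pushdown(lambda r: r["value"] > 149_990)

print(len(hot_host_side), len(hot_in_array))  # same results, very different traffic
```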

Also keep an eye out for increased integration of storage and network controller functions on high-end processors. Intel is already working on these kinds of advanced architectures, primarily as a way to enhance parallel processing capabilities for extremely large and complex data environments. While initial applications would likely come from the HPC and exascale supercomputer realms, large enterprises like Facebook and Google are constantly on the hunt for solutions that improve their data-handling capabilities without breaking the budget.

Another little-noticed development is the increased availability of specialized hardware that runs counter to the long-standing commodity movement. Low-power servers like HP’s forthcoming Project Moonshot models are designed from the ground up for large-scale, web-facing computing environments, not your run-of-the-mill enterprise. So while CRM, BI and the like will sit comfortably on traditional x86 infrastructure, newer applications governing web transactions and Big Data processing are likely to get their own hardware.

Rack architectures are also being primed for a makeover as part of the ongoing trend toward interoperable, modular designs. At the recent Open Compute Summit, Intel and Quanta Computer showed off a prototype structure that employs "rack disaggregation," separating compute, storage, networking and power systems into discrete entities that can be more easily pooled across disparate architectures. Not only does this enable greater flexibility when matching resources with data needs, but it also improves lifecycle management, system resiliency and overall resource consumption.
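For readers who want a feel for what "pooling" means in practice, here is a minimal sketch, under assumed names (ResourcePool, compose_node) that are purely hypothetical: compute, storage and network capacity sit in separate rack-level pools, and each workload draws from them in whatever ratio it actually needs rather than inheriting a fixed server configuration.

```python
# Illustrative sketch only: composing "logical nodes" from disaggregated
# rack-level pools. All names here are hypothetical, not a real rack API.

class ResourcePool:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity

    def allocate(self, amount):
        # Hand out capacity from the shared pool, failing if it is exhausted.
        if amount > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.capacity -= amount
        return amount


def compose_node(cpu_pool, storage_pool, net_pool, cores, tb, gbps):
    """Assemble a logical node from the separate compute, storage and network pools."""
    return {
        "cores": cpu_pool.allocate(cores),
        "storage_tb": storage_pool.allocate(tb),
        "network_gbps": net_pool.allocate(gbps),
    }


cpu = ResourcePool("compute", capacity=512)    # cores available in the rack
disk = ResourcePool("storage", capacity=200)   # terabytes available in the rack
net = ResourcePool("network", capacity=400)    # gigabits of fabric bandwidth

# A storage-heavy analytics node and a compute-heavy web node draw from the
# same pools in very different ratios, instead of each getting an identical box.
analytics_node = compose_node(cpu, disk, net, cores=16, tb=80, gbps=40)
web_node = compose_node(cpu, disk, net, cores=64, tb=2, gbps=100)

print(analytics_node, web_node)
```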

All of this activity points out one of the more crucial facts about the rapid evolution from static, siloed infrastructure to more dynamic virtual and cloud environments: not all of the action is happening in software. The physical layer still has a lot of room to maneuver when it comes to meeting the needs of a mobile, collaborative workforce, and opportunities still exist for those who recognize that the old ways of doing things are not always the best.

 
Originally published on www.itbusinessedge.com.