When it comes to IT infrastructure in the data center, these have not been the best of times for the channel. Most of the activity in the data center has focused on consolidation, which doesn’t create a lot of upgrade opportunities because it relies on technologies such as virtualization to drive up utilization rates.
But as data centers in the cloud era continue to evolve, new requirements for much higher levels of responsiveness will compel IT organizations to fundamentally re-architect data centers in ways that should create massive new opportunities for the channel.
The most immediately apparent opportunity is going to be a shift away from one-size-fits-all general-purpose servers. Instead of primarily throwing high-performance x86 processors at every application workload, the data center of the future is going to consist of servers that include both high-performance and low-energy processors.
For example, Advanced Micro Devices (AMD) recently outlined a server strategy that includes everything from low-energy processors based on x86 and ARM architectures in 2014 right up to a 5GHz processor that could appear in servers beyond 2014.
According to Andrew Feldman, vice president of AMD’s server business unit, the data center of the future will be defined by multiple classes of processors that are optimized for specific classes of application workloads.
The Opteron X-Series, code-named Kyoto, is a family of low-energy x86 processors that are ideally suited for memory-intensive big data applications. A direct rival to Intel’s Atom line, these processors are designed to run applications written for the x86 instruction set. At the same time, AMD has licensed 64-bit ARM processor technology, which will show up in new data center platforms such as HP Moonshot servers.
At the high end, AMD also plans to launch in 2014 a new series of accelerated processing units (APUs) that integrate graphics processing on the same die as the x86 CPU cores.
Just about every server vendor sees a lot more heterogeneity in the data center going forward, largely because the economics of using low-energy processors, which are routinely used in consumer electronics devices, are too compelling to ignore.
“The economies of scale of ARM processors in particular are going to be enormous,” said Feldman.
A challenge right now is that there’s not much in the way of tools to manage diverse data center environments made up of different classes of processor technologies.
“The cloud is the first place we’ll see that level of heterogeneity,” said Dave Vellante, co-founder of the IT research firm Wikibon, “but the tools needed to manage all that are not there yet.”
A big step in the right direction is the concept of creating virtual data centers that enable a more homogeneous approach to managing the data center. But before any of that capability can be brought to bear, IT organizations will need to embrace software-defined approaches to managing networks and storage.
Beyond making it easier to pool resources, software-defined approaches to managing the data center give IT organizations the ability to manage IT infrastructure at a much higher level of abstraction. That ability, in turn, is the key to applying IT automation technologies in ways that will enable heterogeneous data centers to cost-effectively scale.
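To make the idea of managing at a higher level of abstraction concrete, the sketch below models a virtual data center as named pools of heterogeneous processors, with automated workload placement instead of per-machine administration. All class and pool names here are hypothetical illustrations, not any vendor’s actual API.

```python
# Hypothetical sketch of a software-defined abstraction: an orchestrator
# places workloads on heterogeneous processor pools (high-performance x86
# vs. low-energy x86/ARM) without the operator addressing individual
# servers. Names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ProcessorPool:
    name: str            # e.g. "x86-performance" or "arm-low-energy"
    total_cores: int
    used_cores: int = 0

    def free_cores(self) -> int:
        return self.total_cores - self.used_cores

@dataclass
class VirtualDataCenter:
    pools: dict = field(default_factory=dict)

    def add_pool(self, pool: ProcessorPool) -> None:
        self.pools[pool.name] = pool

    def place(self, workload: str, cores: int, preferred: str) -> str:
        """Automated placement: try the preferred pool class first,
        then fall back to any other pool with spare capacity."""
        candidates = [preferred] + [n for n in self.pools if n != preferred]
        for name in candidates:
            pool = self.pools.get(name)
            if pool and pool.free_cores() >= cores:
                pool.used_cores += cores
                return name
        raise RuntimeError(f"no capacity anywhere for {workload}")

vdc = VirtualDataCenter()
vdc.add_pool(ProcessorPool("x86-performance", total_cores=64))
vdc.add_pool(ProcessorPool("arm-low-energy", total_cores=256))

# A lightweight web tier lands on the low-energy pool; an analytics
# job prefers the high-performance pool.
print(vdc.place("web-frontend", 32, preferred="arm-low-energy"))
print(vdc.place("analytics", 48, preferred="x86-performance"))
```

The point of the sketch is the indirection: the operator expresses intent (workload, size, preferred processor class), and the automation layer decides where it runs — which is what lets a heterogeneous data center scale without per-box management.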
From a channel perspective, all this diversity in the data center adds up to a demand for faster servers, storage and networking equipment, while also creating a hunger for the expertise needed to make the IT environment truly agile. Those kinds of channel opportunities rarely come along more than once a decade.
Michael Vizard has been covering IT issues in the enterprise for 25 years as an editor and columnist for publications such as InfoWorld, eWEEK, Baseline, CRN, ComputerWorld and Digital Review.