
Intel’s broad strategy to significantly increase the utilization rate of x86 servers is part of a sweeping effort to make enterprise computing more efficient, an effort that could have far-reaching implications for the channel.

Given all the cores that Intel is adding with each successive wave of new processors, the challenge now is finding ways to run, for example, transaction-processing and batch-oriented applications efficiently on the same servers. To that end, Intel has been making significant investments in big data technologies such as Hadoop, including adding the ability to deploy a graph database on top of version 3.0 of its Hadoop distribution.

Graph databases are an emerging class of database systems that are growing in popularity. Rather than have organizations stand up yet another database that needs to be managed, however, Intel is making the case for deploying a graph engine on top of Hadoop. That approach not only simplifies the overall IT environment by reducing the number of databases that have to be managed, but it also gives organizations another reason to invest in a batch-oriented platform such as Hadoop.
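Intel has not published the details of its graph engine here, but the general pattern is well established in open source: the graph computation runs as a Hadoop job directly over data already stored in the cluster, rather than in a separate database. The sketch below is a hypothetical single-source shortest-paths computation written against the Apache Giraph 1.1 API as a stand-in; the class name and hard-coded source vertex are assumptions for illustration only.

```java
import org.apache.giraph.edge.Edge;
import org.apache.giraph.graph.BasicComputation;
import org.apache.giraph.graph.Vertex;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;

/** Illustrative shortest-paths computation over a graph stored in Hadoop. */
public class ShortestPathsComputation extends
    BasicComputation<LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {

  /** Hypothetical source vertex; in practice this would be a job parameter. */
  private static final long SOURCE_ID = 1L;

  @Override
  public void compute(Vertex<LongWritable, DoubleWritable, FloatWritable> vertex,
      Iterable<DoubleWritable> messages) {
    // First superstep: every vertex starts at an "infinite" distance.
    if (getSuperstep() == 0) {
      vertex.setValue(new DoubleWritable(Double.MAX_VALUE));
    }
    // Shortest distance proposed so far (zero for the source vertex).
    double minDist = vertex.getId().get() == SOURCE_ID ? 0d : Double.MAX_VALUE;
    for (DoubleWritable message : messages) {
      minDist = Math.min(minDist, message.get());
    }
    // If a shorter path arrived, record it and notify the neighbors.
    if (minDist < vertex.getValue().get()) {
      vertex.setValue(new DoubleWritable(minDist));
      for (Edge<LongWritable, FloatWritable> edge : vertex.getEdges()) {
        sendMessage(edge.getTargetVertexId(),
            new DoubleWritable(minDist + edge.getValue().get()));
      }
    }
    // Sleep until a new message wakes this vertex up again.
    vertex.voteToHalt();
  }
}
```

Because the computation runs where the data already lives, the graph workload becomes one more job competing for the same cluster’s cores rather than a separate system to size and manage, which is precisely the efficiency argument Intel is making.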

With Hadoop increasingly emerging as the primary repository of big data in the enterprise, Intel now views Hadoop as just another workload that needs to be orchestrated simultaneously alongside the other types of workloads running on an x86 server, said Jason Fedder, general manager of channels, marketing and business operations for Intel’s Datacenter Software Division. That’s significant, Fedder explained, because it means driving maximum utilization out of x86 processors that historically ran only one type of application at a time.

“We foresee a lot of exponential growth in terms of data types and associated application workloads that will drive up utilization,” Fedder said.

For solution providers in the channel, the implications of that strategy are profound. While it may not lead to a shrinking of the physical data center, it does mean that x86 servers will become more efficient in terms of the number and types of workloads they can run concurrently. That should make it economically feasible for organizations to deploy a higher number of workloads per server, Fedder said.

As part of the effort to make its Hadoop distribution even more appealing, Intel is also enhancing the security and management tools that come embedded in the latest release of the platform. In the meantime, Hadoop itself is evolving in a way that allows multiple types of engines to be layered on top of a common big data substrate.
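One concrete mechanism behind that layering, assuming a YARN-based Hadoop 2.x cluster, is routing different engines and job types to separate scheduler queues that share the same physical servers. The sketch below uses only the stock Hadoop Job API and the standard mapreduce.job.queuename property; the class, job, and queue names are hypothetical, and each job would still need its mapper, reducer, and input/output paths set before submission.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

/** Illustrative routing of mixed workload types to YARN queues on one cluster. */
public final class QueueRouting {

  /** Creates a job bound to a named YARN scheduler queue. */
  static Job newJobOnQueue(String jobName, String queueName) throws IOException {
    Configuration conf = new Configuration();
    // Standard Hadoop property telling YARN which queue owns this job's containers.
    conf.set("mapreduce.job.queuename", queueName);
    return Job.getInstance(conf, jobName);
  }

  public static void main(String[] args) throws IOException {
    // A long-running batch ETL job and an interactive analytics job can share
    // the same servers while being scheduled out of different queues, which is
    // what lets mixed workloads drive utilization up instead of colliding.
    Job etlJob = newJobOnQueue("nightly-etl", "batch");
    Job analyticsJob = newJobOnQueue("ad-hoc-analytics", "interactive");

    // Mappers, reducers, and input/output paths would be configured on each job
    // before calling submit() or waitForCompletion().
    System.out.println(etlJob.getJobName() + " -> queue "
        + etlJob.getConfiguration().get("mapreduce.job.queuename"));
    System.out.println(analyticsJob.getJobName() + " -> queue "
        + analyticsJob.getConfiguration().get("mapreduce.job.queuename"));
  }
}
```

Capacity- or fair-scheduler queue definitions on the cluster side would then determine how much of each server’s cores and memory each class of workload can claim.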

As Intel invests heavily in Hadoop and nonvolatile memory technologies to address some longstanding criticisms of x86 utilization rates, the impact on the channel could be far-reaching, Fedder said. The economic effects could stem from a variety of factors, ranging from the number of physical servers sold, given that more workloads will be able to run on any one server, to the total cost of managing a data center where the density of the virtual server environment is about to become that much greater, he said.

Meanwhile, Hadoop in particular, and big data in general, are emerging as massive opportunities for the channel. The only question now is how best to go about taking advantage of them.

Michael Vizard has been covering IT issues in the enterprise for 25 years as an editor and columnist for publications such as InfoWorld, eWEEK, Baseline, CRN, ComputerWorld and Digital Review.
