Virtualizing spinning-disk data storage systems is no longer simply a trend; it’s now the norm, and it is considered vital to backing up, archiving and protecting data for a growing number of businesses large and small. The days of simply adding storage hardware to an IT system and pouring in e-mail, Word documents, spreadsheets, photos and everything else willy-nilly are essentially gone.
However, as the use of virtualization in storage environments increases, so does the need for tools to manage these virtualized systems. An increasing number of vendors—from large organizations such as Hewlett-Packard and VMware to smaller companies such as Scalent Systems—are readying products for release later this year that are designed to ease the management crunch created by storage virtualization.
Virtualization software allows a single desktop PC or server to be carved up so that it behaves as though it were many separate computing systems; each virtualized node behaves almost exactly like an independent physical machine. As a result, capacity that would otherwise go unused can be put to work handling different storage duties at different levels of availability.
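To make the idea concrete, the following sketch (in Python, with all class and field names invented for illustration rather than drawn from any vendor’s product) models a single physical server whose capacity is carved into virtual nodes, each with its own size and availability tier:

```python
# Hypothetical sketch: one physical server carved into virtual nodes.
# All names here are illustrative, not any vendor's actual API.

class PhysicalServer:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.nodes = []

    def free_gb(self):
        return self.capacity_gb - sum(n.size_gb for n in self.nodes)

    def carve(self, node_name, size_gb, availability):
        # Each virtual node behaves like an independent machine,
        # but draws from the same physical pool of capacity.
        if size_gb > self.free_gb():
            raise ValueError("not enough free capacity")
        node = VirtualNode(node_name, size_gb, availability)
        self.nodes.append(node)
        return node

class VirtualNode:
    def __init__(self, name, size_gb, availability):
        self.name = name
        self.size_gb = size_gb
        self.availability = availability  # e.g. "high" or "archive"

host = PhysicalServer("storage-01", capacity_gb=2000)
host.carve("backups", 800, availability="archive")
host.carve("mail-archive", 400, availability="high")
print(f"{host.free_gb()} GB still unallocated on {host.name}")
```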
By the end of the decade, virtualization will be more the norm and less the exception. Virtualization transforms physical hardware—such as servers, hard drives and networks—into a flexible pool of computing resources that businesses can expand, reallocate and use at will.
Over the past few years, virtualization has started to move from test environments to production scenarios, according to industry observers. Analysts say they expect the pace of adoption to increase and say IT managers are searching for better tools, demanding more power and flexibility, and finding new ways to apply virtualization techniques.
Data-storage utilization is generally low, analysts say. It is common to find companies using only 10 to 15 percent of a storage system’s capacity, leaving 85 to 90 percent idle in machines that constantly draw power for availability and cooling. With the current emphasis on power conservation and on reducing so-called greenhouse gas emissions, having servers, or portions of servers, sit idle is not the most efficient use of expensive capital goods.
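The arithmetic behind those figures is straightforward. A rough back-of-the-envelope calculation, using a hypothetical fleet and the midpoint of the analysts’ utilization range, shows how much paid-for capacity sits idle:

```python
# Back-of-the-envelope math on the utilization figures cited above.
servers = 100                 # a hypothetical fleet
capacity_per_server_tb = 10   # hypothetical capacity per machine
utilization = 0.125           # midpoint of the 10-15 percent range

total_tb = servers * capacity_per_server_tb
used_tb = total_tb * utilization
idle_tb = total_tb - used_tb

print(f"Total capacity: {total_tb} TB")
print(f"In use:         {used_tb:.0f} TB ({utilization:.0%})")
print(f"Idle (but still powered and cooled): {idle_tb:.0f} TB")
```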
The scale of the efficiency problem can be seen in the sheer number of servers currently in use: 30 million installed in the United States alone, according to research company IDC.
In most cases, analysts report, companies are persuaded by sales representatives from storage companies to purchase much more capacity than they actually need.
“The server—any server—doesn’t care what information is on it,” said Patrick Eitenbichler, marketing manager for HP’s StorageWorks business, in Cupertino, Calif. “It needs to use the same power draw whether it’s empty or full. And there’s really not much difference between regular app/Web/database servers and storage servers here. The main thing is that storage servers are always going to need more capacity, while regular servers use the load balancing in virtualization to utilize the finite capacity they have.”
Successful virtualization deployment also means that the number of storage servers can almost always be trimmed way down—sometimes as much as tenfold—with better utilization of each machine, analysts say. This isn’t particularly good news for storage hardware makers, but the increasing number of first-time buyers in the marketplace has more than offset the consolidation effect of virtualization, at least up to now.
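The consolidation math follows directly from those utilization numbers. A rough sketch, assuming a hypothetical per-host utilization ceiling, shows how a fleet of lightly loaded machines collapses onto a handful of hosts:

```python
import math

# Rough consolidation math behind the analysts' "tenfold" claim.
physical_servers = 40
avg_utilization = 0.10        # from the figures cited above
target_utilization = 0.80     # hypothetical safe ceiling per host

total_load = physical_servers * avg_utilization
hosts_needed = math.ceil(total_load / target_utilization)
print(f"{physical_servers} servers -> {hosts_needed} hosts "
      f"({physical_servers / hosts_needed:.0f}x consolidation)")
```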
Using virtualization software, a roomful of servers can be consolidated into a single physical box, provided that it’s powerful enough.
“Pundits claim this trend is cyclical because it’s returning us to the old days of a single large, powerful computer—à la the mainframe—running all of the tasks in an organization,” said James Bottomley, chief technology officer at business continuity and disaster recovery provider SteelEye Technology, in Palo Alto, Calif.
“Although the modern consolidated, virtualized server is unlikely to look anything like the old mainframes, it’s instructive to examine virtualization in light of this mainframe comparison to see if there are any lessons to be learned,” Bottomley said.
Although virtualization can be as simple as partitioning a single PC hard drive or as complicated as a distributed network of hundreds of storage servers, the goal—and most often, the result—is the same: better use of resources, more control over storage and an improved return on investment. However, as each system becomes larger and more complex, the key to better server deployment is how well a business’s virtualization management software works.
Analysts at companies such as IDC, Gartner and Forrester Research report that 75 percent of Fortune 2000 companies are now using some form of virtualization every day in production environments. As more and more content pours into business storage coffers, more—and more accurate—ways of accessing and retrieving business information will be needed.
IDC analyst Michelle Bailey sees a widening gap between storage virtualization and the automated tools needed to control it. A number of companies, old and new, are stepping up to address the issue as that gap expands.
A key problem: deciding how and when to partition a storage server, or any server for that matter, to the best advantage of the IT system.
“That’s the key area we’re going to see grow big-time over the next five to 10 years,” Bailey said. “More and better virtualization management controls are going to be needed. With applications, hardware, services and people changing all the time, the business and system requirements are also changing. All these variables require adjustments from the IT staff—unless a lot of [the adjustments] are automated. The key is to get the business folks and IT staff together on the same page.
“When you set up a partition for a Web-based app, for example, you have to set up a DNS [Domain Name System] address for it, and the [partition] size may vary. And the networking will change. New tools to help accommodate this—instead of requiring hours of staff time—are going to be very important in the years to come,” Bailey said.
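Bailey’s example hints at the kind of multistep chore such tools would automate. The sketch below strings those steps together (carve a partition, register a DNS name, attach networking) as one workflow; every function, host name and address here is hypothetical, invented for illustration rather than taken from any vendor’s tooling:

```python
# Hypothetical provisioning workflow for the scenario Bailey describes:
# carve a partition for a Web app, give it a DNS name, wire up networking.
# None of these functions correspond to a real product's API.

def create_partition(host, name, size_gb):
    print(f"[{host}] created partition '{name}' ({size_gb} GB)")
    return {"host": host, "name": name, "size_gb": size_gb}

def register_dns(hostname, ip_address):
    print(f"DNS: {hostname} -> {ip_address}")

def attach_network(partition, vlan):
    print(f"[{partition['host']}] '{partition['name']}' attached to VLAN {vlan}")

def provision_web_app(app, host, size_gb, ip_address, vlan):
    # The point of management tooling: one call instead of hours
    # of manual, error-prone steps across several systems.
    part = create_partition(host, app, size_gb)
    register_dns(f"{app}.example.com", ip_address)
    attach_network(part, vlan)
    return part

provision_web_app("orders", host="storage-07", size_gb=250,
                  ip_address="10.0.4.21", vlan=140)
```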
Scalent, a 3-year-old company also based in Palo Alto, has set its sights exactly in that direction. The company offers an operating-system-agnostic software layer that sits beneath the operating system and allows the pieces of a storage server system to be moved around at will, then put back into place as needed.
It’s all done on a single-page console.
“Making changes to a virtualized server can be painful,” said Kevin Epstein, vice president for marketing at Scalent. “With our software, an administrator can reach in, turn off certain machines or partitions which aren’t being used enough, or reapportion partitions. It also virtualizes the networking and storage connections to make those movable. We call it ‘rack once, reconfigure infinitely.’”
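Conceptually, “rack once, reconfigure infinitely” amounts to keeping the mapping between workloads and physical machines in software, so that moving a workload is a table update rather than a trip to the machine room. A toy sketch of that idea, with all names hypothetical:

```python
# Toy model of a software-defined mapping between workloads and hardware,
# in the spirit of "rack once, reconfigure infinitely." Purely illustrative.

assignments = {
    "mail-archive": "rack1-node3",
    "backups":      "rack1-node4",
    "web-store":    "rack2-node1",
}

def reassign(workload, new_node):
    # The physical cabling never changes; only the mapping does.
    old = assignments[workload]
    assignments[workload] = new_node
    print(f"moved '{workload}': {old} -> {new_node}")

reassign("backups", "rack2-node2")  # e.g. to power down an underused machine
```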
Other companies in the virtual storage space don’t agree with Scalent’s approach.
“We don’t see the difficulty of making changes in virtualized servers at all,” said Mike Grandinetti, vice president and chief marketing officer of server virtualization software provider Virtual Iron, in Lowell, Mass. “We mask the complexity of virtualized servers with an integrated approach built right into the server software. We don’t need another layer of software to do the job.”
Hitachi Data Systems is the only storage hardware and software maker to build its virtualization management tools right into the head controller.
EMC’s VMware division, the largest seller of virtualization software, along with HP, Sun Microsystems and IBM, also plans to announce updated virtualization tools this year.