With the volume of stored data predicted to double every 18 months, the question isn’t whether you’re going to sell more storage, but whose. Here’s what your customers need to know before they decide, according to Peter Fuller, Co-Founder and VP of Business Development at Scale Computing, a developer of midmarket clustered storage solutions.
Storage costs can represent anywhere from 30 percent to 50 percent of a company’s total IT hardware spending, yet many products are “scale-up” systems that can’t keep pace with the growth curve of data consumption. Look for solutions that provide low-cost, controller-less scale-out features, letting you add capacity as needed and reduce management costs.
Second-generation storage systems are controller-based. Third-generation storage is unified and scale-out, runs on commodity hardware, and lives within an extremely flexible clustered storage environment. It decreases cost, increases control and makes management easier.
Your solution should make it possible to mix and match protocols in the same cluster without scrapping your investment in storage every time a new protocol is introduced. Look for solutions with fine-grain scalability, which lets IT managers scale by as little as 1TB per node at a time. Future disk-density compatibility is another important feature: because disk-based storage routinely doubles in capacity, the better choice is a scalable solution that is also density agnostic.
Simplified storage management is a necessity for relieving heavy storage management overhead. Administrators should be able to operate the management console from any node for control and flexibility. Unified architecture is the coming wave in the next-generation data center, and many companies have saved huge sums of money by consolidating their SAN and NAS environments onto one easy-to-manage scale-out platform.
Many storage companies have built storage systems that set arbitrary limits on scalability: some can scale to 178TB under one control unit, others to over 400TB, some to less than 16TB, and others not at all. Find a solution with virtually unlimited, as-you-need-it scalability. Systems should also include built-in data replication and snapshot features at no additional cost. The ability to replicate data on- or off-site will preserve your content and give you further levels of control over your growth.
With storage costs decreasing by 25 percent per year, the notion of being forced to purchase storage ahead of need is archaic. Yet if a filer’s drive capacity is reached and you need just one additional TB, the only option is to purchase the next larger system available. Find the finest-grain scalability available.
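The arithmetic behind that point can be sketched quickly. Only the 25 percent annual price decline comes from the article; the per-TB price and the growth pattern below are illustrative assumptions.

```python
# Sketch: if per-TB prices fall ~25% per year, buying capacity only when it
# is needed costs less than buying it all up front. The $100/TB figure and
# the 2TB-per-year growth pattern are assumptions for illustration.

def price_after(years: float, price_now: float, annual_decline: float = 0.25) -> float:
    """Projected per-TB price after `years`, assuming a steady decline."""
    return price_now * (1 - annual_decline) ** years

price_per_tb_today = 100.0            # assumed dollars per TB
buy_ahead = 10 * price_per_tb_today   # buy 10TB today, ahead of need

# Buy 2TB per year as it is actually needed, over five years.
buy_as_needed = sum(2 * price_after(year, price_per_tb_today) for year in range(5))

print(f"Buy ahead:     ${buy_ahead:,.2f}")
print(f"Buy as needed: ${buy_as_needed:,.2f}")
```

Under these assumptions the deferred purchases total roughly $610 against $1,000 up front, before even counting the stranded capacity of the buy-ahead approach.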
Generation 3.0 storage includes both file- and block-level protocols, allowing SAN and NAS environments to run from each storage node; CIFS, NFS and iSCSI all run simultaneously. This lets IT managers eliminate file servers by consolidating onto a single, easily scalable platform.
Storage provisioning and capacity planning consume 14 percent of a storage manager’s time, close to 300 hours per year, making them the single largest aspect of the job. Consequently, 83 percent of storage managers list capacity planning as their top stress. And once storage is purchased, architecting it can be complex and costly.
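The two figures above are consistent with each other, assuming a standard 2,080-hour work year (40 hours times 52 weeks, an assumption not stated in the article):

```python
# 14 percent of an assumed 2,080-hour work year is roughly 291 hours,
# which matches the article's "close to 300 hours per year."

work_hours_per_year = 40 * 52               # 2,080 hours, assumed schedule
provisioning_share = 0.14                   # 14 percent of the job
hours_on_capacity_planning = work_hours_per_year * provisioning_share
print(f"{hours_on_capacity_planning:.0f} hours per year")
```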
Third-generation storage stripes and mirrors data across multiple nodes, and even more drives, making everything essentially “data aware.” There’s no need to architect redundant systems, because the entire storage pool already keeps multiple copies of its data distributed among the nodes and drives. Systems that require you to create and then manage redundant sets of controllers or arrays add far too much complexity to your everyday life.
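The stripe-and-mirror idea can be sketched in a few lines. This is a minimal illustration of the concept, not any vendor’s implementation; the node names, chunk size and two-copy policy are all assumptions.

```python
# Sketch of stripe-and-mirror placement: data is split into chunks and each
# chunk is written to two different nodes, so losing any one node still
# leaves a full copy of every chunk somewhere in the pool.
from itertools import cycle

def place_chunks(data: bytes, nodes: list, chunk_size: int = 4) -> dict:
    """Stripe `data` into chunks and mirror each chunk on two distinct nodes."""
    placement = {node: [] for node in nodes}
    targets = cycle(nodes)
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        primary, secondary = next(targets), next(targets)  # always different nodes
        placement[primary].append(chunk)
        placement[secondary].append(chunk)
    return placement

layout = place_chunks(b"spread me across the cluster", ["node1", "node2", "node3"])
all_chunks = {c for chunks in layout.values() for c in chunks}

# Simulate losing node1: every chunk still exists on a surviving node.
survivors = {c for node in ("node2", "node3") for c in layout[node]}
assert all(chunk in survivors for chunk in all_chunks)
```

Because redundancy lives in the placement policy itself, there is no separate mirrored controller or array to design and babysit.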
Controller-based systems typically are programmed to recognize only nodes of a particular density. Non-controller-based solutions can accept nodes of multiple densities into the cluster, so nodes purchased today will work with nodes purchased in the future. This makes tedious capacity planning a thing of the past and growth much more convenient.
An extremely flexible architecture with fine-grain scalability lets IT managers scale by as little as 1TB per node at a time. And unlike systems whose control units must recognize specific node densities, it requires no control unit at all, letting IT managers mix and match node densities as needed and buy only the storage required, when it is needed.
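The difference between 1TB increments and fixed scale-up tiers is easy to see with numbers. The filer tier sizes below are assumptions for illustration; only the 1TB-per-node increment comes from the article.

```python
# Sketch: a scale-out cluster grows in 1TB node increments, while a
# scale-up filer only comes in fixed sizes. The 8/16/32/64TB tiers are
# assumed capacities, not any specific product line.

FILER_TIERS = [8, 16, 32, 64]  # assumed available scale-up capacities, in TB

def scale_up_purchase(needed_tb: int) -> int:
    """Smallest filer tier covering the need; anything beyond it sits idle."""
    return next(tier for tier in FILER_TIERS if tier >= needed_tb)

needed = 17                               # TB actually required
scale_out_tb = needed                     # buy exactly 17 x 1TB nodes
scale_up_tb = scale_up_purchase(needed)   # forced up to the 32TB tier

print(f"Scale-out buys {scale_out_tb}TB; scale-up buys {scale_up_tb}TB "
      f"({scale_up_tb - scale_out_tb}TB idle on day one)")
```

At 17TB of real need, the tiered model strands nearly half the purchased capacity, which is exactly the buy-ahead problem fine-grain scalability avoids.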