
I’ll let you in on one of my big passions: I have a certain fondness for visiting data centers. Maybe it is the feeling of power coursing through all those racks of servers, or getting access to the inner sanctum of IT after passing through a series of security checkpoints. Or it could be just seeing how all this gear has been wired up. I was always big on looking at the backs of equipment and checking the cables whenever I got a demo from some vendor.

So when the folks at Schneider Electric and its American Power Conversion subsidiary asked me if I wanted to come to the open house of a new kind of data center, they were talking to the right guy, and I jumped at the chance. The place is an oddity for several reasons. First, it is built like an actual working data center but with one key difference: there is literally nothing inside it. Instead, the mostly empty building has lots of HVAC equipment, electrical power, and plenty of monitoring and modeling tools. The idea is to have “a facility dedicated to practical solutions, not a lot of hype,” says Aaron Davis, the chief marketing officer for the subsidiary.

Schneider built the facility, which it calls the Electric Technology Center, to serve as a test bed for its customers and to show IT managers what they need to do to reconfigure their own data centers as those centers evolve from mainframe-centric rooms to homes for more distributed systems. It is a great idea, and overdue. As IT shops outgrow their data center infrastructures, they want to be able to figure out the power and cooling issues and how to retool their data centers appropriately.

Chances are you have customers with some pretty old equipment in their data centers that they’d like to replace but literally don’t have the energy to do it. Their raised floors are probably filled with outdated cabling so thick that they have lost much of their airflow capacity and cooling ducts. Their air conditioning is on overload because it was never designed to cool racks of gear, and the temperature varies greatly from one aisle to another as a result. Their backup generators and power conditioning equipment are probably not matched to the gear they are backing up, and you have no idea what should be upgraded first.

Wouldn’t it be great to model what you need to do before you actually have to bring servers down and remodel? That is the essence of the idea behind Schneider’s new testing facility, located outside of St. Louis. Think of it as one big (more than 100,000 square feet) playroom where you can bring in gear, move it around, and test various situations before you have to deploy it in your own shop. It is a great channel opportunity, and a great place to learn more about data center power and cooling issues.

Some companies are fortunate enough to be able to rebuild or relocate their entire data center, something I got to witness first-hand when the data center at the end of my block was rebuilt to new specs. (See my earlier article on my night at Rejis, when they moved their facility just a few feet.)

But not everyone can just take a former parking lot and erect a new building to serve modern needs. Some IT shops have to do a fair amount of retrofitting, and that’s where the St. Louis test bed comes in handy. Firms can build racks, lay them out on the floor, and try out different scenarios to measure airflow, power consumption, and temperature gradients for their gear. There are also two huge temperature-controlled testing rooms that can rapidly heat up or cool down to show what happens to particular gear.

I am glad that the company picked St. Louis for its facility, because, being the data center groupie that I am, I hope to visit often and see what they are doing with their customers. Plus, it is a really neat-looking building that also serves as a showroom for some of the company’s product lines. Schneider bought APC earlier this year and has merged it with its MGE division, which sells electric power control equipment. While most of us know APC from its battery backup boxes (or we should), the company also makes large-scale rack power and cooling gear designed for data center use.

Their push has been to isolate airflow to the immediate vicinity of the racks, so you are cooling the smallest possible air volume and reducing the power spent on cooling. This has lots of appeal, particularly these days, when everyone is going green and oil prices continue to reach new highs. At the launch event last week, representatives from the US Department of Energy spoke about how the two organizations are working together to reduce the energy usage of data centers. “This is real low-hanging fruit,” said Douglas Kaempf, who runs the Industrial Technologies Program at DOE. The Schneider facility has 7 MW of power supplied by the local utility, enough to power a reasonably sized suburb.

As it happens, the Schneider facility sits between two massive data centers belonging to Mastercard and Citibank, just on the other side of the Missouri River from where one of the worst floods occurred about 15 years ago. Don’t worry: all three are on high ground and have plenty of backup resources, too.

If you are looking at a data center remodel, keep this place in mind. Daily rental fees start at $5,000, depending on customer needs.