
Vendors are looking to both hardware and software to help customers handle growing problems with heat in data centers. The focus comes at a time when the promise of greater compute power is being thwarted by the increased heat generated by faster processors and denser form factors.

System makers say new features in their servers, as well as power management features in chips from Advanced Micro Devices Inc. and Intel Corp., will help. In its next generation of PowerEdge servers, Dell Inc., of Round Rock, Texas, will offer enhanced heat pipes, officials said. The copper pipes hold small amounts of water that vaporize as the server heats up, carrying heat toward a heat sink, where the vapor cools and condenses. The new systems will offer heat pipes that carry away even more heat, officials said.
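For a sense of scale, the short sketch below estimates how much heat a trickle of evaporating water can carry; the evaporation rate is an illustrative assumption, not a Dell figure.

```python
# Back-of-envelope estimate of the heat a small amount of evaporating water can carry.
# The evaporation rate below is an illustrative assumption, not a Dell specification.

LATENT_HEAT_WATER_J_PER_KG = 2_257_000  # latent heat of vaporization of water, ~2,257 kJ/kg


def heat_pipe_power_watts(evaporation_rate_kg_per_s: float) -> float:
    """Heat moved, in watts, as water evaporates at the hot end and condenses at the heat sink."""
    return evaporation_rate_kg_per_s * LATENT_HEAT_WATER_J_PER_KG


if __name__ == "__main__":
    rate = 0.00005  # kg/s (0.05 grams per second), assumed for illustration
    print(f"~{heat_pipe_power_watts(rate):.0f} W carried by the vapor")  # ~113 W
```

Because water absorbs so much energy as it changes phase, a sealed pipe holding only a few grams of fluid can move far more heat than a solid copper bar of the same size.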

IBM last week introduced the eServer Rear Door Heat Exchanger—known as Cool Blue—a 4-inch-thick door that attaches to the back of server racks. The Armonk, N.Y., company’s device will use chilled water already available in the air-conditioning systems in most data centers to cool air as it’s blown out the back of servers, said Alex Yost, director of IBM’s xSeries.

Yost estimated that Cool Blue will help remove up to 55 percent—or about 50,000 Btu—of heat generated by a fully populated server rack and save approximately $9,200 per rack annually.
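For readers who want to translate those figures, the short sketch below converts them to kilowatts; it assumes the 50,000 Btu number is an hourly rate, an assumption on our part rather than IBM's.

```python
# Unit check on the Cool Blue figures quoted above. Assumes the 50,000 Btu
# figure is a per-hour rate; that assumption is ours, not IBM's.

BTU_PER_HOUR_TO_WATTS = 0.29307107  # 1 Btu/hr expressed in watts

heat_removed_btu_hr = 50_000  # heat the door is said to remove
fraction_removed = 0.55       # "up to 55 percent" of the rack's heat

heat_removed_kw = heat_removed_btu_hr * BTU_PER_HOUR_TO_WATTS / 1_000
implied_rack_heat_kw = heat_removed_kw / fraction_removed

print(f"Heat removed by the door: ~{heat_removed_kw:.1f} kW")                          # ~14.7 kW
print(f"Implied heat load of a fully populated rack: ~{implied_rack_heat_kw:.1f} kW")  # ~26.6 kW
```

Under that reading, the door would shed roughly 14.7 kW from a rack generating on the order of 26 to 27 kW of heat.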


In addition to their own technologies, OEMs continue to partner with companies such as Liebert Corp. and American Power Conversion Corp., which make power and cooling equipment for data centers. Russell Senesac, product manager for APC’s Infrastruxure offerings, said IBM’s Cool Blue fits in with his company’s philosophy of placing cooling devices as close to the heat source as possible.

APC, of South Kingston, R.I., offers an 18-square-foot cooling unit, the NetworkAir IR, which is housed next to server racks. Senesac said APC is working to shrink the device, giving it greater cooling power in a smaller space. In addition, the company is enhancing its Manager software, which helps administrators plan their cooling needs. Currently, the software gives users a room-level view of their cooling capacity and redundancy, Senesac said. Within the next year, that view will be narrowed to the row level, and, within two years, to the rack level.
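Moving from a room-level to a row-level and eventually a rack-level view is, in effect, a question of how finely cooling data is aggregated. The generic sketch below illustrates the idea; the hierarchy and numbers are hypothetical and do not represent APC's actual software.

```python
# Generic illustration of room -> row -> rack aggregation of cooling data.
# Names and numbers are hypothetical; this is not APC's Manager software.
from dataclasses import dataclass, field


@dataclass
class Rack:
    name: str
    heat_load_kw: float  # heat the equipment generates
    cooling_kw: float    # cooling delivered at the rack

    def headroom_kw(self) -> float:
        return self.cooling_kw - self.heat_load_kw


@dataclass
class Row:
    name: str
    racks: list[Rack] = field(default_factory=list)

    def headroom_kw(self) -> float:
        return sum(rack.headroom_kw() for rack in self.racks)


@dataclass
class Room:
    rows: list[Row] = field(default_factory=list)

    def headroom_kw(self) -> float:
        return sum(row.headroom_kw() for row in self.rows)


row = Row("Row A", [Rack("A1", heat_load_kw=18.0, cooling_kw=15.0),
                    Rack("A2", heat_load_kw=6.0, cooling_kw=15.0)])
room = Room([row])
print(f"Room headroom: {room.headroom_kw():+.1f} kW")              # +6.0 kW overall
print(f"Rack A1 headroom: {row.racks[0].headroom_kw():+.1f} kW")   # -3.0 kW: overloaded
```

In this toy example the room shows 6 kW of spare cooling capacity even though one rack is 3 kW over its limit, which is exactly the kind of hot spot a rack-level view is meant to expose.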

Data center administrators will take all the help they can get. Two years ago, few of them gave a second thought to thermal issues. Now heat is among their top concerns.

“We have an aging infrastructure originally designed to house, power and cool a mainframe that is now supporting dozens of racks of high-performance servers,” said Michael Hodges, manager of system services at the University of Hawaii, in Honolulu, and a Sun Microsystems Inc. systems user. “The old strategy of cooling the machine room is inadequate and wastes energy, which by all accounts is going to become very expensive within the decade. Vendors need to standardize on solutions that focus on cooling the equipment.”

Ron Mann, director of rack and power systems in Hewlett-Packard Co.’s Enterprise Storage and Servers group, said most of the Palo Alto, Calif., company’s current hardware, software and services offerings can address user needs when power consumption per rack is up to 15 kilowatts. The challenge will come as that number climbs higher; HP Labs is working on solutions, Mann said.