


By Salvatore Salamone

May 15, 2003 | In silico research is hot -- literally.

Rapid adoption of high-performance, high-density computer and storage clusters can lead to equipment overheating and failure. The heating problem, which is just starting to receive attention, is sure to become more severe as equipment such as server blades comes to market.

Most life science managers pay close attention to electrical power issues in their data centers, but they have dealt with heating simply by adding air conditioning (AC) units and cranking them up.

"Right now, I have a brute force cooling system for the entire data center," says Jon Benson, network systems administrator at Neurome, a company that studies gene expression patterns with regard to brain function and diseases. "It is a simple wall-mounted thermostat that monitors temperature and humidity and cycles the system on and off accordingly. I do have APC environmental monitors in each rack, and a couple of my servers have internal temperature monitors, but these are not linked to the AC."

Benson is not alone when it comes to turning up the AC. Most IT managers use exactly the same approach to cooling their data centers, and for good reason: It's simple, and it works.

But the situation is changing. Paradoxically, fabrication advances in components such as processors and disk drives have reduced the electrical power consumption of each component, yet miniaturization allows many more of those components to be packed into a smaller space.

Packing in more components drives up the total power drawn in data centers. More power means more heat must be removed, and that requires additional AC.

"A fully loaded data center equipment rack can have an electrical load of 3,500 to 4,000 watts," says Kevin Macomber, marketing manager at Wright Line LLC, a company that designs and builds what it calls technical environment solutions for offices and data centers. "That's the equivalent of about 60 65-watt light bulbs."

Macomber notes that it is not uncommon to see enclosures chewing up between 6,000 and 8,000 watts. That's about 100 to 125 65-watt light bulbs all packed into a rack the size of a tall refrigerator. How does this translate into cooling requirements?

A data center rack can hold up to 42 1U servers (servers that are one rack unit high). A fully packed rack of high-end dual-processor servers has a heat load of almost 42,000 BTUs per hour, according to Hewlett-Packard. That's about the same heat load as a one-story house. So a data center full of racks has huge cooling requirements. The heat load only worsens as more powerful servers based on Intel's Itanium processor become more widely used.
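The arithmetic behind these figures is simple: essentially every electrical watt a rack draws ends up as heat, at roughly 3.412 BTU per hour per watt. The Python sketch below works through the light-bulb and BTU numbers; the roughly 300 W per 1U dual-processor server is an illustrative assumption, not an HP specification.

```python
# Rough heat-load arithmetic for a data center rack.
WATTS_PER_BULB = 65           # the 65-watt bulb used as a yardstick above
BTU_PER_HR_PER_WATT = 3.412   # 1 watt of electrical load ~= 3.412 BTU/hr of heat

def bulb_equivalent(rack_watts):
    """Express a rack's electrical load as a count of 65-watt light bulbs."""
    return rack_watts / WATTS_PER_BULB

def heat_load_btu_per_hr(rack_watts):
    """Convert a rack's electrical load to heat output in BTU per hour."""
    return rack_watts * BTU_PER_HR_PER_WATT

for watts in (3500, 4000, 6000, 8000):
    print(f"{watts:>5} W ~= {bulb_equivalent(watts):3.0f} bulbs, "
          f"{heat_load_btu_per_hr(watts):7,.0f} BTU/hr")

# 42 1U servers at an assumed ~300 W each lands near HP's 42,000 BTU/hr figure.
rack_watts = 42 * 300
print(f"42-server rack (~{rack_watts} W) ~= "
      f"{heat_load_btu_per_hr(rack_watts):,.0f} BTU/hr")
```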

Bottom Line -- How Hot Is Hot?
The problem is serious. A 70ºF operating temperature is considered the norm inside a data center rack. But The Uptime Institute, an organization of end users and vendors focused on facilities and data center management, says it has measured temperatures above 100ºF in some densely packed racks. The Institute's rule of thumb is that long-term electronics reliability drops by 50 percent for every 18ºF rise above 70ºF.
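That rule of thumb is easy to express as a halving formula. The short sketch below treats the 50 percent figure as a multiplier on expected long-term reliability relative to a 70ºF baseline; this is a simplification for illustration, not a published model from the Institute.

```python
# Uptime Institute rule of thumb quoted above: long-term electronics
# reliability falls by half for every 18 degrees F above a 70 F baseline.
BASELINE_F = 70.0
HALVING_STEP_F = 18.0

def reliability_factor(temp_f):
    """Relative long-term reliability versus the 70 F baseline (simplified)."""
    if temp_f <= BASELINE_F:
        return 1.0
    return 0.5 ** ((temp_f - BASELINE_F) / HALVING_STEP_F)

for t in (70, 88, 100, 106):
    print(f"{t} F -> {reliability_factor(t):.2f}x baseline reliability")
# 88 F -> 0.50x, 100 F -> 0.31x, 106 F -> 0.25x
```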

As a result, IT managers are cranking up the AC just to be safe -- an inefficient practice. "It's like bringing home a carton of eggs, putting them on the table, and turning up the AC [to cool them]," Macomber says.

A better approach would be to maximize cooling around hot spots in a data center. Many life science companies are already accomplishing this in an ad hoc manner. In a common raised-floor data center, cool air is circulated through conduits under the floor panels. Companies often put perforated floor tiles in front of hot spots to selectively direct cooling. Others place vented panels in the ceiling above hot spots to draw out the heat.

Even though a systematic approach to cooling makes more sense, most data centers deal with heat haphazardly. Only 15 percent of U.S. companies are looking for more efficient ways to cool their data centers, according to Hewlett-Packard.

HP Services has just launched what it calls a “smart” cooling service that models data center temperatures and air flow in three dimensions. "With the model, we can see the temperature difference top to bottom in a data center, and we can get the temperature distribution for any horizontal or vertical plane," says Brian Donabedian, a site planner and environmental specialist with HP Services. The model also shows how air-flow patterns change with the height of ceilings or racks, as well as with the width of racks or rows of equipment.

The HP technician can use the model’s output to recommend how to position equipment, perforated floor panels, and ceiling vents. HP says such intelligent cooling techniques can reduce the cost of cooling by 25 percent. That can equate to a savings of about $1 million annually for a medium-sized to large data center using about 15 megawatts of power.
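The savings figure can be sanity-checked with a back-of-the-envelope calculation. In the sketch below, the electricity price and the share of facility power spent on cooling are illustrative assumptions, not numbers from HP; with plausible values, a 25 percent cut in cooling cost for a 15-megawatt facility does land near $1 million a year.

```python
# Back-of-the-envelope check on the savings claim above. The electricity
# price and the fraction of facility power spent on cooling are assumed
# values for illustration, not figures from HP.
HOURS_PER_YEAR = 8760

def annual_cooling_savings(facility_mw, price_per_kwh=0.09,
                           cooling_fraction=0.35, reduction=0.25):
    """Estimate yearly savings from cutting the cooling bill by `reduction`."""
    total_kwh = facility_mw * 1000 * HOURS_PER_YEAR
    cooling_cost = total_kwh * price_per_kwh * cooling_fraction
    return cooling_cost * reduction

# A 15-megawatt data center, as in the example above:
print(f"~${annual_cooling_savings(15):,.0f} per year")   # roughly $1 million
```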


