July/August 2006

I've got a little device called the Kill A Watt, manufactured by P3 International. It goes in-line between an electrical socket and an appliance, and can be configured to show amps, watts, watt-hours used since reset, average power consumption over time, and so on. It's a great toy for people like me who sometimes wake up wondering how much energy the refrigerator uses if one stands in front of it with the door open (a fair amount), or how much it costs to heat a cup of water in the microwave (less than a penny).
I recently brought this device to our co-location facility and measured power consumption on a few different servers, starting at boot-up and under both loaded and idle conditions. Broadly speaking, I found that rack-mount servers burn somewhere around 1 A at 110 V per rack unit they occupy. Of course, I'm sweeping many details under the rug here. There is an initial spike when the system is physically kicking its moving parts into motion. There are differences between the various manufacturers and architectures, as well as between loaded and idle conditions. System configuration is the dominant factor: you can affect power consumption most dramatically by installing additional disk drives or RAM in a system. Systems with the "blade" form factor threatened to burn out my $25 piece of consumer electronics, and I didn't have a DC measurement tool handy, so I omitted both blades and DC-powered systems from my measurements.
What’s in a Watt?
My question was this: How much does it cost me just to power and cool these servers?
Watts = volts * amps. A 1 A draw from a 110 V (AC) supply, running for 24 hours, consumes 2.64 kWh per day. Commercial electrical power in downtown Boston costs around 14 cents/kWh, so I'm spending about $0.37 per day to power each 1 U of server. The figure would be higher if I utilized every stage in the CPU pipeline 24/7, kept the disks spinning constantly, or were constantly power-cycling the systems.
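The arithmetic above can be written out as a minimal back-of-the-envelope sketch, using the figures from the text (1 A at 110 V, 14 cents/kWh):

```python
# Per-rack-unit power cost, from the measured figures in the column.
amps = 1.0
volts = 110.0
rate_per_kwh = 0.14  # downtown Boston commercial rate, circa 2006

watts = volts * amps                        # 110 W
kwh_per_day = watts * 24 / 1000.0           # 2.64 kWh/day
cost_per_day = kwh_per_day * rate_per_kwh   # ~$0.37/day

print(f"{kwh_per_day:.2f} kWh/day, ${cost_per_day:.2f}/day per rack unit")
```

Plugging in a local electricity rate and a measured current draw gives the same estimate for any machine room.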
Then there is the matter of heat: computers can be thought of as small heaters that produce computation as a residual by-product. This is similar to incandescent light bulbs, which are best modeled as inefficient heaters that shed about 5 percent of their energy as visible light. We have to get rid of that heat somehow. Assume that cooling ideally takes as much energy as the heat it removes, but operates at 50 percent efficiency; a reasonable rule of thumb is that it costs twice as much to dissipate heat as to create it. The details depend on geography, architecture, and a vast number of other factors. It's cheaper and simpler to shed heat into Lake Superior in the winter than into the Arizona sun in the summer.
Simply bringing energy to the server and then carrying away the heat it wastes costs about $1 per day per rack unit ($0.37 for power, plus roughly twice that again for cooling). That's about $365 per year, per server.
Consider the compute cluster: for 100 nodes, we're talking about an annual expense of roughly $36,500. That expense is tied to the cost of electricity, which will presumably rise in the coming years. It is nontrivial, and it is incurred whether or not the machines are being used productively. In some organizations, the person buying or using the cluster never sees the facilities costs associated with their technology. But even if the electric bill is centrally paid, it still needs to be paid, and the user/owner should account for it.
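Scaling the estimate up to a cluster is the same arithmetic again; a short sketch, using the column's round numbers (~$0.37/day for power, plus twice that for cooling, rounded to $1/day):

```python
# Cluster-scale cost sketch, per the column's rule of thumb.
power_per_day = 0.37                              # dollars, from the kWh estimate
cooling_per_day = 2 * power_per_day               # ~$0.74: the 2x cooling rule
total_per_day = power_per_day + cooling_per_day   # ~$1.11; call it $1/day

nodes = 100
annual_cost = nodes * 1.00 * 365                  # $36,500/yr at the rounded $1/day

print(f"~${total_per_day:.2f}/day per node; ${annual_cost:,.0f}/yr for {nodes} nodes")
```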
It seems to me a moral imperative to put a decent user interface in front of those CPUs and get some good use out of them, rather than simply burning fossil fuels for no reason. At the very least, we should be using the power-management features built into every modern chip to go idle, if not spin down completely, during periods of inactivity.
One final thing: If anyone out there has a hydroelectric-, photovoltaic-, tide-, or wind-powered cluster, I would love to talk with you.
Chris Dwan is a principal investigator with The BioTeam. E-mail: firstname.lastname@example.org.