
SGI Releases New HPC Blade Platform

By Allison Proffitt

August 8, 2007 | SGI’s recently announced Altix ICE product is positioned to bridge the divide between high-throughput and high-performance computing environments. ICE is based on the Intel 5000X chipset and features 32 GB of memory per blade, with up to 512 Intel Xeon processor cores per rack delivering 6 TFLOPS of peak performance.
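The quoted peak figure is consistent with a simple back-of-envelope calculation. The assumptions below (3 GHz quad-core Xeons performing four double-precision floating-point operations per core per cycle) are typical of 2007-era hardware but are not stated in the article:

```python
# Hypothetical sanity check of the quoted 6 TFLOPS per-rack peak.
cores_per_rack = 512      # stated in the article
clock_hz = 3.0e9          # assumed clock speed for 2007-era Xeons
flops_per_cycle = 4       # assumed SSE double-precision throughput per core

peak_tflops = cores_per_rack * clock_hz * flops_per_cycle / 1e12
print(f"{peak_tflops:.1f} TFLOPS")  # ~6.1 TFLOPS, in line with the quoted figure
```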

“Customers are tired of clusters,” SGI sciences segment manager Michael Brown told Bio•IT World, citing problems such as productivity disappointments and the struggle to power and cool clusters. But SGI thinks the Altix ICE solution will alleviate many of those headaches.

ICE relies on several technologies to offer what SGI promises is breakthrough reliability, density, simplicity, power, and cooling options. SGI expects these advances to make ICE a logical solution for life science tasks requiring both multiple CPU cores per job and multiple parallel jobs.

ICE uses an n+1 configuration: power and cooling are redundant, but not over-provisioned. If a power supply or fan fails, the load fails over to the remaining units, and the failed part can be hot-swapped once the problem is identified. Paired with SGI’s “ESP” software, which contacts SGI when a component malfunctions, the configuration is reliable and efficient. “We’ve had SGI call [an ICE early user] about a fan failure, and the site didn’t even know there was a problem,” said Brown. The fixes were made without any system interruption.

Using SGI Altix 4700 innovations, the system runs at 76% energy efficiency at the rack level and is cooled with a multi-door water cooling system. Especially for smaller bioscience operations, said Brown, the dense configuration of each rack and the efficiency of the cooling system will allow organizations to pack more racks into a smaller space.

Finally, Brown touted the system’s scalability and performance. Operating system activity is synchronized across nodes, so scheduled interruptions do not stagger from node to node. Reducing overhead by even 2%, he said, can yield up to a three-fold increase in usable computing power by reclaiming wasted cycles. In addition, InfiniBand-connected storage is faster than local disk storage. The result is a system that functions “much closer to a single system than clusters,” for less cost.
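Why would a small amount of overhead cost so much at scale? In a bulk-synchronous parallel job, every step waits for the slowest node, so an unsynchronized OS interruption on any one of hundreds of nodes stalls all of them. The following toy simulation (a hypothetical illustration, not SGI's published model) assumes each node has a 2% chance per step of losing extra time to an OS interrupt:

```python
import random

def synchronized_time(n_nodes, n_steps, work=1.0, noise_frac=0.02, seed=0):
    """Total time for a bulk-synchronous job where each step waits for the
    slowest node. noise_frac is the per-node chance of an OS interruption
    per step; an interrupted node loses 5x the step's work (assumed values)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_steps):
        # The step finishes only when the slowest node does.
        total += max(work + (rng.random() < noise_frac) * work * 5
                     for _ in range(n_nodes))
    return total

ideal = 1.0 * 100  # 100 steps of unit work, no interruptions
for nodes in (1, 64, 512):
    slowdown = synchronized_time(nodes, 100) / ideal
    print(f"{nodes:4d} nodes: {slowdown:.2f}x slower than ideal")
```

At one node, 2% noise costs only a few percent; at 512 nodes, nearly every step is delayed by someone, so the slowdown is several-fold. Synchronizing OS activity across nodes removes exactly this amplification.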

The system connects with a dual InfiniBand backplane, and can be installed and running in two hours, Brown said.



For reprints and/or copyright permission, please contact Angela Parsons, 781.972.5467.