
SC08 Showcases HPC Muscle: Systems and the Top500 List


By Salvatore Salamone

AUSTIN, TX | The recent SC08 conference* marked the 20th anniversary of the annual supercomputing conference and expo, with the application of high-performance computing (HPC) in life sciences a major theme of the meeting.

Researchers who need masses of computational power might want to consider a program at Argonne National Laboratory. “We award large blocks of time to accelerate research,” said Peter Beckman, director of the Argonne Leadership Computing Facility (ALCF). The facility is home to one of the world's fastest computers for open science and is part of the U.S. Department of Energy's (DOE) effort to provide leadership-class computing resources to the scientific community.

The main system at the ALCF is Intrepid, a 40-rack IBM Blue Gene/P with a peak performance of 557 teraflops (557 trillion calculations per second). The ALCF won the SC08 High Performance Computing Challenge Award, which compares systems using a suite of benchmarks.
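That headline number squares with Blue Gene/P’s published architecture. As a rough check (assuming 1,024 quad-core nodes per rack, an 850 MHz clock, and four floating-point operations per core per cycle, which are Blue Gene/P’s nominal specs), the peak figure follows directly:

```python
# Back-of-the-envelope peak-performance estimate for Intrepid.
# Assumed Blue Gene/P specs: 40 racks x 1,024 nodes x 4 cores,
# an 850 MHz clock, and 4 floating-point ops per core per cycle.
racks, nodes_per_rack, cores_per_node = 40, 1024, 4
clock_hz = 850e6
flops_per_cycle = 4  # dual FPU with fused multiply-add

cores = racks * nodes_per_rack * cores_per_node   # 163,840 cores
peak_flops = cores * clock_hz * flops_per_cycle   # ~5.57e14
print(f"{peak_flops / 1e12:.0f} teraflops")       # 557 teraflops
```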

As part of the DOE's annual Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, researchers are encouraged to submit proposals for time on ALCF systems. Several life sciences projects were selected in 2008. The awards included 12 million processor hours for a computational protein structure prediction and protein design project and 5 million hours to investigate the gating mechanisms of membrane proteins. Other awards went to projects to simulate and model synuclein-based protofibril structures as a means of understanding the molecular basis of Parkinson's disease, and to run large-scale simulations of cardiac electrical activity.
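To put “processor hours” in perspective, here is a rough conversion to wall-clock time, assuming the roughly 163,840 cores implied by Intrepid’s 40-rack configuration (per the estimate above):

```python
# Rough conversion of an INCITE award from processor hours to
# wall-clock time, assuming all 163,840 of Intrepid's cores are used.
award_core_hours = 12_000_000   # the protein structure prediction award
total_cores = 163_840

wall_clock_hours = award_core_hours / total_cores
print(f"{wall_clock_hours:.0f} hours (~{wall_clock_hours / 24:.0f} days)")
# -> 73 hours (~3 days) of the entire machine
```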

Even proposals that do not win a slot in the annual contest still have a chance to earn time on the Blue Gene system. “Ten percent of the machine can be used for discretionary projects,” said Beckman. Indeed, some of the current winning proposals started out as part of that discretionary usage effort.

Top500 Trends

According to the latest Top500 list of the world’s most powerful supercomputers, IBM retained the top spot with its Roadrunner system, deployed at Los Alamos National Laboratory. The system, a BladeCenter QS22 cluster, achieved 1.105 petaflops on the Linpack benchmark. (One petaflop is one quadrillion floating-point operations per second.)

A close second was the Cray XT5 supercomputer at Oak Ridge National Laboratory called Jaguar. The system, only the second to break the petaflop barrier, posted a top performance of 1.059 petaflops.
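Both petaflop figures come from Linpack, which times the solution of a large dense linear system of equations and credits the machine with a standard operation count of roughly 2n³/3 + 2n² flops. The sketch below shows the same style of measurement in NumPy; it is an illustration of the idea, not the actual HPL benchmark code:

```python
# Minimal Linpack-style measurement: time a dense solve of Ax = b
# and convert the standard operation count into flop/s.
# An illustration only, not the actual HPL benchmark code.
import time
import numpy as np

n = 4000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)          # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flops = 2 * n**3 / 3 + 2 * n**2    # operation count credited by Linpack
print(f"{flops / elapsed / 1e9:.1f} gigaflops")
```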

While the list gives vendors bragging rights for marketing purposes, it also gives life scientists insight into HPC trends. For example, the new list reveals that systems based on quad-core processors have rapidly taken over the Top500. Already 336 systems use them, while 153 systems use dual-core processors and only four still use single-core processors. And seven systems use the nine-core IBM Cell processor, an advanced version of the chip found in the Sony PlayStation 3.

One point reiterated throughout SC08 was researchers’ growing demand for HPC power to conduct their work. Fortunately, researchers don’t need a Top500-listed system to get impressive computational power, and the exhibit hall floor was a showcase of such systems from IBM, Dell, HP, Sun Microsystems, SGI, Appro, Penguin Computing, and others.

One notable HPC debutant with a life sciences connection was Convey Computer Corp., which announced the Convey HC-1, a hybrid computer that gets its processing power from a combination of Intel Xeon processors and Xilinx field-programmable gate arrays (FPGAs). During one session, Rajesh Gupta (University of California San Diego) described how researchers using the Convey system and a UCSD proteomics application called Inspect were able to accelerate their studies.

Using this combination, UCSD scientists can now perform unrestricted searches of massive protein databases, looking for unanticipated modifications in peptide samples. Gupta said this has enabled scientists to study protein-protein interactions and identify cancerous fusion peptides. Before the new system, such computationally intense searches were simply too slow to be feasible.
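To see why “unrestricted” searches are so demanding, consider the naive sketch below (hypothetical masses and step sizes, and not the Inspect algorithm itself): a restricted search checks a handful of known modification masses, while an unrestricted one must sweep a wide mass window at every residue position.

```python
# Naive illustration of why unrestricted modification searches explode:
# every peptide x every residue position x every candidate mass shift.
# Hypothetical numbers; not the Inspect algorithm itself.

AVG_RESIDUE_MASS = 110.0  # rough average amino-acid residue mass (Da)

def modified_masses(peptide, shifts):
    """Yield (position, shift, total_mass) for every single-site
    modification of `peptide` drawn from the candidate `shifts`."""
    base = len(peptide) * AVG_RESIDUE_MASS
    for pos in range(len(peptide)):
        for shift in shifts:
            yield pos, shift, base + shift

# A restricted search checks a few known shifts; an unrestricted
# one sweeps a wide mass window in fine steps.
restricted = [15.995, 79.966]  # e.g., oxidation, phosphorylation
unrestricted = [s / 100 for s in range(-20000, 20001)]  # -200..+200 Da

peptide = "SAMPLEPEPTIDE"
print(len(list(modified_masses(peptide, restricted))))  # 26 candidates
print(len(peptide) * len(unrestricted))                 # 520,013 candidates
```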

Energy and Storage Take Center Stage

Another theme was the importance of storage systems and energy consumption: as HPC processing power continues to grow, both must be managed to keep research cost-effective. BlueArc, Isilon Systems, and Panasas were among the vendors presenting solutions that deliver the capacity to store massive amounts of life sciences data, tools to manage those data, and the performance to match the data-hungry CPUs in today’s HPC systems.

On the energy consumption front, the Green500 list was released, ranking systems by their energy efficiency as well as their processing power. As with the Top500 list, rapid change is the norm: the energy efficiency of the top supercomputers increased by 17 percent over the previous rankings, released last June.
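The Green500 metric itself is straightforward: sustained Linpack performance divided by total power draw. A minimal sketch, with a hypothetical power figure:

```python
# Green500-style efficiency metric: sustained Linpack performance
# divided by total power draw. The power figure below is hypothetical.
linpack_flops = 1.105e15   # e.g., Roadrunner's 1.105 petaflops
power_watts = 2.5e6        # hypothetical 2.5 MW draw

efficiency = linpack_flops / power_watts
print(f"{efficiency / 1e6:.0f} megaflops per watt")   # 442 Mflops/W
```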

A session titled “Will Electric Utilities Give Away Supercomputers with the Purchase of a Power Contract?” took a deeper look at HPC energy consumption. The first speaker quoted Kermit the Frog, noting that “it’s not easy being green.” Much of the panel’s discussion focused on the need for organizations to use energy benchmarks to understand where electricity is being used in the data center. Several panelists noted that only one-third of the electricity used in data centers goes to power the actual IT equipment; much of the rest goes to cooling, and a significant amount is simply lost to inefficient power distribution within the data center.
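One widely cited benchmark of this kind is power usage effectiveness (PUE): total facility power divided by the power that actually reaches the IT equipment. By that measure, the panel’s one-third figure implies a PUE of about 3.0:

```python
# Power usage effectiveness (PUE) = total facility power / IT power.
# If only one-third of the electricity reaches the IT equipment,
# the PUE is about 3.0 (an ideal data center approaches 1.0).
total_facility_kw = 3000.0                 # hypothetical total draw
it_equipment_kw = total_facility_kw / 3    # the panel's one-third figure

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.1f}")   # PUE = 3.0
```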

Daniel Reed, director of Scalable and Multicore Computing at Microsoft, noted a once-radical energy-cost-saving idea that is becoming more common today. “[HPC] facilities do not have to be located next to researchers,” said Reed. In recent years, Microsoft, Google, and several other organizations have moved their data centers to regions of the country where the cost per kilowatt-hour of electricity is significantly below the national average. With the ubiquity of high-speed Internet connections, providing researchers in one location with access to an HPC center in another is not out of the question. That’s not a solution for everyone, but it underscores how important HPC energy usage and costs have become.
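The economics are easy to illustrate with entirely hypothetical numbers; annual electricity cost scales linearly with the per-kilowatt-hour rate:

```python
# Hypothetical illustration of why data centers chase cheap power:
# annual electricity cost scales linearly with the per-kWh rate.
facility_load_kw = 5000        # hypothetical 5 MW facility
hours_per_year = 24 * 365

for label, rate in [("national average", 0.10), ("low-cost region", 0.05)]:
    annual_cost = facility_load_kw * hours_per_year * rate
    print(f"{label}: ${annual_cost:,.0f} per year")
# a $0.05/kWh difference on a 5 MW load is ~$2.2 million per year
```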

*SC08 – Austin, Texas, November 15-21, 2008
