
The Promise of Petascale Computing


By Malinda Lingwall and Craig Stewart

August 8, 2007 | In 1981, the IBM PC 5150 was the hot new item on the U.S. consumer technology market. It had a cycle time of 250 nanoseconds, and its whopping 16 kilobytes of user memory were enough to hold eight typewritten pages. In business, the IBM 3033-S mainframe had a blazing-fast cycle time of less than 60 nanoseconds and some 4 to 8 megabytes of main storage — enough for a few high-resolution photographs. A single-sided 5.25-inch floppy disk held 160 kilobytes, or about 80 typewritten pages.

We’ve certainly come a long way since then. The past 25 years or so have seen the speed of personal computers grow nearly a thousand times. IBM’s BladeCenter HS21 is almost 200 times faster and has 100,000 times more storage than its 1981 mainframe ancestor. From floppy disks larger than your hand, we now store data on 12-gigabyte USB flash drives the size of your thumb. And it’s a good thing, too — it would take almost 20,000 single-sided floppy disks to hold the 3-gigabyte human genome!
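For readers who like to check the arithmetic, the short Python sketch below reproduces these back-of-envelope figures; the page size of roughly 2 kilobytes and the media capacities are the rough values used in this article, not exact specifications.

```python
# Back-of-envelope storage arithmetic, using the rough figures quoted above:
# about 2 KB per typewritten page, 160 KB per single-sided 5.25-inch floppy,
# and 3 GB for the human genome.

KB = 1_000
GB = 1_000_000_000

page = 2 * KB          # one typewritten page, roughly
floppy = 160 * KB      # single-sided 5.25-inch floppy disk

print(16 * KB // page)     # ~8 pages in the IBM PC 5150's 16 KB of memory
print(floppy // page)      # ~80 pages per floppy disk
print(3 * GB // floppy)    # ~18,750 floppies to hold a 3 GB genome
```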

Peter Cherbas, director for Indiana University’s Center for Genomics and Bioinformatics, provides equipment and expertise to allow researchers to carry out high-throughput genomic experiments and analyses. According to Cherbas, “Computing capacity and scientists’ appetites push each other upward in an endless spiral. When I was a graduate student we were thrilled by the purchase of a ‘computer’ to calculate standard deviations.  Now we’ve learned how to generate terabytes of data cheaply and quickly, and the current generation of life science students are learning how to ask penetrating questions of all those data — questions that place increasing demands on storage and processing.” 

But we’re reaching the point where even today’s technology is insufficient. The life sciences have only just begun to exploit computational analysis and data generation; the amount of data will likely increase exponentially within the coming decade. 

Recognizing this, the Department of Energy and the National Science Foundation (NSF) have set goals of achieving petascale computing before the end of this decade. According to the NSF: “The petascale [high performance computing] environment will enable investigations of computationally challenging problems that require computers operating at sustained speeds on actual research codes of 10^15 floating point operations per second (petaflops) or that work with extremely large data sets on the order of 10^15 bytes (petabytes).”

Practical Petascale
Scientific simulations such as protein folding, weather forecasting, and particle modeling are at the heart of this push toward petascale computing. Such problems were only just coming into focus in the early 1980s. At that time, a typical particle physics simulation of about one million particles took dozens of hours to run on a computer equivalent to the IBM 3033. A decade ago, a simulation of 322 million particles ran at a sustained rate of 170 Gflops and took less than ten hours to complete. In 2005, a simulation of more than 524 million particles ran at a sustained rate of over 100 Tflops and finished in just seven hours. This remarkable progress, from gigaflops to more than a hundred teraflops sustained, with run times shrinking at the same time, illustrates why the field is pushing toward petascale computing.
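As a rough way to quantify that leap, the sketch below compares the total floating-point work of the two later simulations. It is an approximation only, assuming the quoted sustained rates held for the full wall-clock times.

```python
# Rough comparison of total floating-point work for the two runs described
# above, assuming the quoted sustained rates held for the full run times.

GFLOPS = 1e9
TFLOPS = 1e12
SECONDS_PER_HOUR = 3600

work_1997 = 170 * GFLOPS * 10 * SECONDS_PER_HOUR   # ~170 Gflops for ~10 hours
work_2005 = 100 * TFLOPS * 7 * SECONDS_PER_HOUR    # ~100 Tflops for 7 hours

print(f"{work_2005 / work_1997:.0f}x more arithmetic, in less time")  # ~412x
```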

Why select a physics example, rather than a biology example, to chart this increase in speed? High performance applications in physics have simply been around longer than those in the life sciences, so more performance data are available. High performance applications in biology, however, are already changing our understanding of living organisms.

University of Illinois at Urbana-Champaign professor Emad Tajkhorshid and his team recently simulated the dynamics and function of a protein responsible for transport of large molecules across the cell membrane. Using the software package NAMD, Tajkhorshid was able to refute the leading hypothesis about how such transport takes place. This study involved thousands of computer hours on systems at Indiana University, the National Center for Supercomputing Applications (Urbana-Champaign, Illinois), and the Pittsburgh Supercomputing Center, made available thanks to the NSF-funded TeraGrid.

By one measure, life science computing is leading the way into the era of petascale computing. In 2006, the RIKEN Institute’s Protein Explorer was the first system with a reported peak capacity of 1 petaflops. However, the Protein Explorer is a dedicated system that performs only molecular dynamics calculations. In addition, it’s not capable of running the Linpack benchmark, the standard program used to measure sustained performance on general-purpose systems. The twice-yearly TOP500 list uses Linpack results to rank the 500 most powerful commercially available systems. According to the June 2007 list, the world’s fastest system is Lawrence Livermore National Laboratory’s IBM BlueGene/L, with a sustained capacity of 280 teraflops and a peak theoretical capacity of 360 teraflops.

With 192 systems on the TOP500 list, it’s not surprising that IBM also leads the race for petascale computing. Its new BlueGene/P, the first of which will be installed at Argonne National Laboratory later this year, will be capable of a sustained capacity of one petaflops and a peak theoretical capacity of three petaflops. This milestone heralds the true beginning of petascale computing.

Petaflops systems are only half of the petascale computing equation; we also need more data storage. Not only are we keeping old data and producing new data all the time, but our needs change as data analysis methods change.

In the early 1980s, when vinyl records were ubiquitous, who would have imagined that we would someday have 80 gigabyte video iPods for our massive personal digital music collections?

Scientific data storage follows the same principles. In the 1980s, floppy disks (or even earlier, punchcards) sufficed. Now, scientists are generating terabytes of data from a single protein folding simulation or next-generation sequencing experiment, and creating amazingly detailed 3D visualizations of molecules. Even before we reach one petaflops, life scientists may need to store petabytes of data. It won’t be long before the largest public data sets in the life sciences reach a petabyte.

“We’ve been content to compare a few dozen gene sequences from a few dozen organisms, but we’re beginning to see the benefits to be reaped by comparing all the genes of hundreds of organisms,” says Cherbas. “Who knows where we’ll be 20 years from now?”

Already this year, Indiana University has allocated hundreds of terabytes of storage for use by the U.S. research community, with allocations expected to top 1 petabyte sometime in 2008. Other disciplines are already there — the BaBar high-energy physics project has a database of more than a petabyte. It would take more than six billion of those single-sided floppy disks to hold a petabyte of data. (Lucky, then, that we’ve long since moved on to tape robots and silos to hold our data — they take up much less floor space!) “The one near certainty,” Cherbas says, “is that before [long] we’ll look back and say, ‘How on earth did we ever think 200 petabytes of storage was enough?’”
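The same floppy-disk arithmetic scales up easily. A minimal check of that figure, and of the 200-petabyte figure in Cherbas’s quote, again using the rough 160-kilobyte floppy capacity:

```python
# How many 160 KB single-sided floppies would it take to hold one petabyte?
PB = 1e15          # one petabyte, in bytes
floppy = 160e3     # 160 KB floppy disk, in bytes

print(PB / floppy)         # ~6.25 billion disks for 1 PB
print(200 * PB / floppy)   # ~1.25 trillion disks for 200 PB
```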

----------------------------------

Malinda Lingwall is Information Manager for the Indiana METACyt Initiative at Indiana University. Email: mlingwal@indiana.edu.

Craig Stewart is Associate Dean of Research Technologies for University Information Technology Services, Indiana University. Email: stewart@iu.edu.

----------------------------------

IU’s involvement in the TeraGrid is supported by the National Science Foundation under grants ACI-0338618, OCI-0451237, OCI-0535258, and OCI-0504075.

There will be a petascale computing workshop at the IEEE 7th International Symposium on Bioinformatics & Bioengineering (BIBE ‘07) in Boston in October. See: http://www.cs.gsu.edu/BIBE07/

