


Paper View 

By Debra Goldfarb
Group Vice President of WW Systems and Servers

June 12, 2002 | The high-performance computing (HPC) market is unlike any other. Although it is a small, niche market, its influence is powerful and pervasive. In fact, one could argue that it is intrinsically linked to the health of our society from an economic, defense, and public well-being standpoint. Consider, for example, that the designers of your car used a supercomputer, that the gasoline in your car's gas tank was likely found with the use of a supercomputer, and that the prescriptions in your medicine cabinet were developed in part using a supercomputer.

To better understand the power and uniqueness of supercomputers, it helps to get some sense of their astonishing speed. Computer systems are measured by the number of calculations they perform each second. Because those calculations operate on numbers whose decimal point can "float" to represent values of widely varying magnitude, they are called floating-point operations. Hence the term "flops," for floating-point operations per second, came to designate the performance of a supercomputer. Today, systems are measured in gigaflops and teraflops, denoting billions or trillions of calculations per second.

To get an idea of how fast this really is, consider a computer that runs one billion calculations each second. A billionth of a second is about the time it takes for light to move one foot. So, if you stood on the goal line of a football field and struck a match, a supercomputer could perform more than 300 calculations before someone standing on the other goal line saw the match light.
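That back-of-the-envelope figure is easy to verify. The short Python sketch below simply redoes the arithmetic using the figures assumed in the example above (light covering roughly one foot per nanosecond, 300 feet between the goal lines, and a machine sustaining one billion calculations per second); it is an illustration, not a benchmark of any particular system.

    # Re-derive the football-field illustration from the figures above.
    FLOPS = 1e9                  # one billion floating-point operations per second
    FEET_PER_NANOSECOND = 1.0    # light covers roughly one foot per billionth of a second
    FIELD_LENGTH_FEET = 300      # 100 yards between the goal lines

    light_travel_time_s = FIELD_LENGTH_FEET / FEET_PER_NANOSECOND * 1e-9
    operations = FLOPS * light_travel_time_s

    print(f"Light crosses the field in about {light_travel_time_s * 1e9:.0f} ns,")
    print(f"during which the machine completes roughly {operations:.0f} calculations.")
    # Prints about 300 ns and 300 calculations, matching the example above.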

During the dozen-plus years I have been following the high-performance computing industry, there has been a complete transformation of this market segment. Once the domain of very expensive, highly specialized systems and technologies, the HPC market today is dominated by general-purpose architectures built around standard, commodity technologies. Furthermore, instead of supporting the technical market exclusively, HPC systems are part of a broader server portfolio that supports markets as diverse as retail and cosmology.

For mainstream computer suppliers, this change has been a boon, enabling them to reach new markets and high-profile customers with nominal incremental R&D investment. For high-end niche supercomputer suppliers such as Cray Inc., however, the outcome has not been so rosy. For these suppliers, the combined effects of disruptive technologies, a shrinking market opportunity, and high R&D costs have eroded profits, slowed their ability to introduce powerful new products, and made it harder to find relevance for those products in the broader market.

The dynamics behind this shift are complex and encompass many entangled business, market, and technological forces. The primary factor is the general advance of computer technology (processor speeds, storage capacity, network bandwidth) in rough accordance with Moore's Law, commonly stated as a doubling of performance every 18 months. (Strictly speaking, Moore's Law described the number of transistors on a silicon chip.) The bottom line is that general-purpose systems have become good enough to support a large portion of the HPC user community. But for those who fall outside this market structure, that is, those whose applications the current state of the art cannot support, the shift has been extremely painful.
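To get a feel for how quickly that compounding accumulates, the small Python sketch below applies the commonly quoted 18-month doubling period; the period is an assumption for illustration only, since Moore's original observation concerned transistor counts rather than delivered performance.

    # Growth multiple implied by a fixed doubling period.
    def doubling_factor(months, doubling_period_months=18.0):
        """Return the growth multiple after `months` at the given doubling period."""
        return 2 ** (months / doubling_period_months)

    for years in (3, 5, 10):
        print(f"{years:>2} years: about {doubling_factor(years * 12):.0f}x")
    # Prints about 4x, 10x, and 102x, i.e., roughly two orders of magnitude per decade.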

At the International Data Corp. HPC User Forum in Santa Fe, N.M., in late April, speakers from Bristol-Myers Squibb, the National Cancer Institute, Indiana University, the Sandia and Los Alamos national laboratories, and other organizations provided insight into the emerging computational requirements related to R&D in the post-genomic era. Nearly all focused on their current and future applications, and how those applications relate to current and emerging architectures. Although these organizations had extensive computing infrastructures in place, they spoke candidly about the challenges ahead as they tackled complex problems in the areas of cancer research, systems biology, cell-membrane transport, and network-interaction analysis, to name a few.

None of the speakers saw any one platform as a panacea for the future. In fact, they all agreed that heterogeneous computational environments are needed to support an array of increasingly complex applications. In this context, heterogeneous spans existing high-performance architectures, including clusters, symmetric multiprocessors (SMP), vector systems, massively parallel processors (MPP), and field-programmable gate arrays (FPGA), as well as future computing schemes such as nanotechnology and quantum systems. Speakers worried further that changing market dynamics in HPC might delay or halt the delivery of needed new high-power computing systems.

The conundrum I have been wrestling with — probably the primary question for those who participate in the HPC community — is whether the rapid progression of new biological approaches will require fundamental changes in the computing topography we see today, or whether the market will be satisfied with currently available technology. Will bio-IT catalyze a revival in advanced computer architectures? And if so, who will build these computers? One thing we have learned over the past 10 years is that the classical supercomputer business model does not work. But will the large server manufacturers invest in new classes of high-end computers?

Stay tuned.

Debra Goldfarb is IDC group vice president of worldwide systems and life science research. She can be reached at dgoldfarb@idc.com. 





