Such large-scale "PC philanthropy" efforts as SETI@home, FightAIDS@home, and Folding@Home (see Paper View), along with public-sector research efforts such as the National Science Foundation TeraGrid and the North Carolina Bioinformatics Grid, have dramatically raised the profile and public acceptance of massive Internet grid computing. They have also triggered widespread efforts within the computer industry to jump on the grid train — any grid train — before it leaves the station. But be careful: Make sure you know where it's going.
People tend to use "grid" interchangeably to represent concepts that span the complexity scale from the simple harvesting of spare PC cycles from co-workers' computers to grand visions of the Internet as a massive distributed computing infrastructure.
Clearly, grid computing means different things to different people, often at different times. To its most visionary pundits, grids represent the ultimate step in the evolution of computing architecture: a universal source of pervasive, utility-like computing power that companies can purchase as needed, much as they purchase electricity today. The most stalwart advocates believe that grids not only represent the IT environment of the future but will ultimately eclipse in significance what the Internet is today.
By contrast, to the average Joe running the data center at a research lab, grids represent a way to improve performance on certain classes of applications by distributing bits of the application to otherwise idle servers and desktops sitting within his or her data center or departmental hallway. To the average researcher, grids are a price/performance play — that is, they are a means of building scalable computing resources using mostly existing commodity components.
The Next Big Thing?
The concepts and technologies feeding grid computing are not new. Parallel computing has been the subject of debate for over 30 years. Even utility computing, a concept currently receiving considerable industry and press attention, was first mentioned by University of California at Los Angeles computer scientist Len Kleinrock in 1969. Admittedly, the technical architecture that Kleinrock had in mind bears little resemblance to current concepts of grid and utility, but the point stands: There's nothing new under the sun.
After new IBM President and CEO Sam Palmisano outlined his vision of on-demand computing at a recent customer event, several financial analysts reacted by declaring Internet grid computing "the next big thing," adding that it would change the way we think about and use computers.
All hype aside, it is unlikely that grids will fundamentally change the way that scientific and technical computing is done in the near term, particularly in the private sector. In addition to oft-heard concerns about data security, there are a couple of important reasons for saying this:
Application fit. A surprisingly widespread misconception is that grids are just inexpensive supercomputers and can be used to
solve any problem for which supercomputers are traditionally used. This is only partially true. Grids do extremely well with embarrassingly parallel applications (e.g., BLAST, DOCK, and GAMESS). However, a grid would do a lousy job with large-scale metabolic and signal transduction pathway simulations, for example, or with other applications that require the movement of massive amounts of data between memory and processor.
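The distinction above comes down to task independence. A minimal sketch of what "embarrassingly parallel" means in practice, using a hypothetical stand-in for a BLAST-style per-sequence computation (the `score` function is illustrative, not any real tool's API):

```python
# Toy sketch of an "embarrassingly parallel" workload: every task is
# independent, so it can be farmed out to idle nodes with no
# communication between tasks. The scoring function below is a
# hypothetical stand-in for an expensive per-sequence computation.
from multiprocessing import Pool

def score(sequence: str) -> int:
    # Stand-in for independent work (e.g., comparing one sequence
    # against a reference database); here, just a GC count.
    return sum(1 for base in sequence if base in "GC")

def run_serial(sequences):
    return [score(s) for s in sequences]

def run_parallel(sequences, workers=4):
    # Each sequence is scored independently -- no shared state, no
    # data shuffled between tasks -- which is what makes this class
    # of application a good fit for loosely coupled grid nodes.
    with Pool(workers) as pool:
        return pool.map(score, sequences)

if __name__ == "__main__":
    seqs = ["GATTACA", "GGCCGG", "ATATAT"]
    assert run_parallel(seqs) == run_serial(seqs)
```

A pathway simulation is the opposite case: each time step depends on the results of the previous one across the whole model, so the tasks cannot be split into independent pieces like this.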
Operational fit. The principal attraction of grids is that they are configured from nondedicated components. These components are typically operated and/or managed by independent individuals or organizations. In addition, the interconnect speeds between grid nodes could be anything from modem speeds to multigigabit LAN speeds.
The result is a highly dynamic — some might say unpredictable — complex of nodes in which individual components and characteristics can change at any time, both planned and unplanned. As desirable as it is from a price/performance standpoint, unpredictability does not lend itself too well to applications requiring strong service-level agreements. Most grid experts agree that end-to-end quality of service is a serious adoption inhibitor for users with applications that are organizationally strategic and have strict operational constraints. In this case, we would expect data center managers to tend toward platforms in which performance characteristics are more predictable and measurable, such as dedicated clusters.
If one looks at the industry as a whole, there is no doubt that grid computing will play a part going forward, albeit a bit part. IDC estimates that as much as 30 percent to 40 percent of the computational workload in current and future life science applications possesses technical characteristics that lend themselves well to distributed execution.
Savvy IT managers are vigorously looking for better ways to distribute workloads across their organizations. However, we believe that the most critical workloads will be distributed within a given organization's data center (i.e., onto dedicated or semidedicated clusters). What that leaves is a class of applications for which wall-clock time is less pressing, even if the applications themselves are considered strategic. Most of these, we believe, will also stay within organizational boundaries.
Mark Hall is director of life sciences research at IDC and can be reached at firstname.lastname@example.org.