Inside the Box - Chris Dwan 

August 18, 2004 | "Grid computing" is one of the hottest technology buzz phrases of the century. At conferences, it can be a challenge to find a single product or vendor not claiming to be "grid aware," "grid ready," or "grid compliant." All the major computer manufacturers have signed up (most several times) to comply with and support emerging standards for both grid computing and Web services. Textbook-sized tomes have titles such as Using The Grid. Yet the uncomfortable reality of combining a nascent technology with near-hysterical hype has left some technical folks with a distinct ennui toward any mention of "the grid."

One source of confusion centers on the definition of a grid. I have heard definitions ranging from the trivial ("any pair of CPUs that do not share a common file system") to the violently overblown and vague ("the grid makes the world your computer"). Some definitions center on application-level interoperability and guaranteed levels of service. Others focus on namespace and data management. Many center on harvesting spare CPU power (à la SETI@home and the Condor project at the University of Wisconsin-Madison), even though the vast majority of users are not CPU bound.

One widely accepted definition of grid computing comes from Ian Foster's 2002 white paper "What Is the Grid? A Three Point Checklist." Foster and his co-author, Carl Kesselman, also assembled the seminal book The Grid: Blueprint for a New Computing Infrastructure. Foster's checklist defines a grid as follows:

• A grid coordinates resources that are not subject to centralized control.

• A grid uses standard, open, general-purpose protocols and interfaces.

• A grid delivers nontrivial qualities of service.

In short, grid computing is the creation of large-scale, useful systems across organizational and platform boundaries. Foster is quick to point out that the idea of interoperable, utility computing is far from new. As early as the 1960s, authors were predicting a national infrastructure of computation to rival the existing systems for voice communication, highway transportation, and electrical power distribution. Obviously, the task of creating such an infrastructure is far more time-consuming than the act of predicting it.


The Global Grid Forum (GGF) is a standards body formed in the late 1990s. The GGF has produced a number of important documents specifying the elements that seem to be central to implementing a grid, including data management, security, performance, and scheduling. The Globus toolkit is an open-source implementation of the standards adopted by the GGF, and several commercial software vendors have adopted various GGF standards in their products. For job submission to remote clusters, an additional standard exists: the Distributed Resource Management Application API (DRMAA).
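
To make the DRMAA idea concrete, here is a minimal sketch of standards-based job submission using the open-source Python drmaa bindings. It is an illustration rather than a canonical recipe: it assumes a DRMAA-enabled resource manager (Grid Engine, for example) and its DRMAA library are installed on the submitting host, and the command being run is arbitrary.

    # Minimal DRMAA job-submission sketch (Python "drmaa" bindings).
    # Assumes a DRMAA-enabled scheduler such as Grid Engine is installed.
    import drmaa

    with drmaa.Session() as session:
        jt = session.createJobTemplate()
        jt.remoteCommand = '/bin/hostname'  # program to run on the cluster
        jt.joinFiles = True                 # merge stdout and stderr

        job_id = session.runJob(jt)         # hand the job to the scheduler
        print('Submitted job:', job_id)

        # Block until the scheduler reports that the job has finished.
        info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
        print('Exit status:', info.exitStatus)

        session.deleteJobTemplate(jt)

Because the interface is standardized, the same script can submit work to any scheduler that speaks DRMAA (Grid Engine, PBS, or LSF, for instance) without modification.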


Think Locally 
One barrier to broad adoption of GGF standards is the fact that the Globus toolkit is a large, unwieldy, and brittle piece of software. Version 2.4 was used for several years by a variety of projects, and it remains the most robust implementation of grid services available. Still, nobody who has successfully integrated even a few resources using Globus 2.4 will claim it was simple or painless.

In 2003, a consortium of industry and academic experts agreed on a set of standards, the Open Grid Services Architecture (OGSA), that brings Web services and the grid together on the premise that grid computing is best built on a foundation of Web services. This idea is implemented in Version 3 of the Globus toolkit.

As long as the core software implementing grid services is difficult to install, there is little motivation for the cluster administrator or the IT manager to jump on board. The most hyped benefit of the grid does not accrue to the local users or owners of the system. Why spend person-months installing and configuring software so that external, nonbillable users can benefit?

The key to turning the grid from hype into reality is this: We must focus on the fact that it is locally beneficial to design systems that are scalable and interoperable. In the long run, such systems will be far simpler and cheaper to maintain than the alternative. The first few administrators (and their users) to install any system will bear the major costs of first-release bugs and downtime. In the end, though, the corporation benefits from the increased flexibility and scalability. Major pharmaceutical companies are beginning to demand that their systems interoperate, with good results. This will also be true in the public domain.

Trust is a major barrier to the adoption of grid technologies. Owners and administrators must trust that their resources are being wisely and securely used. While a "take as you need, give as you can" philosophy pervades writing about the grid, this altruism quickly breaks down at the corporate firewall, or in the budget oversight committee hearing. There are solid, practical reasons to create interoperable systems, but until real trust exists between the partners in a grid, all technical attempts to build it are doomed to fail. Users must also trust the grid; they must believe their computational results are reliable and replicable. I've met very few scientists who would be willing to accept that their jobs "ran somewhere on the grid" and leave it at that.

For grid computing to make the transition from a promising technology to robust infrastructure, we need to build real applications with real users and real support needs, and learn from the process. Simply calling a cluster a grid, or declaring an image-editing suite to be "grid aware," does nothing but irritate those of us in the trenches. I have endured more than one hot-button push with clients wanting to "install the grid," only to see whatever system we created languish at the "Hello, world" stage because there was never a defined customer or requirement driving the effort.

Actual corporations and research teams have accomplished nontrivial work on widely distributed, heterogeneous systems using open protocols. Invariably, these were groups with difficult global tasks and short timelines. As with every other emerging infrastructure, the enduring standards will be set by those who go ahead and build enduring systems.

In short: Look with suspicion at anyone who tries to sell you a grid, but jump at the chance to join a team that's building one.

Chris Dwan is a senior consultant with The BioTeam. E-mail: cdwan@bioteam.net. 




Illustration by James Yang




