
Interfaces Will Save the World


By Chris Dwan

Dec 2005 / Jan 2006 | I was recently reminded of something both obvious and important: Well-designed interfaces make it possible to scale tasks and resources well beyond what might be expected. The reminder was a virtual conference on Genomics and Bioinformatics. Instead of occurring at a particular physical location, it was hosted using the Access Grid, a videoconferencing network made possible by both government grants and generous host institutions. The Access Grid is based on an open-source videoconferencing standard. Any site meeting certain technical standards for connectivity and AV equipment can broadcast sound and images of participants at that site. Even though only two of us were physically present at the Boston University location, the conference included hundreds of people from around the world. This is an example of a well-defined interface allowing interactions and community building that would otherwise be quite difficult.

Well-defined interfaces can also help to span the chasm between the IT and research worlds. When a discipline first adopts high-performance computing, it is feasible for a select few practitioners to serve as their own IT department. These early adopters fill primarily scientific roles but have the aptitude and the willingness to be the person in their group who knows about computers. They end up with server-class computers on their desks. As their IT skills become more and more vital to the research group, they are pressed into service: writing scripts, maintaining Web servers, and performing other jobs further and further from their research. Eventually, these people must choose either to abandon their scientific duties altogether or to push for these tasks to be taken up by a centralized IT department.

This brings us to the classic divide between IT and science. It is as simple to understand as the difference between a dedicated and a shared resource, but the transition can be a painful one. The leader of a research group in Cambridge told me the story of having the compute cluster he had built placed under central IT control. He explained it by analogy: using the cluster was as vital to his research as pipetting. With the cluster a shared resource managed by computing services, his researchers, who had actually built the cluster in the first place, now had to ask permission before doing tasks they had simply performed on their own for many years. He pointed out that it would be idiotic to outsource a task such as pipetting, yet somehow outsourcing the computing is supposed to make good business sense.

Of course, this has two sides. When a research core is responsible for the data storage, networking, and CPU cycles that support hundreds of researchers, it makes no sense whatsoever to risk a system crash by giving a single power user administrative privileges. IT groups at many large organizations now employ scientific consultants, generally as part of their full-time staff, whose sole responsibility is to interface with the researchers and help bridge the widening gap. These people are the other incarnation of the computing-savvy researchers I mentioned above: pulled by aptitude or enthusiasm, and sometimes both, from their computing background into the scientific community. Their primary role is to speak both languages and to serve as translator and facilitator.

I believe that one technology that can ease this situation is a standardized interface to the data and compute resources. By exposing functions familiar to the researchers in a standard way, we can provide a growing menu of comprehensible, useful tools without having to reengineer a new solution for every particular user. Web Services, using WSDL and SOAP, are a fine choice for this, besides being my favorite technology these days.
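
As a concrete sketch of what the researcher’s side of such an interface could look like, consider the Python fragment below. The service URL, the RunSearch operation, and its parameters are hypothetical, and the zeep library stands in for whatever WSDL-aware SOAP toolkit happens to be at hand; the point is only the shape of the interaction.

    from zeep import Client  # a generic WSDL/SOAP client library

    # Hypothetical WSDL published by the compute resource. The provider
    # maintains this one description; every client reads it.
    WSDL_URL = "http://compute.example.org/sequence-service?wsdl"

    def run_remote_search(sequence, database="nr"):
        """Submit a sequence to the (hypothetical) RunSearch operation."""
        client = Client(WSDL_URL)            # parse the WSDL, build the bindings
        return client.service.RunSearch(     # operation name comes from the WSDL
            sequence=sequence,
            database=database,
        )

    if __name__ == "__main__":
        print(run_remote_search("ATGGCGTTTAAACCC"))

The researcher never sees the queuing system, the file layout, or the cluster behind the call; the WSDL document is the entire contract.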

There are other choices: Grid computing is one. By “grid computing,” I mean the Globus-style, wide-area, heterogeneous variety, not simply cluster computing with better marketing. The reason it never caught on is that it was expensive both to be a user and to be a provider of a grid resource. It was (and remains) difficult to write code that makes use of grid resources. On the provider side, converting a perfectly functional SMP machine, data storage device, or cluster into a “grid-aware” resource was complex and liable to introduce both stability and security issues. Since neither side saw an immediate benefit, nobody was in a rush to adopt the technology.

Web Services, by contrast, provide an immediate, selfish benefit to both users and resource providers. Users gain access to resources they couldn’t use before, or couldn’t use easily. Providers gain the ability to solve a problem once, with a verifiable way to check that it’s still working. There is an additional benefit: A market has sprung up for relatively pure interface environments. Just as the HTML and HTTP standards opened up the market for Web browsers, SOAP and WSDL open the market for Web services clients.
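
The provider’s half of that bargain can be sketched just as briefly. In the fragment below, a single operation is wrapped in one service definition, and the WSDL that clients consume is generated from it automatically; the spyne library, the ReverseComplement operation, and the namespace are all illustrative choices of mine, not anything specific to the systems discussed here.

    from wsgiref.simple_server import make_server

    from spyne import Application, ServiceBase, Unicode, rpc
    from spyne.protocol.soap import Soap11
    from spyne.server.wsgi import WsgiApplication

    class SequenceService(ServiceBase):
        # Defined once; the WSDL that clients download is generated from
        # this declaration and served alongside the endpoint.
        @rpc(Unicode, _returns=Unicode)
        def ReverseComplement(ctx, sequence):
            complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
            return "".join(complement.get(base, "N")
                           for base in reversed(sequence.upper()))

    application = Application(
        [SequenceService],
        tns="example.org.sequence",            # hypothetical namespace
        in_protocol=Soap11(validator="lxml"),
        out_protocol=Soap11(),
    )

    if __name__ == "__main__":
        # Expose the service, and its WSDL, on port 8000.
        make_server("0.0.0.0", 8000, WsgiApplication(application)).serve_forever()

Checking that the service is “still working” then reduces to calling the published operation with a known input and comparing the answer, which is exactly the kind of test an IT group can automate.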

It has been my experience that placing this particular interface layer on top of a high-performance computing resource is valuable to all levels of user. My colleague Bill Van Etten recently wrote in this space about Concord-Carlisle High School in Massachusetts and how they are using a cluster computer in secondary education. That exact same interface is currently in use in universities, both for teaching and for research, and it’s also supporting discovery pipelines in major pharmaceutical companies.

The difference in use is primarily a matter of the level and type of customization that users employ. For the more novice user, tremendous expertise must be built into the interface itself: encapsulating expert knowledge about exactly what the computer is doing, and why, guides their exploration. At the other end of the scale, a discovery pipeline is not a place for human exploration, at least not in the same way as a high-school classroom. What matters there is that the IT group can provide stable, reliable services, and that the researchers can trust those interfaces to be consistent and correct enough to build large systems around them. A well-designed interface can serve both of these needs at once.

Bioinformatics has passed the level of complexity at which any one individual can understand the entire stack of skills required in its practice. Imposing a standardized interface layer between IT and researcher is a clear and logical step, and over the past year I’ve seen that beyond being clear and logical, it works for people from high schools through pharmaceutical companies.


Chris Dwan is a senior consultant with The BioTeam. E-mail: cdwan@bioteam.net.

 

