
Antibody Docking on the Amazon Cloud


By Adam Kraut

May 19, 2009 | Inside the Box | It was 18 months ago in this column that my BioTeam colleague Mike Cariaso proclaimed, “Buying CPUs by the hour is back” (see “Sunny Skies for Compute Cloud,” Bio•IT World, Nov 2007), in reference to Amazon’s Elastic Compute Cloud (EC2). Back then, we were perhaps a bit ahead of the hype vs. performance curve of cloud computing. A few forward-thinking companies were finding ways to scale out web services, but there was little EC2 activity in the life sciences.

However, in the past two years, utility computing has begun to make an impact on real-world problems (and budgets) in many industries. For life scientists starved for computing power, the flexibility of the pay-as-you-go access model is compelling; next to signing up for EC2, the grant process used by national supercomputing centers looks arcane and downright stifling. Innovative research requires dynamic access to a large pool of CPUs and storage.

A great place to clear the air about this emerging technology is computational drug design. IT and infrastructure decisions made early in the discovery process can have a profound impact on the momentum and direction of drug development. For protein engineers and informaticians at Pfizer’s Biotherapeutics and Bioinnovation Center (BBC; see “Programming at Pfizer’s BBC,” Bio•IT World, Jan 2009), the challenging task of antibody docking presents enormous computational roadblocks.

Respectable models of a protein’s three-dimensional structure can usually be generated on a single workstation in a matter of hours. But they require refinement at atomic resolution to validate whether the newly modeled antibodies will bind their target epitopes. One of the most successful frameworks for studying protein structures at this scale is Rosetta++, developed by David Baker at the University of Washington (see “Improving Structure Prediction,” Bio•IT World, Nov 2007). Baker describes Rosetta as “a unified kinematic and energetic framework” that allows “a wide range of molecular modeling problems … to be readily investigated.” Refinement of antibody docking involves small local perturbations around the binding site followed by evaluation with Rosetta’s energy function. It’s an iterative process that requires a massive amount of computing based on a small amount of input data. The mix of computational complexity and a pleasantly parallel nature makes the task suitable for both high-end supercomputers and Internet-scale grids.
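To see why the workload parallelizes so cleanly, consider a schematic of that perturb-and-score loop. The Python sketch below is purely illustrative: the toy perturb() and score() functions are stand-ins for Rosetta’s move set and energy function, not its actual API. The point is that each refinement trajectory is independent, so thousands can run concurrently with no communication between them.

import random
from multiprocessing import Pool

def perturb(pose):
    """Apply a small random move around the binding site.
    (Toy stand-in for Rosetta's local perturbations.)"""
    return [x + random.gauss(0, 0.5) for x in pose]

def score(pose):
    """Toy energy function -- a placeholder for Rosetta's scoring."""
    return sum(x * x for x in pose)

def trajectory(seed, n_steps=1000):
    """One independent refinement trajectory: perturb, score, and
    keep the move if it lowers the energy (a greedy minimization)."""
    random.seed(seed)
    pose, energy = [1.0, 1.0, 1.0], float("inf")
    for _ in range(n_steps):
        candidate = perturb(pose)
        e = score(candidate)
        if e < energy:
            pose, energy = candidate, e
    return energy

if __name__ == "__main__":
    # Trajectories share nothing, so throughput scales linearly with
    # CPU count -- the "pleasantly parallel" property described above.
    with Pool() as pool:
        energies = pool.map(trajectory, range(100))
    print(min(energies))

Because the only data that move over the wire are a small input structure and the best-scoring outputs, the job ships well across a network, which is exactly what makes it a fit for rented CPUs.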

Rosetta So Much Better
When Giles Day and the Pfizer BBC informatics team designed their antibody-modeling pipeline using Rosetta, they soon realized they had a serious momentum killer. Each antibody model took 2–3 months using the 200-node cluster. With dozens of new antibodies to model, the project essentially gridlocked until the team could find enough compute capacity to do the sampling. Worse, demand for the pipeline was unpredictable, since it hinged on results from other departments. What was needed was a scale-out architecture to support “surge capacity” in docking calculations.

Traditional options were limited to expanding in-house resources by adding more nodes to the cluster or reducing the sampling. Doubling CPU capacity could potentially halve a two-month calculation, but would entail acquisition, deployment, and operational costs. Consequently, Day contracted the BioTeam to provide a cloud-based solution.

The result was a scalable architecture custom-fit to Pfizer’s workloads and built entirely on Amazon Web Services (AWS). The AWS team is years ahead of the competition, unveiling new features and API improvements almost monthly. The AWS stack is fast becoming our first choice for cost-effective virtual infrastructure and on-demand high-performance computing.

The new docking architecture at Pfizer employs nearly the entire suite of services offered by Amazon. A huge array of Rosetta workers can be spun up on EC2 by a single protein engineer and managed through a web browser. The Simple Storage Service (S3) stores inputs and outputs, SimpleDB tracks job metadata, and the Simple Queue Service (SQS) glues it all together with message passing. As Chris Dagdigian pointed out in his Expo keynote (see p. 8), the cloud isn’t rocket science. What Amazon did right in 2007 was elastic compute and storage. What it does even better in 2009 is give developers everywhere a complete stack for building highly efficient and scalable systems without a single visit to a machine room. The workloads at Pfizer that previously took months are now done overnight, and the research staff can focus on results instead of shelving their projects.
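To make the division of labor concrete, here is a minimal sketch of how a worker node in this kind of SQS/S3/SimpleDB pattern could operate, written with the period’s boto Python library (today one would reach for boto3). The queue, bucket, and domain names, the rosetta_dock command, and the job-message format are all hypothetical stand-ins; the details of Pfizer’s actual implementation were not published.

import subprocess
import boto

# Hypothetical resource names -- the real pipeline's are not public.
QUEUE, BUCKET, DOMAIN = "docking-jobs", "docking-data", "docking-status"

sqs = boto.connect_sqs()   # AWS credentials come from the environment
s3 = boto.connect_s3()
sdb = boto.connect_sdb()

queue = sqs.get_queue(QUEUE)
bucket = s3.get_bucket(BUCKET)
domain = sdb.get_domain(DOMAIN)

while True:
    msg = queue.read(visibility_timeout=3600)  # claim a job for an hour
    if msg is None:
        break                                  # queue drained; shut down
    job_id = msg.get_body()

    # Pull the small input (the antibody model) down from S3 ...
    bucket.get_key("input/%s.pdb" % job_id).get_contents_to_filename("in.pdb")

    # ... run the compute-heavy docking refinement locally ...
    subprocess.check_call(["rosetta_dock", "in.pdb", "out.pdb"])  # placeholder command

    # ... push the refined structure back to S3 ...
    bucket.new_key("output/%s.pdb" % job_id).set_contents_from_filename("out.pdb")

    # ... then record job metadata in SimpleDB and acknowledge the message.
    domain.put_attributes(job_id, {"status": "done"})
    queue.delete_message(msg)

Scaling out is then just a matter of launching more EC2 instances running this loop; the queue’s visibility timeout ensures that a job claimed by a failed worker eventually reappears for another to pick up.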

Adam Kraut is a Scientific Consultant at BioTeam. He can be reached at kraut@bioteam.net.


This article also appeared in the May-June 2009 issue of Bio-IT World Magazine.

