
Cycle Computing's Tour de Cloud


By Kevin Davies

November 20, 2009 | Despite the enormous appeal of cloud computing, making practical use of resources such as Amazon’s EC2 is not straightforward. One key issue is scheduling. Jason Stowe, the founder and CEO of Cycle Computing, says that his firm’s service, the CycleCloud, attempts to solve a very straightforward problem: “Amazon allows you to provision 1000 servers. Now what?... How do you make it so the submission you just did 5 minutes ago starts running sooner than the submission from yesterday, because it’s higher priority?”

“With EC2, the math really changes,” says Stowe. The cloud can dramatically accelerate time to result compared to a fixed cluster. “A task that used to take six weeks now takes less than a day. It’s going to really affect a lot of industries, but life sciences is first, because you guys are generating a lot of data, and the ratio of computation to data is still very high. It’s on a scale that works well on EC2.”

Amazon tries to help users, but as Stowe says, “they’re not explicitly in the business of helping life sciences users provision computing resources to run them. They’re in the business of providing infrastructure as a service.” Scheduling that activity and helping people use the cloud infrastructure practically and securely is what Cycle offers.

Cycle helps users take internal workflows and provision clusters for short-term needs. It also works with other software and instrument providers to offer HPC environments dynamically to customers that don’t have the IT staff to manage them. “Pharma is very forward thinking on this,” says Stowe. “They’re definitely the first with their feet in the water, along with finance and maybe 1-2 other industries, such as Web 2.0.”

Math is different in the cloud. Stowe says 1000 compute hours cost the same whether it’s 1 CPU for 1000 hours or 1000 CPUs for 1 hour. “That cost differential is huge compared to the old model,” he says. “I’d have to buy 1000 computers to get the 1-hour time to market. A lot of people couldn’t afford to do that, especially on short notice. An old-school IT view would be: if your department was really moving, you could get a new computer in a month, or 4 months for a cluster (installed, racked, cabled, powered, storage, etc.). Now you can do that in 15 minutes—and it costs less!”
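Stowe’s arithmetic is easy to check. A minimal sketch, assuming a 2009-era on-demand rate of roughly $0.10 per instance-hour (an illustrative figure; actual EC2 pricing varies by instance type):

```python
HOURLY_RATE_CENTS = 10  # assumed ~$0.10/instance-hour; illustrative, not a quoted price

def cloud_cost_cents(instances, hours):
    """Cost scales with total instance-hours, not with how they are arranged."""
    return instances * hours * HOURLY_RATE_CENTS

serial = cloud_cost_cents(1, 1000)    # 1 CPU for 1000 hours
parallel = cloud_cost_cents(1000, 1)  # 1000 CPUs for 1 hour
assert serial == parallel == 10000    # $100 either way; only the wall-clock time differs
```

The spend is identical in both cases; what changes is whether the answer arrives in an hour or in six weeks.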

Early Start
Stowe trained in math and computer science, enrolling at Carnegie Mellon in 1993, a couple of years earlier than his classmates. He later transferred to Cornell and had a stint at Microsoft. His interests focused on rendering algorithms, but his first company foundered in the 2001-02 dot-com crash. One of his customers later called from Toronto, requesting rendering assistance for a movie. Stowe spent a year working on the Disney film The Wild, running a 1250-processor supercomputer that ranked in the top 100 list at the time.

Stowe started Cycle in 2005 to help companies do HPC using Condor as an open-source scheduler (see “Condor Rules”). “Everybody had the same problem,” says Stowe. “For bursts of usage, they had to buy clusters for peak utilization, but then day-to-day they wouldn’t need the peak usage. When EC2 came out in 2006, we saw that as a way of solving that problem.” Early clients included JP Morgan Chase and Lockheed Martin. Through a mixture of luck and hard work, Stowe built larger implementations, especially in financial services. And he got a credit on The Wild.

In late 2007, Stowe won his first paying customers. Cycle’s first life sciences client was Varian, a manufacturer of mass spectrometers. Cycle reduced an internal simulation from six weeks to under a day on EC2 just by “spinning it up.” Stowe took the call on a Tuesday, and the calculation was done by Thursday: a classic demonstration of the virtues of Condor, the cloud, and Cycle.

Cloud services are the fastest-growing segment of Cycle’s business, making up a third of the total. A lot of clients want both internal and cloud resources. “We implement Condor internally, which makes it easier to use it externally.” Cycle’s pharma clients include Pfizer, Eli Lilly, and Johnson & Johnson (see p. 31). Stowe says there are two good reasons why life science companies have embraced cloud computing. One is outsourcing: pharma and the life sciences have been comfortable with outsourcing IT and other technologies for a while, he says. “There’s definitely more of a culture that we want to focus on science, not IT. Other industries view IT as proprietary.”
Second, there are some clear use cases. Users can perform runs every night, avoiding lengthy downtimes for empty clusters, or review years of old data with a new algorithm or technology. “Those data are going to sit there idle, or you can do it on EC2, get it done, and you’re done.”

From A to Z
Amazon was years ahead of the field, says Stowe. “They’re by far the easiest to use, easiest to start up, particularly if you’re comfortable with programming. They also have a lot of vendors like us that make it easier to use the servers.” But Stowe is watching other vendors, including HP, vCloud, and IBM, which might at some point provide comparable or better infrastructure.

In effect, Cycle offers clients a turnkey cluster. “You want to spin up a cluster to run work, you don’t need to know anything about virtual machine images,” says Stowe. “We have folks to help with app migration for HPC work, we do training on Condor, training on clusters, 24/7 support. We offer a full range of services. Someone can very easily say, here’s a proteomics workflow or a genome assembly, I want to turn it on and dial it up.”

Stowe can’t reveal much in the way of early customers, but he’s attracting interest from academic research organizations. The Cycle model works well in the grant funding world, particularly when there isn’t a major IT department. “With cloud computing, instead of a capital expenditure (and depreciation), you have an operating expenditure.”

Lilly has worked on genomics applications including BLAST, pharmacokinetics using NONMEM, statistical analysis in R, and simulations of clinical trial data. Cycle has done chemical simulation work in molecular dynamics with Schrödinger, helping its scientists run calculations by ramping up large numbers of nodes in the cloud.

Not surprisingly, Stowe sees opportunity in managing next-gen sequencing data. “We’re looking at [next-gen] instrument providers, helping provide an on-demand cluster that spins up with software provisioned,” says Stowe. “Here, it would be running alignment and assembly calculations.” Considerations include where to grab the data, how to transfer it, and establishing the proper provisioning methodology. “How do we qualify these images, so if the FDA wants to know how this machine was generated 18 months ago, we have all that data, the qualification and audit? The cloud offers benefits of reproducibility and tracking.”

A next-gen run might generate 6 terabytes of image data, distilled down to 100 GB. “At that point,” says Stowe, “it makes a lot of sense to parallelize and use the cloud to do computation. Plus, those 100 GB can potentially be streamed as the instrument is operating. So over the 7-day read, you can stream that data up, so it doesn’t have huge bandwidth requirements.” The first part of the sequence analysis would be done on the instrument, and cloud calculations could deliver the assembled genome. “Plug your Internet connection into [the sequencer], and stream it up to EC2, and there’s no IT department required for a wet lab to be able to use the results.”
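The bandwidth claim holds up to back-of-the-envelope arithmetic. A quick sketch, assuming decimal gigabytes and a uniform stream over the full 7-day run:

```python
def sustained_mbps(gigabytes, days):
    """Average upload rate, in megabits/second, needed to stream the data over the run."""
    bits = gigabytes * 10**9 * 8      # decimal GB -> bits
    seconds = days * 24 * 3600
    return bits / seconds / 1e6

distilled = sustained_mbps(100, 7)    # the 100 GB of distilled reads: ~1.3 Mbps
raw = sustained_mbps(6000, 7)         # the full 6 TB of images: ~79 Mbps
```

Streaming the distilled 100 GB needs only about 1.3 Mbps sustained, modest even for a 2009 lab connection, while the raw 6 TB would demand roughly sixty times that, which is why the first-pass image processing stays on the instrument.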

Condor Rules
Cycle uses an open-source grid scheduler called Condor to handle scheduling. “It provides a heck of a lot of benefits,” says Stowe. Some say the cloud means the death of scheduling because users can get resources any time they want, but that view overlooks two things: what a scheduler actually does, and the people considerations.

Condor started in the mid-’80s doing CPU cycle scavenging (hence the name). Scheduling is critical because priorities change and nodes can come and go at any time (for example, as users restart their workstations), so jobs need to run in an extremely fault-tolerant way. Stowe says Condor tends to be very forgiving, especially in a cloud environment where many servers may be coming and going: Condor can provision more nodes, and releasing them when they aren’t needed doesn’t cause a problem. Cycle is also adding other schedulers to its framework, including PBS and SGE, following user requests.
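For context, work enters Condor as a plain-text “submit description” file; this hypothetical example (the script and file names are made up for illustration) queues 100 independent tasks, and Condor matches them to whatever nodes happen to be available:

```
universe     = vanilla
executable   = align_reads.sh
arguments    = $(Process)
output       = align.$(Process).out
error        = align.$(Process).err
log          = align.log
queue 100
```

Because each task is described independently and logged, Condor can rerun any job whose node disappears mid-flight, which is exactly the fault tolerance that matters when cloud nodes come and go.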

Then there’s the people side. “Generally, an IT department won’t say, ‘Here’s an AMEX card with no credit limit, go nuts!’... There are financial constraints and budgeting, ‘storage triage’: is it worth keeping these data around? You need something submitted to run 3 days ago, and I have a paper due or a result needed ASAP. I don’t want to wait for previous projects to finish. Any time there’s any limit on the resources you can consume, or any form of priority for jobs, there’s going to be some scheduling.”
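The point generalizes: the moment there is a cap on nodes and more than one submitter, something must decide what runs first. A minimal sketch of priority-ordered dispatch under a resource limit (this is an illustration, not Cycle’s or Condor’s actual implementation):

```python
import heapq

class Scheduler:
    """Toy scheduler: jobs wait in a priority queue until nodes free up."""

    def __init__(self, max_nodes):
        self.max_nodes = max_nodes
        self.running = 0
        self._queue = []   # entries: (negated priority, submit order, job name)
        self._order = 0

    def submit(self, job, priority=0):
        # heapq is a min-heap, so negate priority to pop the highest first;
        # submit order breaks ties first-come, first-served.
        heapq.heappush(self._queue, (-priority, self._order, job))
        self._order += 1

    def dispatch(self):
        """Start queued jobs, highest priority first, up to the node cap."""
        started = []
        while self._queue and self.running < self.max_nodes:
            _, _, job = heapq.heappop(self._queue)
            self.running += 1
            started.append(job)
        return started

sched = Scheduler(max_nodes=2)
sched.submit("yesterday's batch", priority=1)
sched.submit("paper-deadline run", priority=5)
sched.submit("nightly QC", priority=1)
print(sched.dispatch())  # ['paper-deadline run', "yesterday's batch"]
```

With only two nodes available, the urgent submission from five minutes ago jumps ahead of yesterday’s batch, which is precisely the behavior Stowe describes.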

This article also appeared in the November-December 2009 issue of Bio-IT World Magazine.

