
Virtualization and the Cloud


By Chris Dwan

September 27, 2011 | Inside the Box | Upton Sinclair used to say, “It is difficult to get a man to understand something when his salary depends on not understanding it.” Even the best technical professional, submerged in the hurly-burly of managing a complex network of systems, will find vague discussions of “clouds” and other over-hyped terms to be a frustrating distraction. I sometimes get a hint of Sinclair in the resistance to virtualization and to outsourcing to the Amazon cloud. They are relatively old-news technologies, but I still encounter people who think that they require proofs of concept and elaborate justification. Consequently, I still find myself in conversations about finding a chassis to host a particular service, whether the software will collide with some existing piece of production code, and so on. Too many systems professionals live in a haze of all-night efforts and panicked phone calls because their services are tied to an aging bundle of transistors, fans, and disks. I know projects that have been stalled for months waiting on the purchase of a single server.

I lost most of my skepticism back in 2005—the year I walked into the offices of a major pharma company to install BioTeam’s Inquiry software. A member of the systems group took five minutes and created a new virtual Linux machine for the install. This machine was a copy of a “golden image,” already updated and in a known configuration. Once the install was done, we saved a snapshot of the machine. Over the years, we have gone back to that snapshot as a way of restoring service when “something changed.” That service has migrated between physical chassis countless times, and it merrily continues to work.

Virtualization is not a panacea, and it is not appropriate for all applications. Most servers, however, should probably exist under Citrix XenServer or VMware. You do lose a slice of performance to virtualization—the hypervisor consumes cycles that might otherwise have been available to the researchers—but there are very few groups who need to eke out every cycle from their overloaded cores. A reliable and stable system that can decouple the nightmares of hardware from the waking horrors of software is worth a percent or two in raw performance. In a job-scheduled compute farm, I consider month-over-month utilization above 80 percent to be a sign of a well-run group. On an array of individual servers hosting web applications, I would be shocked if the monthly average use exceeded 20 percent.

Now let’s consider outsourced virtual environments—call them clouds if you must. Judging from Inside the Box columns from the mid-2000s, I see that cloud was already emerging as a marketing-driven synonym for cluster, and perhaps farm or grid. By cloud, I mostly still just mean Amazon’s EC2. While other providers (including Rackspace) are making strides, the majority of private cloud companies are still just selling co-located hosting. The cloud question is this: You’ve already decoupled the operating systems from some particular physical chassis. Why, then, keep the chassis in some particular data center with a property tag on it? Note that this is not all-or-nothing—it’s OK to say yes for some services and no for others. BioTeam has customers who have moved their entire IT infrastructure into EC2, although I think that hybrid approaches will be the norm. Terabytes are heavy, and there are tasks (base calling and mapping from next-gen sequencing, for instance) where having a few local servers makes sense. There is also a social advantage to local, dedicated servers. When you own the server, you enable experimentation and development for free—or at least on somebody else’s budget.
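
To make “decoupled from the chassis” concrete, here is a minimal sketch of booting a machine in EC2 from Ruby using the fog gem (a library this column does not otherwise discuss; the AMI ID, instance size, and key pair below are placeholders, not anything from a real deployment):

    require 'fog'

    # Connect to EC2 using credentials taken from the environment.
    compute = Fog::Compute.new(
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
      :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
    )

    # Boot a virtual machine from a saved image -- the cloud analog of
    # cloning a "golden image" onto whatever hardware happens to be free.
    server = compute.servers.create(
      :image_id  => 'ami-00000000',   # placeholder machine image
      :flavor_id => 'm1.large',       # placeholder instance size
      :key_name  => 'my-keypair'      # placeholder SSH key pair
    )

    server.wait_for { ready? }
    puts "#{server.id} is running at #{server.public_ip_address}"

There is no chassis to rack, no property tag, and the instance can be thrown away and recreated from the same image whenever “something changed.”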

New News: Opscode and Aspera

My colleagues Adam Kraut and Chris Dagdigian have been working with Chef (http://www.opscode.com/chef) and its bevy of cleverly named tools (‘recipes,’ ‘knife,’ ‘cookbooks’). Chef is an open-source kit for automating system and software deployment. It is a general answer to that old question of how best to capture the necessary steps to get back to a known server configuration. Chef uses a Ruby-like syntax to define the files, services, and packages that make up your deployment. In and of itself, that’s powerful. Chef places a premium on what they call “idempotence,” the ability to apply a recipe over and over without creating problems. Straightforward integration with Amazon and Rackspace’s cloud offerings makes it truly compelling.
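
To give a flavor of the syntax, here is a minimal sketch of a recipe (the package, file paths, and template names are illustrative placeholders, not anything from an actual BioTeam cookbook). Because each resource declares a desired end state rather than a series of commands, the recipe can be applied again and again and will simply converge—which is the idempotence the Opscode folks are talking about:

    # recipes/default.rb -- install and configure a simple web front end
    package 'httpd' do
      action :install                      # no-op if the package is already present
    end

    service 'httpd' do
      action [:enable, :start]             # start at boot, and start it now
    end

    template '/etc/httpd/conf.d/app.conf' do
      source 'app.conf.erb'                # rendered from the cookbook's templates directory
      mode   '0644'
      notifies :restart, 'service[httpd]'  # restart only if the rendered file actually changes
    end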

Aspera (see, “Aspera’s fasp Track,” Bio•IT World, Nov 2010) specializes in some fairly deep magic in the network stack. Their tools optimize and throttle I/O utilization across wide networks. Aspera started out serving media companies that ship video across satellite links, but the aspect that I find most compelling for the life sciences is the ability to rate-limit a particular point-to-point connection. They can control how much of the network a particular service will consume (similar to the Quality of Service settings available on many routers or switches). Since the throttling takes place in software, it can operate across wider networks, even point-to-point on the internet, giving control and predictability. One common concern about cloud adoption is the time required to shift substantial amounts of data. With Aspera, I can calculate when my input data will be available in EC2, and also know that I won’t earn the ire of my networking staff by consuming the whole pipe.
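
The back-of-the-envelope arithmetic is simple. Here is a quick sketch (the two-terabyte data set and 200 Mbit/s cap are made-up numbers for illustration, not figures from Aspera):

    # How long until my data lands in EC2 if the transfer is capped at a fixed rate?
    data_terabytes    = 2.0     # illustrative data set size
    cap_mbits_per_sec = 200.0   # illustrative software rate limit

    bits  = data_terabytes * 1e12 * 8
    hours = bits / (cap_mbits_per_sec * 1_000_000) / 3600.0

    printf("%.1f TB at %d Mbit/s arrives in about %.1f hours\n",
           data_terabytes, cap_mbits_per_sec, hours)

At those made-up numbers, the answer is roughly 22 hours—predictable, and well short of saturating the site’s pipe.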

Chris Dwan is a consultant with the BioTeam. He can be reached at cdwan@bioteam.net.

This article also appeared in the September-October 2011 issue of Bio-IT World magazine.

 


