

By Chris Dagdigian


March 17, 2004 | Life science clusters almost always use scientific software and algorithms freely distributed under open-source licenses with little or no usage restrictions — greatly easing integration issues. A far more challenging problem is dealing with the emerging class of commercial software sold with built-in rights-management restrictions.

These packages are typically sold either node-locked to a specific machine or (more commonly) with "floating licenses" checked out from a license server that strictly enforces the allowable number of concurrent users. The most common license-management system we have encountered on Unix-based systems is FLEXlm from Macrovision.

FLEXlm-licensed applications must obtain a license token from the license server before running. If the server is unavailable or all of the tokens are in use, the application will fail.

This is problematic in clusters, as the number of license tokens is almost always smaller than the number of available cluster nodes. A researcher submitting a large number of license-dependent jobs may find only 10 percent ran to completion while the remainder failed from license "starvation."

One solution is to purchase enough licenses to cover all nodes in the cluster, but this can quickly become cost prohibitive. Another approach is to make the cluster "license aware" when scheduling jobs.

The BioTeam recently faced this challenge at a Novartis Institutes for BioMedical Research (NIBR) facility. NIBR was configuring a new cluster for use primarily by chemists and molecular modelers. These researchers required several FLEXlm-licensed software packages and sought to make efficient use of a limited number of expensive floating licenses.

NIBR had already chosen Sun Grid Engine Enterprise Edition (SGEEE) to run on the cluster. The BioTeam was asked to deploy SGEEE and integrate several FLEXlm-licensed scientific applications. Acceptance tests for determining success were rigorous. The cluster had to withstand test cases developed by the researchers while automatically detecting and correcting license-related job errors without human intervention.

The core problem turned out to be the most straightforward to solve. To prevent the NIBR cluster from running jobs when no licenses were available, the Grid Engine scheduler needed to become license aware. This was accomplished via a combination of "load sensor" scripts and specially configured Grid Engine "resources."

· Load sensor scripts give Grid Engine operators the ability to collect additional system measurements to help make scheduling or resource allocation decisions (a minimal sensor sketch follows this list).

· Resources are a Grid Engine concept that lets users declare requirements a job needs met in order to complete successfully. A requested resource can be dynamic ("run this job only on a system with at least 2 GB of free memory") or static ("run this job on the machine with a laser printer attached").
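To make the mechanics concrete, a minimal load sensor might look like the sketch below. This is illustrative rather than the NIBR production code: the complex name ("app_lic") is an assumption, and the license check is a placeholder that a later sketch fills in. The load sensor protocol itself is simple: Grid Engine runs the sensor continuously, the sensor produces one begin/end record on stdout for each newline it receives on stdin, and it exits when told "quit."

    #!/usr/bin/env python
    # Sketch of a Grid Engine load sensor; the complex name is assumed.
    import sys

    COMPLEX = "app_lic"   # assumed name of the custom resource attribute

    def free_licenses():
        # Placeholder: a real sensor queries the FLEXlm server here
        # (see the lmstat-parsing sketch later in this article).
        return 0

    for line in sys.stdin:
        if line.strip() == "quit":     # Grid Engine asks the sensor to exit
            break
        print("begin")                 # one reading per request
        print("global:%s:%d" % (COMPLEX, free_licenses()))
        print("end")
        sys.stdout.flush()             # Grid Engine reads the record now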

The NIBR plan involved creating custom resource attributes within Grid Engine so that scientists could submit jobs with the requirement "only run this job if a license is available." If licenses were available, the jobs would be dispatched immediately; if not, the jobs would be held until licenses became available.
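Hooking the sensor's value into the scheduler takes only a few lines of configuration. The sketch below is again illustrative, not the NIBR setup: the attribute name, license total, and job script name are all assumptions. The attribute is defined as a requestable consumable (via qconf -mc), seeded with a total on the global host (via qconf -me global), and requested at submit time; with a load sensor also reporting "app_lic," Grid Engine schedules against the lower of the consumable count and the sensor's live reading.

    #name     shortcut  type  relop  requestable  consumable  default  urgency
    app_lic   al        INT   <=     YES          YES         0        0

    # seed the cluster-wide count on the global host; 10 is an assumed total
    complex_values   app_lic=10

    # submit a job that runs only when a license token is free
    qsub -l app_lic=1 run_modeling_job.sh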

To this point, the project was easy. Much more difficult — and more interesting — were efforts to meet NIBR acceptance tests.

The first minor headache involved automating accurate queries of the FLEXlm license servers. One FLEXlm license server was an older version that revealed only the number of licenses currently in use. This meant that the total number of available licenses (equally important) needed to be hard-coded into Grid Engine. NIBR researchers felt strongly that this could create cluster management problems, so the license server was upgraded to a version that allowed the load sensor script to learn how many licenses were available.
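With the upgraded server, the load sensor's placeholder license check can be filled in by parsing lmstat output directly. The sketch below shows one way to do it; the tool path, server address, feature name, and the exact wording of the lmstat summary line are assumptions that vary between FLEXlm installations.

    # Sketch: derive the free-license count from "lmutil lmstat" output.
    import re
    import subprocess

    LMUTIL = "/opt/flexlm/bin/lmutil"   # assumed install path
    SERVER = "27000@licserver"          # assumed port@host of the server
    FEATURE = "modeling_app"            # assumed FLEXlm feature name

    def free_licenses():
        try:
            out = subprocess.run(
                [LMUTIL, "lmstat", "-f", FEATURE, "-c", SERVER],
                capture_output=True, text=True, timeout=30).stdout
        except Exception:
            return 0                    # treat a dead server as "no licenses"
        # Typical summary line (wording varies across FLEXlm releases):
        #   Users of modeling_app:  (Total of 10 licenses issued;
        #                            Total of 3 licenses in use)
        m = re.search(r"Total of (\d+) licenses? issued;\s*"
                      r"Total of (\d+) licenses? in use", out)
        if not m:
            return 0
        issued, in_use = int(m.group(1)), int(m.group(2))
        return max(issued - in_use, 0)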

The next problem was figuring out how to automatically detect jobs that still managed to fail with license-related errors. The root cause of these failures is the loose integration between the FLEXlm license servers and Grid Engine. Race conditions may occur when Grid Engine launches cluster jobs that do not immediately check out their licenses from the FLEXlm server. Delays can cause Grid Engine's internal license values to get slightly out of sync with the real values held by the license server.

Nasty race conditions between license servers and cluster resource management systems such as Grid Engine are mostly unavoidable at present. The solution everyone is hoping for is FLEXlm support of an API (application programming interface) for advance license reservation and checkout. Applications such as Grid Engine could then directly hook into the FLEXlm system rather than rely on external polling methods. Until this occurs, we are left with half-measures and workarounds.

More Online
Some of the specific code and methodologies used in this project are available at www.bioteam.net.
 
Since preventing license failures entirely was not possible, our attention turned toward automatically detecting them as they occurred.

Unfortunately, this had to be addressed application by application. On the NIBR system, applications that exited with unique Unix error codes upon encountering a license issue were easily handled, as Grid Engine can capture and act upon such information. Special exit-handling scripts were developed to transparently relaunch any user job that failed in this way.
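The pattern for these well-behaved applications can be sketched as a small wrapper. The license-failure exit code (42 here) is an assumption standing in for whatever code a given application actually uses; exit status 99 is the Grid Engine convention for "put this job back in the queue," which is what makes the relaunch transparent to the user.

    #!/usr/bin/env python
    # Sketch of a job wrapper that retries license-starved runs.
    import subprocess
    import sys

    LICENSE_EXIT = 42    # assumed "no license available" code for this app
    REQUEUE_EXIT = 99    # Grid Engine re-queues a job that exits with 99

    rc = subprocess.call(sys.argv[1:])   # run the real application command
    if rc == LICENSE_EXIT:
        sys.exit(REQUEUE_EXIT)           # transient failure: try again later
    sys.exit(rc)                         # anything else is reported as-is

Jobs are then submitted through the wrapper rather than by invoking the application directly.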

Far more difficult to deal with was one critical commercial application with the annoying habit of exiting with a generic error code whenever any problem was encountered. These failed jobs could not be captured and restarted automatically, since jobs with transient license-related failures could not be distinguished from jobs with more persistent problems. The solution was to write Grid Engine exit scripts that located and scanned application output files for the phrase "license not available." Only if that pattern was present was the job state captured and the job resubmitted to the cluster.
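A sketch of that output-scanning check, written as an epilog-style script, appears below. SGE_STDOUT_PATH is the environment variable Grid Engine uses to point at a job's output file, and an exit status of 99 again signals that the job should be resubmitted; the exact failure phrase is the one quoted above and may differ per application.

    #!/usr/bin/env python
    # Sketch of an exit-handling script that requeues license-starved jobs.
    import os
    import sys

    PATTERN = "license not available"    # phrase the application prints

    path = os.environ.get("SGE_STDOUT_PATH", "")
    try:
        with open(path) as fh:
            starved = any(PATTERN in line.lower() for line in fh)
    except OSError:
        starved = False                  # no readable output file: leave it

    sys.exit(99 if starved else 0)       # 99 = requeue, 0 = normal handling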

Being able to single out and restart the license-starved jobs from among all the jobs exiting with generic error codes was the final piece of the puzzle, and it allowed the cluster to pass NIBR's strict acceptance tests.



Chris Dagdigian is a self-described "infrastructure geek" currently employed by The BioTeam. E-mail: chris@bioteam.net. 



 




