
Notes from the Lab


By Chris Dagdigian

Oct 17, 2005 | As an independent consulting practice, the BioTeam has to make a concerted effort to stay on top of the tools and technologies that enable research computing. Admittedly, putting current and emerging technologies to work in our lab is a very enjoyable task.

Our “Notes from the Lab” column will occasionally recap some of the more interesting things that have passed through the lab recently and a few that we are just getting started with.

One of the newest and most anticipated arrivals to our lab is a production server with the latest Opteron dual-core CPUs from AMD. For some time now, dual-processor single-core Opteron servers have delivered some of the best price/performance ratios for common life science informatics use cases. We wanted to see how the new multicore systems stack up against the same classic life science workloads.

Multicore technology allows chip manufacturers to build multiple CPU cores onto the same processor die. The advantages are extremely significant — more horsepower delivered more efficiently. As usual, though, the devil is in the details, and placing multiple cores on a die can introduce complications. On the hardware side, multiple cores may share common on-die components such as memory controllers that could become a bottleneck. On the software side, the operating system needs to understand processor affinity and scheduling well enough to use the multicore CPUs in a way that optimizes memory and cache access.
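As a minimal illustration of what processor affinity means in practice (a sketch, assuming a Linux system and a modern Python interpreter with the Linux-only os.sched_setaffinity call; it is not drawn from our benchmark runs), a process can be pinned to a single core like this:

```python
import os

# Query the set of CPU cores the current process is allowed to run on.
allowed = os.sched_getaffinity(0)   # 0 means "this process"
print(f"Allowed cores before pinning: {sorted(allowed)}")

# Pin the process to core 0 only, so its memory and cache accesses stay
# local to that core's share of the die.
os.sched_setaffinity(0, {0})
print(f"Allowed cores after pinning:  {sorted(os.sched_getaffinity(0))}")
```

A scheduler that understands the core layout does this kind of placement automatically; the point of the sketch is simply that affinity is something the operating system tracks per process.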

We installed Fedora Core 4 on our testbed system to take advantage of the latest available Linux kernel. Linux distributions such as SUSE Professional and Fedora tend to get the latest kernel versions well before the enterprise-centric Linux products from Red Hat and SUSE. Users of enterprise Linux products can certainly make use of multicore systems but may have to wait a bit for their vendor to provide kernels that contain the latest scheduler and code enhancements. Not having tested on the enterprise versions, we can’t say for sure if we got any sort of significant performance gain from the use of Fedora Core 4.
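A quick sanity check after installation is to confirm how many packages and cores the kernel actually sees. The sketch below assumes an x86 Linux kernel that reports the "physical id" and "core id" fields in /proc/cpuinfo; it is an illustration, not part of our test harness:

```python
# Count physical packages and distinct cores as the kernel reports them,
# by parsing /proc/cpuinfo (x86 Linux; field names can vary by kernel).
packages, cores = set(), set()
physical_id = None

with open("/proc/cpuinfo") as f:
    for line in f:
        if ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if key == "physical id":
            physical_id = value
            packages.add(value)
        elif key == "core id":
            cores.add((physical_id, value))

print(f"{len(packages)} physical CPU package(s), {len(cores)} distinct core(s)")
```

On a dual-CPU, dual-core Opteron box this should report two packages and four cores.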

What did we find? In short, take it from us that these dual-core AMD systems are the real deal, providing excellent scaling and very impressive performance. We can’t wait to put our dual-core/dual-CPU machine up against a standard quad-CPU server. Also up for testing will be the multicore offerings from Intel.

As with all benchmarks and performance figures, the most valuable ones are those that use your own applications and data sets. BioTeam is a strong believer in internal benchmarking as a standard prepurchase methodology — external reviews and vendor benchmarks can only tell so much about how a system will perform against your own unique workload and workflows.
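To make that concrete, the sketch below shows the shape of a simple internal benchmark: run increasing numbers of concurrent copies of your own workload and watch the wall-clock scaling. The blastall command line is purely hypothetical; substitute your own application and data set.

```python
import subprocess
import time

# Hypothetical workload: replace with your own application and data set.
WORKLOAD = ["blastall", "-p", "blastp", "-d", "nr", "-i", "query.fasta"]

def run_concurrent(n_copies):
    """Launch n_copies of the workload at once and return total wall time."""
    start = time.time()
    procs = [subprocess.Popen(WORKLOAD, stdout=subprocess.DEVNULL)
             for _ in range(n_copies)]
    for p in procs:
        p.wait()
    return time.time() - start

for n in (1, 2, 4):
    elapsed = run_concurrent(n)
    print(f"{n} concurrent copies: {elapsed:.1f} s wall clock")
```

If wall-clock time stays nearly flat as the copy count rises to the number of cores, the system is scaling well for that workload; if it climbs steeply, a shared resource such as memory bandwidth is likely the bottleneck.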

More in-depth technical details and excellent dual-core benchmarking data can be found in a recent report published by Joe Landman of Scalable Informatics. The report is highly recommended.

Blade Systems
Dual-core machines are not the only new hardware we have had the opportunity to test in-house lately. For the past several months we have been working with a Penguin Computing BladeRunner that has become a personal blade server favorite. Blade systems tend to be marketed for all the wrong reasons — when deploying a small system, it may be OK to concentrate on processor density, form factor, and network cable reduction. However, for large deployments, the real value of blade server systems derives from how well they reduce operational and administrative burden. For this reason, our key metric for blade server evaluations often hinges on the quality and capability of the management, monitoring, and provisioning tools provided.

RACK 'EM UP: The BioTeam runs the Penguin Computing BladeRunner through its paces at its test lab. (Photo: BioTeam)
The BladeRunner system from Penguin, a Linux-only shop since 1998, gets high marks all around — excellent CPU density, hardware design, and environmental characteristics, plus management technology that did not interfere with our efforts to deploy and manage a highly customized Linux compute farm. We wiped the factory-installed software off the system and built a compute farm from scratch on the 12 dual-CPU Intel Xeon blades using SUSE Professional 9.3 as a base operating system. When we ran into issues properly configuring a private network VLAN within the BladeRunner switch module, the e-mail technical support we received was both prompt and technically excellent (a rare find these days).

The intelligence built into the BladeRunner switching module allows for management and configuration actions to be performed via telnet, SSH, SNMP, or Web-based methods. The chassis supports two switching modules that can be used in tandem for failover purposes or simply to increase the Gigabit Ethernet port count. Of special interest is the availability of a 10 Gigabit Ethernet networking module — a natural way to bring high-performance shared storage into blade chassis and a method we hope to be testing soon.
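As a sketch of the kind of scripted polling such a module makes possible (assuming the net-snmp command-line tools are installed and the switch answers a read-only community of "public"; the hostname below is made up), two standard MIB-II objects can be queried like this:

```python
import subprocess

# Hypothetical chassis switch address and community string; adjust for your site.
SWITCH = "bladerunner-sw0.example.com"
COMMUNITY = "public"

# Standard MIB-II objects; any SNMP-capable switch module should answer these.
OIDS = {
    "description": "1.3.6.1.2.1.1.1.0",  # sysDescr.0
    "uptime": "1.3.6.1.2.1.1.3.0",       # sysUpTime.0
}

for name, oid in OIDS.items():
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, SWITCH, oid],
        capture_output=True, text=True,
    )
    print(f"{name}: {result.stdout.strip() or result.stderr.strip()}")
```

Wrapping checks like this in a cron job is one small example of how good management interfaces translate into lower administrative burden.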

An additional distinguishing characteristic derives from Penguin Computing’s 2003 acquisition of Scyld Software, one of the pioneers of cluster computing and home to Donald Becker, co-creator of the Beowulf cluster architecture. The combination allows the company to sell fully functional Scyld-powered “instant Beowulf clusters” directly to end users. Also refreshing is the ability to build and price basic blade systems directly on the Penguin Computing Web site. As a group that spends significant time and effort trying to wrestle real-world pricing data from vendors, we feel obligated to give special recognition to companies that actually make this process somewhat transparent and easy.

Products on the short list for upcoming testing include the latest 10 Gigabit Ethernet server adaptors from Chelsio Communications, the multicore offerings from Intel, and the shipping version of the Montilio RapidFile product we covered in a previous column (see Lab Notes: Relief for File Servers, March 2005, Bio-IT World, page 46). Drop us a line with comments and suggestions. What are your plans for multicore systems, and what new products are you eyeing for your research computing infrastructure?

Chris Dagdigian is a self-described infrastructure geek currently employed by The BioTeam. E-mail: chris@bioteam.net.


