
 A gaggle of maverick data storage firms is challenging staid approaches to data handling — and winning converts in the process


October 14, 2004 | James Reaney has a problem. Reaney is network and server operations manager for Harvard University's life sciences group, tasked with supporting nearly 2,000 researchers who use a mix of Windows, Mac, and Linux clusters and tap the storage systems for file and print services, BLAST searches, and other operations.

"We're dealing with the manipulation of large databases, databases greater than 100 terabytes sometimes," Reaney says. "You can't fit [such data] in memory — you need to stream them off the storage server. If you have a four-node cluster, everyone can handle that. But when you have a 200-node system, where each node wants its own section of disk to load large genomic databases, [spare] room for manipulation, and simultaneous use, that kills us."

In the critical arena of data storage, capacity used to be all that mattered. But with the dramatic expansion of life science databases, and the increased processing power available to crunch all those data, storage performance is now as critical as capacity when designing IT infrastructures.

Traditionally, most life science organizations simply applied raw computing horsepower to data analysis. But as processing power grows more abundant, the challenge increasingly lies in data handling.

"There have been great strides in high-performance computing," says Marshall Peterson, CTO of the J. Craig Venter Foundation. "For example, [Los Alamos National Laboratory's] ASCI program has done miracles in scaling high-performance computing (HPC). But it's not the HPC [we] need to get our jobs done. The tools needed for the next steps, for things like personalized medicine and looking at ways biology can be applied to improve the environment, will require managing massive amounts of heterogeneous data."

New Problems, New Solutions 
Until recently, when storage systems became strained, life science companies typically added capacity by going to one of the usual suspects of storage systems vendors, such as IBM, Hewlett-Packard, EMC, Network Appliance, SGI, Sun Microsystems, or Hitachi Data Systems.

But during the past year or so, a new breed of storage systems vendors has popped up. Companies such as 3PAR, Netezza, Isilon, Panasas, BlueArc, and Sepaton take radically different approaches and use new storage architectures aimed at boosting storage system performance.

As happens with any new class of IT product, companies are often reluctant to be on the leading edge — particularly for something as critical as storage. However, the performance issues are so extreme that life scientists are taking a keen look at these new solutions. In several cases, the credentials of the company founders lend a measure of confidence in the new product offerings.

Terry Gaasterland, head of the Laboratory for Computational Genomics at Rockefeller University, notes that one of Panasas' founders, CTO Garth Gibson, had a great track record in leading-edge storage technology. "I have a background in computer science and worked with distributed systems," Gaasterland says. "I knew of [Gibson's] contributions to RAID (redundant array of inexpensive disks) and his work at Carnegie Mellon University."

During his Ph.D. in computer science at the University of California at Berkeley, Gibson co-authored a seminal paper on RAID. Once he joined the faculty at Carnegie Mellon, he founded a storage-system research lab called the Parallel Data Lab.

In Netezza's case, Bill Blake, senior vice president of product development, is well known in the industry, serving previously as vice president of high-performance technical computing at Compaq.

Of course, Gaasterland needed to see more than a strong resume before ultimately selecting her lab's high-performance storage system.

The Gaasterland lab creates and applies new software tools for genome analysis. The tools help integrate, analyze, and visualize the output of high-throughput molecular biology experiments and compare this output to DNA sequence data, which could include whole genomes, chromosomes, or cDNA clones. Among her group's interests are projects focusing on the automated annotation of eukaryotic and microbial genome sequence data, the semi-automated annotation of gene-expression data, and the integration of the two.

"We have lots of data that are I/O (input/output)-bound," Gaasterland says. "So we needed a massive data storage [system] that was as cheap as possible but reliable, with high I/O." When she started to look for a solution, one choice was to use IDE disk drives with RAID 5. "This [approach] would be cheap and reasonably reliable," she says. "But we wanted something more elegant."
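The appeal of RAID 5 as a "cheap and reasonably reliable" option comes from its parity scheme: alongside the data blocks in each stripe, the array stores one XOR parity block, so the contents of any single failed drive can be rebuilt from the survivors. A minimal sketch of the idea (illustrative only, not any vendor's implementation):

```python
# Illustrative sketch of RAID 5 parity (not any vendor's implementation):
# each stripe stores an XOR parity block alongside the data blocks, so a
# single failed drive can be reconstructed from the surviving drives.
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR all blocks together to form the parity block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def reconstruct(surviving: list[bytes]) -> bytes:
    """Rebuild a missing block: XOR of parity and remaining data blocks."""
    return parity(surviving)  # XOR is its own inverse

data = [b"GATTACA!", b"CCGGTTAA", b"AAAATTTT"]  # hypothetical 8-byte stripe
p = parity(data)
lost = data[1]                                  # pretend drive 1 fails
rebuilt = reconstruct([data[0], data[2], p])
assert rebuilt == lost
```

The trade-off the scheme makes is why it counts as "reasonably" reliable: one parity block per stripe tolerates exactly one drive failure, and every write must update parity as well as data.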

She also evaluated storage systems from two of the major systems vendors. One vendor's system was well built and reliable but did not deliver the I/O required. The other offered fast I/O and was expandable, but Gaasterland deemed it too expensive.

That's when she decided to check out Panasas. The company had an intelligent object-based storage system in the works that dynamically distributes data across multiple storage blades. The product's file system can be used to create parallel data transfers between storage blades and cluster servers. This eliminates congestion bottlenecks found in traditional architectures where there is a single data path between a storage server and all the nodes of a cluster.
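The striping idea behind such object-based systems can be sketched in a few lines. This is a toy model of the concept only, assuming round-robin placement of fixed-size objects; it is not Panasas' actual protocol or file system:

```python
# Toy model of object-based striping (not Panasas' actual protocol): a file
# is split into objects spread round-robin across storage blades, and a
# client reassembles it by reading all blades concurrently rather than
# funneling every byte through one storage server.
from concurrent.futures import ThreadPoolExecutor

NUM_BLADES = 4
STRIPE = 8  # bytes per object, tiny for illustration

def stripe_file(data: bytes, num_blades: int = NUM_BLADES):
    """Round-robin the file's objects across blades."""
    blades = [dict() for _ in range(num_blades)]
    for offset in range(0, len(data), STRIPE):
        obj = offset // STRIPE
        blades[obj % num_blades][obj] = data[offset:offset + STRIPE]
    return blades

def read_file(blades, length: int) -> bytes:
    """Fetch every blade's objects in parallel, then reassemble in order."""
    def read_blade(blade):
        return blade  # stands in for a network fetch of one blade's objects
    with ThreadPoolExecutor(max_workers=len(blades)) as pool:
        results = pool.map(read_blade, blades)
    objects = {}
    for blade in results:
        objects.update(blade)
    return b"".join(objects[i] for i in sorted(objects))[:length]

genome_chunk = b"ACGT" * 10
blades = stripe_file(genome_chunk)
assert read_file(blades, len(genome_chunk)) == genome_chunk
```

The point of the sketch is the fan-out: because each blade holds only a slice of the file, aggregate bandwidth grows with the number of blades instead of being capped by a single server's data path.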

The technology looked promising, but Panasas did not have a product ready at the time. So Gaasterland kept on developing her specifications for a new storage system until she was able to test an early version of Panasas' product. "Out of the box, we plugged it in and got it running [rapidly]," she says. "This is absolutely the opposite experience we had with other systems that took two days to get operational."

It was anything but a hasty decision. "[We] made a judgment call about the people at Panasas after looking at them for about two years," Gaasterland says.

ATA Meets Fibre Channel 
Harvard's Reaney cites similar experiences when looking for storage systems to meet growing data-handling needs. "About 20 months ago, we decided to virtualize our storage, divorcing the storage from the CPUs," he says. Various options were considered, including the use of a university SAN. Reaney rejected the idea, however. "The performance was fantastic, but it would be very expensive to use," he says.

Another choice was to build his own SAN, but that option was also too expensive. Moreover, Reaney felt that network-attached and direct-attached storage architectures were too limited, and ruled them out, too.

"At the same time, I became interested in blade technology," Reaney recalls. Blade systems offered a space savings that Reaney found attractive. "Real estate in Cambridge (Mass.) is very expensive. If we tapped into blades, we could halve our footprint or double our density."

While attending the 2003 Bio·IT World Conference + Expo in Boston, Reaney came across BlueArc, a company he had never heard of. The BlueArc approach to storage acts like a hybrid NAS-SAN. "It is very unlike a traditional NAS and very much like a SAN," Reaney says.

The BlueArc technology, which combines the use of ATA drives and Fibre Channel technology, facilitates management of large databases, particularly in situations where storage needs to be dynamically expanded and contracted.

Old Dogs, New Tricks 
Besides dealing with performance issues, life science managers must also take great pains to ensure that data are stored in a manner that meets regulatory requirements, including HIPAA and 21 CFR Part 11.

After performing due diligence on the company, Reaney met with upper-level management and contacted BlueArc reference accounts. There was a lot of buzz at the time about the company's Titan SiliconServer, which was still under development. Reaney ended up buying an existing model — the Si8900 — but has since installed a Titan server.

In addition to relying on leading-edge technology architectures, the new wave of storage vendors offers another key asset: flexibility.

For example, an IT manager at a Cambridge biotech startup selected 3PAR as his primary storage vendor. The manager had previously worked with the likes of IBM and HP, but he found it easy to work with 3PAR. "At past jobs, whenever I needed to add storage, it required a committee meeting with the vendor's sales and systems engineering staff," says the manager, who asked not to be identified. "With 3PAR, I call them on the phone and place the order."

Other technology directors cite the willingness of the new vendors to tailor their products to their needs. For instance, Peterson of the Venter Foundation notes that Netezza asks him about his particular data-handling challenges. Rather than simply querying sequence data, Peterson predicts there will be more emphasis on examining data from multiple sources, adding complexity to data analysis. "It's the difference between just doing an SQL query versus doing an SQL query, running a BLAST command-line search [with] the results, and then using some Perl code to do something with the output," Peterson says.
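The kind of mixed pipeline Peterson describes can be sketched as follows. This is a hedged illustration only: the table name, columns, and identity threshold are assumptions, and the `blastall` invocation uses NCBI's classic command-line tool with its tabular (`-m 8`) output:

```python
# Hedged sketch of the mixed pipeline Peterson describes: pull candidate
# sequences with SQL, hand them to a command-line BLAST search, then
# post-process the tabular report (the role Perl plays in his example).
# The table name, columns, and thresholds are illustrative assumptions.
import sqlite3
import subprocess
import tempfile

def fetch_sequences(conn: sqlite3.Connection, min_length: int):
    """SQL step: select sequences worth searching."""
    return conn.execute(
        "SELECT id, seq FROM sequences WHERE length(seq) >= ?", (min_length,)
    ).fetchall()

def run_blast(rows) -> str:
    """Write a FASTA query file and shell out to NCBI's classic blastall."""
    with tempfile.NamedTemporaryFile("w", suffix=".fasta", delete=False) as f:
        for seq_id, seq in rows:
            f.write(f">{seq_id}\n{seq}\n")
    result = subprocess.run(
        ["blastall", "-p", "blastn", "-d", "nt", "-i", f.name, "-m", "8"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout  # tab-separated hit table (-m 8)

def filter_hits(report: str, min_identity: float):
    """Post-processing step: keep hits above an identity threshold."""
    hits = [line.split("\t") for line in report.splitlines() if line]
    return [h for h in hits if float(h[2]) >= min_identity]  # col 2: % identity

# Demonstrate only the parsing step, on a canned two-hit report.
sample = "q1\tsubj1\t98.5\t50\nq1\tsubj2\t80.0\t50"
assert [h[1] for h in filter_hits(sample, 95.0)] == ["subj1"]
```

Each stage stresses storage differently, which is exactly Peterson's point: the database scan, the BLAST search, and the scripted post-processing all contend for the same I/O.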

With such insight from Peterson and others, Netezza developed a life science version of its Netezza Performance Server. The NPS data warehouse for bioinformatics can store terabyte-sized genomic databases and has dedicated hardware and software to process sequence analysis SQL queries of such databases.

Wanted: Stability, Availability 
Not surprisingly, the traditional storage system vendors are not standing pat in the face of this upstart competition.

In recent months, virtually all of the big storage firms have introduced new models of their systems that are based on higher-performance processors (typically Intel Xeon chips) and new Serial ATA disk drives, which are faster than the older parallel ATA (IDE) drives. As a result, nearly all the vendors now have systems that offer roughly twice the I/O throughput of their predecessors.

And the big names have enhanced their offerings to address new storage challenges, particularly in the area of safeguarding data and meeting regulatory compliance requirements (see "Old Dogs, New Tricks").

As the high-performance storage system choices increase, life science companies continue to face tough decisions. No matter which storage system they choose, it is still just one part of the total architecture. Storage systems do not work in a vacuum — it is the interaction of storage systems with the high-performance computing systems that ultimately determines how fast an application runs.

Consequently, many organizations are tackling the broader issue of how to integrate HPC, storage, and interconnection components. "The key is to have a [system] that is stable and has high availability," says Michelle Butler, technical program manager for the Storage Enabling Technologies Group at the National Center for Supercomputing Applications. "Stability and high availability are very important — they're right up there with performance." Butler is exploring some key interconnection technologies to address some of these issues.

The integration of HPC, storage, and interconnection technology will also be the focus of a new high-performance storage program called StorCloud, which will be part of the SC2004 supercomputing conference in November.

The demands of life science databases and the accompanying computational analysis require a new approach to storage. Sheer capacity is no longer enough. Performance, flexibility, and integration are rapidly becoming equally important considerations.

For reprints and/or copyright permission, please contact Jay Mulhern, (781) 972-1359.