
Grappling with Next-Gen Data Glut


Storage and data management drain resources even as sequencing speeds up.

By Kevin Davies

Nov. 12, 2008 | PROVIDENCE, RI—Vendors continue to release significant upgrades to their next-generation sequencing platforms, yet the challenges of handling and analyzing all that data continue to frustrate users. CHI’s Exploring Next-Generation Sequencing meeting was complemented by a new track on data analysis devoted to the issue.

Gabor Marth (Boston College) surveyed the wealth of resources under development for next-gen sequencing: programs ranging from base calling, read mapping, and SNP discovery to structural variation detection and data visualization. Marth said the field needs new analysis methods, existing tools tailored to different applications, analysis pipelines streamlined to focus on key results, and better support for downstream analysis. The most popular software from Marth’s own group includes the Mosaik Read Aligner and the EagleView genome assembly viewer.

“It’s not the cost of storage, it’s the cost of storage maintenance,” said Jacob Farmer (CTO, Cambridge Computer). Farmer discussed various storage architectures, pointing out that it could take an organization two months to fully restore 90 terabytes (TB) of data from backup. He felt there is still room for tape in conventional backup roles, and that deduplication, which stores only a single copy of redundant data, is gaining traction, for example with sequence data.

Reece Hart (chair of research computing, Genentech) said the demands of storing 300 TB of data and climbing cause their fair share of operational headaches, not least one or two hard drive failures a week. He disagreed with Farmer about the value of tape: besides the medium’s fragility, Hart said it took six hours to restore a single terabyte from backup. “Tape is really not a winning backup strategy for anything that is business critical.” He favored mirrored storage, though he hoped to move his current backup site more than 20 miles away. Genentech’s biggest data source today is high-content screening, but that will change when the company installs its first next-gen sequencer in January 2009.
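
Those restore figures imply very modest sustained throughput. The short Python sketch below is a back-of-envelope illustration, not anything presented at the meeting; the decimal terabyte (10^12 bytes) and a single sustained restore stream are assumptions.

```python
# Back-of-envelope restore throughput implied by the speakers' figures.
# Assumes decimal units (1 TB = 1e12 bytes) and one sustained stream.

TB = 1e12      # bytes per terabyte (decimal convention; an assumption)
HOUR = 3600    # seconds per hour

def throughput_mb_per_s(terabytes, hours):
    """Sustained rate, in MB/s, needed to move `terabytes` in `hours`."""
    return terabytes * TB / (hours * HOUR) / 1e6

# Farmer: 90 TB restored in roughly two months (~60 days)
print(f"90 TB in 2 months: {throughput_mb_per_s(90, 60 * 24):.0f} MB/s")

# Hart: 1 TB restored from tape in six hours
print(f"1 TB in 6 hours:   {throughput_mb_per_s(1, 6):.0f} MB/s")
```

Even at Hart’s tape rate of roughly 46 MB/s, streaming back a 300 TB store would take about 75 days, which helps explain his preference for mirrored storage.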

During a lively panel discussion, representatives from several large genome centers took some heat from the audience for not releasing protocols and software. The Broad Institute’s Patrick Cahill defended his center’s efforts, saying it was working hard to release information, but it wasn’t easy in such a fast-moving field.

More, More, More
Speakers from the big three commercial next-gen sequencing platforms all presented impressive throughput improvements. Jason Affourtit (Roche/454) discussed the company’s recently released Titanium reagent series, which can deliver fivefold genome coverage in two weeks, compared with the two months required for James Watson’s genome last year. The upgrades, which include coating the wells to reduce crosstalk, yield longer read lengths of about 400 bases and run throughputs of about 500 million bases.
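
Those figures are easy to reconcile with a rough calculation. The sketch below is my own arithmetic, not Roche’s; the ~3 Gb human genome size is an assumption.

```python
# Rough run count implied by the 454 Titanium figures quoted above.
# The ~3 Gb human genome size is an assumption, not from the talk.

GENOME_BP = 3.0e9        # approximate haploid human genome (assumption)
RUN_YIELD_BP = 500e6     # ~500 million bases per run (from the talk)
READ_LEN_BP = 400        # ~400-base reads (from the talk)

target_bases = 5 * GENOME_BP                  # fivefold coverage
runs = target_bases / RUN_YIELD_BP            # runs needed at quoted yield
reads_per_run = RUN_YIELD_BP / READ_LEN_BP    # reads delivered per run

print(f"{runs:.0f} runs of ~{reads_per_run / 1e6:.2f}M reads each")  # ~30 runs
```

Thirty runs in two weeks works out to roughly two runs a day, presumably spread across several instruments.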

Not to be outdone, Illumina’s Gary Schroth noted that a recent breakthrough in the deblocking chemistry of Solexa sequencing promised to extend individual DNA read lengths from 50 to 75, and possibly 100, bases in the next few months. In work to be published in November, David Bentley and colleagues sequenced the genome of an anonymous African male, generating 135 gigabases (Gb) of data from 4 billion paired sequencing reads and identifying millions of SNPs and thousands of structural variants.

Applied Biosystems’ Michael Rhodes said that his company has also sequenced the same African individual using its SOLiD system, generating 12-fold genome coverage so far. Analysis of the resulting data with PolyPhen, a program that predicts whether amino acid substitutions are likely to damage protein function, revealed hundreds of potentially deleterious mutations in protein-coding genes.

Laurie Goodman, communications officer for the Beijing Genomics Institute (BGI) at Shenzhen, provided a sneak peek at the first Asian genome sequence, which will also be published this month. BGI has established itself as the third-largest sequencing center in the world, generating 21 Gb of sequence per day, primarily on Illumina Genome Analyzers. The Chinese researchers generated more than 3 billion 35-base reads, or 34-fold coverage of the Asian genome. The work, conducted last year, took two months on five instruments and cost an estimated $500,000.
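
The 34-fold figure follows from the standard coverage formula, depth = reads × read length ÷ genome size. The sketch below is illustrative; the ~3.1 Gb genome size is my assumption, as the article does not state the value BGI used.

```python
# Sequencing depth implied by BGI's read count, using the standard
# coverage formula: depth = reads * read_length / genome_size.

READS = 3.0e9       # "more than 3 billion" reads (from the talk)
READ_LEN_BP = 35    # read length in bases (from the talk)
GENOME_BP = 3.1e9   # approximate human genome size (my assumption)

total_bases = READS * READ_LEN_BP    # ~105 Gb of raw sequence
depth = total_bases / GENOME_BP      # average fold coverage

print(f"{total_bases / 1e9:.0f} Gb total -> {depth:.0f}x coverage")  # ~34x
```

At BGI’s quoted 21 Gb per day, those 105 Gb now represent about five days of output, a measure of how fast throughput has grown since the project ran.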

Patrice Milos, CSO of Helicos BioSciences, said Helicos is striving to increase throughput tenfold, from 50 to 500 megabases per hour. The company has been improving its chemistry and extending average read lengths by testing on various bacterial genomes, and said Helicos intends to be sequencing human genomes in 2009. Helicos recently shipped its second HeliScope to Stanford University, where company co-founder Steve Quake is on the faculty.

Editor’s Note: CHI’s Next-Generation Sequencing Applications conference will be held in San Diego, March 17-18, 2009.

___________________________________________________

This article appeared in Bio-IT World Magazine.



For reprints and/or copyright permission, please contact Terry Manning, 781.972.1349, tmanning@healthtech.com.