


By Malorye A. Branca

March 7, 2002 | Proteomics is facing a growing data problem. “Our ability to generate data is far outstripping our ability to analyze it,” says Ruedi Aebersold, co-founder of the Institute for Systems Biology in Seattle, and a speaker at a recent meeting on the Human Proteome Project*.

Now that the human genome has been all but sequenced, the challenge is to identify the potentially hundreds of thousands of proteins it encodes (see Paper View, Page 24). New advances unveiled at the conference offered improvements in managing the wealth of data, even as others promise to exacerbate it.

“Achieving ‘high throughput’ usually means buying more mass spectrometers, or trying to get more out of the ones you have,” says John Yates, an authority on proteomics at the Scripps Research Institute. A growing number of groups, including Amgen, Celera, and Roche, are turning to Yates’ method of gel-less shotgun proteomics as a means of bypassing two-dimensional (2D) gel electrophoresis, the standard means of protein separation. In Yates’ technique, proteins are digested first, creating a complex mixture of peptides, which are then separated by single- or multi-dimensional liquid chromatography before mass spectrometry (MS) analysis.

Yates’ group has also developed a protocol that permits the identification of “well-behaved” (small) protein modifications in complex mixtures with a higher degree of reliability than before. “We don’t know the full reach of the technique yet,” Yates says. “We’ve been concentrating on the obvious ones (phosphorylation, acetylation, and methylation, for example) because there is increasing evidence that these have regulatory roles [in protein function].”

Working with a half-dozen ion trap mass spectrometers and a 55-node Beowulf cluster, Yates and colleagues process 1 million to 2 million spectra per week. To expedite protein identification, their software automatically excludes common contaminant spectra from the sequence searches. The group’s newest time-saver is an algorithm that helps accurately identify peptide spectra from the ion trap. “This method has cut our search times in half for a large analysis,” says Yates.
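A pre-search filter of this kind can be sketched in a few lines. The sketch below is a hypothetical illustration, not Yates’ actual code; the contaminant masses (trypsin autolysis peptides are a classic example) and the 0.5 Da tolerance are stand-in values:

```python
# Illustrative pre-search filter: drop spectra whose precursor mass matches
# a known contaminant peptide (e.g., trypsin autolysis products, keratin).
# The masses and tolerance below are hypothetical examples.

CONTAMINANT_MASSES = [842.5099, 2211.1046, 1179.6010]  # monoisotopic m/z, examples
TOLERANCE = 0.5  # Da, a plausible window for ion trap precursor masses

def is_contaminant(precursor_mz: float) -> bool:
    """Return True if the precursor matches a known contaminant mass."""
    return any(abs(precursor_mz - m) <= TOLERANCE for m in CONTAMINANT_MASSES)

def filter_spectra(spectra):
    """Keep only spectra worth submitting to the sequence search."""
    return [s for s in spectra if not is_contaminant(s["precursor_mz"])]

# Example: each spectrum is a dict with a 'precursor_mz' key.
spectra = [{"id": 1, "precursor_mz": 842.51}, {"id": 2, "precursor_mz": 985.3}]
print(filter_spectra(spectra))  # spectrum 1 (trypsin autolysis) is dropped
```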

Aebersold’s isotope-coded affinity tags (ICAT) are another new proteomics tool gaining in popularity. These chemical reagents enable the use of MS for differential protein quantification across two samples, and are the first “chemical tools” for proteomics. Aebersold is currently trying to adapt ICAT to a solid-phase format. “In the liquid phase, the reagents are pretty hard-wired,” says Aebersold.

To increase the system’s speed, sensitivity, and reproducibility, he is attaching the reagents to beads, with the aim of producing a high-throughput tool similar to a DNA microarray that would identify and quantify thousands of proteins at once. Early results, including studies of both microbial and mammalian cell systems, are considered promising. In one yeast-based comparison, for example, the solid-phase method netted three times as many proteins as traditional ICAT.

The next challenge will be to optimize protein quantification software. “Since most proteins don’t change in abundance between your samples, you have to figure out which ones to look at,” says Aebersold. Tools that quantify proteins and single out the differentially expressed ones will be in strong demand.
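The arithmetic behind ICAT quantification is straightforward: each peptide shows up as a light/heavy pair, and the intensity ratio reflects the protein’s relative abundance in the two samples, so finding the changed proteins amounts to flagging ratios far from 1. A minimal sketch, using hypothetical peak areas and an illustrative two-fold cutoff:

```python
import math

# Hypothetical light/heavy ICAT peak areas per protein (sample A vs. sample B).
ratios = {
    "YFR053C": (12000.0, 11800.0),   # essentially unchanged
    "YGR192C": (45000.0, 15000.0),   # ~3-fold higher in sample A
    "YLR044C": (8000.0, 26000.0),    # ~3-fold higher in sample B
}

FOLD_CUTOFF = 2.0  # illustrative threshold, not a published value

def differential(pairs, cutoff=FOLD_CUTOFF):
    """Flag proteins whose light/heavy ratio exceeds the fold cutoff."""
    hits = {}
    for protein, (light, heavy) in pairs.items():
        log_ratio = math.log2(light / heavy)
        if abs(log_ratio) >= math.log2(cutoff):
            hits[protein] = round(light / heavy, 2)
    return hits

print(differential(ratios))  # {'YGR192C': 3.0, 'YLR044C': 0.31}
```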

Amgen’s MS lab, which includes one MALDI/Q-TOF and four LCQ MS/MS spectrometers, generates about 10,000 spectra (roughly 450 MB) per day. “You can’t deposit this much data each day,” says Amgen scientist Wen Yu. “The challenge is to reduce and convert it.” To that end, Yu has developed Twin, a “housekeeping” program that groups similar spectra in the database, allowing redundant spectra from common proteins to be excluded from the analysis. So far, Amgen has accumulated more than 9.4 million spectra and 39 GB of protein sequences. In addition to storing MS/MS spectra, Amgen has begun to process MS1 (first-round) spectra, which are required for quantification studies.
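The idea behind a housekeeping tool like Twin can be sketched as greedy grouping of spectra by similarity, keeping one representative per group. The binned cosine-similarity measure and the 0.95 threshold below are illustrative assumptions, not Amgen’s actual method:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two binned intensity vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def group_spectra(spectra, threshold=0.95):
    """Greedy grouping: a spectrum joins the first group whose
    representative it matches; otherwise it starts a new group."""
    representatives = []  # one retained spectrum per group
    for s in spectra:
        if not any(cosine(s, r) >= threshold for r in representatives):
            representatives.append(s)
    return representatives

# Toy example: three spectra binned into 5 m/z bins; the first two are
# near-duplicates, so only two representatives survive.
spectra = [np.array([0., 5., 1., 0., 9.]),
           np.array([0., 5., 1.2, 0., 8.8]),
           np.array([7., 0., 0., 3., 0.])]
print(len(group_spectra(spectra)))  # 2
```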

2D gel analysis can also cause a data jam. “A lab may be able to run 50 to 60 gels in a week, but then they don’t have the time to analyze them,” says Steve Mallam, proteomics project leader at Nonlinear Dynamics, which has launched Progenesis 1.0.1 to automate the process. Analyzing a single gel with Progenesis takes significantly longer (up to 15 minutes) than the 20 seconds or so required with standard software, but the new program automatically handles normalization, spot detection, and matching, thanks to “more powerful” algorithms. Throughput depends on the quality of the gels being run: “We’ve run as many as 30 gels in an hour, and as few as 20 in a day,” says Mallam.
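Automated spot detection of the sort Progenesis performs can be loosely illustrated as finding local intensity maxima above a background threshold in the gel image. The sketch below is a generic toy, in no way Nonlinear Dynamics’ algorithm:

```python
import numpy as np
from scipy import ndimage

def detect_spots(gel: np.ndarray, threshold: float, size: int = 3):
    """Return (row, col) coordinates of local intensity maxima above a
    background threshold, a crude stand-in for real spot detection."""
    local_max = ndimage.maximum_filter(gel, size=size) == gel
    spots = local_max & (gel > threshold)
    return [(int(r), int(c)) for r, c in zip(*np.nonzero(spots))]

# Toy 8x8 "gel" image with two bright spots on a dark background.
gel = np.zeros((8, 8))
gel[2, 3] = 10.0
gel[6, 5] = 7.0
print(detect_spots(gel, threshold=5.0))  # [(2, 3), (6, 5)]
```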

Several companies, including Celera and GeneProt, are investing heavily in MS instrumentation. Whether users amplify the MS datastream with high-speed systems or add numerous conventional instruments, the bottleneck soon shifts to data interpretation. Faster lasers, which let MALDI instruments acquire spectra more quickly, will only compound the challenge.

 

*The Human Proteome Project, Cambridge Healthtech Institute, San Diego, Calif.; January 9-11, 2002.


For reprints and/or copyright permission, please contact Terry Manning, 781.972.1349, tmanning@healthtech.com.