
Share the Data: Making Large-Scale Proteomics Data Widely Available

By Henry Rodriguez, Philip Andrews and Chris Kinsinger

August 25, 2010 | Expert Commentary | Advancements in science and health care are made possible through widespread access to cutting-edge research data, as was clearly demonstrated during the Human Genome Project, where researchers collaborated to create an extensive data resource for the entire community. The proteomics community is beginning to implement analogous policies and infrastructure.

The National Cancer Institute (NCI) sponsored a summit in Amsterdam (August 2008) for members of the proteomics community to define policies and practices that would govern the public release of proteomic data. The resulting Amsterdam Principles provide recommendations for rapid proteomics data release and sharing policies, and include guidelines for timing, comprehensiveness, formatting, deposition to repositories, quality metrics, and responsibility for data release (PMID: 19344107).

It was agreed that high-quality, well-annotated raw data is needed by the scientific community. Providing access to original data sets is crucial not only for evaluation and the integrity of peer review, but also for accelerating progress in biomedical research through data reuse. Data must be freely accessible and well annotated, but this will only happen if technical and social barriers are overcome and the infrastructure does not slow submissions or stifle innovation. This can be done in ways that address intellectual property concerns.

Proteomics Data Repositories

It is important to distinguish between data sets containing original instrument data files and those consisting only of processed data (peak lists or protein IDs). Original instrument data sets are generally less digitally processed, consisting of mass spectra and instrument run parameters. Variations in smoothing, electronic noise reduction, peak detection, and deisotoping can affect the quality of the peak lists used for genome database searches. This has been particularly crucial for large, distributed projects (e.g., the NCI Clinical Proteomic Technologies for Cancer initiative) where validation across laboratories is a challenge. Understanding the differences in results from various laboratories analyzing the same sample on identical instruments requires access to the original data sets, including the instrument run parameters recorded in the original instrument files. The latter are not reliably included in the standard proteomics file format, mzML, because the changing array of mass spectrometry technologies employed in proteomics currently makes that impractical. However, the mzML file format is important because it greatly aids the reusability of data sets.
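Part of mzML's value for reuse is that, as an open XML standard, spectra and their annotations can be read with ordinary tooling rather than vendor software. The sketch below illustrates this with a simplified, hypothetical mzML-like fragment (not a complete, schema-valid file; the element names and the `ms level` controlled-vocabulary term follow the PSI convention, but the fragment omits most required content) parsed using only the Python standard library.

```python
# Sketch: extracting per-spectrum metadata from a simplified mzML-like
# XML fragment using only the standard library. Real mzML files follow
# the full HUPO-PSI schema; this fragment is a hypothetical excerpt.
import xml.etree.ElementTree as ET

MZML_FRAGMENT = """<mzML xmlns="http://psi.hupo.org/ms/mzml">
  <run id="example_run">
    <spectrumList count="2">
      <spectrum index="0" id="scan=1">
        <cvParam accession="MS:1000511" name="ms level" value="1"/>
      </spectrum>
      <spectrum index="1" id="scan=2">
        <cvParam accession="MS:1000511" name="ms level" value="2"/>
      </spectrum>
    </spectrumList>
  </run>
</mzML>"""

NS = {"mz": "http://psi.hupo.org/ms/mzml"}

def ms_levels(xml_text):
    """Map each spectrum id to its MS level from an mzML-like document."""
    root = ET.fromstring(xml_text)
    levels = {}
    for spectrum in root.findall(".//mz:spectrum", NS):
        # The 'ms level' controlled-vocabulary parameter annotates each spectrum.
        param = spectrum.find("mz:cvParam[@name='ms level']", NS)
        levels[spectrum.get("id")] = int(param.get("value"))
    return levels

print(ms_levels(MZML_FRAGMENT))  # {'scan=1': 1, 'scan=2': 2}
```

In practice, dedicated readers (e.g., in proteomics toolkits) handle the full schema, binary-encoded peak arrays, and indexing; the point here is only that an open, documented format makes such tools possible for any downstream user.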

Several major repositories support processed data and metadata in the form of annotations: PeptideAtlas, the PRoteomics IDEntifications database (PRIDE), Peptidome, the Global Proteome Machine Database (GPMDB), and the Human Protein Reference Database (HPRD); the Tranche repository at ProteomeCommons supports original raw data files. Minimally annotated data sets hold considerable value when the data sets are large and robust; however, complete annotation is needed to realize all potential uses of the data. Although there is necessarily some overlap, each resource provides value-added services, which may include quality control, enhanced annotations, and integration with various services. Several repositories have begun to work together, forming ProteomExchange to provide a formal mechanism for exchanging data sets and their annotations between resources and providing a universal accession number for data.

Unrestricted use is crucial for realizing the full value of data sets through the value-added data resources described above. Data licenses that impose the fewest restrictions, such as the Open Data Commons Public Domain Dedication and License (PDDL), are most appropriate. Open Data principles place the fewest constraints on individual researchers and on online resources, allowing the greatest value to be obtained from the data. These general principles have recently been collected as the Panton Principles. In brief, Open Data requires free and open access to data on the Internet with no barriers to use.

The provenance of data sets and their proper citation is central to the research process. Assurance that authors of data sets receive appropriate attribution will rely on community mores, publication requirements, and peer review, but it also must be built into the data infrastructure. For example, the Tranche repository, which is integrated with ProteomeCommons, provides a proper citation and has licensing integrated into the system. This ongoing effort will require input from researchers, funding agencies, publishers, professional organizations, and editorial boards.

Journals on Board

The data challenges associated with evaluation of proteomics publications have been recognized by the major proteomics journals, which have developed guidelines for handling data in manuscripts. Beginning in 2010, authors who publish a manuscript containing mass spectrometry data in Molecular and Cellular Proteomics (MCP) must submit the raw data to a publicly accessible site. The revised MCP guidelines are the first of their kind to make the sharing of raw data mandatory if a manuscript is to be accepted for publication. MCP’s efforts could lead the way toward a new publishing standard.

MCP is taking the lead in this endeavor because the journal realizes there is an incentive for investigators to deposit data if it is coupled with the ability to publish their manuscript (and enhance a researcher's reputation). Researchers who deposit data sets that prove particularly useful to the community will be more highly cited and rewarded accordingly. This could provide greater incentive than the present system of evaluation, which is skewed toward publication in high-profile journals and citation metrics.

To fuel progress in proteomics research, data sharing cannot be voluntary. It is up to scientists, journals, and funding agencies to take steps to ensure that all parties adhere to the standards for data release. The proteomics community should be applauded for its efforts so far, but there is still work to be done. The release and sharing of high-quality data will put the pace of proteomics research on a trajectory similar to that seen in genomics.

Henry Rodriguez is Director, Office of Cancer Clinical Proteomics Research, Center for Strategic Scientific Initiatives, Office of the Director, National Cancer Institute, Bethesda, MD; he can be reached at rodriguezh@mail.nih.gov. Philip C. Andrews is Professor of Biological Chemistry, Bioinformatics, and Chemistry, University of Michigan; he can be reached at andrewsp@umich.edu. Chris Kinsinger is Project Manager, Office of Cancer Clinical Proteomics Research, Center for Strategic Scientific Initiatives, Office of the Director, National Cancer Institute; he can be reached at
