
Sharing Data Across the Pipeline


By Salvatore Salamone

June 10, 2005 | Data sharing, collaboration, and maximizing knowledge from data were the key themes of the IT Solutions track at the 2005 Bio•IT World Conference + Expo. While these issues are familiar, what was new was the extent to which data must be shared, as well as some of the exciting new techniques, including the Semantic Web and in silico modeling.

Discussion often focuses on breaking down silos in research so that biologists and chemists can leverage each other’s work. Now there is a growing need to share data throughout all stages of drug development. Speakers said that broader data sharing will be necessary to meet new drug safety requirements, to refine interdisciplinary R&D methods, and to establish a drug development approach that relies on the interplay among discovery, development, and clinical trials.

Jeff Miller, vice president of health industries at Hewlett-Packard, noted some of these approaches. He cited a New York University effort that looks into individual wellness and takes into account genomics, lifestyle, and environmental and care delivery factors. “Such information will help researchers better understand diseases and treatments of diseases,” he said.

But to use information from such diverse areas will require a change in basic data management techniques. “The way we collect clinical data is different than the way data is collected in R&D,” Miller said. “We need to move data forward and back between research and clinical delivery. We need to transfer knowledge between research and clinical patient care.”

Steve Walker, CIO of the UK BioBank, echoed the value of vast amounts of clinical and research data. The BioBank project aims to collect personal and family medical histories, as well as blood and urine samples, from 500,000 British volunteers and follow their health for up to 30 years.

The project’s scope covers everything from building sample repositories and creating an integrated call center/mailing system to recruit volunteers, to developing IT systems such as a laboratory information management system and a clinical management system. “Everything is new and the time scales are punishing,” Walker said.

Reaping Information from Data

John Reynders, information officer of Lilly Research Laboratories at Eli Lilly, discussed ways to use multidisciplinary information to cut drug development time.

“Many steps in the drug discovery process are done serially,” he said. A drug candidate is typically subjected to a series of biological, chemical, and ADME evaluations, usually in sequence. He argued for integration of this relevant information: “[We] need to do things in parallel — we need to approach drug discovery using a multiparameter optimization.”

In a parallel approach, “you maintain [high-throughput screening], but let’s not be so random,” Reynders said. The idea is to simultaneously leverage chemical structure, sequence analysis, chemogenomics, and other information when evaluating a chemical candidate. Such data may come from the literature, public databases, internal experiments, and partner research. Making all of this data easily available is critical. “The focus [at Lilly] is on two mantras,” Reynders said. “Given a compound, tell me everything known about it. Given a target, tell me everything known about it.”
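Reynders’ multiparameter framing can be sketched in a few lines of code: instead of eliminating candidates one assay at a time, every compound is scored across several properties at once and the whole set is ranked. The compound names, property values, and weights below are invented purely for illustration; they are not Lilly’s data or method.

    # Hypothetical sketch of multiparameter optimization: score each compound
    # across several normalized properties at once instead of filtering serially.
    # All names, values, and weights here are invented for illustration.
    CANDIDATES = {
        "compound_A": {"potency": 0.92, "selectivity": 0.40, "adme": 0.75},
        "compound_B": {"potency": 0.81, "selectivity": 0.88, "adme": 0.70},
        "compound_C": {"potency": 0.65, "selectivity": 0.95, "adme": 0.90},
    }

    WEIGHTS = {"potency": 0.4, "selectivity": 0.3, "adme": 0.3}

    def composite_score(properties, weights):
        """Weighted sum of property scores, each assumed to be scaled 0-1."""
        return sum(weights[name] * value for name, value in properties.items())

    # Rank all candidates in parallel rather than discarding them one test at a time.
    for name, props in sorted(CANDIDATES.items(),
                              key=lambda item: composite_score(item[1], WEIGHTS),
                              reverse=True):
        print(f"{name}: {composite_score(props, WEIGHTS):.2f}")

In practice the inputs would come from the chemical-structure, sequence, chemogenomics, and other sources Reynders describes, but the ranking idea is the same.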

Reynders is striving to provide all researchers, regardless of their areas of expertise, with a single environment to conduct their work.

Other speakers noted the need to deal with multidisciplinary information. Simon Smith, head of bioinformatics software development at Johnson & Johnson Pharmaceutical R&D, presented a global discovery portal to help researchers find and share information. “One of the hardest things is knowing who else is working on the same target,” Smith said. “We’re looking for true collaboration across sites and companies.”

A Matter of Semantics

Miller’s theme of information sharing across the drug development pipeline was echoed in many talks.

“Oftentimes, lab data alone [are] not enough to recruit patients for clinical trials,” said Tonya Hongsermeier, corporate manager, clinical knowledge management and decision support, at Partners HealthCare System. She noted that selecting a suitable participant frequently requires information such as a physician’s observations and medical details, which are stored on paper.

Electronic medical records (EMRs) would undoubtedly help, but Hongsermeier noted that the decision-making process must also be modified. She noted that organizations that use EMRs often have a “Siskel & Ebert” approach to decision support. “We need an actionable decision support system that works in the context of [an organization’s] workflow,” she said. To that end, her colleague Vipul Kashyap, senior medical informatician at Partners, discussed some of the work the company is doing with Semantic Web-based decision support systems.

Partners is using Semantic Web standards such as the Resource Description Framework (RDF) to help make EMR patient data — age, family, and medical history — available to computer models. Having the data in RDF format allows Partners to use the Semantic Web Rule Language (SWRL) to write decision-support rules for treatments or for selecting patients for trials. Partners can use SWRL, for example, to set criteria for using a particular diagnostic test.
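As a rough illustration of the idea, the snippet below is a minimal sketch, assuming Python and the open-source rdflib library, that expresses a few patient facts as RDF triples and applies an eligibility criterion to them. Partners writes its rules in SWRL; a SPARQL query stands in for a rule here, and the namespace, properties, and thresholds are all invented.

    # Minimal sketch (assumed: Python with the rdflib package) of patient facts as
    # RDF triples plus a trial-eligibility check. The namespace, properties, and
    # cutoff values are invented; Partners' actual rules are written in SWRL.
    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    EX = Namespace("http://example.org/emr/")

    g = Graph()
    patient = URIRef("http://example.org/emr/patient/123")
    g.add((patient, RDF.type, EX.Patient))
    g.add((patient, EX.age, Literal(54)))
    g.add((patient, EX.familyHistory, Literal("type 2 diabetes")))

    # Hypothetical criterion: patients over 50 with a family history of diabetes.
    query = """
    PREFIX ex: <http://example.org/emr/>
    SELECT ?p WHERE {
        ?p a ex:Patient ;
           ex:age ?age ;
           ex:familyHistory ?hist .
        FILTER (?age > 50 && CONTAINS(?hist, "diabetes"))
    }
    """

    for row in g.query(query):
        print("Eligible patient:", row.p)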

Similarly, Eric Neumann, global head of knowledge management at Sanofi-Aventis, advocated the use of Semantic Web technology in drug development. “There is a critical need to develop an informatics and knowledge model across the drug [development] pipeline,” he said.

He noted that the traditional approach to accessing and using data in applications has limitations. “IT tools and APIs [application programming interfaces] are great if things are constant,” he said. But this is not the case with drug development. “As applications become more complex, it is necessary to include semantics into them,” he said. RDF, he added, represents knowledge: “It’s not just facts, but assertions.” The Semantic Web approach leads to what is called knowledge aggregation. “With the Semantic Web, you publish meaning, not just data.”
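Neumann’s “knowledge aggregation” point can be illustrated the same way: because RDF statements are self-describing assertions, triples published by independent sources can simply be merged into one graph and queried together. The sketch below again assumes Python and rdflib, and the sources and statements are invented.

    # Minimal sketch of knowledge aggregation with RDF (assumed: Python + rdflib).
    # Two invented sources publish assertions independently; merging them lets a
    # single query span both.
    from rdflib import Graph

    internal_assay = """
    @prefix ex: <http://example.org/drug/> .
    ex:compound42 ex:inhibits ex:kinaseX .
    """

    public_annotation = """
    @prefix ex: <http://example.org/drug/> .
    ex:kinaseX ex:implicatedIn ex:diseaseY .
    """

    g = Graph()
    g.parse(data=internal_assay, format="turtle")
    g.parse(data=public_annotation, format="turtle")

    # Which compounds hit targets that some source links to diseaseY?
    results = g.query("""
        PREFIX ex: <http://example.org/drug/>
        SELECT ?compound WHERE {
            ?compound ex:inhibits ?target .
            ?target ex:implicatedIn ex:diseaseY .
        }
    """)
    for row in results:
        print(row.compound)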

In short, it is no longer just about the data. The keys are the information and knowledge within the data. Unlocking the information and finding suitable ways to share it within a life science organization will be the dominant IT challenges in the years ahead.


