
Flexibility in Translational Research Informatics


Tools are becoming increasingly crucial to integrate and analyze translational data.

By Jonathan Sheldon

Nov. 12, 2008 | The growth of multi-site, multi-technology translational research studies (the clinical application of scientific medical research from the lab to the bedside) is placing increased emphasis on informatics capabilities to integrate, analyze, and visualize translational data. Many commonalities are emerging in translational studies, in both medical institutes and drug companies, suggesting that the time is right to look at a more product-based approach to translational informatics. While configurability is still necessary for any system, it is now a considerably smaller part of the total solution than it was in earlier custom-built systems.

Although constrained by HIPAA compliance and patient privacy concerns, medical institutes have been blazing a trail for pharmaceutical and biotech companies in their adoption of translational research. Many large research institutes are making great progress in implementing electronic, searchable patient databases. These databases allow them to leverage their extensive patient information and samples to further understand the mechanisms of disease and the characteristics of their patient populations.

By contrast, pharmaceutical and biotech companies have historically been slow to fully adopt translational research. This is often due to data access restrictions and reflects the view that clinical data exists only to support FDA submissions, not for research purposes. With any delay costing a company upwards of $1 million per day, "nice to have" data was typically not an option. However, this is beginning to change. More and more companies are seeking information and proposals about tools to support translational research. Interest in an informatics infrastructure to support translational research is increasing, particularly for the integration of biomarker data, hypothesis generation, and ad hoc querying.

These fundamental changes are seen as a way to make better use of biomarker techniques and of existing in-house and public clinical data sets. They also mark a move away from rigid use of clinical data and toward a model that supports an iterative approach to compound development. In this model, compound progression occurs only in combination with treatment-selection and safety biomarkers and a solid understanding of disease pathophysiology. This change enables drug companies to move away from linear processes toward a business model that produces safer, more effective drugs at lower development cost.

Mash-ups and Data Marts
Hypothesis generation and ad hoc querying for translational research require the ability to flexibly integrate, analyze, and visualize data. In Web 2.0 terminology, this is a data "mash-up"—taking information from data warehouses and marts, web services, and personal data tables such as Excel spreadsheets, and providing a dynamic view of the data to address a particular scientific question. Mash-ups sit alongside (rather than replace) typical data warehousing approaches to enable fast responses to business questions and flexibility in hypothesis generation. This type of configurable system can also take advantage of new services and databases from programs such as caBIG and I2B2, whose web services and data schemas can be easily integrated into data mash-ups. Users can also feed in data from legacy systems using this approach.
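To make the mash-up idea concrete, the following is a minimal sketch, not a description of any particular product: records from a hypothetical data warehouse, a web-service payload, and a researcher's spreadsheet export are merged into one dynamic view keyed on a shared patient identifier. All source names and fields are illustrative assumptions.

```python
# Hypothetical source data: a clinical warehouse extract, a biomarker
# result fetched from a web service, and a personal Excel export.
warehouse = [
    {"patient_id": "P001", "diagnosis": "type 2 diabetes"},
    {"patient_id": "P002", "diagnosis": "hypertension"},
]
web_service = [
    {"patient_id": "P001", "hba1c": 7.9},
]
spreadsheet = [
    {"patient_id": "P001", "consented": True},
    {"patient_id": "P002", "consented": False},
]

def mash_up(*sources):
    """Merge records from any number of sources on patient_id,
    building one combined record per patient."""
    view = {}
    for source in sources:
        for record in source:
            view.setdefault(record["patient_id"], {}).update(record)
    return view

combined = mash_up(warehouse, web_service, spreadsheet)
```

The key design point is that sources can be added or swapped without changing the integration logic, which is what gives a mash-up its flexibility relative to a fixed warehouse schema.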

Another approach to data integration in translational research is the use of disease-specific data marts. De-identified patient data extracted from more operationally focused databases can be combined with sample status from LIMS sources into a common view. This approach uses disease ontologies and Extract, Transform, and Load (ETL) operations to take legacy data and make it available to new web-friendly architectures.

Decision Support for Clinicians
With the growing availability of comprehensive clinical data, researchers can assemble multiple study cohorts by slicing and dicing patient populations to identify the most appropriate biomarkers. New tools are now available to enable researchers and clinicians to browse clinical and integrated sample data and to select disease and non-disease cohorts based on multiple criteria. Critically, these systems are designed in collaboration with clinicians to ensure appropriate interaction with, and display of, accurate predictive data that supports evidence-based decision making.
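Multi-criteria cohort selection of the kind described above can be sketched in a few lines. The patient table and the criteria here are hypothetical; the point is that disease and non-disease (control) cohorts fall out of the same filtering machinery with different predicates.

```python
# Hypothetical integrated patient/sample table.
patients = [
    {"id": "P1", "age": 63, "diagnosis": "COPD", "sample": True},
    {"id": "P2", "age": 58, "diagnosis": None,   "sample": True},
    {"id": "P3", "age": 71, "diagnosis": "COPD", "sample": False},
]

def select_cohort(rows, *criteria):
    """Keep rows that satisfy every criterion (each a predicate)."""
    return [row for row in rows if all(c(row) for c in criteria)]

# Shared criteria: a usable sample and an age window.
has_sample = lambda r: r["sample"]
in_age_range = lambda r: 50 <= r["age"] <= 75

disease_cohort = select_cohort(
    patients, has_sample, in_age_range,
    lambda r: r["diagnosis"] == "COPD")
control_cohort = select_cohort(
    patients, has_sample, in_age_range,
    lambda r: r["diagnosis"] is None)
```

In an interactive tool, each predicate corresponds to a filter the clinician toggles in the interface, so cohorts can be redefined on the fly without new queries being written by hand.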

The combination of these capabilities and trends is leading the way to a more configurable, product-based approach to translational research informatics, which takes data from multiple sources and delivers it to end users in a meaningful way. By working together, medical institutes and drug companies can learn from each other to re-engineer the process of drug discovery and deliver a major step forward in understanding the molecular basis of disease. 

Jonathan Sheldon is the CSO of Inforsense. He can be reached at jsheldon@inforsense.com.

___________________________________________________

This article appeared in Bio-IT World Magazine.
