
Strategy? What Strategy?


March 16, 2010 | The Bush Doctrine | Last August, we conducted a project to explore and define current best practices in pre-GLP safety assessment at big pharma. More than ten heads of preclinical safety (all from major pharma companies) were interviewed to get an understanding of their processes and strategies. We then produced and distributed a detailed survey to several hundred safety experts in the industry to gain further insights. Several resulting conclusions stand out as either reassuring or disturbing:

1) Companies are increasingly pushing safety evaluation ‘upstream,’ hence the emergence of discovery safety or discovery toxicology departments;

2) Sophisticated informatics systems to capture, align, present, and analyze the vast quantities of discovery data on New Chemical Entities (NCEs) contribute real value for lead optimization and candidate selection processes; and

3) Not a single company (or person) reported having a tool, or even a coherent strategy, for integrating all these data and making the critical determination of which lead or compound is best to move forward.

Point #3 is particularly surprising and sobering. When we asked questions such as ‘How do you decide which tests to run?’ or ‘How do you weigh results?’ the responses were simply: ‘It is up to the project team. Some project teams run a lot of tests, others only a few, and some seem to run them all but use only one or two of the test results for optimization decisions.’

More than half of the senior members of the safety community felt the process was somewhat out of control, often with little rationale for why project teams picked particular tests or how they used the data for decision making. The project teams themselves often felt overwhelmed by all the data and were clearly frustrated by the lack of guidance on how best to utilize or assimilate the results. It is tempting to conclude that we have spent vast sums of money on developing, implementing, and executing new screening methods (with associated informatics tools), but appear to be clueless when it comes to applying a practical, integrated interpretation strategy.

This is fairly disturbing, but on deeper reflection, not too surprising. We must realize that all in vivo biology (including toxicology) reflects a highly integrated system of enzymes, cells, organs, and beings. Thus, it is very unlikely that any single test result, such as an IC50 for enzyme inhibition, could ever reliably predict clinical outcome. There are simply too many interacting processes (such as absorption, metabolism, feedback loops, and transport pathways) at work to expect that a single test parameter will be predictive of clinical effects.

Systems Integration

It is thus difficult to say with certainty what any one test result means when judging an NCE’s potential as a medicine. Its significance can only be determined in the context of the whole system. Clearly what is needed is a tool that puts the screening data in the context of the whole organism, so modeling and simulation tools immediately come to mind.

Could the emerging field of systems toxicology fill this void? Recent announcements from companies such as Genstruct and Genedata indicate that bioinformatics vendors are eagerly exploring the systems toxicology space, but I worry they too are looking at the problem too narrowly. Knowing that an NCE has the potential to interact with a specific toxicity pathway gives little more indication of its effect in the context of the whole being than knowing its IC50 against a specific enzyme. For several thousand years now, Rule #1 in toxicology has been: “It is the dose that makes the poison.” The more modern interpretation is that exposure at the active site is critical to seeing adverse effects. Thus, any predictive model that hopes to put screening data, or systems toxicology results, in the human context must include some estimate of exposure, including tissue and organ exposure for a given dose.

Luckily, there is a fairly well developed modeling approach that has been widely used for this purpose: physiologically based pharmacokinetic (PBPK) modeling. The question then becomes: Who is knitting together the systems toxicology tools and the PBPK tools to develop a holistic model for predicting clinical adverse effects? No one, to my knowledge, but I would be very interested in learning of any such developments.
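To make the exposure point concrete, here is a minimal PBPK-style sketch in Python: a perfusion-limited, two-compartment (blood plus liver) model given an intravenous bolus dose, integrated with a simple Euler loop. The structure and every parameter value (volumes, blood flow, partition coefficient, intrinsic clearance, the doses) are hypothetical placeholders for illustration, not the physiology of any real compound or the workings of any commercial PBPK package.

# Minimal, illustrative PBPK sketch: blood + liver, perfusion-limited,
# IV bolus dose. All parameter values are hypothetical placeholders.

def simulate_pbpk(dose_mg, t_end_h=24.0, dt_h=0.001):
    V_blood = 5.0      # blood volume (L), assumed
    V_liver = 1.8      # liver volume (L), assumed
    Q_liver = 90.0     # hepatic blood flow (L/h), assumed
    Kp_liver = 4.0     # liver:blood partition coefficient, assumed
    CL_int = 30.0      # hepatic intrinsic clearance (L/h), assumed

    A_blood, A_liver = float(dose_mg), 0.0   # drug amounts (mg); bolus into blood
    t, cmax_liver = 0.0, 0.0

    # Forward-Euler integration of the two mass-balance ODEs
    while t < t_end_h:
        C_blood = A_blood / V_blood          # blood concentration (mg/L)
        C_liver = A_liver / V_liver          # liver tissue concentration (mg/L)
        C_liver_out = C_liver / Kp_liver     # venous blood leaving the liver

        dA_blood = Q_liver * (C_liver_out - C_blood)
        dA_liver = Q_liver * (C_blood - C_liver_out) - CL_int * C_liver_out

        A_blood += dA_blood * dt_h
        A_liver += dA_liver * dt_h
        cmax_liver = max(cmax_liver, A_liver / V_liver)
        t += dt_h

    return cmax_liver   # predicted peak liver concentration (mg/L)

if __name__ == "__main__":
    for dose in (10, 50, 250):   # hypothetical doses (mg)
        print(dose, "mg ->", round(simulate_pbpk(dose), 2), "mg/L peak liver concentration")

Even a toy model like this makes the point: the number that matters for toxicology is not the in vitro potency by itself, but the tissue concentration a given dose actually produces. Real PBPK tools do the same job with many more compartments and measured physiological data.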

I believe what is needed is for the systems toxicology companies to collaborate with the PBPK tool suppliers to achieve a fully integrated system for predicting clinical adverse events. Such a system should improve our ability to predict clinical efficacy as well, or at least give us a preliminary handle on the therapeutic index at a very early stage of discovery, during lead finding/optimization (i.e., when it can easily but productively influence the chemistry program).
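To illustrate the kind of knitting-together I have in mind, here is an equally hypothetical sketch that combines a PBPK-predicted peak tissue concentration (such as the output of the sketch above) with in vitro potencies against an efficacy target and a toxicity-linked pathway to produce a crude early therapeutic-index flag. The candidate, the potency values, and the 10-fold margin threshold are invented for illustration only.

# Hypothetical sketch: combine a PBPK-predicted tissue exposure with
# in vitro potencies to flag a crude, early therapeutic-index concern.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    efficacy_ic50_mg_per_l: float      # potency at the therapeutic target (assumed units)
    tox_pathway_ic50_mg_per_l: float   # potency at a toxicity-linked pathway (assumed)

def crude_therapeutic_index(cand: Candidate, predicted_cmax_mg_per_l: float) -> dict:
    # Rough, assumption-laden screen: compare predicted exposure to each potency
    efficacy_cover = predicted_cmax_mg_per_l / cand.efficacy_ic50_mg_per_l
    tox_margin = cand.tox_pathway_ic50_mg_per_l / predicted_cmax_mg_per_l
    return {
        "candidate": cand.name,
        "exposure_over_efficacy_ic50": round(efficacy_cover, 2),   # want comfortably > 1
        "tox_ic50_over_exposure": round(tox_margin, 2),            # want comfortably > 1
        "flag": "review" if tox_margin < 10 else "ok",             # arbitrary 10x threshold
    }

if __name__ == "__main__":
    nce = Candidate("NCE-001", efficacy_ic50_mg_per_l=0.2, tox_pathway_ic50_mg_per_l=5.0)
    # Hypothetical PBPK-predicted peak tissue concentration (mg/L)
    print(crude_therapeutic_index(nce, predicted_cmax_mg_per_l=2.4))

Nothing here is a substitute for the real modeling work; it simply shows where the exposure estimate and the pathway data would meet.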

Ernie Bush is VP and scientific director of Cambridge Health Associates. He can be reached at ebush@chacorporate.com


