
Combining Drug Toxicity Knowledge


By Eric K. Neumann


July | August 2006 | There are times in drug discovery when a surprise is welcome. Consider the origins of such drugs as penicillin, tamoxifen, and Viagra, whose applications were found serendipitously. But drug toxicity is never a welcome surprise. Nearly half of the drugs entering clinical trials fail because of some form of serious toxicity that was missed in preclinical studies. These failures should not happen at such a late stage in the process.

As lowering safety standards is not an option, safety must be addressed without inflating drug costs. To that end, the FDA, in its Critical Path (CP) document, has stressed the importance of utilizing knowledge. The document emphasizes the development of tools that “will build on knowledge delivered by recent advances in science, such as bioinformatics, genomics, imaging technologies, and materials science.”

The FDA describes toxicity in terms of adverse interactions of compounds that are significant at the proposed effective doses. It argues that drug development efforts have not benefited from recent advances in the ability to distinguish “potential from actual toxicity.” That is, scientific knowledge that is relevant to toxicity is being created, but it is not being applied effectively. The FDA adds: “The inability to better assess and predict product safety leads to failures during clinical development and, occasionally, after marketing.” This is even true at Phase III, where the cost of drug development really takes off.

[Figure: Diverse contributions for toxicity knowledge that would support multiple applications.]
So what can be done? A major theme of CP is the need to improve the use of scientific knowledge, going beyond a reductionist approach, because current knowledge of genes and gene expression “does not constitute knowledge at the level of the systems biology of the cell, organ, or whole organism...” This requirement has major implications: to improve our understanding, we need to go beyond the current data-table approach and leverage more semantic approaches.

What kinds of knowledge are required to make better predictions and better decisions? A wide range of disciplines forms a web across molecular and cellular biology, chemistry, and physiology, including the following categories (sketched as data structures below):

- target pleiotropic effects (one gene, multiple functions/phenotypes) beyond the validated therapeutic mechanism;
- off-target (secondary) effects on other proteins and processes due to limited primary target specificity;
- tissue-specific toxicity that may be observable through molecular and cytological biomarker effects;
- structure-activity relationships between a compound and the aforementioned effects;
- drug metabolism and pharmacokinetics mechanisms, and their variation between species and within human populations (genetic backgrounds); and
- drug interactions that emerge at the systems level.
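One way to make these categories computable is to capture each finding as a structured evidence record tagged with its category. The following is a minimal sketch in Python; all class and field names are hypothetical illustrations, not part of any resource discussed in this article.

```python
from dataclasses import dataclass
from enum import Enum, auto

class KnowledgeKind(Enum):
    """The six knowledge categories listed above (names are illustrative)."""
    PLEIOTROPY = auto()           # one gene, multiple functions/phenotypes
    OFF_TARGET = auto()           # secondary effects from limited target specificity
    TISSUE_TOXICITY = auto()      # tissue-specific, biomarker-observable effects
    SAR = auto()                  # structure-activity relationships
    DMPK = auto()                 # metabolism/pharmacokinetics, species and population variation
    SYSTEMS_INTERACTION = auto()  # drug interactions at the systems level

@dataclass
class ToxEvidence:
    compound_id: str    # e.g., a PubChem compound identifier
    kind: KnowledgeKind
    finding: str        # free text or an ontology term describing the observation
    species: str        # organism in which the finding was made
    weight: float       # analyst-assigned confidence, 0.0 to 1.0
```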

Most contemporary applications typically focus on only one of these items. By combining such knowledge, much more accurate predictions of toxicity can be made. Furthermore, it is hoped that translational research, as proposed in CP, will help us utilize all forms of translational toxicity knowledge across all drug development stages and all projects. The CP states: “There is hope that greater predictive power may be obtained from in silico (computer modeling) analyses such as predictive toxicology.” Reductions of as much as 50 percent in overall drug development costs are considered possible through the extensive use of new forms of informatics.
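To make “combining such knowledge” concrete, here is a deliberately naive aggregation sketch that builds on the ToxEvidence record above: it flags a compound only when independent knowledge categories agree, rather than when many findings of a single kind accumulate. The scoring rule and threshold are assumptions for illustration, not a validated predictive-toxicology model.

```python
from collections import defaultdict

def toxicity_risk(evidence: list[ToxEvidence], threshold: float = 1.5) -> bool:
    """Sum the strongest finding per knowledge category and compare to a threshold.

    Uses KnowledgeKind and ToxEvidence from the previous sketch.
    """
    best_per_kind: defaultdict[KnowledgeKind, float] = defaultdict(float)
    for e in evidence:
        # Each category contributes at most its single strongest finding,
        # so agreement across categories outweighs redundant findings of one kind.
        best_per_kind[e.kind] = max(best_per_kind[e.kind], e.weight)
    return sum(best_per_kind.values()) >= threshold
```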

Many Web-based resources for a growing number of compounds also exist, including the National Library of Medicine’s ToxNet (1), NCBI’s PubChem (2), and EPA’s DSSTox (3). However, these are still isolated databases that need to be connected to other forms of knowledge in meaningful ways, such as biomarker phenotypes and associated pathways. Simply having lists of biomarker signatures for drug toxicity effects does not help distinguish the cause of toxicity from a secondary, nontoxic effect. Such convoluted information makes it very difficult to know which effects seen in animals will indeed be predictive of toxic responses in humans. Connecting all forms of knowledge for multiple purposes requires a data semantics approach as specified by the Semantic Web (4), which will serve to improve the analysis and prediction of toxicity in both animal and human studies.
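As a small illustration of what such Semantic Web-style linking could look like, the sketch below uses the Python rdflib library to assert a compound’s biomarker effect and associated pathway as RDF triples, then queries across those links. Every URI and property name here is invented for the example; a real deployment would reuse shared identifiers from resources such as PubChem and community ontologies.

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, RDFS

TOX = Namespace("http://example.org/tox#")  # hypothetical vocabulary
g = Graph()
g.bind("tox", TOX)

compound = URIRef("http://example.org/compound/C123")  # made-up identifier
g.add((compound, RDF.type, TOX.Compound))
g.add((compound, RDFS.label, Literal("example compound")))
g.add((compound, TOX.elevatesBiomarker, TOX.SerumALT))
g.add((TOX.SerumALT, TOX.indicatorOf, TOX.Hepatotoxicity))
g.add((TOX.SerumALT, TOX.inPathway, TOX.LiverInjuryResponse))

# Which pathways connect a compound, via a biomarker, to a toxicity indicator?
q = """
    SELECT ?biomarker ?pathway WHERE {
        ?c tox:elevatesBiomarker ?biomarker .
        ?biomarker tox:inPathway ?pathway .
    }
"""
for row in g.query(q, initNs={"tox": TOX}):
    print(row.biomarker, row.pathway)
```

Once biomarker signatures, pathways, and compound records share such links, the question of whether an effect seen in animals is causal or secondary can at least be posed as a query rather than a manual literature search. As Louis Pasteur once said: “In the fields of observation, chance favors only the prepared mind.”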

 

E-mail Eric K. Neumann at eneumann@teranode.com.

 

 


