
Solve the Big Problems First


By Ernie Bush

November 10, 2009 | Bush Doctrine | Hacking and hackers did not always have the derisive connotation they carry today. Currently, we think of a hacker as someone capturing our personal information or defeating software copy-protection algorithms. But there were hackers long before widespread use of these systems, when the term referred to software developers who were unstructured and unsophisticated in their approach to writing code. 

Hackers of yore threw code together quickly, often to solve some immediate problem, or simply to see if they could make it work. I was one of them, and proud of it. During my graduate school years, computers were just starting to expand into virtually all scientific instrumentation. Back then (the mid-1970s), most programming courses focused on teaching Cobol or Fortran IV on the campus mainframe. So if you wanted to leverage the proliferating DEC or Data General mini-computers, or better yet the new micro-computers like the Apple I or II, you pretty much had to teach yourself. Luckily, there were various user and enthusiast communities that could help guide your efforts toward becoming a full-fledged, next-generation code writer (i.e., a hacker). 

Those who recall entering boot-strap code through toggle switches on the front of the computer will also remember that these were relatively slow machines with scant memory. So a major part of hacking was constantly rewriting the code to use less memory and to make it run faster (does anybody do this nowadays?). And rule number one of software performance optimization was: solve the big problems first. Why optimize a section of code if you spend only 0.1% of the CPU time executing that section? Doesn't it make more sense to optimize the code where you spend 90% of your CPU time?
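As a purely illustrative aside (this sketch is mine, not part of the original column), that principle is exactly what profilers are for: measure first, then rewrite only the code that dominates the run time. The function names below are hypothetical stand-ins.

import cProfile
import pstats


def small_task():
    # Cheap bookkeeping: tuning this would save almost nothing.
    return sum(i for i in range(1_000))


def big_task():
    # The hot spot: nearly all of the CPU time is spent here.
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total


def run():
    for _ in range(10):
        small_task()
        big_task()


if __name__ == "__main__":
    # Profile first; the report shows which function dominates the run time,
    # and that is the only one worth rewriting.
    profiler = cProfile.Profile()
    profiler.runcall(run)
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)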
 
The Biggest Problem
Let's apply that principle to today's productivity issues in Pharma R&D, and hypothetically pose a similar question regarding the discovery and development of new medicines. Ask 1,000 pharma executives (and their FDA counterparts) to list their biggest problems in pharma R&D, and I'll bet the following answer appears on most lists: "Preclinical safety studies do not adequately predict clinical adverse events."

We have all heard of the huge losses incurred by pharma when drugs are withdrawn or dropped in late-stage development due to unexpected safety issues. Yet the perceived inability to predict clinical safety is not the major reason for drug failures; the leading cause is lack of efficacy. As an industry, we have not yet confronted the fact that our discovery efficacy models are actually far worse at predicting clinical outcome than our toxicology models. The second reason for compound failures is non-clinical safety, which one could argue is really what you want, i.e., keeping the bad actors out of the clinic and off the market. The third source of failures is "business reasons," i.e., for one reason or another the company decided the compound was no longer worth pursuing as a profitable marketed drug.

To make matters worse, reason #5, pharmacokinetics/bioavailability, dropped from roughly 34% in a similar poll 25 years ago to just 6% today. And yet overall attrition rates have not improved dramatically during that time. Eliminating one obstacle simply exposes other hurdles on the road to a new drug. Therefore, even if we could completely eliminate clinical safety as a reason for termination, the impact on overall attrition rates would likely be negligible, because it would simply increase the number of compounds that fail for other reasons. 
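To make that arithmetic concrete, here is a back-of-the-envelope sketch with purely hypothetical pass rates (the numbers below are illustrative assumptions, not figures from any poll): as long as efficacy remains the dominant filter, even perfect safety prediction lifts the overall success rate only slightly.

# Hypothetical, made-up pass rates for each development hurdle.
hurdle_pass_rates = {
    "efficacy": 0.25,
    "clinical_safety": 0.90,
    "business": 0.75,
}

def overall_success(pass_rates):
    # Treat the hurdles as independent filters applied in sequence.
    p = 1.0
    for rate in pass_rates.values():
        p *= rate
    return p

baseline = overall_success(hurdle_pass_rates)

# "Perfect" safety prediction: no compound is lost to clinical safety.
no_safety_losses = dict(hurdle_pass_rates, clinical_safety=1.0)
improved = overall_success(no_safety_losses)

print(f"baseline success rate:           {baseline:.1%}")  # about 16.9%
print(f"with safety failures eliminated: {improved:.1%}")  # about 18.8%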

My point is that we should not be labeling current preclinical safety evaluation practices as poor; the numbers actually suggest they are pretty good. Even if we were perfect at predicting clinical adverse events, it would not solve the R&D productivity problems facing pharma. Preclinical safety prediction is not the big problem, and by analogy to software performance optimization, it should not be the top problem we are trying to solve.

Does this mean we don’t need to improve our preclinical safety evaluation practices? And more interestingly, if improving our clinical predictivity is not going to improve attrition rates significantly, why does pharma spend so much time and money trying to achieve this objective? We’ll tackle that issue in my next column. If you have thoughts on this topic, please drop me a line.

Ernie Bush is VP and scientific director of Cambridge Health Associates. He can be reached at: ebush@chacorporate.com.


This article also appeared in the November-December 2009 issue of Bio-IT World Magazine.


 


