
Modeling is Powerful BUT Has Far to Go


January 22, 2009 | Editor’s Note: Many readers wrote to comment on my recent article, Modeling & Simulation Still Need a Push, in the Predictive Biomedicine e-newsletter. One of the more thoughtful and extensive responses came from Frank Tobin, former head of a modeling initiative at GSK and now the chief computation officer for the start-up Strategic Medicine. He suggests that advancing modeling inside pharma is complicated, identifies key obstacles yet to be overcome, and argues the process cannot be unduly rushed. -- JR 

By Frank Tobin

I read the recent article on modeling and simulation with interest. I'd like to give some perspectives after having run the Scientific Computing and Mathematical Modeling department at GlaxoSmithKline (GSK) for many years and after having used modeling for many different kinds of biopharmaceutical problems. My department is gone now, dissolved in one of the endless rounds of downsizing so trendy these days. 

This department was a noble experiment in using advanced mathematics to solve pharmaceutical problems in research, development, and even manufacturing. It was different from traditional PK/PD or molecular modeling groups, focusing on mathematical modeling not only of biological systems (diseases and drugs), but also of medical devices and other things of interest to pharma (formulations, powder behavior from first principles, etc.).

This was a world-class mathematical modeling department, staffed with superb applied mathematicians or their equivalents from other scientific disciplines. We focused on strategic problems where the modeling return was significant enough to justify the effort, no matter how complicated the problem. The nature of the problems dictated our efforts, and it was not an issue if we had to deal with hundreds or thousands of interacting elements (e.g. large biochemical networks) or extremely non-linear behaviors (just about everywhere).

John Russell’s article discusses the success of modeling and how limited it seems to be within pharma. My group was quite successful, up to a point. I distinguish between technical success and organizational success. I disagree with the point in the article about measuring success. We could measure it, because we worked carefully with our drug development colleagues to define success milestones at the beginning of projects: explaining phenomena, developing plausible mechanisms of action for a compound, predicting new dosing regimens, etc. Really, no different than you might expect for any milestone-based project management approach. 

Before starting any modeling, we had three simple questions that needed to be answered.  If they couldn’t be answered, you could almost guarantee a project would fail – from the organizational viewpoint not the technical one:

(1) Why do you want to do modeling? Is there a business objective or is this an academic exercise?

(2) How will you know if you succeed?

(3) What will you do with the model once you have it?  For what decisions will it be used or what confirmatory experiments will get performed?

You’d be surprised how many industrial modeling projects can’t answer those questions BEFORE the project starts.

One of the problems, at least at GSK, was that in a resource-constrained environment it can be very hard to make a difference on the time scale needed by the larger organization. So, while you can achieve your technical milestones, and achieve them brilliantly, the project may be past the point where your collaborators can use your results. The resource constraints might be too few people to do the modeling or to get the experiments performed. We lost multiple battles in resource meetings with other, non-modeling groups.

Another, perhaps more subtle, problem is getting time from experimental colleagues. If they are on tight schedules, it may be nearly impossible to get experiments done. Even with an agreement in advance to have these experiments performed – part of the project compact – the reality is that they may not get done when the time arrives. We had one instance where we had predicted an extraordinary and non-obvious improvement in the efficacy of a compound through a non-intuitive and cheap dosing regimen. It was well received by the clinical project team, yet when it came time for the confirmatory rat experiments to test the hypothesis, we couldn't get them prioritized highly enough to be performed within the time frame of the decisions that were needed. Experimental drug development projects won the resource battle.

Even if you can use modeling to make a difference, compound after compound, you may not succeed. Our experimental colleagues may feel that if they're going to do the experiments anyway, then why should they get involved with the modeling? This happened to us at GSK with cardiac electrophysiology liability studies. These are required for regulatory purposes. That we could use the modeling to explain individual animal variations or come out with better ion channel mechanisms of action just wasn’t enough to overcome this organizational inertia. I’ve heard the exact same complaint from an industrial modeling company trying to enter this area. Large corporations can sometimes be composed of silos, optimizing for local, not global, goals. Scientific quality does not always win out over turf.

There are endemic problems in the industry with respect to modeling, with variations in detail and severity from company to company:

1. I believe that the proper uptake of modeling is a generational issue with pharma management. At the academic and government funding levels, there is an explosion of interest in both the biological and mathematical communities in doing quantitative modeling for biology.

2. In my experience, the people doing the modeling at pharma have sometimes come out of PK groups or been biologists by training; or they may be pharmacologists or medicinal chemists working on mechanisms of action. There is craft in modeling, just like any other area of science. Without the modeling craft, they have not had the proper mathematical and numerical training to do the work right or to handle the complexity of the biology. I saw a talk once by a pharmacologist trying to model the allosteric interactions of a new molecule. Instead of solving the differential equations numerically and being done with it in less than a day’s effort, he spent months trying to get an analytic (i.e. closed-form) answer. Why? Because that was the only way he had learned in school, and he had never seen any other approach!

I've seen too many presentations where people had numbers from solving equations, but had no idea if they were accurate or not. For example, using some simplistic integrator (e.g. Euler’s method) for differential equations and not worrying about stiffness, or about the fact that they may not have actually solved the equations, even though nothing died and numbers were produced. Some of the commercial visually-oriented simulation tools suffer from this problem. Or treating everything as first order. Or everything as linear. I’ve seen talks where the biology has been ‘modified’ to fit the (simplistic) computational tools because they could not handle the non-linearities. [This] happens with those who always use the same 'tools' instead of having the skills to pick or develop the computational approaches necessary for the problem. The world is neither linear nor low dimensional!
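To make the stiffness trap concrete, here is a minimal sketch (a hypothetical linear test equation and arbitrary step counts of my own choosing, not any real biological model): a fixed-step Euler integrator happily produces numbers for a stiff ODE, but those numbers are astronomically wrong, while an implicit solver handles the same problem with ease.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Stiff test problem: a fast transient decaying at rate 1000
    # superimposed on a slow solution that follows cos(t).
    # Exact solution with y(0) = 0 is y(t) = cos(t) - exp(-1000 t).
    return -1000.0 * (y - np.cos(t)) - np.sin(t)

def explicit_euler(f, y0, t0, t1, n_steps):
    """Naive fixed-step forward Euler integration."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t += h
    return y

exact = np.cos(1.0) - np.exp(-1000.0)   # analytic value at t = 1

# Euler with 100 steps (h = 0.01): the step exceeds the stability
# limit h < 2/1000, so every step amplifies the error ninefold.
# Nothing "dies" -- a number comes out -- but it is garbage.
y_euler = explicit_euler(rhs, np.array([0.0]), 0.0, 1.0, 100)

# An implicit (BDF) solver designed for stiff systems converges.
sol = solve_ivp(rhs, (0.0, 1.0), [0.0], method="BDF",
                rtol=1e-8, atol=1e-10)

print(abs(y_euler[0] - exact))      # enormous: Euler diverged
print(abs(sol.y[0, -1] - exact))    # tiny: stiff solver converged
```

The point is not that Euler's method is always wrong, but that producing numbers is no evidence the equations were actually solved; a stiffness-aware solver, or at minimum a step-size convergence check, is part of the craft.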

3. There is quite a bit of hype that needs to be overcome, especially coming from some of the commercial modeling companies. I have heard pitches that "we can do any problem you have", when you know damn well that the problem is brutally hard, both biologically and mathematically. When you challenge them, the house of cards comes crashing down.

4. Some problems are intrinsically hard, and simple solutions (e.g. the typical PK approach of first-order compartments) just don’t cut it. Sometimes the best decision is not to start the work at all, rather than to produce something too simplistic to be even close to accurate. However, doing it right means the problem can get involved – for example, getting accurate parameters by using multiple experimental data sets simultaneously is a difficult numerical analysis (i.e. optimization or inverse) problem. While some practitioners may not realize it, oftentimes there are unsolved mathematical problems that need to be overcome, especially determining parameters in high-dimensional, noisy spaces from multiple simultaneous experimental datasets. You can't just generate numbers without even thinking about whether or not they are right.

5. Some of the academic models out there are too simplistic. They have been developed to explore the mathematics more than to solve a biological problem. When you see a lymphoma model with only two variables, you have to ask whether the problem has been simplified that far only because the interest lies in doing stability analyses. Such models don't translate well into a real biological situation. Or, if they are used, they won't do a good job because too much biology is simplified or just missing. Or the parameters are chosen somewhat arbitrarily.

6. Model parameters are a whole area of concern. I have seen many talks, industrial and academic, where key parameters are taken from the literature even when there are experimental datasets to use. The literature values are not questioned; if they were published, that’s good enough. Frankly, I’m puzzled why this happens. Laziness? Lack of understanding? Lack of experience? Lack of proper training? In our work, we have found that literature parameters are often wrong and do not allow our models to do a good job of matching experimental data, especially if you want one parameter set to match all the experimental data. Sometimes the model must be extended to include hidden experimental conditions, such as unmeasured temperature or pH, and those parameters backed out during the optimization. Literature values are good starting points, but they need to be re-determined from the available data – all the data – simultaneously. Having the most accurate parameters is critical to model quality, especially in an industrial setting. Too many models are built with little care taken to optimize the parameters.
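What "all the data, simultaneously" means in practice can be sketched in a few lines (synthetic exponential-decay data and arbitrary parameter values chosen purely for illustration): residuals from every experiment are stacked into one vector, so a single shared parameter set must explain all the datasets at once rather than each one separately.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Two synthetic "experiments" sharing one rate constant k,
# but with different initial amounts (hypothetical values).
k_true, a1_true, a2_true = 0.7, 5.0, 2.0
t1 = np.linspace(0, 5, 20)
t2 = np.linspace(0, 8, 25)
y1 = a1_true * np.exp(-k_true * t1) + 0.05 * rng.standard_normal(t1.size)
y2 = a2_true * np.exp(-k_true * t2) + 0.05 * rng.standard_normal(t2.size)

def residuals(p):
    """Stack residuals from BOTH datasets so one shared rate
    constant must fit all the data simultaneously."""
    k, a1, a2 = p
    r1 = a1 * np.exp(-k * t1) - y1
    r2 = a2 * np.exp(-k * t2) - y2
    return np.concatenate([r1, r2])

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0])
k_hat, a1_hat, a2_hat = fit.x
print(k_hat, a1_hat, a2_hat)
```

In a real problem the shared parameters are many, the spaces are high-dimensional and noisy, and identifiability is far from guaranteed; that is where the genuinely hard inverse-problem mathematics lives.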

7. Nothing new here, but scientific communities do not talk to each other - especially between the PK/PD world and mathematical biology or numerical analysis. So there is sometimes ignorance of what has already been done. Or some of the incredible new advances in numerical analysis in the last two decades are ignored.

8. (A personal peeve):  pharma views of modeling, even the article’s use of "Modeling & Simulation" comes from the PK world. Yet, there is a far larger and more sophisticated world of modeling - in applied mathematics, mathematical biology, computational science & engineering (CSE), and other areas of science and engineering. For example, there are chemical engineers who do modeling of bacterial metabolism for the purposes of improving fermentation or of pattern formation of gene expression during development.

9. Modeling in pharma is funny. It can be "feared" or sometimes it can be "magical". I rarely had biologists ask about the details of how our models were constructed; on the other hand, engineers would drive us crazy until we had satisfied their quest for all the details. I once gave an R&D senior management presentation where we had one equation on a slide. It was there only to illustrate a minor point. Before I could even discuss the slide, there was audible twittering through the room, as if I was going to overwhelm them by explaining the math. I’ve learned never to underestimate math phobia. Modeling is a technology like any other and has its place, its advantages, and its restrictions. Perhaps this comment harks back to both the generational and the craft-expertise points above.

The article talked about trying to get lists of models. There are already efforts underway to have standardized modeling exchange formats (e.g. SBML and CellML and others in the simulation community) and some modeling repository initiatives already exist – e.g. the CellML repository at the University of Auckland. Why should there be duplication of existing efforts?  If industrial modeling is any good, let the models be submitted to any of the public repositories.

Having a list of pharma projects will be inherently incomplete because often you can't talk about them. Take my own department's projects, some I can talk about, some I can't even acknowledge their existence. Or, I can talk about them, but I can’t tell you what data I used, because the information is proprietary.

Just listing a model is not enough because you need the details of how the model was built – entities, interactions, assumptions, etc. Unless you know most of the details, it's useless. Saying you have an osteoporosis model doesn't help because it doesn't tell if it has genetic elements (a gene regulatory model), a tissue model (bone remodeling), a bone structural model (2D or 3D?), a signal transduction model (PTH signaling), etc.  And then, is the model any good - how was it calibrated, how were parameters determined, etc?

Modeling has a long history in science although this is not always appreciated in pharma. D’Arcy Thompson’s seminal work, “On Growth and Form”, was first published in 1917.

I truly believe the future is healthy for modeling in biopharmaceuticals, but it will take time. Sure, some companies have modeling groups or individual modelers. And there have been successes. But not on the scale that is needed for a revolution in the way drug discovery and development can be improved, nor on the scale at which modeling can help. Modeling, when done right, is an impressive technology. However, like all technologies, it must be nurtured and used properly. It will take time: time for generational change in management. Pharma is on a long down cycle right now, and some companies (like GSK) just won't listen with all the other distractions. It will take time for more professional modelers to be trained and make their way into industrial positions; time for more advances to diffuse in from their academic origins; time for more potent mathematical tools to be developed.

But, it will happen.

Dr. Frank Tobin is the Chief Computation Officer of Strategic Medicine Inc. He can be reached at ftobin@strategicmedicine.com.

This article first appeared in Bio-IT World’s Predictive Biomedicine newsletter.


1 Comment


Rarely do I save online articles of this nature, but Dr. Tobin's essay is obviously from a master of the art and has insights worth remembering next time I'm in the job market. I come from a mixed experimental, statistical, and mathematical (mechanistic) modeling background, and the statement that scientific communities don't talk to each other is so true. This includes the statisticians and the mathematical modelers. When I was a visiting scientist in the theoretical biology group at Los Alamos, I had a conversation with a physicist there who had the attitude that statistics was mostly just good for t-tests and the like! It seemed to him to be a stagnant field. Another felt that a qualitatively accurate model that suggested new insights or understanding was valuable enough, even if the actual parameter values were way off. Having since joined a biostatistics department, I now appreciate that models made without consideration of identifiability are, in a real sense, worthless. Statistics is a very active field of research, not stagnant, but mathematicians rarely read the statistical literature or attend statistical meetings. Case in point: Dr. Tobin doesn't even use the word “statistics” in his essay, despite the fact that analyzing data is pretty basic, and statistics is all and only about analyzing and presenting data. Mathematicians are not statisticians! In short, mechanistic mathematical modelers and statistical modelers need to work together: mechanistic models do provide believable insights which may be experimentally infeasible or hard to come by in any other way, and statistical models don't offer these; but conversely, without statistical models for the error structure of the data, rigorous inference is not possible, and a model which cannot be tested against real data isn't worth much. We need mathematical modelers for mechanistic insights and statistical modelers for correctly handling variability in data – which all data have.


For reprints and/or copyright permission, please contact  Jay Mulhern, (781) 972-1359, jmulhern@healthtech.com.