
The Promise of Predictive Modeling in Drug Discovery

Predictive computational modeling is already advancing drug design.

By Mike May

Sept. 5, 2008 | Surveying the use of predictive computer modeling in the pharmaceutical business, Howard Asher, CEO of Global Life Sciences (GLS), says, “This field is almost in an embryonic state compared to what it could be.” He adds, “Lots of convergence of knowledge must still occur.”

Asher, the former director of global life and health sciences at Sun Microsystems (see, “IT Needs to Do its Part for Compliance,” Bio•IT World, Dec. 2002), began watching the convergence of IT and pharmaceuticals in the 1970s with Pfizer and Bayer. More recently, Asher established the non-profit LSIT Global Institute to create trusted IT—something that GLS now pursues commercially (see, “Good Practices Key to Pharma,” Bio•IT World, August 2005).

Given Asher’s background, it is no surprise that his idea of convergence depends on better simulations—building and animating 3-D protein models and watching them interact as a disease progresses, regresses, and reacts to therapies. “This should show the potential side effects or unanticipated effects of those drugs at the molecular level,” Asher says.

But not everyone sees companies clamoring to put predictive computer modeling to work. “Many large pharmaceutical companies, especially in drug discovery, still don’t seem to take full advantage of predictive modeling,” says Christophe Lambert, CEO of Golden Helix. “Lots of that has to do with the difficulty of changing a culture, and there is lots of inertia in today’s rather rigid pipeline.”

Solid predictions depend on getting good data, but Lambert does not think that today’s approaches necessarily provide the right kind of data for making the best predictions. “There’s a real disconnect between what happens in multi-well plates and what goes on in an organism, so there’s plenty of work that needs to be done in assay development.”

Some pharma companies are already embracing predictive modeling. But Simon Kearsley, who leads Merck Research Laboratories’ predictive computer modeling group, acknowledges that “modeling also goes hand-in-hand with experimentation, so it’s hard to pick out the specific impact of modeling alone.” Merck uses predictive modeling routinely to simulate the docking between a compound and a therapeutic target. “High quality high-throughput screening provides a wealth of information, and we can integrate that with our computer models,” says Kearsley.
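The article does not describe Merck’s docking methods, but docking programs in general rank candidate poses with a scoring function built from pairwise interaction terms. The sketch below—an assumption for illustration, not any vendor’s actual algorithm—sums a 12-6 Lennard-Jones term over ligand–protein atom pairs, the kind of van der Waals component that real scoring functions combine with electrostatic and solvation terms. All coordinates and parameters are toy values.

```python
import math

def lennard_jones(r, epsilon=0.2, sigma=3.5):
    """12-6 Lennard-Jones potential (kcal/mol), a common van der Waals
    term in docking scoring functions. Zero at r = sigma, attractive beyond."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def interaction_score(ligand_atoms, protein_atoms):
    """Sum pairwise LJ energies between ligand and protein atoms.
    More negative = more favorable. Real scores add many more terms."""
    total = 0.0
    for la in ligand_atoms:
        for pa in protein_atoms:
            total += lennard_jones(math.dist(la, pa))
    return total

# Toy coordinates in angstroms: two "ligand" atoms near three "protein" atoms,
# spaced so that no pair sits inside the repulsive wall.
ligand = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
protein = [(5.5, 0.0, 0.0), (0.0, 4.0, 0.0), (0.0, 0.0, 4.0)]
print(f"score: {interaction_score(ligand, protein):.2f} kcal/mol")  # negative = favorable
```

A docking engine would repeat this scoring over thousands of candidate poses and keep the lowest-energy ones; the high-throughput screening data Kearsley mentions is what such predictions get validated against.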

Tomorrow’s Tools
Vendor tools for predictive computer modeling include Golden Helix’s ChemTree and Simcyp’s ADME simulator, which models pharmacokinetics based on in vitro data.
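To see what “modeling pharmacokinetics from in vitro data” means in its simplest form—this is a textbook in vitro–in vivo extrapolation, not Simcyp’s proprietary method—one can scale microsomal intrinsic clearance to whole-liver clearance with the well-stirred liver model and estimate a half-life from a one-compartment model. The scaling factors and the volume of distribution below are illustrative assumptions.

```python
# Hedged sketch: predict hepatic clearance and half-life from an in vitro
# microsomal clearance measurement. All parameter values are illustrative.
import math

clint_in_vitro = 20.0            # uL/min per mg microsomal protein (assay result)

MG_PROTEIN_PER_G_LIVER = 40.0    # approximate human scaling factor
LIVER_WEIGHT_G = 1800.0          # approximate human liver weight
HEPATIC_BLOOD_FLOW = 1500.0      # mL/min, approximate human Q_h

def hepatic_clearance(clint_ul_min_mg, fu=1.0):
    """Well-stirred model: CLh = Q * fu*CLint / (Q + fu*CLint), in mL/min.
    fu is the unbound fraction in blood (1.0 assumed here)."""
    clint_whole = clint_ul_min_mg / 1000.0 * MG_PROTEIN_PER_G_LIVER * LIVER_WEIGHT_G
    return HEPATIC_BLOOD_FLOW * fu * clint_whole / (HEPATIC_BLOOD_FLOW + fu * clint_whole)

def half_life_hours(cl_ml_min, vd_liters=50.0):
    """One-compartment model: t1/2 = ln(2) * Vd / CL."""
    cl_l_per_hr = cl_ml_min * 60.0 / 1000.0
    return math.log(2) * vd_liters / cl_l_per_hr

clh = hepatic_clearance(clint_in_vitro)
print(f"predicted hepatic clearance: {clh:.0f} mL/min")
print(f"predicted half-life: {half_life_hours(clh):.1f} h")
```

Note that the well-stirred model caps clearance at hepatic blood flow, so even a very rapidly metabolized compound cannot clear faster than the liver is perfused—one reason such simple models remain useful sanity checks.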

Some companies even provide collections of tools. “A variety of methods we provide in Discovery Studio fall into [the predictive modeling] category,” says Paul Flook, head of life sciences R&D at Accelrys. For example, this software provides molecular-dynamic simulations, ligand docking, ligand profiling, and other features. “Clearly, the effectiveness of these methods depends on who you talk to,” says Flook, “but their broad and continued application in the industry is a compelling argument for their value. Certainly, we see increasing interest among researchers in combining different predictive methods to study more complex and diverse systems.”

Much more work is required to get the most from predictive computer modeling. Some experts hope to combine advanced simulation capabilities with enormous databases, packed with information about thousands—probably hundreds of thousands—of patients, including their genotypes, disease state, reaction to therapy, and so on. “We need that before we can get too serious with predictive computer modeling,” says Asher.

Many companies are involved in predictive modeling in some capacity. Here are a few of the vendors.

• Accelrys
• Biopredict Inc.
• Entelos
• Golden Helix
• OpenEye
• Schrodinger
• SimBioSys
• Simcyp
• Simulations Plus
• Tripos

With all of that information, Asher believes that the pharmaceutical business could avoid lots of what he calls the misinformation generated in animal studies, and even Phase I and II clinical trials. In fact, Asher believes that the power behind predictive computer modeling might one day allow new drugs to jump right to Phases III and IV. “Then, these Phases would be used to confirm what the computer said,” Asher explains. “It could eliminate 8 of the 15 years of investment in developing a new drug.”

Making drug development rely that heavily on predictive computer modeling will take more than software. “You will also need stable chemistries amenable to automated mass production of diverse, drug-like compounds,” says Lambert. For the foreseeable future, Lambert expects such predicted compounds to get tested in animals and humans. One day, though, he thinks that science might develop fully formed organs that could be tested.

Creations from Combinations
Beyond cultural issues, tomorrow’s predictive modeling must find new ways to combine information, including new and existing types of data. “The most predictive resources in drug discovery in the public domain are the screening data from the Molecular Library Initiative data in PubChem,” says bioinformatician Thomas Girke from the University of California, Riverside. “Tools to analyze these data sets efficiently will be highly relevant for the field.”
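The kind of analysis Girke describes often starts with something simple: tabulating active/inactive outcomes across assays. The sketch below runs on a few hypothetical rows shaped loosely like a PubChem BioAssay export (compound ID, assay ID, outcome); the column names and values are assumptions, and a real export carries many more fields.

```python
# Hedged sketch: compute per-assay hit rates from hypothetical screening
# results in a PubChem-like layout. Data and column names are illustrative.
import csv
import io
from collections import Counter

data = """cid,aid,outcome
101,1811,Active
102,1811,Inactive
103,1811,Active
101,2044,Inactive
104,2044,Inactive
"""

hits = Counter()
totals = Counter()
for row in csv.DictReader(io.StringIO(data)):
    totals[row["aid"]] += 1
    if row["outcome"] == "Active":
        hits[row["aid"]] += 1

for aid in sorted(totals):
    print(f"assay {aid}: {hits[aid]}/{totals[aid]} active "
          f"({hits[aid] / totals[aid]:.0%} hit rate)")
```

At the scale of the Molecular Library Initiative—millions of compound–assay outcomes—the same idea requires databases and indexed queries rather than in-memory counters, which is exactly the tooling gap Girke points to.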

The best combinations might even come from existing technology. According to Sean Ekins, senior VP of computational biology at Arnold Consultancy & Technology, the most important advances in applying predictive computer modeling to drug development might not come from a tool or technology, but perhaps “a better integration of methods and approaches that already exist, or at the very least smarter ways to decide which tools are going to be the most appropriate to answer a specific question at the right time.”

Moreover, predictive modeling needs ways to integrate different kinds of data, such as numbers from experiments and unstructured information, like text. According to Kearsley, “Merck is already steering toward advances in database mining and knowledge sharing.” He also envisions the possibility of advanced tools that simplify tomorrow’s modeling. “You can imagine compilers that can automatically parallelize algorithms to take advantage of parallel machines. Now, you must do that by hand.”

Ultimately, pharmaceutical scientists want ways to combine a wide range of data and overall knowledge—focusing all of the information into specific predictions. That could make future drugs more economical, more effective, and safer. It should also open much more of the potential drug landscape to the clinic. As Kearsley says, “The more you know, the more you can discover.”


This article appeared in Bio-IT World Magazine.
