By Mark D. Uehling
"Innovation is at an all-time low. The customers' attitude is hostile. Politicians are losing patience. There are some fundamental problems in the industry."
-- Jan Leschly, Care Capital LLC, Aug. 6, 2002
September 9, 2002 | Despite his tone, Jan Leschly is a friendly observer of the drug industry. He was CEO at SmithKline Beecham for six years and is now chairman of Care Capital, a venture firm focusing on the life sciences.
And while Leschly's comments to the seventh annual Drug Discovery Technology World Congress in Boston struck a pessimistic note, there were some promising technology developments at the conference. As bad as recent news might look, with drug companies under attack on Wall Street and in Washington, the conference showcased a variety of intriguing technological approaches that seemed promising enough to produce results. The open question remains: When?
One especially promising track of the conference was exclusively devoted to IT. Its focus: integrating data from a variety of scientific endeavors and technologies in an effort to streamline the present chaos of instruments and software that can't communicate with each other. "We have a format nightmare," says Robert DeWitte, director of marketing for Advanced Chemistry Development Inc. "We have a data integration problem."
Speakers from the largest companies were surprisingly forthright in talking about solutions. They offered sanitized but tantalizing glimpses of major IT projects aimed at efficiently analyzing towering piles of raw information to accelerate drug development.
For Peter Smith, director of discovery research at Wyeth Research, the familiar buy-vs.-build dilemma in IT has turned out to be especially painful for the pharmaceutical industry. When a drug company builds its own software, Smith says, "as soon as you have the stuff in production, it's now a legacy."
But buying can be worse. "The software you buy is often generic," Smith says, "and so has little competitive advantage for you. You spend most of your time customizing it to your organization."
Smith proposes a more modular, nonproprietary approach in which all of a company's tools run on a network using standard file formats. "No one vendor can supply all the pieces you want with all the variety you want," he says. He prefers an a la carte approach, ticking off modular products from SciTegic and Spotfire. "I didn't buy all of Spotfire," Smith says. "I just bought the visualization component."
Smith says he won't buy any more applications that do not support industry-standard formats. He is partial to Java, which helped kick off the component revolution in software.
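Smith's a la carte philosophy comes down to tools that exchange data in vendor-neutral formats, so that any vendor's module can slot into the pipeline. A minimal sketch of that idea, with invented component and field names (none of this reflects an actual SciTegic or Spotfire interface):

```python
import csv
import io

# Two hypothetical "components" chained through a standard format (CSV),
# standing in for modular tools bought from different vendors.

def filter_actives(csv_text, threshold):
    """Keep only compounds whose assay activity meets the threshold."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return [r for r in rows if float(r["activity"]) >= threshold]

def summarize(rows):
    """A stand-in for a visualization component: count hits, flag the best."""
    best = max(rows, key=lambda r: float(r["activity"]))
    return {"count": len(rows), "best": best["compound"]}

data = "compound,activity\nCPD-1,0.2\nCPD-2,0.9\nCPD-3,0.6\n"
result = summarize(filter_actives(data, 0.5))
print(result)  # {'count': 2, 'best': 'CPD-2'}
```

Because each stage reads and writes the shared format rather than a proprietary one, either component could be swapped for a competitor's without touching the other.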
At Bristol-Myers Squibb Co., Deborah Loughney is director of computer aided drug design. She too has begun to reassess IT and the entire drug discovery process. Unimpressed with existing tools for visualizing data, which may be little more than Excel spreadsheets, she put together a team that developed a Web-based method to analyze biological molecules.
The Bristol-Myers program is called a Structure Modeling Analysis Research Tool -- nicknamed a "SMART Idea." It's a prototype that gives scientists both wet-lab experimental and in silico computational data. Bench scientists like the program so much that network performance has declined slightly under heavy usage. "We have more informed decision-making," Loughney says. "We think the architecture and the application will support collaboration among scientists."
Simplicity and ease of use were the drivers. "People don't want to spend time reading manuals," Loughney says. "They have to be able to get in and figure out how to use the system readily." Broad participation, however, was even more important. Says Loughney: "This was designed by scientists for scientists. You need to have commitments from all team members."
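The internals of SMART Idea are not public, but its core promise, showing wet-lab and computational data side by side, amounts to a join on a shared compound key. A hedged sketch, with hypothetical compound IDs and field names:

```python
# Hypothetical illustration: merge experimental assay results with in silico
# predictions per compound, the kind of combined view a tool like SMART Idea
# would present in the browser. All identifiers and fields are invented.

experimental = {
    "CPD-7": {"ic50_nM": 42.0},
    "CPD-8": {"ic50_nM": 310.0},
}
computational = {
    "CPD-7": {"docking_score": -9.1},
    "CPD-8": {"docking_score": -6.4},
}

def merged_view(exp, comp):
    """Combine both data sources for each compound present in both."""
    view = {}
    for cid in exp.keys() & comp.keys():
        view[cid] = {**exp[cid], **comp[cid]}
    return view

rows = merged_view(experimental, computational)
print(rows["CPD-7"])  # {'ic50_nM': 42.0, 'docking_score': -9.1}
```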
One of the most jaw-dropping presentations of the week was by David Hodgson of Pfizer, who is the company's director of discovery research for informatics. He sketched the company's proteomics work almost apologetically, as if it were mouthwash research.
But Pfizer has clearly neglected neither proteomics research nor the related chore of uniting the vast quantities of data pouring out of that work. Hodgson's 15-member IT team, working with the proteomics scientists, has knit together four completely different applications -- three commercial and one internally developed.
The first and most mundane program is a scientific workflow application from Cimarron Software that tracks samples and prompts lab technicians about what to do next. Says Hodgson: "You really want the scientific staff focused on interpretation, not nitpicking day-to-day management issues."
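The essence of such a workflow tool is simple: record where each sample stands and tell the technician what comes next. A toy sketch of that logic (the step names are invented; this does not reflect Cimarron's actual product):

```python
# A minimal sample tracker: each sample moves through a fixed sequence of
# steps, and the system prompts the technician with the next one.

WORKFLOW = ["received", "prepped", "run_on_gel", "imaged", "analyzed"]

class SampleTracker:
    def __init__(self):
        self._progress = {}  # sample_id -> index of last completed step

    def advance(self, sample_id):
        """Mark the next step complete and return its name."""
        i = self._progress.get(sample_id, -1) + 1
        self._progress[sample_id] = i
        return WORKFLOW[i]

    def next_step(self, sample_id):
        """Prompt: what should the technician do with this sample next?"""
        i = self._progress.get(sample_id, -1) + 1
        return WORKFLOW[i] if i < len(WORKFLOW) else "done"

tracker = SampleTracker()
tracker.advance("S-001")            # sample logged as "received"
print(tracker.next_step("S-001"))   # prepped
```

The point Hodgson makes is that this bookkeeping, however mundane, is exactly what should not occupy the scientific staff.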
The next commercial tool is image analysis software to analyze 2-D gels. These abstract images of separated proteins are marked with streaks, spots, and blobs. To analyze the patterns, Pfizer is using Z3 from Compugen and ProteinMine from Scimagix -- not surprising, as these are two leading tools.
Hundreds of images a month now flow into the Cimarron product, where the gels are analyzed first in ones and twos, then in much larger batches, revealing patterns previously beyond hope of detection. Putting all of this together is what may make Pfizer a bit more productive than its competitors, Hodgson suggests.
Some of the utility of the project traces to two summer interns who performed the tedious task of loading 5,000 gel images into an Oracle database. Now Pfizer scientists all over the world can search the gel database for images of unknown compounds and compare them to known proteins.
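What such a search amounts to, in spirit, is comparing an unknown gel's spot pattern against stored reference patterns and returning the closest match. A hedged sketch, assuming each gel has already been reduced to a small feature vector by the image-analysis software (the proteins, vectors, and distance metric are all invented; the real system queries Oracle through the vendors' tools):

```python
import math

# Invented reference "spot pattern" vectors for two known proteins.
reference_gels = {
    "albumin":   [0.9, 0.1, 0.4],
    "myoglobin": [0.2, 0.8, 0.5],
}

def closest_match(query, refs):
    """Return the reference protein whose pattern is nearest the query."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(refs, key=lambda name: dist(query, refs[name]))

print(closest_match([0.25, 0.75, 0.5], reference_gels))  # myoglobin
```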
Another key was getting the software vendors to cooperate with each other, rather than compete against each other. "We made the decision to outsource our software, but we wanted these guys to play nice. Thankfully, they do," Hodgson says.
For its in-house effort, Pfizer has developed a program called Pathways XP. It's a tool that combines RNA profiling and statistical analysis of gene expression. Here again, the Pathways XP workflow is fully integrated with Pfizer's three commercial tools from Cimarron, Scimagix, and Compugen.
"This is where it works out perfectly," Hodgson says. "Folks in the lab were literally staring at gels. They would be dreaming of spots. That is super low throughput. You're relying on people to remember a spot they saw on one gel to compare it with another. This speeds up the process 10-fold."
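Pathways XP's internals aren't public, but the statistical step it pairs with RNA profiling can be illustrated with a standard test: scoring whether a gene's expression differs between two conditions, rather than relying on a scientist's memory of a spot. A minimal stand-in using Welch's t statistic (the expression values are invented):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical normalized expression levels for one gene.
control = [1.0, 1.2, 0.9, 1.1]
treated = [2.1, 2.4, 1.9, 2.2]

t_stat = welch_t(control, treated)
print(round(t_stat, 2))
```

A large-magnitude statistic like this one flags the gene for follow-up automatically, which is the kind of throughput gain Hodgson describes.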