
Working Out the Flow

By Mike May

Sept. 18, 2006 | A couple of years ago, GlaxoSmithKline (GSK) needed help keeping all of its data and applications working together smoothly. The pharmaceutical giant turned to InforSense. “Data and application integration is still a real issue,” says Jonathan Sheldon, chief scientific officer at InforSense. As many biopharma companies continue to fight similar battles, an army of informatics vendors is offering the weaponry to solve those very problems.

“[GSK] had for a while a variety of data sources and lots of different applications from various vendors,” Sheldon says, “but lacked the glue to tie it all together.” To provide that glue, InforSense taught GSK’s scientists how to use InforSense KDE, which stands for knowledge discovery environment. Sheldon says, “We find with large pharmas they are very keen to be self-sufficient. So we trained them in the software so they can do the integrations themselves using the SDK [software development kit] and API [application programming interface] we provide.” After that training, for example, scientists at GSK could wrap new applications into the workflow environment.

 Jake Leschly

Jake Leschly, president and CEO of Ingenuity Systems, asks: “How do the scientists make decisions in this workflow so that they do not lose compounds late?”

Just what is workflow in biopharma? It can be many things: big-picture items, such as tracking a compound from discovery to market, or more specific tasks, such as getting data from an instrument into a database. In all of these processes, says Jake Leschly, president and CEO of Ingenuity Systems, companies must consider a key question: “How do the scientists make decisions in this workflow so that they do not lose compounds late?” He adds, “The worst case is losing a compound after a Phase III trial.” To avoid such expensive losses, biopharmas try to improve their knowledge — say, getting a better understanding of the biology behind a disease — at every step in a drug’s life.

To keep pharmaceutical work flowing smoothly, important information must also be combined at key steps, even during clinical trials. “If you can make those associations of data,” says Carol Kovac, general manager of IBM Life Sciences, “then you can start to streamline processes of data management in clinical trials, regulatory submissions, and even monitoring after the drug has been released.” With so many rules and regulations to follow, biopharma companies seek streamlining wherever possible.

Automating the Analysis
Today’s biotechnology and pharmaceutical scientists often look like jugglers. They juggle instruments to collect data. They juggle programs to look at data. Despite so many advances in software, scientists still struggle with analyzing information, and data can create logjams. “Researchers spend lots of time performing manual, error-prone, data-manipulation tasks,” says Pat Hooper, senior director of professional services at Tripos. But how much time does this really take? Dave Lowis, senior director of product management at Tripos, says, “We generally see average researchers spending 10 to 20 percent of their time using data tools and trying to get the view of the data that they want.” Even then, case studies from Tripos show that scientists in biotech and pharma often lack the information needed to make the best decisions.

 Smart-Idea technology from Tripos
Making the best decisions depends on combining information, especially data from different assays or instruments. Darryl Gietzen, product manager of bioinformatics at Accelrys and SciTegic, says: “Oftentimes companies are not looking for cutting-edge algorithms anymore. Instead, they want to ask more of their data through better utilization.” Much of that involves getting the data integrated. Without that, says Gietzen, “You end up with people coming to different conclusions about the same problem.”

Gietzen should know. Before working with SciTegic’s Pipeline Pilot, he built a data-processing pipeline based on XML and Perl and suffered through trying to unify data from various sources. When he was first exposed to Pipeline Pilot, his reaction was: “Wow! I wish I’d had this a couple years ago.”

 Pipeline Pilot
In essence, Pipeline Pilot automates assay analysis and data integration. Gietzen says, “We focus on chem- and bioinformatics, but you can build it to go beyond that.” By dragging and dropping icons, scientists can build pipelines. “Modules can be fit together in nearly any combination to meet specific needs,” says Gietzen. “It’s also fun to play with it.” It lets a scientist add cycles to a process, all without needing to involve a software developer.
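The drag-and-drop idea Gietzen describes — small modules snapped together so each one’s output feeds the next — can be sketched in a few lines of Python. The function names and data are illustrative stand-ins, not SciTegic’s actual API:

```python
# A minimal sketch of the pipeline pattern behind tools like Pipeline Pilot:
# small processing components chained so each one's output feeds the next.
# All names and data here are hypothetical, for illustration only.

def normalize_ids(records):
    """Uppercase compound identifiers so sources can be merged."""
    return [{**r, "id": r["id"].upper()} for r in records]

def filter_active(records, threshold=50.0):
    """Keep only assay hits at or above an activity threshold."""
    return [r for r in records if r["activity"] >= threshold]

def run_pipeline(records, *steps):
    """Apply each step in order, like connecting icons on a canvas."""
    for step in steps:
        records = step(records)
    return records

assay_data = [
    {"id": "abc-001", "activity": 72.5},
    {"id": "abc-002", "activity": 12.1},
]
hits = run_pipeline(assay_data, normalize_ids, filter_active)
# hits -> [{"id": "ABC-001", "activity": 72.5}]
```

The appeal of the pattern is that each module stays independent, so a scientist can reorder, add, or remove steps without touching the others — the graphical canvas simply hides this composition behind icons.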

It does, however, take some tweaking to run with in-house applications. To add an internal application to the Pipeline Pilot process, says Gietzen, “you need to do some work there. But SciTegic provides tools that integrate other code into Pipeline Pilot.”

Other companies also aim to improve the flow of information from one stage to the next — say, from discovery to development. Thermo Electron addresses that problem with a collection of laboratory information management systems (LIMS). Dave Champagne, vice president and general manager of informatics at Thermo, says, “Pharmaceutical laboratories support various functional areas, such as lead identification, analytical development, manufacturing, and so on.” He adds, “We address the needs of each unit in pharma’s value chain.” For example, Galileo LIMS focuses on ADME (absorption, distribution, metabolism, and excretion) and toxicology research, and Watson LIMS handles bioanalysis.

“We embed workflow in our products and embed the connectivity,” explains Champagne. This way data can move from one stage to the next. For example, WuXi PharmaTech — a Shanghai-based company that provides research and development services for biotech and pharma — recently adopted Watson LIMS to keep track of data from early discovery through preclinical studies.

Passing Along the Improvements
In many cases, just giving a company new software does not solve every problem. Consequently, many software companies develop custom solutions for clients. For example, Gietzen describes an instance of working with a major pharmaceutical company that wanted an architecture combining chem- and bioinformatics. “They wanted it all to exist in one interface and to be able to connect applications in a complex processing pipeline,” Gietzen says. This pharma wanted to run all of this information on one computer cluster and make sure that the scientific end-users could get any information that they needed. To do that, SciTegic created a workflow pipeline of the company’s chem- and bioinformatics applications — based on Pipeline Pilot — and set up advanced components so that it would all run on the company’s cluster. Better still, this team made that pipeline available to users through a Web service.

Gietzen and his colleagues have put together other pipelines to solve common problems. For instance, Gietzen says that companies starting with a list of genes might want to connect that list to pathways. “From that,” he says, “they discern what metabolites go with that pathway. Finally, they determine the chemical structure of the metabolites for use as possible biomarkers.” Similar workflows — bridging biology and chemistry — help companies find structures that could make good drugs.
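The gene-to-biomarker workflow Gietzen outlines amounts to chaining lookups: genes map to pathways, and pathways map to metabolites whose structures can then be examined. A hedged sketch, in which the lookup tables are hypothetical stand-ins for real pathway and metabolite databases:

```python
# Illustrative sketch of the workflow described above: connect a gene list
# to pathways, then collect the metabolites associated with those pathways
# as candidate biomarkers. The tables below are invented examples, not
# entries from any real database.

PATHWAYS = {
    "HMGCR": ["cholesterol biosynthesis"],
    "TP53": ["apoptosis"],
}
METABOLITES = {
    "cholesterol biosynthesis": ["mevalonate", "lanosterol"],
    "apoptosis": [],
}

def genes_to_candidate_biomarkers(genes):
    """Follow gene -> pathway -> metabolite links, deduplicating results."""
    pathways = {p for g in genes for p in PATHWAYS.get(g, [])}
    return sorted({m for p in pathways for m in METABOLITES.get(p, [])})

candidates = genes_to_candidate_biomarkers(["HMGCR"])
# candidates -> ['lanosterol', 'mevalonate']
```

In a production workflow each dictionary lookup would be a query against a curated pathway or metabolite resource, but the chaining logic is the same.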

Ultimately, biotech and pharma companies want to build bridges that connect all of their information. Instead, biopharmas often find that their software addresses only a single set of problems in a specific domain — say, biology or chemistry. In recalling a collaboration with AstraZeneca, Sheldon of InforSense says that company “was thinking beyond one problem. They wanted a platform to go from biology to chemistry or from biology to clinical data.” By using InforSense KDE — and some of InforSense’s other products, such as InforSense IOE, which provides integration for Oracle applications — Sheldon’s team helped AstraZeneca and GlaxoSmithKline work across all of R&D in a truly translational-research capacity. “We created a silo breaker,” Sheldon says.

Improving Innovation
Some tangible losses can be attached to inefficient workflow techniques for handling data. As Hooper and Lowis of Tripos found, researchers are spending upward of one-fifth of their time manually manipulating data. Just as important, though, workflow inefficiency creates intangible losses. “Lots of decision makers cannot use all of the data that was generated because it is too hard to get it all,” says Lowis. “Also, all of that time working with spreadsheets could have been applied to making the next great compound.” He concludes: “There’s lots of innovation to be gained from making the tedious stuff easy.”

IBM’s Kovac also expects better workflow to trigger more innovation. “Better feedback — say, from clinical trials to discovery scientists — could lead to selecting better compounds from the start,” she says. Such benefits prove hard to gauge, but certainly are plausible.

Likewise, biopharma researchers often struggle to track data toward the end of a drug cycle. “Looking at workflow from a big-picture perspective,” says MaryJo Zaborowski, senior vice president and global head of global research informatics at Roche, “I want to ensure that we have systems in place to collect and manage adverse events.” To improve Roche’s management of such data, Zaborowski and her colleagues developed a proprietary system that handles adverse events from pre- and post-market periods.

At InforSense, scientists already see a bridge extending from workflow to knowledge. Sheldon sees headway being made on integrating data and applications, and the next step is understanding how to exploit the knowledge that the workflow captures. “What does the way you put a workflow together tell you about your internal research processes?” Sheldon asks. After figuring out that integration, he says, “Workflow can go from integration to an environment that supports decision making.” Making that move requires ways to store and manage workflows, mine workflow processes that are captured, and deliver that information to end users in a form that they can easily understand and that is instantly reusable on the next related problem. Sheldon expects significant advances in these areas in the next year.

Demand for Better Decisions
In the end, biopharma executives and vendors agree on the need to make better decisions and make them sooner. The scientists at Tripos help such decision making with their Smart-Idea technology. “This provides users with simple, self-service access to all data,” says Hooper. For example, Smart-Idea technology can gather data from various experiments and then display the results in an intuitive format.

Early in 2006, Tripos launched a collaboration with Accenture and Wyeth, which dubbed the collaboration the “Next-Generation Discovery IT Project.” Steve Howes, senior director of bioinformatics at Wyeth Research and project leader, says: “We are trying to recraft our infrastructure to reduce the time that scientists spend trying to access, find, and analyze data.” Howes expects elements of the new system to go online early in 2007. Hooper and Lowis of Tripos anticipate that this project will simplify data sharing and decision making about compounds in the discovery phase at Wyeth. In addition, Smart-Idea technology works on open-standard software, which Hooper claims, “adapts to the needs of the pharma industry.” In the end, Howes hopes for one major result: “We want to give scientists more time to do science.”

Getting time to do science also involves dealing with regulations and expectations from the FDA. Consequently, many areas of drug development and manufacturing must comply with FDA rules. To manage data under those circumstances, IBM developed its Solution for Compliance in a Regulated Environment (SCORE). This package provides document management with a twist. “In the past,” says IBM’s Kovac, “people looked at document management like a repository, like a warehouse for documents.” Instead, IBM wanted more from it. “We provided the warehouse but also linked it to a flexible workflow solution,” she says. So in addition to storing documents, SCORE can also lay out a workflow — such as a protocol for a clinical trial — and then link the relevant documents to the right steps in the workflow. “Then, if you change a document or change the workflow,” says Kovac, “SCORE maps the changes in a seamless way.”

If, for example, some step in a clinical trial does not produce the desired data, scientists could rework the protocol and SCORE would keep the right documents in the right places, which could save the need to start over in document management. Biotronik, a medical-devices company with headquarters in Berlin, uses SCORE as a companywide, document-management system and to integrate that management with a variety of business processes.
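The linkage Kovac describes — workflow steps holding references to documents, so that revising a document updates every step that points at it — can be sketched with a simple data structure. This is a hypothetical illustration of the pattern, not IBM's SCORE implementation:

```python
# Hedged sketch of linking documents to workflow steps. Steps store document
# IDs, not copies, so a single revision is visible from every linked step.
# Class and method names are invented for illustration.

class DocumentWorkflow:
    def __init__(self):
        self.docs = {}    # doc_id -> current version
        self.links = {}   # step name -> set of linked doc_ids

    def link(self, step, doc_id, version):
        """Attach a document (at some version) to a workflow step."""
        self.docs[doc_id] = version
        self.links.setdefault(step, set()).add(doc_id)

    def revise(self, doc_id, new_version):
        """Update a document once; all linked steps see the change."""
        self.docs[doc_id] = new_version

    def documents_for(self, step):
        """Return the current versions of documents linked to a step."""
        return {d: self.docs[d] for d in self.links.get(step, set())}

wf = DocumentWorkflow()
wf.link("enrollment", "protocol-A", "v1")
wf.revise("protocol-A", "v2")
# wf.documents_for("enrollment") -> {"protocol-A": "v2"}
```

Storing references rather than copies is what lets a reworked protocol propagate to every affected step without restarting document management from scratch.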

 Ingenuity Pathway Analysis
Some companies generate even broader collaborations — such as trying to integrate all of biology. At Ingenuity Systems, Leschly says, “Over seven years, we have built a knowledge model of biology.” That model consists of information about genes, biomarkers, and diseases from the biomedical literature. “Our model uses computation to generate biological insights that allow faster and smarter decisions at all steps in the drug discovery and development workflow,” says Leschly. Users can apply proprietary knowledge and information from this model to understand their data in the context of biological mechanisms, pathways, and models of disease.

In an October 2005 paper* in The New England Journal of Medicine, Harvey Pass of the New York University School of Medicine and his colleagues used Ingenuity’s Pathways Analysis (IPA) to establish serum osteopontin as a new biomarker for mesothelioma, a cancer often caused by exposure to asbestos. Leschly says the discovery of new biomarkers can be used in a wide range of applications: pharmacodynamics, patient stratification, and toxicity and efficacy markers, all leading to better decisions in research and development. In addition, new biomarkers can provide clinical diagnostic and prognostic markers.

Lessening Logjams Ahead
Still, some problems can exist in collaborations. As Zaborowski of Roche points out, “The industry is not that large, and it creates an adverse situation for companies that create products.” The small size of this market can force vendors to generate revenue by using proprietary systems and requiring customers to purchase upgrades and add-ons. The proprietary aspects can impede the knowledge workflow in biotech and pharma. As a result, the informatics group at Roche often creates in-house solutions that make data available inside the company. For example, Roche writes its own code for mining data and then links that with products from vendors. “Ideally,” she says, “vendors should use open-source code, but they usually don’t.” More open-source products would greatly improve the possibilities of improved workflow across all biotech and pharma companies.

Lowis of Tripos sees four specific benefits that pharma could gain from improved workflows in general. The first is productivity gains for researchers. “Better workflow can help researchers understand data better and act on it faster,” he says. Second, a company can achieve overall gains in efficiency, which could lead to faster “go” or “no-go” decisions. Third, Lowis says that improved workflows can reduce internal information-technology costs. “Right now,” he says, “some pharma companies spend a lot of money on IT and gain little benefit. Better workflows can make sure that users get easier access to data and do so with less IT overhead.” Fourth, Lowis says that the most exciting result will be “really empowering users to get the most from their data.” He says, “The researchers will spend less time struggling with Excel and gain more time to think.”

Better management of workflow issues in biotech and pharma could even change fundamental aspects of these sciences in the near future. Thermo’s Champagne says, “People will want to compare real data and in silico data to see how modeling and simulation can predict results.” Once all of the data — wet-lab results, in silico simulations, past results, and so on — can be accessed together, scientists will have a much better idea of how much lab work, clinical trials, and so forth will be required for new compounds. Champagne says, “This is opening new opportunities for the scientists to accelerate drug discovery and development.”

*Pass, H.I. et al. “Asbestos exposure, pleural mesothelioma, and serum osteopontin levels.” N Engl J Med 353, 1564-73; 2005.

Photo by Seth Affoumado

