Not bad if you believe that attempting to demonstrate ROI for new technologies has led many reckless souls down blind alleys from which they never returned. The endless and inconclusive efforts to cost-justify the PC are a great example. In the end, the PC's proliferation is its best argument, and maybe that's how it should be. Bland statistics often fail where concrete examples succeed, which is sort of the idea behind the Best Practices program.
Starting on page 60 of this issue, you will find a complete listing of all the entries in this year's competition, and overviews of the three grand prize winners (Millennium Pharmaceuticals, Baylor College of Medicine, and Solutia) starting on page 55. Here, however, are some observations from a random walk down BP lane, which should provide a flavor of the impressive range of technologies being used by these companies and the problems they are seeking to solve.
Let's start with data mining. Can it work? The Abramson Family Cancer Research Institute of the Abramson Cancer Center at the University of Pennsylvania ran a pilot program to answer that question. Using text-mining software from SPSS (LexiQuest Mine), researchers built a tool to extract knowledge of disease progression, disease presentation, and co-morbidities when comparing sporadic and genetically linked breast cancers (e.g., those involving BRCA1 or BRCA2). It did work.
The Abramson entry notes: "It's impossible to read every paper published ... LexiQuest resolves this issue by 'reading' the papers and presenting their concepts and relatedness, enabling researchers to use 'all of the data' when drawing conclusions from journals." Abramson plans to extend this approach to other diseases and research areas.
Stats, LIMS, and Voice Response
Applying the power of statistical analysis to the flood of data was a minor theme among entrants. Translational Genomics Research Institute (TGen) has built a tool to compare gene expression data from different manufacturers' chips. TGen conducted expression experiments on liver and heart tissue with chips and analysis platforms from Affymetrix, Agilent, Amersham, and Mergen.
TGen's entry stated: "We hope to have much more confidence in the data obtained by our scientists, and greater power in hypothesis testing while still being able to provide data to public databases quickly, easily, and accurately."
Another interesting entry was submitted by Genentech, which is using interactive voice response (IVR) technology and statistical modeling to keep hospitals running its clinical trials stocked with the right number of trial kits. Genentech wants to control manufacturing and distribution costs while keeping patients supplied. Patients enroll by phone, and the system automatically models demand and schedules manufacturing and shipping.
Several entries demonstrated the dramatic impact even less glamorous technology can have on research. Aventis recently implemented a sophisticated LIMS (Nautilus from Thermo Electron) in its Cambridge (Mass.) Genomics Center, which serves Aventis researchers throughout the world. The system is up and running in the United States and being rolled out to Germany. Researchers place work orders and track projects over the Web.
According to Aventis, "One unanticipated benefit from the reporting capabilities coupled with the long-term data storage that the LIMS provides is the ease with which samples can be returned to disease groups when needed for further characterization even many months after the samples have been processed and archived away in freezers."
The amassing of enormous computing capacity was evident in many entries. The State University of New York at Buffalo is creating a giant cluster with 2,000 Dell servers that lifts it to #22 among the Top 500 supercomputers. Unraveling structural biology knots is the target of all this computer power.
Predictably, projects often don't unfold as planned, and some companies were quite forthright. Abbott Laboratories' project to create a global database for preclinical biological testing data came to a screeching halt after a few months. Its existing system was a muddle of heterogeneous databases that communicated poorly and needed to be replaced. "As everyone is well aware, projects can spin out of control. Suffice it to say, the first set of rollouts were close to disastrous," the entry reported.
Early planning wasn't the problem. The RFP was detailed. Started in 1999, the project was expected to take two years. Unrealistic assumptions, however, were problematic. After regrouping — one lesson was "Listen to the effort estimates of the staff, not manager projections or vendor predictions" — the project is nearing completion, with about 90 percent of Abbott scientists using the system and a forecast of $6 million in annual savings.
Clearly, a field of 50 entries contains far more material than can be covered here, and some of it will turn up in articles over the next few months. But our attention has already turned to improving next year's competition.
How can we better practice what we preach? Send me your thoughts on ways to improve Bio·IT World's Best Practices Awards at firstname.lastname@example.org.
John Russell is executive editor of Bio·IT World, and his column, The Russell Transcript, will alternate with The Dodge Retort.
PHOTO BY WEBB CHAPPELL