By Mark D. Uehling
February 10, 2003 | Officially, the new drug application (NDA) was withdrawn at the request of the large pharmaceutical company that submitted it. The NDA had been rejected by the FDA for deficiencies in “data quality.” This phrase was news to the company, which then dispatched an executive to attend a conference on the topic.
Data quality could be a bonanza for consultants. Experts in the field believe that those who embrace their ideas could clean genomic databases riddled with inaccurate sequences and improve how a wide variety of scientific information percolates from one type of researcher to another. “The pharmaceutical companies should try to manage their information as a product and establish procedures accordingly,” says Richard Wang, a professor at MIT and principal investigator of its information quality program.
In just five years, Wang notes, titles such as vice president for data quality and director of data quality have become common. Wang is trying to find money for research to create software that would filter out problems in data quality the way that treatment plants purify water.
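The "treatment plant" idea Wang describes can be pictured as a pipeline of checks that clean records must pass before flowing downstream. The sketch below is purely illustrative; the field names and rules are invented for the example and are not drawn from any real system.

```python
def not_empty(record):
    """Reject records with missing or blank fields."""
    return all(v not in (None, "") for v in record.values())

def plausible_dose(record):
    """Hypothetical range check: dose must be a positive number."""
    return isinstance(record.get("dose_mg"), (int, float)) and record["dose_mg"] > 0

def purify(records, checks):
    """Pass records through every check; divert rejects for review,
    the way a treatment plant separates out impurities."""
    clean, rejects = [], []
    for r in records:
        if all(check(r) for check in checks):
            clean.append(r)
        else:
            rejects.append(r)
    return clean, rejects

records = [
    {"subject": "S-001", "dose_mg": 50},
    {"subject": "", "dose_mg": 50},        # missing subject ID
    {"subject": "S-003", "dose_mg": -10},  # implausible dose
]
clean, rejects = purify(records, [not_empty, plausible_dose])
print(len(clean), len(rejects))  # → 1 2
```

The point of the design is that rejects are quarantined, not silently dropped, so the "impurities" remain available for root-cause analysis.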
But the secretive ways of the pharmaceutical industry, not to mention the death grip its scientists maintain on their own data, could make it difficult for companies to raise data quality. In general, Wang notes, “the more people that have access to data, the more accurate it is.” As an example of very high-purity data, he cites payroll stubs, which meet the needs of employees, employers, and government.
Tom Redman, president of Navesink Consulting Group, was one of the first consultants to explore the topic. Reluctance to confront the problem is natural, he says. Many companies insist their data-generating methods are so unique and esoteric as to defy correction by outsiders. But that may be changing. “Virtually everyone I’ve talked to, in their heart of hearts, has known that the data is bad and that that’s costing them money,” says Redman, who spent 15 years at Bell Labs and worked on data quality for AT&T.
The fact that drug company chemists are probing the boundaries of scientific knowledge -- that they are not assembling flawless Toyota minivans -- is irrelevant to Redman. The key is to let downstream users of a particular type of information assess its quality. Instead, most companies adopt an empty metric of data quality, some safe indicator accepted only by the people who generated the data in the first place. Says Redman: “The things you need to do to make data quality better have been proven in industry after industry. If it turns out that these techniques are not relevant, it will be the first time after the million times people have offered excuses.”
Larry P. English, president of Information Impact International Inc., a Brentwood, Tenn., consultancy, says he has watched personnel at a large drug company enter data from a clinical trial. “I’ve watched errors as they were being keyed, without being caught,” English notes ruefully. “In many organizations, the way they’re tackling information quality is reactively, realizing they have major problems in databases and data warehouses.”
That approach, English says, is both expensive and doomed. He notes that, as manufacturing guru W. Edwards Deming taught Japanese carmakers, quality designed into a process is cheaper than fixing defects after the product is sold. So it is with data. Smart companies will have no choice but to embrace data quality, English says. “Leading organizations like Aera Energy LLC and Intel Corp. are making a significant economic difference in eliminating ineffective processes in information.”
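The contrast English and Deming draw — designing quality into a process rather than repairing the database afterward — amounts, in software terms, to validating each value at the moment it is keyed. The sketch below illustrates that idea only; the field rules are invented for the example, and real clinical-trial edit checks are far richer.

```python
import re

# Hypothetical per-field edit checks, applied at data entry.
RULES = {
    "subject_id": lambda v: re.fullmatch(r"S-\d{3}", v) is not None,
    "systolic_bp": lambda v: v.isdigit() and 60 <= int(v) <= 250,
}

def validate_entry(field, value):
    """Return an error message at keying time, or None if the value passes."""
    rule = RULES.get(field)
    if rule is None:
        return f"unknown field: {field}"
    return None if rule(value) else f"{field}: {value!r} rejected at entry"

print(validate_entry("systolic_bp", "12"))   # dropped digit caught immediately
print(validate_entry("systolic_bp", "120"))  # → None (accepted)
```

Catching the miskeyed "12" at entry costs one message box; finding it in a locked database months later costs a query, a correction, and an audit trail.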
There are also regulatory reasons to worry about data quality. A federal law, informally known as the Information Quality Act, was passed in 2001. A related U.S. Office of Management and Budget regulation, “Section 515,” went into effect in October 2002. It is either an overdue attempt to correct federal rules based on bad facts -- or an ingenious method to overturn environmental regulations. Either way, the regulation could affect the FDA and the companies that send it data by the truckload.
That’s because Section 515 requires every federal agency to create new procedures under which anyone can challenge any data that an agency deems “influential.” The Washington lobbying organization of the major drugmakers says it is not tracking the issue. In theory, however, Section 515 may let Drug Company A challenge the data quality in any FDA publication based on information supplied by Drug Company B.
Still, no one at the FDA is agonizing about Section 515 just yet, says Dr. Steven K. Galson, deputy director of its Center for Drug Evaluation and Research. “If we weren’t paying attention to the quality of the data, we and the public would be in deep trouble already,” Galson says. That focus on data quality is hardly new to regulated companies, he says. The one dark note he will utter is, “The drug industry needs to pay attention carefully to this guideline and realize that the quality of their data is going to perhaps be held up to more public scrutiny under the guidelines than before.”
‘Collapse or Breakdown’
Ken Sloan is more worried about Section 515. Once data quality has been challenged, the founder of Synergeist consultancy says, it is late in the day to tinker with the underlying processes. But he is optimistic that investments in data quality will pay big returns.
The catch: The short term gets ugly. “Every major information quality project,” Sloan says, “comes out of some kind of collapse or breakdown or crisis, and the first steps are to assess scientifically how bad things really are. This is not only painful for the people in the company, but it can be damaging to the company’s image if this information is leaked externally.”
Trust in the Data?
Sloan notes an irony: The managers of manufacturing at pharmaceutical companies adopted total quality management (TQM) long ago. It’s the white-coated crowd in the labs that is behind now. “What about the mountains of data that are being created and referenced as a part of the drug development process?” Sloan asks. “Is anyone applying TQM techniques to measure, assess root causes, and correct problems in this data? The answer appears to be no.”
“What benefits might we get if we could incrementally and continuously improve the correctness and completeness of this data? How many recent difficulties in new drug applications have boiled down to the single phrase from the FDA (spoken or unspoken): ‘We don’t trust your data’?”
IT vendors such as Ascential Software, Firstlogic, Group 1 Software, Innovative Systems Inc., and Trillium Software are offering applications or services to improve data quality. Drug companies are buying them. But the tools tend to be designed more for customer relationship management (CRM) than for the laboratory. One key need: applications to monitor the quality of entire databases or specific annotations.
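The database-level monitoring the text calls for means periodically scoring a whole table on dimensions such as completeness and validity, rather than cleaning records one at a time. Below is a minimal sketch of that idea, assuming invented column names and rules; it reflects no particular vendor's product.

```python
def quality_report(rows, validators):
    """Return per-column completeness and validity fractions for a table."""
    report = {}
    for col, is_valid in validators.items():
        values = [r.get(col) for r in rows]
        present = [v for v in values if v not in (None, "")]
        completeness = len(present) / len(values) if values else 1.0
        validity = (sum(1 for v in present if is_valid(v)) / len(present)
                    if present else 1.0)
        report[col] = {"completeness": completeness, "validity": validity}
    return report

rows = [
    {"gene": "BRCA1", "sequence": "ATGC"},
    {"gene": "", "sequence": "ATXG"},  # missing name; 'X' is not a valid base
]
report = quality_report(rows, {
    "gene": lambda v: v.isupper(),
    "sequence": lambda v: set(v) <= set("ACGT"),
})
print(report["gene"]["completeness"])  # → 0.5
print(report["sequence"]["validity"])  # → 0.5
```

Run nightly and trended over time, such scores turn data quality from an anecdote into a metric a manager can be accountable for.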
To be fair, the consultants interviewed say several Big Pharma companies are sending executives to data quality conferences in the United States and Europe. But none of those people were willing to be identified when contacted by a reporter. For the pharmaceutical industry, it appears, going to data quality meetings is like attending Alcoholics Anonymous. When that attitude shifts -- when scientists and drug companies discuss data quality as openly as Ford Motor Co. -- real progress will not be far behind.