
Adaptive Clinical Trials: Separating Fact from Fiction

Part of eCliniqua’s continuing series on adaptive clinical trials.

By Deborah Borfitz

April 23, 2008 | For sponsoring companies, adaptively designed studies are not an easy segue from the traditional drug development approach where data analysis and decision-making happen on the back end of a clinical trial. Different attitudes and infrastructure -- and a whole new set of ground rules -- must be put into play.

Statisticians hold a special position in this Brave New World, as they’re often the “gatekeepers of all that is quantifiable,” says Trevor Mundel, MD, global director of the immunology and infectious disease business franchise of Novartis Pharmaceuticals. “The unfortunate thing is that [statistics] are a black box to a lot of those who run studies, implement them, and make decisions about committing resources to studies.” Thus, a fair amount of decision-making power gets handed to the number crunchers.

This is problematic only if in-house statisticians are rigid about “statistical purity” in all they do, says Mundel. “The truth is, some [statistical] techniques have been carefully worked on and the theories are well understood and usually lead to the right conclusion…but in the real world, there’s no proof that they actually work.” These include the routine practice of “analyzing data repeatedly over time when patients are dropping out of the study for various reasons.”

Adaptively designed drug trials are not inherently “good” or “bad,” and thinking of them that way can lead to tunnel vision, says Mundel. Statisticians who are fervently pro-Bayesian and push for every study to be adaptive could lead companies to make impractical investments in data collection technologies. “I see a lot of organizations are attracted to adaptive designs because they believe they will yield results faster. But it can take companies a long time to get these kinds of studies started. The notion that an adaptive clinical trial [ACT] can get rid of the white space between study Phases II and III is the worst reason to do them.”

The white space simply gets moved to the front end of a trial, working out the details of study design, achieving consensus with regulatory authorities, and getting the necessary information technology in place, says Mundel. “Time savings are fictional.” On the other hand, endless months of acrimonious debate about whether or not to use an adaptive design can also be a real time-waster.

“From a design perspective, there’s an extensive up-front investment in thinking time and to construct documents, such as a simulation report summarizing operating characteristics,” says Michael Krams, MD, assistant vice president, adaptive trials, clinical development at Wyeth, the industry leader in adaptive dose-ranging studies. “This is in addition to the protocol, interim and final statistical analysis plans, and DMC [data monitoring committee] charter.”

The expertise necessary to design and run ACTs is in relatively short supply, says Professor Donald Berry, head of the division of quantitative sciences and chairman of the department of biostatistics at the University of Texas MD Anderson Cancer Center as well as an independent adaptive design consultant. Over the past couple of years, Berry has helped design about 50 ACTs for two dozen pharmaceutical companies.

Quickly producing new randomization probabilities based on incoming data is a capability limited to a handful of companies, including Cytel, Tessella, and United Biosource Corp., says Berry. Among clinical research organizations (CROs), “maybe five percent” can handle ACTs. “They’re learning by doing, and we [Berry Consultants] are teaching them.” But someone at the CRO still has to remember to “throw the switch” when, for example, patients begin to be randomized adaptively rather than equally across treatment arms.
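To illustrate what “throwing the switch” from equal to adaptive randomization entails, here is a minimal sketch of one common approach, response-adaptive allocation driven by Bayesian posterior probabilities (a Thompson-sampling style rule). The function name, prior choice (uniform Beta priors), and counts are illustrative assumptions, not the actual algorithm used by any vendor named above.

```python
import random

def adaptive_allocation(successes, failures, n_draws=10000, seed=0):
    """Estimate each arm's randomization probability as its posterior
    probability of being the best arm, under Beta(1+s, 1+f) posteriors.

    successes/failures: per-arm counts of responders/non-responders
    observed so far. Probabilities are estimated by Monte Carlo draws.
    """
    rng = random.Random(seed)
    arms = len(successes)
    wins = [0] * arms
    for _ in range(n_draws):
        # Draw one plausible response rate per arm from its posterior.
        draws = [rng.betavariate(1 + successes[i], 1 + failures[i])
                 for i in range(arms)]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]

# Hypothetical interim data: arm 1 shows a higher observed response
# rate (20/30 vs 12/30), so it earns a larger share of new patients.
probs = adaptive_allocation([12, 20], [18, 10])
```

Under equal randomization, each arm would keep a fixed 50 percent allocation; the rule above instead shifts new patients toward the arm the accumulating data favor, which is the behavior the CRO must activate mid-study.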

The information technology (IT) required to do ACTs includes an interactive voice response system to accomplish adaptive randomization as well as electronic data capture to collect adaptive parameters in real time, says Mundel. The structure of the database also has to be finalized up front rather than mid-study.

Drug supply software is likewise a necessity, especially for adaptive dose-finding studies, to ensure there are sufficient quantities of the correct dose formulations at investigative sites, and “this has to be worked out well before the study starts,” says Mundel. Current drug supply technology is far from ideal due to lack of uniformity. “I think it’s the number one cause of delays across the board, not just for ACTs.”

The “dream” at Wyeth is a UPS-like setup for real-time drug supply chain management that is part of a fully integrated system housing all trial-related clinical and financial data, says Krams. “We do a good job of integrating all the different functions on individual trials, but we want to develop the IT infrastructure to do it in a way that’s scalable.”



For reprints and/or copyright permission, please contact Angela Parsons, 781.972.5467.