By Courtney Andersen
March 29, 2011 | In an age of increasing financial and logistical pressures facing biotech and pharmaceutical companies, optimizing R&D success is a critical task. But in a commentary published last year in Drug Discovery Today, Andrew Chadwick of Tessella and Matthew Segall of Optibrium argued that psychology could be hindering the productivity of drug discovery teams, even suggesting that the traditional approach to discovery is actually blocking progress.
The Tessella/Optibrium collaboration arose from a mutual interest in understanding how improved decision-making could help productivity in pharma R&D and a shared vision of how decision analytics could improve the process. Drawing from Chadwick’s experience in evidence-based medicine and from examples in the psychology literature, Chadwick and Segall identified what they considered the most important psychological obstacles to the decision process.
“We looked at those, first of all to understand what they meant, secondly to see if there was any evidence for these biases in the drug discovery process, and also to look at potential solutions to overcoming these,” explains Segall. The authors say their collective goal is to help people do “smarter” drug discovery, reducing late failures and ensuring a good flow through the pipeline.
Chadwick likens drug discovery to finding a needle in a haystack. “Scientists can find it quite difficult conceptually to think about all the factors that come into play,” he says. Sources of risk are numerous, as are the places where projects can go wrong, and these can be challenging for researchers to navigate when making discovery decisions. Too often, the authors argue, researchers rely heavily on gut instinct to guide the decision-making process, jumping at promising hits, leads, and candidates that arise early.
“We tend to gather evidence that supports our ideas, not really to actively seek evidence that is likely to refute [them],” says Segall. This tendency is closely associated with over-optimism and overconfidence in a project, one of the four “cognitive biases” that Chadwick and Segall suggest impede R&D success. The other three are poor calibration in predicting reliability, too much attention to recent information, and an excessive focus on certainty with regard to sources of risk. These biases lead to other problems, including post-hoc justification of decisions, holding onto an idea for too long, neglect of prior information that may hint at the probability of certain outcomes, and deciding against pursuing a candidate out of fear of failure.
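Neglect of prior information is easy to illustrate with a simple Bayesian calculation. The sketch below is not from the commentary itself; the screening numbers are invented purely for illustration. It shows why, when true actives are rare, even a seemingly reliable assay produces mostly false positives:

```python
# Illustrative base-rate calculation: all numbers below are hypothetical,
# chosen only to show how a low prior dominates an accurate-looking assay.

def posterior_active(prior, sensitivity, false_positive_rate):
    """P(truly active | assay hit), via Bayes' rule."""
    p_hit = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_hit

# Suppose 1 in 1,000 screened compounds is truly active, and the assay
# detects 90% of actives while flagging only 1% of inactives as hits.
p = posterior_active(prior=0.001, sensitivity=0.90, false_positive_rate=0.01)
print(f"P(truly active | hit) = {p:.1%}")  # roughly 8% - most hits are noise
```

A researcher who ignores the prior hit rate and focuses only on the assay's accuracy would badly overestimate how many early hits are real, which is exactly the kind of over-optimism the authors describe.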
To combat these biases, Chadwick and Segall advocate for broader preliminary screening to facilitate choosing drug candidates based on evidence. Rather than relying on a set of standardized rules, they believe research teams should have a set of flexible guidelines that can be tailored to individual projects. While a desire exists, especially in larger companies, for a relatively standard process, there is a need for more diversity in the way that teams approach projects.
The commentators also recommend the use of probabilistic models of candidate success. This is where software—Optibrium’s StarDrop is one such example—may add value. The software helps researchers take into account uncertainty in measurement, attempting to help scientists bring together predictive and experimental data to identify well-balanced compounds. In terms of forecasting the reliability of a given project, Chadwick and Segall suggest short-term feedback to improve accuracy of predictions. They also remind researchers that it is essential to apply lessons learned from past projects to those that are still developing.
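The general idea behind probabilistic candidate scoring can be sketched briefly. The code below is not StarDrop's actual method; it is a minimal, assumed illustration in which each measured property carries Gaussian uncertainty and a compound's score is the joint probability that every property meets its target:

```python
# Hedged sketch of probabilistic compound scoring under measurement
# uncertainty. Property names, thresholds, and values are invented examples.
from math import erf, sqrt

def p_below(value, sigma, threshold):
    """P(true value < threshold), assuming Gaussian measurement error."""
    return 0.5 * (1 + erf((threshold - value) / (sigma * sqrt(2))))

def score(properties):
    """Joint probability all criteria are met (assumes independence)."""
    s = 1.0
    for value, sigma, threshold in properties:
        s *= p_below(value, sigma, threshold)
    return s

# (measured value, uncertainty, upper limit), e.g. logP and a toxicity reading
compound_a = [(3.2, 0.4, 5.0), (0.8, 0.3, 1.0)]  # comfortably within limits
compound_b = [(4.9, 0.4, 5.0), (0.2, 0.3, 1.0)]  # one property borderline
print(score(compound_a), score(compound_b))
```

Under these assumed numbers the well-balanced compound outscores the one with a single borderline property, illustrating how uncertainty-aware scoring favors candidates that satisfy all criteria rather than excelling on one.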
According to Chadwick and Segall, all of the biotech and pharma companies they have spoken with agree that objective decision-making is a good idea. The cognitive biases that work against it, however, are part of human nature, which is precisely what makes them hard to overcome. The key message is this: information on risk and reliability is scarce, and the ability to pool the knowledge that does exist is lacking. Part of improving is determining in retrospect what factored into the success or failure of a given project. Was it due to chance or planning? Such a question can only be answered by evaluating a portfolio of projects.
Tessella’s Chadwick stresses the need for “accelerated learning” to help project managers overcome the excessive cycle times for feedback on performance in an R&D environment. He recommends giving people feedback in simulated environments that would allow them to form better patterns of decision-making and overcome any personal biases.
Optibrium is encouraging the creation of a database of actual reasons for failure and distributions of properties. “I see this as a real opportunity for sharing information,” says Segall. “You don’t need the actual compound structures, so it’s almost completely free of [intellectual property]. I’d like to encourage people to take advantage of that so we can all learn from it.” •
This article also appeared in the March-April 2011 issue of Bio-IT World Magazine.