‘The Process Is the Product’ Is No Longer Enough. Now, Data Is the Deliverable

June 1, 2023

Contributed Commentary by Marc Smith, Director of Strategic Solutions at IDBS; and Sadik Kassim, Ph.D., Chief Technology Officer of Genomic Medicines for the Life Sciences companies at Danaher Corporation 

June 1, 2023 | In biomanufacturing, particularly with respect to advanced therapies such as gene and cell therapies, there is a saying: "the process is the product." That insight is still important, but it's also insufficient. As the industry adapts to increasingly complex manufacturing processes, we'd like to propose a reframe: the data is the deliverable. For personalized medicine to reach its potential, leveraging process data to drive learning will be key. 

The traditional idea behind "the process is the product" is simple: to ensure product quality, the manufacturing process should be the same for every batch. A 1,000-liter vat of liquid produced the same way every time is a valuable drug. A vat of liquid produced haphazardly is an unusable vat of apple-colored liquid.

In traditional, batch-based drug manufacturing, "the process is the product" speaks to the importance of consistency. It also hides mysteries. We don't always know why the process works—we just know that if we do it the same way, we get the same result. Once we know something is working, that lack of deeper insight can lead to a fear of change.

For today's more complex processes, a surface-level understanding is not enough. Modern cell and gene-based drugs have a level of complexity that is orders of magnitude above traditional biologics.  

A small tweak that might go unnoticed in traditional batch sampling could have big impacts in personalized medicine: a temporary change in pH, for example, could reshape a molecule so that it no longer binds to an individual's receptors. That could blunt the drug's effect, with implications for both efficacy and safety.

From a single batch of a single patient's CAR-T therapy, there are thousands of parameters one could measure to control the process. Unfortunately, most of those parameters are not measured, and those that are measured are often not annotated or maintained in a way that makes them actionable.

"The process is the product" reminds us to measure what happened to make sure it happened right—but it doesn't prompt us to predict or to understand. To design newly complex processes and to ensure consistency where it matters most, we need deeper insights about many more variables. 

There is an opportunity to develop analytical methods that predict overall manufacturing outcomes or the potency of the product. Excellent data management can unlock those insights, compensate for an industry-wide shortage of much-needed skills, and serve as the thread that shortens the learning curve in the transition from research to manufacturing.

Collecting thousands of data points can help us predict and identify the process variables that matter most. But too often, data collection is haphazard and ineffective: ten years of data, structured poorly, can fail to yield any insights at all. 

In addition, the prevalence of ‘dirty data’ caused by transcription errors means that data scientists attempting retrospective analysis, for example comparing batch-to-batch variability, may spend 90% of their time cleaning and annotating data and only 10% actually analyzing it. Digital technologies can shift that equation: the more time spent analyzing data, the faster teams reach breakthrough insights.
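To make that shift concrete, here is a minimal, purely illustrative sketch in Python with pandas; the column names, batch IDs, and transcription errors are invented, but the normalization steps are the kind of work that otherwise consumes analysts' time by hand:

```python
import pandas as pd

# Hypothetical batch-record export with common transcription problems:
# inconsistent batch IDs, stray whitespace, and mixed decimal separators.
raw = pd.DataFrame({
    "batch_id": ["B-001", "b001 ", "B-002", "B-003"],
    "ph": ["7.1", "7.05", "7,2", "N/A"],   # comma decimal, missing value
    "temp_c": [37.0, 37.1, 36.8, 37.2],
})

clean = raw.copy()
# Normalize batch IDs to one canonical form so batches can be compared.
clean["batch_id"] = (clean["batch_id"].str.strip()
                                      .str.upper()
                                      .str.replace(r"^B-?0*", "B-", regex=True))
# Coerce pH to numeric, turning unparseable entries into explicit missing
# values rather than silently wrong numbers.
clean["ph"] = pd.to_numeric(clean["ph"].str.replace(",", "."), errors="coerce")

print(clean)
```

The design choice that matters here is the last step: flagging bad entries as missing, instead of guessing, keeps batch-to-batch comparisons honest.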

That's where "the data is the deliverable" comes in: it's the mantra needed to prioritize good data management as a competitive advantage. Making data a core deliverable, rather than an afterthought, means getting the right people in the room, collecting the right information, and connecting seemingly disparate data points through a well-structured data backbone. 

First: pick the right team. Traditionally, data management projects are driven by IT with supplementary “usability” support from scientists. Those stakeholders can make sure tools work well for end users and meet existing needs. But to unlock learning, data decisions need to be driven by business questions. To decrease time to market and create effective drug products, analysts and executives must view well-structured data as their deliverable as well.  

Second, collect the right data. That starts with collecting data on failures. Manufacturers mostly collect high-quality, high-resolution data after tech transfer, when processes are already well established and running on successful batches. But failure is how we learn. An algorithm trained only on successes will be biased and offer little insight into which variables matter most; in process design, the learning comes from edge cases. Collecting data well from the start of product development yields insights that can be incorporated later.
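As an illustration of why failures carry the signal, consider this sketch on entirely synthetic data: a classifier trained on both successful and failed batches can rank process variables by how much they explain failure, a ranking a success-only dataset cannot support because it contains no contrast.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for batch records: 500 batches, 20 measured process
# variables (pH excursions, temperatures, feed rates, ...). Illustrative only.
X = rng.normal(size=(500, 20))

# Assume, for illustration, that variable 3 drives failure: batches fail
# when it drifts high. Without failed batches, this signal is invisible.
y = (X[:, 3] > 1.0).astype(int)  # 1 = failed batch

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Ranking variables by importance points investigators at the parameter
# that actually matters, which only the failure cases reveal.
top = np.argsort(model.feature_importances_)[::-1][:3]
print("Most informative process variables:", top)
```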

Finally, store data logically. In continuous processing, controlling for quality is about more than sampling a batch: manufacturers must continuously monitor bioreactors, process parameters, and analytical parameters, which means many more data points to consider. When designing data architecture, be strategic about where data lives, not for expediency but for ease of analysis.
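One way to picture such an architecture, purely as a sketch with invented entity names, is a schema in which every measurement carries explicit links to the process step, sample, and instrument that produced it:

```python
from dataclasses import dataclass
from datetime import datetime

# A minimal sketch of a "data backbone": each measurement is keyed to the
# objects that give it meaning, so analysis can join across sources
# instead of guessing. All names and fields are illustrative.

@dataclass
class Instrument:
    instrument_id: str
    model: str

@dataclass
class Sample:
    sample_id: str
    batch_id: str

@dataclass
class ProcessStep:
    step_id: str
    batch_id: str
    name: str            # e.g., "expansion", "harvest"

@dataclass
class Measurement:
    value: float
    unit: str
    recorded_at: datetime
    step_id: str         # which process step it belongs to
    sample_id: str       # which sample it was taken from
    instrument_id: str   # which instrument produced it

m = Measurement(7.1, "pH", datetime(2023, 6, 1, 9, 30),
                step_id="S-12", sample_id="SMP-7", instrument_id="INST-3")
```

With those keys in place, tying an analytical result back to the process conditions that produced it becomes a lookup rather than a forensic exercise.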

It is important to ensure that data points are associated with the right objects: process steps, samples, or instruments. Then a data backbone can tie disparate sources together and make correlations clear. In today's world, the process is still the product, but well-organized process data is the product that matters most.

 

Marc Smith is Director of Strategic Solutions at IDBS and is accountable for the development of the IDBS Polar BPLM and associated solutions. Marc is an expert in laboratory and operational technology, laboratory informatics, and system integration, and leads a team of technological and domain experts committed to the development of Polar Solutions. He has over 15 years of experience in the pharmaceutical sector implementing and leading digital solutions and strategies. Before joining IDBS in 2021, he worked at Lonza Biologics and Angel Biotechnology. He holds a BSc (Hons) in Pharmacology from the University of Sunderland. He can be reached at msmith@idbs.com.

Sadik Kassim, Ph.D. is a scientist and executive with extensive biotechnology industry experience, focused on cell and gene therapy bioprocessing and translational research. He currently serves as Chief Technology Officer of Genomic Medicines for the Life Sciences companies at Danaher Corporation. Most recently, he was Chief Technology Officer at Vor Bio, where he built the technical operations team responsible for process development, analytical development, supply chain, and manufacturing support of a CRISPR gene-edited HSPC product and oversaw the company’s CAR-T efforts. Prior to Vor, Sadik served as Executive Director at Kite Pharma, where he led the development of manufacturing processes for autologous CAR-T and TCR-based cell therapies. As Chief Scientific Officer at Mustang Bio, Sadik managed the foundational build-out of the company’s preclinical and manufacturing activities. Earlier in his career, he was Head of Early Analytical Development for Novartis’ Cell and Gene Therapies Unit and worked on research teams at the National Cancer Institute with Dr. Steven Rosenberg, the University of Pennsylvania Gene Therapy Program with Dr. Jim Wilson, and Johnson & Johnson’s Immunology Discovery group. Sadik and his teams have contributed to successful BLA and MAA filings for three of the commercially available CAR-T therapies: Kymriah, Yescarta, and Tecartus. Sadik holds a Bachelor of Science degree in Cell and Molecular Biology from Tulane University and earned his Ph.D. in Microbiology and Immunology from Louisiana State University.