Vision-Casting for a Digital Twin Future

November 15, 2022

By Allison Proffitt 

November 15, 2022 | Digital twins are at a tipping point. Many of the enabling technologies are in place; what’s left now is to build a global community around them. That was the challenge to the audience at last month’s Bio-IT World Conference & Expo, Europe.  

Peter Coveney, director of the Centre for Computational Science and professor at University College London, gave a keynote address tracing the growth, so far, of modeling and simulation across fields from biomedicine to climate science.  

There are still fundamental clashes in how we do modeling, Coveney said. The deductionist approach builds a model and deduces its consequences to see how well they hold up. The inductionist approach denies the need for a model altogether: just collect a lot of data and make statistical inferences. The two sit in an uneasy relationship, Coveney said, but the clash should be viewed as an opportunity rather than a disaster.  

“Here’s the opportunity to see if we can actually bring these two approaches to study biological systems together in the most effective way,” he said. “One of them to, perhaps, gather information initially to build these more mechanistic models on, which ultimately will be the ones we need to use in explanatory purposes in disease.”  

The field is far from mature, Coveney emphasized. “At this stage, no one is trying to claim we’re near a state of the art where a virtual version of me will come in and displace me at this podium and give my talk. It’s an enterprise which, in one sense, is all about the agenda for medicine, and biomedicine in this century and beyond, an organizing structure to make it more scientific. And each patch of that quilt framework at whatever scale we’re working on can be recognized as a digital twin at that level. In the long run we’re looking to integrate all of these things together.”  

As for current “patches of the quilt,” Coveney outlined several scenarios where digital twins are already useful: patient-specific models to predict the impact of specific treatments, protein structure folding, virtual drug testing, and virtual drug development.  

“We heard about some of this yesterday,” Coveney said, referring to the plenary presentation by Richard Law of Exscientia. “I particularly like Exscientia’s way of approaching the drug development cycle because they talk about active learning. There’s a so-called AI component in it, but it can’t run in any meaningful way without some modeling and simulation involved. It’s about trying to get as much data together and be able to perform the tests as effectively as possible. The old fashioned high-throughput screening is, I think, pretty much moribund now and it’s extremely expensive, but some forms of virtual filtering, screening, evaluation, and learning are possible in computers we have available in the modern era.”  

Digital Twins for Cancer Patients  

In a session room, Eric Stahlberg got granular on both what’s been done and where the hard questions lie for digital twins.  

Stahlberg is the director of biomedical informatics and data science at Frederick National Laboratory. He’s been working to apply digital twins to predictive oncology for the past few years, and he knows that success will depend on a mature ecosystem.  

Digital twins represent a shift in the standard approach to oncological care, he said. Instead of drawing broad conclusions based on many patients, a digital twin approach calls for many virtual instantiations of an individual patient to inform conclusions specific to that patient.  

The spike we’re seeing in digital twin conversations and publications in the life sciences is thanks to a convergence of the needed technologies, Stahlberg said. We now have the ecosystems we need to manage the data effectively; we can get and use information in near real time; and we have the ability to bring the data together in the cloud.  

He agrees with Coveney that we have far to go. Biomedicine is challenging and complex, Stahlberg said, and the concept of a digital twin is not yet at the patient level. Instead, he argues for thinking about it as part of a learning system.  

“We want to maximize the use of available knowledge, available data, and then use that to basically identify what is specific for that individual patient. That creates our ability to predict forward, so we’re really doing predictive analytics. But what we want to do is actually couple that prediction with the actual patient response and now we have something we can learn from,” Stahlberg said. “This is going to look like an active learning loop for those who are familiar with machine learning.”   

The digital twin paradigm should enable this active learning loop at greater and greater complexity, moving from an individual patient to a disease cohort to modeling at the population level. But to get there, we need a flexible ecosystem that can manage very large amounts of data.  
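To make the loop concrete, here is a deliberately minimal Python sketch of the predict-treat-observe-update cycle Stahlberg describes. The one-parameter "tumor twin," the candidate doses, and the simulated patient response below are hypothetical placeholders chosen for illustration, not any group's actual model.

import random

class TumorTwin:
    """Toy one-parameter twin: tumor burden is assumed to shrink by sensitivity * dose per cycle."""
    def __init__(self, sensitivity=0.5):
        self.sensitivity = sensitivity  # current best estimate for this patient

    def predict(self, burden, dose):
        """Predicted tumor burden after one treatment cycle at the given dose."""
        return max(0.0, burden - self.sensitivity * dose)

    def update(self, burden_before, dose, burden_after, learning_rate=0.3):
        """Nudge the estimated sensitivity toward what the observed response implies."""
        if dose > 0:
            observed_sensitivity = (burden_before - burden_after) / dose
            self.sensitivity += learning_rate * (observed_sensitivity - self.sensitivity)

def observe_patient(burden, dose, true_sensitivity=0.8):
    """Stand-in for the real patient: the response the twin is trying to learn."""
    noise = random.gauss(0, 0.02)
    return max(0.0, burden - true_sensitivity * dose + noise)

# The active learning loop: predict, treat, observe, update, repeated each cycle.
twin = TumorTwin()
burden = 1.0
doses = [0.2, 0.4, 0.6]
for cycle in range(5):
    # Pick the dose whose predicted outcome is best, with a simple penalty for higher doses.
    dose = min(doses, key=lambda d: twin.predict(burden, d) + 0.2 * d)
    predicted = twin.predict(burden, dose)
    observed = observe_patient(burden, dose)
    twin.update(burden, dose, observed)
    print(f"cycle {cycle}: dose={dose}, predicted={predicted:.3f}, "
          f"observed={observed:.3f}, estimated sensitivity={twin.sensitivity:.3f}")
    burden = observed

Real cancer patient digital twins replace this toy update rule with mechanistic, multi-scale models and rich longitudinal data, but the feedback structure, prediction coupled with observed patient response, is the same.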

Challenges 

There are key challenges for digital twin paradigms, Stahlberg said, across the data, the models, and the community. 

In a paper published just days before the Bio-IT World Europe event, Stahlberg and his colleagues reported the findings of a review of five recent cancer patient digital twin projects: models of pancreatic cancer, melanoma, and non-small cell lung cancer, plus two pan-cancer efforts. They asked the teams on these projects what they would need in the next 10 years to further develop and mature their digital twin approaches.  

There were several recurring themes, Stahlberg reported. The teams repeatedly called for a common framework and multi-scale longitudinal patient data. AI will be very important, they agreed, though the precise role of AI requires clarity. Finally, they all emphasized the need for interdisciplinary teams for successful progress.  

“It’s not simply one type of a scientist and one type of engineer; we need all types of scientists and engineers and really integrating personnel across the care spectrum,” the groups said.  

But an interdisciplinary team with well-organized data will also not be enough. There are some challenging ethical questions facing digital twins, Stahlberg said.  

For instance: Who owns individual patient data? Is it the patient, without whom the data don’t exist and who stands to gain the most from it? Is it the healthcare system that generated the data and is held liable for data breaches? Is it the government, which in some cases paid for the data? Or is it academia and industry, which make the data useful and more valuable? 

Beyond that: Who owns the patient’s biomedical digital twin? The patient is again the most impacted—for good or ill. The healthcare system likely invested the most in the twin’s development and implementation. And academia and industry may have created it and can likely use it most influentially in the future.  

These are not new questions, but they are important ones on which we have not come to a consensus. “We have a lot of open questions yet to be explored,” he said.  

Building the Bridge 

There are also technical elements that are still in their early stages and will mature along with the field. We need FAIR (findable, accessible, interoperable, and reusable) ecosystems, Stahlberg said, and qualified and evaluated models for the digital twins. We need standards and conventions as we proceed, and ways to track and audit data models as they mature.  

We also need input from fair (lowercase) and ethical organizations, Stahlberg said. “In this case ‘fair’ is not the capital FAIR—we’ve accomplished that technically—‘fair’ is involving the parties and giving them a spot at the table.” That means building datasets that are inclusive and diverse, and sharing those data flexibly.  

“The technical elements and the organizational elements have to come together so we have the bridge in place,” Stahlberg said. “That bridge is what’s going to allow us to get the patient, so they are involved, informed, invested, incentivized and in control, and, obviously, safeguarded and protected. Without the patient involved we have no digital twin.”  

What A Community Looks Like 

Stahlberg and others have been building computational communities—particularly the Envisioning Computational Innovations for Cancer Challenges group—and those existing communities are now moving toward digital twins. He also highlighted other groups: the Digital Twin Consortium, PerMedCoE, the International Cancer Knowledge Alliance, and others.  

Coveney has been community-building as well; he was involved in the VECMA project that has now been succeeded by the SEAVEA project to share open-source toolkits to test various models. 

“What matters in [any digital twin] is our ability to get our simulations certified as reliable,” Coveney said. That process is known as VVUQ (verification, validation, and uncertainty quantification): “this is the acid test of whether your predictions agree with your experiment, usually.”  

The SEAVEA toolkit establishes a platform for verification, validation, and uncertainty quantification (VVUQ), providing tools that can be combined to capture complex scenarios, applied to applications in disparate domains, and used to run multiscale simulations on any desktop, cluster or supercomputing platform. 
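As a rough illustration of what VVUQ involves in practice, here is a minimal Monte Carlo sketch in plain Python. It does not use the SEAVEA toolkit’s own API, and the decay model, input spread, and measured value are invented for the example: an uncertain input is sampled, propagated through a simulation to produce a prediction interval (uncertainty quantification), and that interval is then compared against an experimental measurement (validation).

import math
import random
import statistics

def model(decay_rate, t=2.0, initial=1.0):
    """Toy simulation: exponential decay evaluated at time t."""
    return initial * math.exp(-decay_rate * t)

# Uncertain input: the decay rate is only known to within some spread.
samples = [random.gauss(0.30, 0.05) for _ in range(5000)]

# Uncertainty quantification: propagate the input samples through the model.
outputs = sorted(model(k) for k in samples)
mean = statistics.fmean(outputs)
low = outputs[int(0.025 * len(outputs))]
high = outputs[int(0.975 * len(outputs))]
print(f"prediction: {mean:.3f} (95% interval {low:.3f} to {high:.3f})")

# Validation, the "acid test": does the experimental measurement fall inside
# the predicted interval?
measured = 0.56  # hypothetical experimental value
if low <= measured <= high:
    print("prediction is consistent with the experiment")
else:
    print("model and experiment disagree; revisit the model or its inputs")

Toolkits like SEAVEA’s are designed to manage this kind of pattern at scale, orchestrating ensembles of runs across desktops, clusters, and supercomputers and supporting far more sophisticated sampling than the brute-force approach sketched here.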

But while toolkits and projects are converging, “My biggest concern is we don’t have champions in the biomedical sector on this,” Coveney said. “There’s too many people like me coming in from elsewhere, seeing the opportunities and trying to develop things.” 

That, too, is starting to change. At Bio-IT World Europe, Stahlberg hosted an evening roundtable of interested thought leaders, and he is building a workshop on the topic for the Bio-IT World Conference & Expo next May in Boston.   

The most fitting tagline for the topic may well be: stay tuned.