TetraScience Announces GxP Solution, $500M Investment in Scientific Data Cloud

July 20, 2022

By Allison Proffitt

July 20, 2022 | Today, TetraScience announces the expansion of the Tetra Scientific Data Cloud to include manufacturing and quality control (QC) data and pledges $500 million over the next five years to fund further development.

The Tetra Scientific Data Cloud comprises productized, API-based integrations from the Tetra Partner Network; the open, cloud-native Tetra Data Platform, which re-engineers raw or primary data into FAIR (findable, accessible, interoperable, reusable), harmonized “Tetra Data”; and use-case-based Scientific Applications. The Tetra Data Platform runs natively on Amazon Web Services (AWS) and is available as a SaaS solution in AWS Marketplace.

“Prior to this year, we wouldn’t really have a formulaic GxP consideration inside of manufacturing and QC. That’s a whole different level of rigor, a different feature set, everything,” explains Mike Tarselli, Chief Scientific Officer. The new offering, Tetra GxP, ensures the capture of data provenance through a comprehensive audit trail, disaster recovery, control matrices, and software hazard analysis. 
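
In practice, provenance via an audit trail means every touch of a data artifact leaves a who-did-what-when record that links outputs back to their raw inputs. As a rough illustration only (the field names below are hypothetical assumptions, not Tetra GxP’s actual record format), one such entry might look like:

```python
# Hypothetical shape of a single audit-trail entry. Illustrative only:
# these field names are assumptions, not Tetra GxP's actual format.
audit_entry = {
    "event": "file_harmonized",
    "actor": "pipeline/harmonizer@2.3.1",  # which system touched the data
    "timestamp": "2022-07-20T14:03:11Z",   # when, in UTC
    "input_checksum": "sha256:9f2c...",    # ties the output to its raw source
    "output_id": "tetra-data/4821",
    "previous_version": "tetra-data/4820", # version chain for auditors
}
print(audit_entry["event"], "by", audit_entry["actor"])
```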

The Scientific Applications are also a significant addition, Tarselli says. “These scientific applications… these are something we think are new in the industry,” he told Bio-IT World. The applications will reduce time-to-value by addressing the challenges of specific scientific lab operations and workflows. Examples include data flow automation in high-throughput screening as well as in batch release and stability testing.

In both cases, the updates focus on the data.

Making Connections 

Since the TetraScience reboot in 2019 (after which the company divested its original IoT business), the company has been a “pure play, data-only business,” Tarselli says. It’s been paying off. The company boasts 13 of the top 25 global biopharmas as customers and has doubled its headcount in the past year. In April 2021, TetraScience closed an $80 million Series B funding round.

The TetraScience platform replaces custom, point-to-point integrations between hundreds of data sources, instruments, and informatics systems. Everything from plate readers to cryo-EM instruments, from contract research organizations to LIMS, is now connected to data-consuming applications including ELNs, data science and ML tools, visualization tools, data lakes, and more.

“If all you did was connect them, [companies are] already doing this manually today,” concedes Bill Hobbib, Chief Marketing Officer. “The problem is, [the connections are] manual, they’re custom, and they’re brittle. What happens when something changes? You don’t just want to connect it and move the data. You want to get insights from the data that helps you understand how to solve particular problems.” 

With the Tetra Partner Network of vendors, the company has productized API-based integrations for hundreds of assays, workflows, lab functions, analytics, data visualization tools and more. 

Setting up the Scientific Data Cloud involves a company’s IT team and TetraScience walking through the various instruments, tools, and systems to configure both the instruments and the cloud. That process is faster for any instruments, tools, or systems already in the Tetra Partner Network. Pricing is based on the number of instruments connected and the nature of the data engineering that customers want to do.

Once launched, the data platform collects and archives the data coming from the data generators, extracts metadata, harmonizes content, verifies and enriches it, and then publishes “Tetra Data” back to the tools and systems researchers use to consume and analyze their data. The advantage over simple connections, Hobbib says, is, “Bringing all this together in a cloud where you can query it, where you can have compliant data, it’s actionable. You have all the history and the lineage for auditability purposes.”
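
As a very rough sketch of that flow (all names here are hypothetical; this is not TetraScience’s actual API), the collect, extract, harmonize, publish sequence might look like:

```python
# Minimal, illustrative harmonization pipeline in the spirit described
# above. RawFile, the schema id, and all field names are assumptions,
# not TetraScience's actual data model.
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RawFile:
    instrument: str     # e.g., "plate-reader-07"
    vendor_format: str  # proprietary source format
    payload: dict       # parsed instrument output

def extract_metadata(raw: RawFile) -> dict:
    """Pull provenance fields out of the raw record."""
    return {
        "instrument": raw.instrument,
        "source_format": raw.vendor_format,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

def harmonize(raw: RawFile, metadata: dict) -> dict:
    """Re-shape vendor-specific output into one vendor-agnostic JSON schema."""
    return {
        "schema": "example/plate-read/v1",  # hypothetical schema id
        "metadata": metadata,
        "results": raw.payload,
    }

def publish(record: dict) -> str:
    """Emit harmonized JSON for downstream consumers (ELNs, ML tools, ...)."""
    return json.dumps(record, indent=2)

raw = RawFile("plate-reader-07", "vendor-xml", {"well": "A1", "od600": 0.42})
print(publish(harmonize(raw, extract_metadata(raw))))
```

The value of the harmonize step is the single vendor-agnostic shape: downstream tools consume one JSON layout no matter which instrument produced the raw file.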

For some end users, this simply means richer, more useful data in their ELN. Others can dive into the Scientific Data Cloud itself, where the harmonized data is hosted on AWS as JSON, and query it directly.
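
Because the records are plain JSON, even a simple script can slice across instruments and runs. A toy example, with a record layout that is purely illustrative:

```python
# Toy query over harmonized JSON records. The layout is illustrative,
# not TetraScience's actual schema; real deployments could also use a
# cloud query service over the same JSON.
records = [
    {"metadata": {"instrument": "plate-reader-07"}, "results": {"od600": 0.42}},
    {"metadata": {"instrument": "plate-reader-07"}, "results": {"od600": 0.88}},
    {"metadata": {"instrument": "hplc-02"}, "results": {"peak_area": 1532.6}},
]

# "Every plate-reader measurement above a threshold" as a plain filter:
hits = [
    r for r in records
    if r["metadata"]["instrument"].startswith("plate-reader")
    and r["results"].get("od600", 0) > 0.5
]
print(hits)  # -> the 0.88 reading
```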

“What science has been missing a lot in most biopharma scenarios is being able to learn from different parts of the value chain,” Tarselli says. “We don’t want to say, ‘Hey research, here’s your data systems and ELNs and stuff.’ And then, ‘Hey Development, here’s your LIMS and your LES and your other secretive pharmacology databases.’ And then say to Manufacturing, ‘Here’s your MES and your suite and your GxP.’ And then have those all passing a tech transfer packet made of paper between one another.” 

Instead, Tarselli proposes a single data lake for all of the data—not divided into research or manufacturing or development—that is freely accessible, FAIR, and searchable.

“The world’s scientific data should be treated as a first-class asset, as it’s the most important data in the world. But it’s been treated more like a second- or third-class asset. We’re bringing that data together in a cloud, in a standardized, harmonized, uniform, vendor-agnostic way to let people leverage the full strength of that data… and leverage that data end-to-end in the value chain from R&D through to manufacturing,” adds Hobbib.

Lucky and Good 

The timing of that pure-play data focus, Tarselli believes, is a combination of being lucky and being good.

“We believe there’s a sea change in the biz right now. Most data inside of a pharma—your routine high throughput screens, your routine DMPK things, your routine manufacturing things—should be available for perusal by anybody who wants to see it, including business leaders!” he says. He expects some change management ahead, but also says, “We already have lots of evidence that companies are willing to do it… We are seeing the change come from within.”

And TetraScience is investing in that change. Over the next five years, the company plans to invest $500 million in its current products and offerings as well as in accelerating “highly sophisticated scientific data taxonomies and ontologies, and our ML/AI layer, which will exploit these advanced scientific data models,” according to a statement from company CEO Patrick Grady. “We will also be adding continuous verification systems for GxP and ISO compliance and expanding into new modalities inside BioPharma,” he adds.

Grady dismissed any notion of tech acquisitions, saying, “they almost invariably require rewriting the acquiree’s code base, slowing overall innovation, and impairing quality.” But he did add that the company plans to invest in third-party tools to “enable the broader democratization of [the company’s] capabilities.”

The goal is “a sort of extra-hyper automation of the future,” according to Tarselli. “When you put humans in the loop there’s good parts of that—the contextualization—and there’s bad parts of that—some messy, manual stuff and errors. We want to take out the bad.”