Best of Show Awards Highlight New Products in Bio-IT
By Bio-IT World Staff
May 3, 2022 | Bio-IT World is pleased to announce the competitors in the 2022 Best of Show competition, all of whom are also eligible for the Bio-IT World People's Choice Award.
The Best of Show Awards offer exhibitors at the Bio-IT World Conference & Expo an opportunity to showcase their new products. A team of expert judges views entries on site and chooses winners based on each product's technical merit, functionality, innovation, and in-person presentation.
But the judges aren’t the only ones with a vote. Bio-IT World presents a People’s Choice Award as well, which is chosen by votes from the Bio-IT World community. All of the Best of Show entries are eligible for the People’s Choice Award. Voting will open at 5:00 pm ET on Tuesday, May 3, and will remain open until 1:00 pm ET on Wednesday, May 4.
Awards chosen by the judges as well as the People’s Choice Award will be announced at a live event on the Bio-IT World Expo floor at 5:30 pm on Wednesday, May 4.
We are excited to have the community's input again this year on the best new products on display at Bio-IT World. Watch the Bio-IT World Twitter account @BioITWorld and #BioIT22 for the voting link on Tuesday, May 3, at 5:00 pm ET.
Product Name: Ataccama ONE, v13.6
Ataccama ONE can help you automate the delivery of high-quality data with active metadata. Learn in detail how three effective steps to approaching Data Quality (Understand, Control, Improve) can drive enhanced data practices and accelerate business success. Join our Ataccama solution expert in an interactive demo as he covers how:
• Understanding data and data issues can inform better control with DQ rules, access, and the ownership process
• Implementing knowledge-based strategies that identify issues to cleanse, standardize, and enrich can ultimately build trust in data
• Cutting-edge solutions can address common hurdles, such as automated data quality and capturing metadata (and what to do with it)
The latest release introduces multiple exciting features:
• Time series analysis, which allows data stewards to detect anomalies on aggregated data for a specific time instance.
• SQL catalog items, which allow users to easily combine and transform catalog items directly in the web application.
• Advanced DQ rules in web UI, which allow data stewards to configure and test complex data quality rules in an intuitive interface.
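The kind of time-series anomaly detection described above can be pictured with a minimal sketch. This is generic rolling-statistics logic, not Ataccama's implementation or API: it flags a point when it deviates from the trailing window by more than a few standard deviations.

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing-window
    mean by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = statistics.mean(past)
        stdev = statistics.stdev(past)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A steady daily record count with one sudden spike at index 7:
counts = [100, 102, 98, 101, 99, 100, 103, 250, 101, 100]
print(detect_anomalies(counts))  # -> [7]
```

A data steward would then review the flagged aggregate rather than scan every row by hand.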
Product Name: Omics Playground 2.8.5
Omics Playground is an online self-service analytics platform that empowers life scientists to perform complex omics data analysis and visualization by themselves, without the need to learn how to code. The platform focuses on the tertiary analysis (i.e., interpretation) of transcriptomics (e.g., micro-array, RNA-seq, scRNA-seq) and proteomics data, providing users with highly interactive visualization tools to understand their data. The functionalities of the platform include unsupervised clustering, differential expression, gene set enrichment, drug connectivity, biomarker discovery, and single-cell analysis, among others. The new version of the platform adds the following upgrades:
• Shinylogs code
• Server crash log handler
• Revamped batch-correction module
• Private data folders for users
• New gene/geneset UMAP visualization (in Clustering)
• New 'Compare Datasets' module (beta)
• Enlarged drug L1000 database (in Drug Connectivity)
• Gene-specific perturbation database (in Drug Connectivity)
• User authentication using Google or email
• Chatbox for online support
Product Name: CellPort Cell Culture Suite
Commercially launched in Q3 2021, the CellPort Cell Culture Suite is a low-code, domain-agnostic Software-as-a-Service product uniquely focused on cells, including cell culture, cell manufacturing, and cell banking as well as related upstream and downstream data and processes. The role-based system includes modules for inventory management, equipment management, reagent management, cell management, protocol/SOP management, and user management. The product combines the end-to-end laboratory inventory and process management of a Laboratory Information Management System (LIMS) with the highly repeatable QC and manufacturing focus of a Laboratory Execution System (LES). Built upon the Microsoft Azure cloud computing platform, it contains validated, highly configurable browser-based applications for common cell manufacturing operations using a multi-tenant cloud model for scalability across global laboratory locations. The product includes three key features that allow it to be quickly configured to meet each customer's specific needs:
1. Configurable Object Models, which allow the out-of-the-box data definitions to be precisely mapped to a customer's vocabulary and data standards
2. Configurable Data Entry/Display Forms, which allow the out-of-the-box forms and reports to be adapted to each organization's needs
3. Configurable Workflows, which allow SOPs and protocols to be digitalized according to the organization's best practices and ensure that they are followed
The validated system is 21 CFR 11 / Annex 11 compliant and includes a full audit trail for complete traceability and transparency.
Product Name: Cerebras CS-2 Artificial Intelligence System (second-generation)
Cerebras' CS-2 Artificial Intelligence System is the world's fastest AI supercomputer. With every component optimized for AI work, the CS-2 delivers more compute performance in less space and at less power than any other system. The CS-2 more than doubles all of the performance characteristics of Cerebras' CS-1, including the transistor count, core count, memory, memory bandwidth, and fabric bandwidth. The second-generation Wafer Scale Engine processor (WSE-2) boasts 2.6 trillion transistors and 850,000 AI-optimized cores, giving it 123x more cores and 1,000x more high-performance on-chip memory than GPU competitors. Accelerating training by 1,000x requires a fundamental rethinking of not only the processor but all aspects of the system design: the compute core, memory architecture, communication fabric, input/output subsystem, power and cooling, system architecture, compiler, and software tool chain, to name but a few of the elements that must be optimized and tuned for performance gains measured in orders of magnitude. Cerebras undertook all of these challenges in the development of its CS-1 and CS-2 AI systems. Finally, to have maximum impact on the industry, extraordinary performance must be coupled with ease of use, and this is where the Cerebras Software Platform comes to the fore. It has been co-designed with the WSE and allows researchers to take full advantage of all 850,000 cores while using industry-standard machine learning (ML) frameworks like TensorFlow and PyTorch. This produces extraordinary performance without requiring any changes to the user's workflow.
Product Name: Lumin Bioinformatics v 1.4
Lumin Bioinformatics is a dynamic analytics and visualization platform that empowers biologists and bioinformaticians to harness the power of computational science in a convenient and sophisticated tool. Lumin allows users to explore and analyze rich, unique multi-omic datasets from over 25,000 cancer patients, including thousands of clinical treatment responses not available in any public dataset, and provides access to publicly available cancer datasets in a standardized format. Multi-omic datasets, including next-generation sequencing (WES and RNA-Seq), quantitative proteomics, and phospho-proteomics, form the backbone of the Lumin Analytic Engine and offer over 2 billion data points to drive biological hypotheses. Lumin provides advanced analytics and AI through predictive and interactive gene networks, the ability to explore correlative and causal relationships to generate biomarker signatures, and Workspaces that let users incorporate their own analyses and code for discovery research. Our latest release includes UI/UX feature upgrades; new proprietary PDX proteomics datasets from AML (acute myeloid leukemia), head and neck cancer, NSCLC (non-small cell lung cancer), lymphoma, ovarian, prostate, and melanoma; and new NGS data from 93 human cancer cell lines across various cancer types. The latest release also includes our new ex vivo analysis module, which processes multi-well plate reader data, including study design and layout, IC50 calculation, and combination study analysis. Ex vivo results can be loaded directly into the pharmacology module for further analysis, such as differential gene expression between responders and non-responders.
Product Name: Cytocast Cell Simulator 0.3
Our solution is a computer-simulation ("artificial cell") based service package that simulates the function of untreated and treated cells and provides a platform for predicting the effects and side effects of a drug or therapy. The Cytocast drug testing platform can be used to predict the effects and side effects of new drug candidates and combinations of existing ones, and to predict the use of approved drugs to treat other diseases (repurposing). Our software uses different types of molecular information, called multi-omics data, collected from diseased, healthy, and drug-treated cells. The tool integrates this data and simulates how molecules within the cell interact with each other to form structures called macromolecular complexes, which ultimately perform the main functions in the cell. By examining cell behaviour at a large scale, we can identify global differences between drug- or drug-candidate-treated and untreated cells that typically cannot be observed by examining a limited number of components of the cell. Our simulation tool allows us to predict the phenotypic effects (and side effects) of drugs much more accurately than any previously existing method. To achieve this, the data from the simulation is compared with the described effects and side effects of previously approved drugs, and a machine learning algorithm is used to provide qualitative and quantitative predictions.
Product Name: Egnyte for Life Sciences (ELS)
Egnyte for Life Sciences (ELS) is a SaaS platform that unifies collaboration, data lifecycle management, and compliance across the Life Science R&D value chain. ELS empowers emerging biotechs to achieve operational and scientific excellence with an unprecedented time-to-value. ELS customers collaborate across geographies and their partner ecosystem while providing real-time transparency and integrity for all data captured within the platform. ELS provides:
● Process automation engine, with built-in tools for collaborative modeling and analytics
● Templates for standard tasks and workflows
● eTMF to support FDA-compliant processes
● Risk dashboard for audit trails and compliance
● Lifecycle management of regulated quality documents and study data in a secure and compliant repository
● Validation as a Service (VaaS) to enable flexibility and cost savings
● Native integrations to cloud computing services as well as scientific and operational applications, including automated transfer of lab research data to the cloud
● Multi-level controls for enforcing authorized access and collaboration
Product Name: Mastermind Disease-Specific Curated Content
Mastermind Disease-Specific Curated Content (DSCC) is a novel capability that addresses the challenges of variant interpretation by providing access to provisional pathogenicity calls and supporting evidence. The evidence is initially gathered through AI-based techniques to ensure maximal completeness of the data, then manually reviewed by expert variant scientists to ensure the utmost accuracy of the final determinations. When users of Genomenon's Mastermind Genomic Search Engine search a variant for which there is curated content, DSCC presents them with a notification ribbon across the top of the screen. Users see a provisional pathogenicity call (based on ACMG criteria), along with a link to a trusted source for additional information about the disease. From there, they are prompted to explore further with the option to "View Interpretation," which opens detailed variant data in a new window, including ClinGen classification, population data, ACMG calls with relevant literature citations, in silico prediction models, and data intrinsic to the gene. For ease of reporting, these results can be copied, exported, and/or printed by selecting the "Export to Report" button on the bottom left of the detail page. We predict that access to this evidence will increase clinical diagnostic rates and, further, that notification of clinical trials and available therapies will increase clinician awareness of disease and appropriate treatment of patients.
Product Name: Hammerspace 4.6.5
Hammerspace is software designed to be installed in virtually any IT environment. It installs as a complete ISO and runs on bare-metal servers or virtual machines. Hammerspace is deployed with two components: the Anvil metadata control nodes and the DSX scale-out data mover nodes. With its most recent version 4.6.5 software release, Hammerspace added incremental improvements across all layers of its architecture, including greater multi-protocol accessibility, I/O performance improvements, added cross-platform data services, and expanded multi-vendor storage compatibility. The Hammerspace software architecture is logically organized in the following layers:
• Universal Data Access Layer: provides multiprotocol NFS/SMB access to all data on any storage type, including object/cloud.
• Parallel Global File System: the metadata control plane, which can span multiple locations to provide global file access to distributed users and applications as though they were local.
• Automated Cross-Platform Data Services: enables global control of data services across all storage types, eliminating the complexity of managing multiple vendor-specific point solutions. These include global snapshots, versioning, undelete, WORM, DR, and much more.
• File-granular Data Orchestration: where user-defined policies are executed, enabling live, non-disruptive data movement for tiering, replication, data migration, and DR, and to stage files across multiple data centers or to public/private clouds and cloud regions.
• Compatibility with Any Storage Type: Hammerspace leverages any vendor's storage, including new or legacy block, DAS, NAS, or object storage, and multiple public clouds, all with global access via the Parallel Global File System.
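The file-granular, policy-driven placement idea can be sketched in a few lines. Hammerspace expresses such rules declaratively, not in Python, and the tier names below are hypothetical; the sketch only shows how per-file attributes drive placement decisions.

```python
from datetime import datetime, timedelta

def place(file_info, now):
    """Pick a storage tier from file-granular attributes (toy policy)."""
    if file_info.get("pinned"):
        return "nvme-local"          # explicitly pinned files stay hot
    age = now - file_info["last_access"]
    if age < timedelta(days=7):
        return "primary-nas"         # recently used: keep on fast NAS
    if age < timedelta(days=90):
        return "object-onprem"       # warm: on-prem object storage
    return "cloud-archive"           # cold: archive tier in the cloud

now = datetime(2022, 5, 4)
f = {"path": "/genomics/run42.bam", "last_access": datetime(2022, 1, 1)}
print(place(f, now))  # -> cloud-archive
```

In a real deployment the data mover nodes would execute the resulting moves live, without disrupting access.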
Product Name: IQVIA Human Assisted Review Tool (HART)
IQVIA NLP Human Assisted Review Tool ("HART") is a new web-based tool to review and curate information extracted using NLP from medical, scientific, and clinical free text, generating clean, structured, normalized data. HART provides data scientists, annotators, and analysts with an effective method to further increase the precision of their natural language processing (NLP) workflows and remove noise from the dataset. HART allows users to manage exceptions and outliers surfaced from within unstructured data by removing them from critical outputs. It provides a mechanism to drive efficiency and accuracy by feeding exceptions back into iterative processes that continually improve the NLP algorithms; each iteration improves the training data with higher-quality features and values. HART enables users to manage multiple result sets for project work that encompasses several inputs. The user can dip into any result set to accept or reject the automatically detected hits. Each hit is presented with a snippet of information to provide context for the simple-to-curate cases; for more complex hits, seeing the terms highlighted in the context of the full document ensures that curators have all the data needed to make the right choice. Partial matches can be corrected by changing the normalized term and/or the span of the match, and new features can be created by adding new matches with their own normalized terms. HART is available for installation in Linux or Docker environments and is fully compatible with other IQVIA NLP offerings (such as the IQVIA NLP APIs).
Product Name: Sartorius - Cell Insights
Cell Insights bioprocess simulation combines an intuitive user interface with a guided workflow between the three parts of the application:
1. Model Configuration: a comprehensive list of parameters that the user configures.
2. Model Fitting: after the model is created, the user can fit the model and observe on a series of plots how the experimental data for all defined and calculated parameters correlates with the fitted model.
• Several plot windows with drop-down menus allow the user to investigate and compare parameters of interest.
• The application supports user-added custom functions for metabolite and titer fitting.
3. Simulation: the user can simulate an experiment based on the fitted model. Two types of simulations are available:
• Perfusion Cell Line Selection fits a separate model for each batch. The model is limited to main growth kinetics and a simple productivity model, with no metabolites or inputs.
• Virtual Bioreactor fits a single model for all batches. This is the full model with maximum flexibility, including the ability to model substrate limitations and the influence of process shifts (such as temperature, pH, and feeding strategy).
Product Name: MemVerge Memory Machine
MemVerge Memory Machine is the first in a new class of Big Memory software that virtualizes DRAM and persistent memory so the pool of lower-cost memory can be accessed without changes to an application. The software builds on its transparent memory service with in-memory data services that allow terabytes of cell data to be managed at the speed of memory.
Memory Machine Standard Edition:
• Transparent Memory Service: virtualizes DRAM and PMem and makes the pool of mixed memory appear to the application as familiar DRAM.
• Memory Tiering Service: dynamically places hot data in DRAM and warm data in persistent memory.
• Memory Quality of Service (QoS): allows "noisy neighbor" applications to be isolated and guarantees memory bandwidth to the most important applications.
Memory Machine Advanced Edition:
• All of the features in Standard Edition.
• ZeroIO In-Memory Snapshots: copy data from DRAM to either persistent memory or storage, and are the foundation of the in-memory data services that allow data management at the speed of memory.
• AppCapsules: in-memory snapshots that capture an entire application state, including registers, cache, GPUs, and storage, allowing an application to quickly restart after a system crash, burst to the cloud, or move from cloud to cloud.
• Local and Remote Memory Replication: in-memory snapshots used to instantly replicate up to terabytes of data for disaster recovery, or to clone data for parallelism.
• Rollback: with frequent snapshots, researchers can easily change a parameter in their pipeline by rolling back their application to a snapshot at a specific point in time.
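The snapshot/rollback workflow can be illustrated with a toy sketch. Memory Machine does this transparently at the memory level for an entire application; this illustration only captures the user-visible idea of checkpointing state and restoring it.

```python
import copy

class Snapshotter:
    """Toy in-memory snapshot/rollback: stores deep copies of state
    so a pipeline can return to an earlier point in time."""
    def __init__(self):
        self.snapshots = []

    def snapshot(self, state):
        self.snapshots.append(copy.deepcopy(state))
        return len(self.snapshots) - 1   # snapshot id

    def rollback(self, snap_id):
        return copy.deepcopy(self.snapshots[snap_id])

state = {"params": {"alpha": 0.1}, "step": 0}
s = Snapshotter()
sid = s.snapshot(state)         # checkpoint before the experiment
state["params"]["alpha"] = 0.5  # try a new parameter
state["step"] = 100
state = s.rollback(sid)         # restore the earlier checkpoint
print(state)  # -> {'params': {'alpha': 0.1}, 'step': 0}
```

With frequent checkpoints, re-running a pipeline with a changed parameter means rolling back rather than recomputing from scratch.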
Product Name: metaphactory 4.5
metaphactory is a knowledge democratization platform that supports collaborative knowledge modeling and knowledge generation and enables on-demand citizen access to consumable, contextual, and actionable information. It enables everyone in the enterprise, from knowledge graph experts and data engineers to business users and domain experts, to participate in the knowledge generation, maintenance, and consumption process. metaphactory builds on semantic Knowledge Graph technology and the FAIR Data principles and allows customers to integrate various meta-layers across various dimensions in a single graph, based on an open and standardized technology stack:
* Data models in the form of ontologies (OWL and SHACL) and shared terminology (SKOS).
* The data itself, or virtualized access thereto.
* Dataset descriptions (or data catalogs) based on open and extensible W3C standards (e.g., DCAT) to make the data discoverable, accessible, and traceable.
Ultimately, this transforms the creation of Knowledge Graphs into a streamlined, end-to-end process where all relevant stakeholders are equally involved. Additionally, because metaphactory is based on open standards, it allows for the reuse of public ontologies, vocabularies, and datasets, such as those published by EMBL-EBI. With metaphactory 4.5, released in April 2022, the platform provides an even tighter integration between ontologies and vocabularies by allowing OWL classes to be restricted to specific SKOS vocabularies, supporting truly connected knowledge graphs.
Product Name: Modak Nabu 2.5
Modak Nabu is an integrated data engineering platform that converges data ingestion, data profiling, data indexing, and data exploration into a unified platform, driven by metadata. This metadata-driven approach helps automate and augment many repetitive tasks. Modak Nabu uses data spiders to automate the capture of technical metadata for both structured and unstructured data sources. Modak Nabu provides:
1. Automated data pipelines – simplifies the process of onboarding data from a variety of sources to different cloud environments.
2. Automated data discovery and profiling – democratizes access to data assets by making them accessible and understandable.
3. Monitoring and visibility – provides a real-time view of the progress of data management tasks for different stakeholders, from operations to the executive team.
4. Self-service data management – complex data management tasks can be executed using a simple, intuitive interface with adequate governance controls.
5. Tags in data connections and data pipelines – to add context to data engineering activities, Nabu 2.5 provides the ability to add tags to data connections and pipelines.
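Automated data profiling of the kind listed above boils down to computing summary metadata per column. As a generic sketch (not Modak's implementation), a minimal profiler might count nulls and distinct values and infer value types:

```python
def profile(records):
    """Minimal column profile over a list of dict rows:
    null count, distinct count, and observed value types."""
    columns = {}
    for row in records:
        for col, val in row.items():
            stats = columns.setdefault(col, {"nulls": 0, "values": set()})
            if val is None:
                stats["nulls"] += 1
            else:
                stats["values"].add(val)
    return {
        col: {"nulls": s["nulls"],
              "distinct": len(s["values"]),
              "types": sorted({type(v).__name__ for v in s["values"]})}
        for col, s in columns.items()
    }

rows = [{"id": 1, "site": "Boston"},
        {"id": 2, "site": None},
        {"id": 3, "site": "Boston"}]
print(profile(rows))
# -> {'id': {'nulls': 0, 'distinct': 3, 'types': ['int']},
#     'site': {'nulls': 1, 'distinct': 1, 'types': ['str']}}
```

A platform like Nabu runs this kind of profiling at scale across sources and surfaces the results for discovery.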
Product Name: Nanome
Nanome's VR technology helps increase R&D productivity and innovation by increasing global cross-collaboration, facilitating 3D understanding of molecular structures, decreasing the risk of information loss, decreasing the cost of identifying optimal solutions, and democratizing structural analysis. Nanome users enjoy earlier market entries, leading to a stronger market position, more productive R&D drug discovery labs, significant additional sales, and increased profitability. A substantial number of big pharma companies already use Nanome.
Product Name: DISQOVER
Find, connect, and activate relevant data fast with DISQOVER by ONTOFORCE. DISQOVER is a powerful knowledge discovery platform for life sciences and healthcare organizations. It solves complex use cases over many heterogeneous data sources with simple, intuitive, and customizable dashboards. DISQOVER filters results in real time and visualizes the data using responsive and interconnected charts. The data ingestion engine transparently merges and links siloed data sources, including third-party and public sources, into a single, densely linked knowledge graph, creating links between data entities and inferring new links based on existing ones. Users can aggregate data from unlimited sources, including public and third-party data, integrating their own data with what's available in the public domain. DISQOVER comes with 140+ integrated data sources that are ready to use and is updated with new data sources regularly. DISQOVER's data ingestion engine uses unique proprietary technology to efficiently process extensively linked big data, relying on a partially denormalized triple store with column-oriented storage. The engine is designed for ultra-fast bulk linking and inferencing: each action is executed as a sequence of full table scans, leveraging fast block-sequential I/O and temporary in-memory indexes. As a result, on equivalent hardware, DISQOVER can integrate and link data into a semantic knowledge graph much faster than conventional technology, such as relational databases or triple stores. The DISQOVER platform embraces an open architecture for data and functionality and is designed to be deeply integrated into a rich and complex enterprise IT solutions ecosystem.
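The "inferring new links based on existing ones" step can be pictured with a toy composition rule. This is illustrative only: ONTOFORCE's engine executes such operations as bulk full-table scans rather than nested loops, but the logical effect is the same.

```python
def infer_links(edges):
    """Infer new links by composing existing ones:
    if (a, b) and (b, c) are linked, add (a, c). Repeats to a fixpoint."""
    links = set(edges)
    changed = True
    while changed:
        changed = False
        for a, b in list(links):
            for b2, c in list(links):
                if b == b2 and (a, c) not in links:
                    links.add((a, c))
                    changed = True
    return links

edges = [("geneX", "pathwayY"), ("pathwayY", "diseaseZ")]
print(infer_links(edges))  # includes the inferred ("geneX", "diseaseZ")
```

In a dashboard, such an inferred link lets a user filter from a gene straight to associated diseases even though no source stated that relation directly.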
Product Name: Cerella 1.0
Cerella is a proven artificial intelligence platform for active learning in drug discovery, supporting small molecule drug discovery from early hits to nominating a preclinical candidate. Its deep learning approach, and the models and insights it generates, accelerate discovery projects and increase the rate of success. Cerella connects to a corporate registry, ingesting data for compounds and endpoints across projects. Learning the relationships between endpoints, Cerella generates high-quality models for endpoints where conventional approaches fail, and it automatically updates as new data become available. Cerella insights:
• Proactively highlight high-quality compounds, working effectively with noisy drug discovery data with missing values
• Increase confidence in decision making; identify hidden opportunities; flag outliers and false negatives
• Translate AI insights into the planning of experiments, focussing on the most valuable measurements
Cerella is a cloud-native platform that scales from individual project data sets through to global pharma repositories. Via a flexible API, Cerella seamlessly integrates with user-facing frontend applications, fitting into existing workflows and facilitating quick adoption. A key advantage of Cerella is that it considers all available endpoints to build models and learns the relationships between endpoints to improve predictive power. The latest version provides access to the 'Importance Matrix', which visualises the key data that drives model performance for any individual endpoint, helping hypothesis-building on what impacts a compound's performance, particularly in complex late-stage assays. Where many other AI approaches are black boxes, Cerella offers a degree of transparency, shedding light on its inner workings and building users' confidence.
Product Name: PIES (Pangaea’s Intelligence Extraction and Summarization)
Pangaea's Intelligence Extraction and Summarization (PIES) is a first-of-its-kind unsupervised AI product that combines new synthetic data generation capabilities, pre-trained deep learning models, an automatic machine learning (AutoML) framework, validation by human experts (clinicians), and a federated privacy-preserving deployment methodology. Unlike supervised NLP or NLG solutions and services, PIES has been proven to extract and summarize new intelligence from textual health data in a privacy-preserving manner with transformatively high accuracy, without the need for large volumes of input data. As validated and scheduled to be presented at Bio-IT 2022 by US oncologists, 26 features were extracted with 97.3% accuracy (100% accuracy for 14 features) through PIES despite a small input dataset of 21 patients. In a second study (also scheduled for Bio-IT 2022, by UK oncologists), new knowledge from PIES helped identify six times more cancer patients with cachexia (muscle loss), including those who were undiagnosed, miscoded, and at risk; this is expected to halve the cost of treatment and save $1 billion annually for the UK. PIES also helped clinicians summarize patient records, saving 90% of the time typically spent reviewing entire records manually. This novel combination of unsupervised extraction and summarization also helped researchers from Genentech and Roche automatically generate clinical narratives for pre-clinical and regulatory reports based on tables, figures, and listings, saving 90% of their time; that work was presented during Bio-IT 2021. PIES has also demonstrated success in autoimmune diseases, cardiovascular disease, mental health, and rare diseases.
Product Name: Signals VitroVivo 3.1
PerkinElmer Informatics Signals VitroVivo 3.1 allows users to design data processing workflows in a low-code/no-code platform. A workflow starts by reading the plate data and continues by adding metadata specific to each read-out and additional well parameters. Data normalization and initial data quality control steps are then executed, allowing for cross-campaign standardization of results. In one or more consecutive steps, the resulting data is fed into automated individual outlier detection (using, e.g., GESD) and curve fitting in an engine that comes out of the box with all industry-standard multi-parameter curve fit models while remaining extendable by the scientist. The resulting curve fits are then stored and indexed in Signals Data Factory for subsequent SAR analysis. Data processing workflow design is made possible through PerkinElmer Informatics' own application data framework on top of TIBCO Spotfire, which captures all steps and calculations of the workflows as JSON definitions. To scale up the data processing, QC, and calculation, the user can, from this release on, transfer the entire workflow from within Signals VitroVivo into Signals Data Factory data pipelines, which run on a Kubernetes cluster of Spark, Elastic, and Mongo. The Spark engine performs exactly the same data processing and calculations designed in the interactive GUI and publishes the results for further use in SAR analyses, which are again interactive and part of Signals Inventa, which also sits on top of Signals Data Factory.
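Among the industry-standard multi-parameter curve fit models referenced above, the four-parameter logistic (4PL) is the workhorse for dose-response data. As a generic illustration (not PerkinElmer's implementation), here is the 4PL equation and why its inflection parameter reads out directly as an IC50/EC50:

```python
def four_pl(x, a, d, c, b):
    """Four-parameter logistic: a = top asymptote, d = bottom asymptote,
    c = inflection concentration (IC50/EC50), b = Hill slope."""
    return d + (a - d) / (1 + (x / c) ** b)

# At x == c the ratio (x/c)**b is 1, so the response is exactly halfway
# between the two asymptotes -- which is why c is reported as the IC50.
top, bottom, ic50, hill = 100.0, 0.0, 1e-6, 1.2
mid = four_pl(ic50, top, bottom, ic50, hill)
print(mid)  # -> 50.0
```

A fitting engine estimates a, d, c, and b from the normalized plate data; outlier removal (e.g., GESD) before fitting keeps single bad wells from skewing the estimated IC50.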
Product Name: cunoFS v1.0
The vast majority of applications, including scientific ones, deal only with file-based storage. Even applications that speak object storage usually support just one API, such as S3 but not Azure. cuno lets organizations run POSIX-compatible tools and workloads on their object storage with scalability and performance characteristics that can exceed file-based storage. Running with cuno on AWS, applications have been benchmarked reading from and writing to S3 at up to 50 Gbps each way, per instance, far exceeding EBS and EFS throughputs. Scaling up to many instances, cuno has been benchmarked with IOR at 3.5 Tbps on AWS S3. Even applications that natively support object storage, such as samtools, run up to 10x faster with cuno instead. Importantly, cuno does not deploy a gateway or need servers. It doesn't scramble or modify content, which means files stored with cuno are directly accessible to object-native applications, and vice versa. Performance scales with instances, up to the limits of the object storage itself. It doesn't require admin access, a kernel module, or a FUSE mount, and can be easily injected into Docker/Singularity and other containerised environments. Changing access-control permissions or ACLs on the POSIX side can even update access control on the native object API interface (currently only available for fully S3-compatible object storage). cuno unleashes the power of object storage, enabling organizations to:
• Run workloads on local object storage
• Burst or migrate workloads to the cloud
• Run across multiple cloud vendors
• Perform devops with ease
• Save significant time and money
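Because cuno does not scramble or modify content, the translation between the two worlds is essentially one of naming: a POSIX path maps onto a bucket and key. cuno's actual mechanics are proprietary; the sketch below, with a hypothetical mount table, only illustrates the naming side of that translation.

```python
def path_to_object(posix_path, mounts):
    """Map a POSIX path under a mount prefix to (bucket, key).
    Purely conceptual: shows why object-native tools can still see
    the same objects that POSIX tools read and write."""
    for prefix, bucket in mounts.items():
        if posix_path.startswith(prefix + "/"):
            return bucket, posix_path[len(prefix) + 1:]
    raise ValueError(f"{posix_path} is not under an object-storage mount")

mounts = {"/cuno/s3/genomics": "genomics"}  # hypothetical mount table
print(path_to_object("/cuno/s3/genomics/runs/sample1.bam", mounts))
# -> ('genomics', 'runs/sample1.bam')
```

Since the key is the path and the content is untouched, an object-native tool pointed at `s3://genomics/runs/sample1.bam` sees exactly the file the POSIX tool wrote.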
Product Name: Pluto (version 1.0)
Making its Bio-IT World Conference debut this year, Pluto is the collaborative, cloud-based life sciences platform that makes raw experimental data and all downstream results interactive, searchable, and securely shareable. Built for life science organizations of all sizes and industries, Pluto delivers an intuitive home for managing all of your organization's projects and collaborating securely between internal collaborators and external vendors. Pluto stores all of your lab's experimental data (including all raw data files) for any high- or low-throughput assay your lab runs, and allows wet lab scientists to run bioinformatics analyses in a few clicks without needing any of your own code or cloud infrastructure. Pluto supports 35+ R&D pipelines out of the box, but unlike "bioinformatics tools," Pluto has been designed from the ground up to accelerate collaborative experimental workflow management. In Pluto, program managers can plan experiments and assign them to the scientists generating the data. Scientists can drag and drop in raw data and run powerful bioinformatics analyses with a few clicks. Most importantly, program- and executive-level stakeholders can view real-time progress reports on operational and scientific metrics, query experimental findings, and organize the results by program to demonstrate the company's impact over time. Pluto offers enterprise-grade security and includes optional integrations with your organization's existing ELN, LIMS, project management software, and SAML SSO.
Product Name: The Quantori Flow Platform (v1.0)
The Quantori team, working together with one of its flagship clients, Neumora Therapeutics, is creating a data processing and analytics engine, the Quantori Multimodal Machine Learning (QMML) platform, which will be proprietary to Neumora. The platform is the first framework for writing machine learning pipelines as Python-like scripts instead of defining computational graphs (as in tools such as Kedro or SageMaker). As a result, QMML pipelines are easily readable and analyzable by the many verification tools developed for Python. In addition, by providing this high-level abstraction, the framework enables data scientists to write pipelines in a cloud-oblivious way, eliminating the requirement that they understand the inner workings of specific cloud providers. Finally, the modular structure of the framework allows people with different domain knowledge (such as specialists in genomics, imaging, or proteomics/transcriptomics) to work together and easily reuse the tools they develop. The QMML platform thus enables Neumora to tackle problems that were previously computationally prohibitive, and to do so in a fraction of the time it previously took, while giving data scientists significant improvements in processing and insight generation. This allows more studies to be done in less time, enabling Neumora’s pursuit of new cures for brain diseases.
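The contrast between script-style and graph-style pipelines can be sketched briefly. QMML is proprietary, so the decorator name (`pipeline_step`) and the style below are illustrative assumptions, not the actual QMML API; the point is that a pipeline written as ordinary Python remains analyzable by standard Python tooling.

```python
# Hypothetical script-style pipeline: each step is a plain function
# that a framework could schedule on any cloud backend.

def pipeline_step(fn):
    """Illustrative marker decorator; a real framework could use this
    to discover steps and dispatch them to cloud workers."""
    fn.is_step = True
    return fn

@pipeline_step
def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

@pipeline_step
def top_feature(fractions):
    return max(range(len(fractions)), key=fractions.__getitem__)

# The pipeline reads as ordinary Python, so linters and type checkers
# can analyze it; a graph framework would instead require wiring nodes
# into an explicit DAG object.
fractions = normalize([2, 3, 5])
assert top_feature(fractions) == 2
```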
Product Name: ActiveScale Cold Storage
Quantum’s ActiveScale Cold Storage is an advanced solution based on the ActiveScale Object Storage Platform. By combining advanced object storage software with hyperscale tape technology, ActiveScale provides the industry’s most advanced cold storage archiving solution, with unmatched levels of performance, durability, and storage efficiency. Available as software-defined storage, as an appliance, or as a fully managed service, ActiveScale is the industry’s first and only object storage platform and service offering architected for both active and cold data, deployable wherever your data lives, whether that’s an in-house data center, a colocation facility, or a hosted IT environment. These offerings give biomedical organizations a new level of capability not only to store these growing data sets but also to:
• Securely maintain in-house control of these massive digital assets, from petabytes to exabytes
• Easily access this data to unlock and enrich its value
• Be confident in the preservation and protection of data over years and decades
ActiveScale Cold Storage provides:
• Easy, affordable access to S3- and S3 Glacier Class-compatible storage
• Unlimited scalability for both active and cold data sets
• The ability to easily restore objects from cold to active storage at scale in minutes, not hours or days
• 100x to 1Mx greater durability and up to 40% better storage efficiency than two-copy solutions
• Cold storage costs up to 80% lower than all-disk solutions
• 30+% savings relative to public cloud cold storage services
• Outsourced storage operations and simpler management without access fees
• An AWS Outposts-validated solution with S3-compatible replication for hybrid cloud environments
SciBite, an Elsevier company
Product Name: CENtree 2.1
CENtree, a centralized, enterprise-ready resource for ontology management, transforms maintaining and releasing ontologies. CENtree recognizes and ingests multiple publicly available ontologies. Ontology browsing is oriented around a tree view tailored to visualize biomedical ontologies, their entities, and their relationships. CENtree 2.1 expands SciBite’s commitment to open standards by supporting import and export of terminologies in the W3C Simple Knowledge Organization System (SKOS) format, adding to our existing support for the Web Ontology Language (OWL) format. CENtree enables the creation of custom ontologies from public and proprietary ontologies; the steps for creating these application ontologies are stored and replayable for ease of update. Adding new classes and editing/enriching existing classes is simplified, enabling contributions from a broader audience. Edits are captured using open standards to make the resulting ontology reusable elsewhere. A deep learning component helps users edit by suggesting possible relationship connections for a given class. Provenance is captured for every edit action, with user comments to explain edits and the ability to reject or undo an undesired change. A role-based governance model controls who can effect changes. CENtree employs a Git-like approach to managing change with our custom versioning engine, empowering an organization to keep current with the latest public versions while internally adding new content, such as extending with new classes or fixing errors. The CENtree interface is built on a flexible API which can be integrated directly into existing systems. As a SaaS solution, CENtree is secure and simple to deploy; Professional Services supports integrations, ontology curation, and advice.
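To make the SKOS support concrete, the sketch below builds a minimal SKOS concept of the kind such a tool can import and export, using only the Python standard library. The ontology IRIs (`example.org`) are placeholders, not SciBite resources.

```python
# Build a minimal SKOS concept document (RDF/XML) with the stdlib.
import xml.etree.ElementTree as ET

SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
ET.register_namespace("skos", SKOS)
ET.register_namespace("rdf", RDF)

root = ET.Element(f"{{{RDF}}}RDF")
concept = ET.SubElement(root, f"{{{SKOS}}}Concept",
                        {f"{{{RDF}}}about": "http://example.org/onto#Hepatocyte"})
ET.SubElement(concept, f"{{{SKOS}}}prefLabel").text = "hepatocyte"
# skos:broader links the concept to its parent, mirroring the
# parent/child structure a tree-view browser displays.
ET.SubElement(concept, f"{{{SKOS}}}broader",
              {f"{{{RDF}}}resource": "http://example.org/onto#Cell"})

xml_doc = ET.tostring(root, encoding="unicode")
print(xml_doc)
```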
Product Name: CellAssist 50
Thrive announced the CellAssist 50 (the CA50) in March, an advanced imager that automatically builds databases with extensive quantities of time-series images of its 50 cell culture plates. The CA50 provides previously unavailable capabilities. It automatically images 50 plates around the clock and remotely, on any schedule defined by the researcher; this enables critically important time-series studies so that researchers can characterize biologic processes.
• Captures many thousands of 5-megapixel images at up to 100 focal planes, each 2 µm to 50 µm apart (user-selectable), with a z-range up to 3.5 mm.
• Users track single cells and colonies over time with excellent image registration (within tens of micrometers across scans).
• Rapid image acquisition (approximately 100 to 250 milliseconds per image).
• Each scan captures 500 to 30,000 5-megapixel images (1.5 to 90 gigabytes after compression) at up to 100 focal planes; some customer sites are generating 2 to 3 terabytes per week.
CellAssist Software provides cell visualization and analysis capabilities and precisely tracks single cells and colonies over time, with metrics including growth rates, confluence, colony and cell counts, and sizes. In the last year, Thrive has introduced the following new features, among many others:
• Advanced project management to better organize, define, and copy projects
• Upgraded barcode documentation system with improved time-stamping of logs, each associated in the database with its plate
• New algorithms for highly accurate measurement of cell confluence (density)
• Email/SMS notifications alerting users to error conditions
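The stated scan sizes are internally consistent under a simple assumption of roughly 3 MB per compressed 5-megapixel image (our assumption; the announcement gives only the totals), as a quick back-of-envelope check shows:

```python
# Sanity-check the stated per-scan data volumes, assuming ~3 MB per
# compressed 5-megapixel image (an assumption, not a published figure).
MB_PER_IMAGE = 3

for n_images, stated_gb in [(500, 1.5), (30_000, 90.0)]:
    est_gb = n_images * MB_PER_IMAGE / 1000  # MB -> GB (decimal)
    # Both endpoints of the stated 1.5-90 GB range match within 10%.
    assert abs(est_gb - stated_gb) <= 0.1 * stated_gb
```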
Product Name: TileDB Cloud
TileDB Cloud is a universal database that optimizes the analysis of all data types, for all applications. Manage tables, images, video, genomics, ML features, metadata, even flat files and folders — all in a single powerful solution. The database is delivered as a managed service that promotes security and collaboration, eliminating infrastructure overhead and offering massive TCO savings. TileDB Cloud allows users to retain the ownership of their data. Users simply connect their object storage account to TileDB Cloud, and the cloud-native format of TileDB arrays lets them analyze data in-place, without additional indexing or data movement. This approach lets users share large datasets alongside their code, facilitating collaboration while TileDB Cloud ensures security through access control settings that are logically scoped to analysis workflows. TileDB Cloud is 100% serverless and deploys compute resources automatically, no instance upgrades or memory management required. Users can optimize processing at massive scale by distributing operations in task graphs. This architecture enables out-of-core computations on large datasets in an efficient pay-as-you-go model. Since the bulk of processing happens on object storage, egress costs are minimized and users are billed on the amount of time spent running TileDB Cloud’s compute resources. TileDB Cloud offers the ability to version any data asset and the ability to logically group assets together to simplify management of large projects. Data versioning also includes the ability to time travel across datasets, Jupyter notebooks, UDFs, ML models and files for increased data governance and auditability.
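The task-graph idea behind distributed operations can be sketched in a few lines. This is a conceptual toy, not TileDB Cloud’s actual API: work is expressed as a DAG of named tasks, and each task runs once all of its dependencies have produced results (in the real service, on serverless compute).

```python
# Toy task-graph executor: run each task once its dependencies are done.

def run_task_graph(tasks, deps):
    """tasks: name -> callable taking dep results; deps: name -> [dep names]."""
    results, done = {}, set()
    while len(done) < len(tasks):
        for name in tasks:
            if name not in done and all(d in done for d in deps.get(name, [])):
                args = [results[d] for d in deps.get(name, [])]
                results[name] = tasks[name](*args)
                done.add(name)
    return results

# Two independent loads fan in to a merge step; a real system could run
# each branch on a separate serverless worker in parallel.
results = run_task_graph(
    {"load_a": lambda: [1, 2], "load_b": lambda: [3], "merge": lambda a, b: a + b},
    {"merge": ["load_a", "load_b"]},
)
assert results["merge"] == [1, 2, 3]
```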
Product Name: R&D Intelligence Assistant (RIA)
The Vivpro RIA platform has connected and curated 40+ structured and unstructured data sources across 200K+ drugs, biologics, and devices approved in the US and EU. The platform holds 750K+ documents and delivers a natural-language service, ARIA, for users to obtain contextual intelligence at their fingertips. The more users use it, the smarter it gets: it associates each answer with the question and the question’s features, and it highlights the contextual nature of the information. Based on user experience, ARIA has the following value proposition:
1. Drug development strategists, investors, and regulatory professionals can find valuable competitive product information in seconds, without the burden of using exact search terms; ARIA is trained in pharma nomenclature. For example, a search for differences in efficacy results across regions lists all products (along with links to regulatory documents) for which this was an issue.
2. Busy regulatory professionals can find precedent-setting regulatory decisions in seconds, even during key team meetings, which would be almost impossible to find manually: for example, pediatric, single-arm, or rare-disease precedents.
3. Regulatory professionals are busy and face information overload. ARIA results not only give these professionals access to the regulatory documents but take them to the exact location within a large document, so users can focus on regulatory strategy.
4. Drug development teams can search FDA Advisory Committee transcripts for discussions of specific topics and for committee members’ styles of questioning, helping busy regulatory professionals prepare their teams for Advisory Committee meetings.