Bio-IT World Announces Best of Show Finalists, People’s Choice Contenders

May 9, 2023

By Bio-IT World Staff

May 9, 2023 | Bio-IT World today announced the companies contending for the 2023 Best of Show Awards. Thirty-three new products will be exhibited on the floor for attendees to view at the Bio-IT World Conference & Expo, May 16-18, at the Hynes Convention Center in Boston. The Best of Show Awards will honor several new products live at the event during the Best of Show Awards Reception beginning at 4:40 pm on Wednesday, May 17.

The People’s Choice Award is open to all entered companies and will be voted on by Bio-IT World attendees. All entered companies have described their new products (below) and will have them on display for attendees to learn about and vote on. Voting will take place via a code available at each exhibitor’s booth.

The other Best of Show awards are chosen by a panel of peer judges. These judges have already reviewed the field of entries and have narrowed down their choices to 12 finalists. They will visit these finalist companies during the event and choose their final awardees. The finalists for the judges’ prizes are:

  • DataCicada’s Tympana
  • Deloitte’s ConvergeHEALTH CognitiveSpark for Clinical
  • Deloitte’s Miner Evidence
  • Platform for Life Sciences
  • Genomenon’s Mastermind Discovery
  • Genomenon’s Disease Prevalence
  • GitHub, Inc’s GitHub Copilot
  • Monomer Bio’s Monomer Cell Culture Platform
  • Ontotext’s Target Discovery v1.1
  • Starfish Storage’s Starfish 6.5
  • TetraScience’s Tetra Scientific Data Cloud
  • The Hyve’s Fairspace

Here are the product descriptions for all 33 Best of Show contenders:

Capish Reflect Version 1.2
Booth: 723

Capish Reflect is a web-based yet server-agnostic software platform for the exploration and visualization of scientific data. Queries are not scripted but instead made from interactive graphics such as lists and charts. Capish as a company also provides related consultancy services, such as retrieving the client's trial data from various storage providers, CROs, etc. Our in-house tools for mapping data offer fast and secure harmonization of disparate data to a unifying format, while maintaining trails back to the original data. New for version 1.2 are two turnkey applications tailored for exploring clinical trial data: (1) a FAIR data repository configuration optimized for exploring and comparing completed studies, and (2) an empirically designed best-practice configuration for monitoring trends, pace, and performance in ongoing trials.


Code Ocean
Product: Code Ocean Visual Pipeline Builder, 1.0
Booth: 308

Code Ocean’s Visual Pipeline Builder empowers researchers to focus on their core scientific discovery goals by offering a high-performance computing experience in the cloud with a simple, graphical interface. Researchers can easily create Code Ocean Compute Capsules, then ‘wire’ them together in a seamless manner. This opens the power of the cloud and large-scale parallel computing to scientists without the time and technical demands that were previously required. Code Ocean’s Visual Pipeline Builder creates an environment where researchers can use one of the more popular development environments, such as Jupyter, RStudio, or VS Code, to visually edit a pipeline. The Visual Pipeline Builder makes it easy to build a pipeline, even if researchers don’t have deep coding knowledge. Operating within a familiar, open-source development environment, researchers can manage access to data. All work conducted within this environment is accessible, interoperable, and reproducible. By using open-science tools, the Visual Pipeline Builder enables work and results to be imported and exported in a form that is compatible with the larger open-science ecosystem. All data is secure and shared only with designated, approved collaborators. Code Ocean’s Visual Pipeline Builder makes building pipelines simple by connecting data assets to a graphical user interface (GUI) to manage processes, make desired changes, and view results. This is accomplished with an automatically generated, open-source Nextflow script in a common framework that can be shared with colleagues and used in collaborative efforts.


Copyright Clearance Center (CCC)
Product: RightFind XML
Booth: 811

RightFind XML takes the complexity out of using scientific articles for AI/ML by providing a consistent set of text and data mining rights and associated full-text XML content. With built-in rights and normalized files from a wide variety of publishers, CCC simplifies compliance and content management so leading companies can drive science forward. RightFind XML includes:

  • Access to 50+ publishers’ content
  • 8,000+ journal titles
  • 13 million+ full-text articles in normalized XML format
  • 20 million+ MEDLINE citations and abstracts

All content within the RightFind XML corpus comes with pre-agreed terms from participating publishers that allow organizations to access, store, and mine that content for internal business use. This saves time, energy, and money, which can be better spent analyzing the insights projects generate. Companies can choose between access options, including:

  • Access to the entire full-text corpus via a search interface, with multiple options for creating corpora of content for download. This option is ideally suited to time-bound, focused research projects.
  • Bulk data in XML based on defined periods reaching back years, with ongoing updates as new content is released to our feeds, supporting larger enterprise-scale projects that grow over time. This option is new this year.
  • Access to a curated Open Access (CC-BY) collection via API (including 350K articles NOT in PMC), which can support the training of machine-learning models at lower cost. This option is also new this year.

RightFind XML provides the most comprehensive coverage of full-text article content and text mining rights available on the market today.


Product: Torx 2.0
Booth: 215

The cloud-native Torx platform provides chemistry teams with a centralized location to create, share, and manage compounds, synthesis, and biological testing. By coordinating activities across internal and external teams, it streamlines the drug discovery process while ensuring that only mission-critical information is shared. Torx® V2.0 introduces new features that extend the scientific reach and improve the usability of the platform. We have extended the molecular docking of newly designed compounds to include covalent ligands, enabling medicinal chemists to create a covalently bound 3D pose alongside property predictions and MPO scores to rapidly assess their suitability for progression. The new integration with CAS SciFinder-n enables easy interrogation of synthetic feasibility, reagent availability, and patentability issues. Torx enables project teams to understand and receive live updates on the synthesis of every compound across internal and outsourced teams without PDFs and PowerPoints. This release brings improvements that enable users to find critical information quickly, such as pre-configured quick filters and views of unassigned resources. As compounds are synthesized, Torx automatically submits them to the integral test request system. This release adds dashboards that enable both visualization of existing tests and scheduling of new tests for individual compounds, sets of compounds, or all project compounds, providing unparalleled visibility on test requests. Connecting to corporate data repositories, Torx enables immediate feedback of the latest data into new designs. By merging data from multiple sources, including the original design data, molecule designers have all the information required to gather outcomes and progress quickly to the next round of DMTA.


Product: Tympana
Booth: 924

DataCicada is proud to announce the launch of Tympana, a powerful web-based platform that puts machine learning tools in the hands of domain experts. Tympana assists scientists in building high-quality machine learning models for their complex data, allowing them to scale their expertise and serving as a force multiplier. Designed to be user friendly, Tympana requires no programming or coding skills on the part of the user. Tympana integrates active learning, explainable AI, and generative AI to create robust models and datasets that can be exported for scientific workflows. Tympana currently has proof-of-concept use cases in protein sequences, image object detection, and sensor signal detection, which demonstrate the capabilities of the platform. The active learning strategies utilized by Tympana have demonstrated a reduction in data volume requirements in initial experiments, saving scientists time in achieving comparable results. While it can be easy to teach users at diverse education levels to recognize a specific object within an image, it requires years of training to properly evaluate a sensor reading from a DNA sequencer, an X-ray, a CT scan, or a particle accelerator. These domains require a body of knowledge and experience that does not lend itself to annotation by the general public. Tympana enables scientists to comb through large volumes of data and teach the computer which features are important, putting scientists in the driver’s seat of their own analysis.


ConvergeHEALTH CognitiveSpark for Clinical
Booth: 202

Digital Study Designer - GA Release: Digital study design is a key building block towards achieving a fully connected digital data flow, enabling interoperability across multiple functions and leading to improved data reuse, increased transparency, and accelerated time to impact. Our GA release allows clinicians and clinical data scientists to optimize and accelerate creation of a digital study design, which is then shared seamlessly to enable disciplined downstream automation (seamless and structured EDC build automation, reuse of the digital study design to author protocol documents) by embracing standards-aligned digital exchange models (e.g., ODM, USDM, ALS). The overall solution is built on a modern technology stack leveraging recent advances: a graph-based metadata repository (enabling semantic integration between established and evolving CDISC standards); an AI/ML algorithm store to harness intelligence from historical protocols and infuse it into user-facing workflows; and seamless connectivity across business functions.


ConvergeHEALTH Miner Evidence, Release 4.5
Booth: 202

ConvergeHEALTH Miner Evidence is a market-proven, cloud-based platform designed to help transform how patient-level data (e.g., real-world data (RWD), patient registry, clinical trial, and omics/biomarker data) is managed, analyzed, and shared across the product life cycle. This enables organizations to dramatically decrease the cycle time for insight and evidence generation, foster collaboration, and amplify the impact of these insights and evidence. Miner Evidence announced its latest product release (v4.5) in March 2023. One of the key capabilities in this release is an artificial intelligence (AI) based capability called the ‘Automated RWD Profiler’. It automates data characterization and profiling of RWD regardless of type and format, dramatically reducing the time and effort required and providing researchers with valuable information at their fingertips to inform data set selection. Now, researchers can be more confident in identifying the right data, at the right time, for the right research question without conducting ad-hoc work to profile data. The ML-based profiler is delivered as a containerized service and is designed to run on top of an AWS cloud-based RWD data lake or hub. It leverages AWS services such as Elastic Container Registry (ECR), Elastic Container Service (ECS), Lambda, Relational Database Service (RDS), and Elasticsearch to produce profiling results and index them for faster retrieval by researchers.


Elsevier, Inc.
EmBiology v1.0
Booth: 702

With its biology-based investigational framework, EmBiology allows researchers to explore cause-and-effect relationships associated with biological processes, allowing faster and more confident interpretation of experimental results in the context of published information.

  • Get the targeted biological relationship data needed, clearing a path through vast amounts of research.
  • Gain invaluable insight about cause-and-effect relationships in published experimental results, extracted from the full text of high-impact journals.
  • Make more informed decisions in disease biology and bring effective, competitive therapies to market faster.

EmBiology enables researchers to filter through millions of biological relationships mined from multiple publishers:

  • Elsevier full-text articles (5 million from 936 journals)
  • Third-party full-text articles (2.2 million from 939 journals)
  • PubMed abstracts (34 million from 14,224 journals)
  • (430 thousand)

This relationship data can also be exported for use with bioinformatics tools for even more insights.

Product: Platform for Life Sciences
Booth: 210

The Platform for Life Sciences transforms health and scientific data into knowledge and insights in real time. The platform enables solutions using a hybrid AI approach that combines machine learning and natural language understanding with standard and curated ontologies and knowledge models supporting drug discovery, clinical trial insights, key opinion leader identification, and scientific publication insight (announced Nov 9, 2022), as well as large language models such as GPT (announced Feb 15, 2023). Life sciences and pharmaceutical teams rely on the platform to:

  • Extract connections between biomedical entities in literature for in-depth causality analysis to support researchers;
  • Monitor clinical trials and social media sources, filtered by any combination of indication, drug, mechanism of action, sponsor, or geography, to gain insight for clinical trials;
  • Accelerate the quality control process of preclinical reports prior to their submission to regulatory bodies;
  • Identify experts and influencers beyond your network to drive therapeutic awareness with both established leaders and rising stars;
  • Scan the latest scientific and biopharma news on drug approvals, trials, conferences, and more to ensure users get instant updates on their topics of interest;
  • Validate scientific claims against trusted knowledge sources;
  • Analyze safety signals on adverse events in medical cases and compare them with known side effects as reported to regulatory authorities.


Product: Open Data Manager, Version 1.53
Booth: 824

Genestack’s Open Data Manager is an innovative, award-winning, cloud-based data hub that enables automatic cataloging, curation, and optimization of ALL your scientific data. The previous iteration of the system (Omics Data Manager) handled only specific data types (low-throughput, gene expression, flow cytometry, and genomics); our latest release opens the system up to ALL scientific tabular data. Scientists can connect their data sources, map them to common data models that they control, and automatically harmonize and curate data to ontologies and vocabularies of their choice. All the data is deeply indexed and optimized for search and streaming for a variety of purposes: online analytics, visual dashboards, machine learning, and more. By centralizing and harmonizing disparate data sources with ease, ODM allows users to create a single source of truth for all scientific data, streamlining data management and improving data quality. With streamlined, actionable data, scientists can easily explore new ideas, generate hypotheses, and drive innovation up to 50% faster. ODM gets data flowing into powerful analytics and visualization tools straightaway, so teams can derive insights and make data-driven decisions with confidence. ODM has already had a significant impact, saving our clients millions of dollars, thousands of person-hours, and unnecessary lab work each year.


Genomenon Disease Prevalence (curated content)
Booth: 703

For companies targeting rare diseases, this new curated dataset and report provides a more complete understanding of the genetic prevalence of autosomal recessive (AR) diseases. 

It combines Genomenon’s Mastermind® Genomic Language Processing (GLP) technology, proprietary curation tools, and exhaustive knowledgebase of human genomic evidence with our highly specialized genomic curation services to estimate the genetic prevalence of AR rare diseases. Following a rigorous approach, we:

  • Aggregate and classify all variants in the causative gene of interest. Our Mastermind Genomic Language Processing (GLP) identifies, extracts, and standardizes all published variants from the medical and scientific literature. Each variant is then interpreted according to gold-standard clinical guidelines by our genome scientists.  
  • Select variants to include in the genetic prevalence calculation. Pathogenic and Likely Pathogenic variants, as well as relevant Variants of Uncertain Significance (VUS) are included in the prevalence calculation based on understanding of the associated disease, published variant classifications, allele frequency, and data in clinical and functional studies.
  • Calculate the estimated genetic prevalence. Overall and population-specific allele frequencies of selected variants are downloaded from the Genome Aggregation Database (gnomAD). The Hardy-Weinberg equation is used to estimate the frequency of a disease-causing genotype. Multiple estimates are produced to present a spectrum based on the level of confidence that the included variants are truly disease-causing.

  • Deliver a comprehensive disease prevalence report. This report provides an executive summary of genetic prevalence estimates, a summary of the gene and disease of interest, any corresponding considerations for interpreting estimates, and a publication-ready description of methods.
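
The Hardy-Weinberg step above can be sketched numerically. A minimal illustration follows; the allele frequencies here are hypothetical placeholders, not real gnomAD values, and real estimates would account for population structure and classification confidence as described.

```python
# Sketch of the genetic-prevalence estimate described above, assuming
# autosomal recessive inheritance and Hardy-Weinberg equilibrium.
# Allele frequencies are illustrative placeholders, not real gnomAD data.

def ar_genetic_prevalence(allele_freqs):
    """Estimate the frequency of an autosomal recessive disease genotype.

    If q is the combined frequency of all disease-causing alleles in the
    gene, Hardy-Weinberg predicts an affected (biallelic) genotype
    frequency of q**2.
    """
    q = sum(allele_freqs)  # combined pathogenic allele frequency
    return q ** 2

# Hypothetical P/LP and included-VUS allele frequencies
freqs = [0.001, 0.0004, 0.0001]
prevalence = ar_genetic_prevalence(freqs)
print(f"Estimated genetic prevalence: {prevalence:.2e}")
print(f"Approximately 1 in {round(1 / prevalence):,}")
```

In practice, running the calculation with gnomAD's population-specific frequencies, as the report does, yields the spectrum of estimates mentioned above.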


Product: Mastermind Discovery
Booth: 703

Genomenon introduced Mastermind Discovery in April 2023. Mastermind Discovery is an AI-powered knowledgebase of genomic data, parsed from the entirety of the published scientific literature, and is accessible both through a web-based search application and an API, making it suitable for and accessible to both biologists and data scientists.

Mastermind Discovery doesn’t simply index keywords found in abstracts. Key genomic concepts (e.g., variants, genes, etc.) are extracted from every article, capturing nuances and variability in nomenclature, ensuring no terms are missed or misinterpreted. Associations between entities are mapped, creating a web of interconnected semantic data useful not only for unearthing specific articles of relevance, but also analyzing broader scientific questions, such as gene-disease relationships.

Mastermind Discovery knowledgebase includes full text of >9 million articles as well as supplemental datasets from >3.5 million articles, and information about >22.4 million identified gene variants. It is updated continuously, adding ~10,000 new papers each week. Small variants and CNVs are identified and normalized against genomic coordinates.

The API provides unlimited access to the knowledgebase and features endpoints for every vector in the Genomic Associations Web, which also includes diseases, phenotypes (HPO), therapies, categories such as functional studies, and free text. Detailed metadata is also accessible for every article in the database, allowing for flexible, customized ranking of articles depending on the requirements of a particular scientific project. This metadata also serves as a set of useful signals for language models and related machine learning methods.


Genpro Research
Booth: 926

The Machine-assisted Intelligent Authoring (MaiA) tool deploys human-in-the-loop AI technology and automates key process nodes in evidence synthesis workflows. Its diversified content ingestion pipeline integrates with PubMed, Google Scholar, and a multitude of heterogeneous biomedical literature sources. An embedded document parser yields accurately classified, relevant information from the ingested content and optimizes the data extraction process for scoping reviews and other knowledge synthesis projects. The auto-filter feature performs relevance-based crawling and reduces the cognitive overload caused by sifting through irrelevant articles. NLP highlights for key attributes such as race, gender, prevalence, or incidence enable users to quickly skim and scan full-text articles. Collaborative authoring gives multiple users seamless access to the tool without waiting.


GitHub, Inc.
GitHub Copilot
Booth: 627

GitHub Copilot is an AI-powered pair programmer that provides autocomplete-style suggestions to developers while they code. The tool offers two ways for developers to receive suggestions: by starting to write the code they want to use, or by writing natural language comments that describe what they want the code to do. Using contextual information from the file being edited, as well as related files, Copilot offers suggestions within the IDE. This helps developers stay in flow and avoid the cognitive overload of switching out of their development environment to search for answers. Copilot can even handle boilerplate code, freeing developers to focus on solving business problems. GitHub Copilot is based on OpenAI Codex and is trained on all the programming languages used in public repositories. It is available as an extension in Visual Studio Code, Visual Studio, Neovim, and the JetBrains suite of IDEs, making it accessible to both individual developers and businesses. GitHub Copilot works with any programming language; however, the developers who will benefit most are those writing code in the best-supported open-source languages: Python, JavaScript, TypeScript, Go, Ruby, and Java.
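
The comment-driven workflow described above looks roughly like this in practice: the developer writes a natural-language comment, and Copilot proposes an implementation. The completion below is a plausible, hand-written illustration of that pattern, not actual Copilot output.

```python
# Developer-written prompt comment:
# Parse FASTA-formatted lines and return a dict mapping sequence IDs
# to their concatenated sequences.
def parse_fasta(lines):
    sequences = {}
    current_id = None
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            # Header line starts a new record; the ID is the first token
            current_id = line[1:].split()[0]
            sequences[current_id] = ""
        elif current_id is not None:
            # Sequence line: append to the current record
            sequences[current_id] += line
    return sequences

records = parse_fasta([">seq1 human", "ACGT", "TTGA", ">seq2", "GGCC"])
print(records)  # {'seq1': 'ACGTTTGA', 'seq2': 'GGCC'}
```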


Hammerspace, Release 5
Booth: 923

Hammerspace is a powerful scale-out software solution designed to automate unstructured data orchestration and unified file access across storage from any vendor at the edge, in datacenters, and in one or more cloud providers globally. With Hammerspace, customers can enjoy global file access across any on-premises storage/compute resources and cloud providers, cutting costs and rapidly adapting to changing requirements without interrupting users. Unlike solutions that shuffle file copies across incompatible storage types and distributed environments, Hammerspace creates a vendor-neutral, high-performance parallel global file system that seamlessly spans on-prem and cloud resources, bridging silos, locations, and clouds so users everywhere are working on the same datasets. Customers can continue to leverage existing or new storage from any vendor, and data orchestration between platforms and locations runs as a background operation that does not disrupt users or application workflows. This enables files created on instruments, or output from post-processing analysis or other sources, to move through different phases and across platforms transparently based upon automated workflow objectives, without interruption to user access. Hammerspace also enables rich custom metadata to be automatically applied to files. The new Hammerspace Metadata Plugin gives users direct access to define custom metadata tags from their existing desktop; no client software is required. With automatic metadata inheritance, files are automatically tagged, eliminating human error. Hammerspace makes data a global resource, freeing content from vendor or geographic lock-in to provide secure content access for high-performance workflows and multi-site collaboration across on-prem, hybrid cloud, and cloud-only use cases.


InfiniGuard
Booth: 724

The InfiniGuard is a high-end, enterprise-focused, purpose-built modern data protection and cyber storage resilience platform. Included with each InfiniGuard at no additional charge is Infinidat’s InfiniSafe cyber resilience software, the most comprehensive in the industry. Compatible with all the major backup vendors (Veeam, Commvault, Veritas, IBM Spectrum Protect, NetWorker, Oracle RMAN, etc.), the InfiniGuard features unmatched backup performance. It has a comprehensive licensing model: all software is included. Finally, it is a much lower-cost solution for cyber resilience; other industry players' solutions require up to two full appliances to provide a fenced forensic environment, while InfiniGuard does it with ONE, saving end users CAPEX, OPEX, and complexity. Technical advisors are assigned to all accounts at NO charge, providing a superior customer experience.


InfiniBox SSA II
Booth: 724

Launched in April 2022, InfiniBox SSA II is a second-generation solution. Coupled with front-end DRAM caching that exploits Infinidat’s patented Neural Cache, it provides the fastest enterprise solution: “ESG, a leading analyst firm, validated that the InfiniBox SSA II can achieve 35-microsecond read latencies consistently with the support of Neural Cache. By examining storage latencies, IOPS, and throughput obtained from live customer data from a Fortune 10 and a Fortune 100 company, we verified that the InfiniBox SSA II can boost application performance.” InfiniSafe® on InfiniBox SSA II has the key requirements for cyber resilience: immutable snapshots, local/logical air-gapping, a fenced forensic environment, and instantaneous recovery. It uses the same software-defined storage as Infinidat's InfiniBox enterprise storage solution: full-featured and customer-hardened. It has four guaranteed SLAs: a) guaranteed immutable snapshots, b) guaranteed immutable snap recovery in one minute or less, c) a 100% availability guarantee, and d) a performance guarantee. Finally, it supports all Infinidat acquisition models: FLX (STaaS), Elastic (OPEX and CAPEX), and traditional purchase models.


ThinkStation PX
Booth: 122

The ThinkStation PX offers two 4th Gen Intel Xeon Scalable processors with up to 120 cores, 4TB of memory, PCIe Gen 5 technology to support up to 4x Nvidia RTX 6000 Ada Generation GPUs, and superfast NVMe storage. The PX also has dual PSUs that allow for redundant power, mitigating the risk of downtime when running mission-critical tasks.


Linguamatics / IQVIA NLP
IQVIA Labeling Intelligence Hub
Booth: 827

The IQVIA Labeling Intelligence Hub allows users to quickly search, review, and compare drug labels from documents including FDA, EMA, UK, French, and Spanish drug labels. The Hub incorporates award-winning natural language processing (NLP) for effective, accurate search. Users can access drug, disease, symptom, and other ontologies, and search terms can be seen in the context of the full drug label. Labels can be presented side by side to speed up comparisons, with search terms identified. This highlights differences between two documents to expedite comparison of similar labels, for example different labels for the same drug. Labeling, regulatory, safety, and medical affairs teams can: search regulatory precedents to understand their origin; ensure consistency with industry approaches in addressing safety, regulatory, and quality topics; perform competitive analysis and inform strategic claims; and assess the probability of regulatory success of new claims. The Hub runs on a secure, hosted environment with the data refreshed weekly. The product was first launched in February 2022, and significant functionality and feature updates in the past year include: side-by-side label comparison with the ability to clearly highlight differences between two labels; additional label content, including labels from the UK and EMA; enhanced search features to improve precision and recall, including region and section searching for new label sources; and ontology navigation (via a novel Ontology Browser) that allows users to choose specific concepts (e.g., drugs or diseases) or navigate the hierarchies to access groups of related concepts, e.g., drug classes.
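
To make the label-comparison idea concrete, a minimal sketch of diffing two label texts using Python's standard difflib is shown below. This only illustrates the concept of highlighting differences between similar labels; it is not IQVIA's implementation, and the label text is invented.

```python
import difflib

# Two hypothetical versions of the same label section (invented text)
label_us = ["Indicated for adults with type 2 diabetes.",
            "Dose: 5 mg once daily."]
label_eu = ["Indicated for adults with type 2 diabetes.",
            "Dose: 10 mg once daily."]

# unified_diff marks removed lines with "-" and added lines with "+",
# analogous to highlighting differences between two similar labels
diff = list(difflib.unified_diff(label_us, label_eu,
                                 fromfile="US label", tofile="EMA label",
                                 lineterm=""))
print("\n".join(diff))
```

Running this prints the shared sentence unchanged and flags the dosing line as removed from one label and added in the other.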


Melax Tech
AILA
Booth: 2

AILA has two major components targeting two types of users in observational studies across the systematic literature review (SLR) lifecycle: 1) an intelligent SLR workbench for literature reviewers who conduct routine literature reviews, and 2) a living literature data dashboard for researchers and analysts who analyze SLR data and keep up to date on new evidence. The intelligent SLR workbench integrates AI technologies with an SLR workflow management system to support literature collection, screening, and data extraction. The living literature dashboard continuously searches and updates the SLR, allowing users to interactively navigate the updated literature and develop new insights.


Memory Machine Cloud Edition v2.1
Booth: 8

Memory Machine Cloud Edition v2.1 from MemVerge is a Cloud Automation 2.0 platform available today in the AWS Marketplace and certified as Spot Ready. The software visualizes compute and memory usage, as well as cloud resources, so users can easily see opportunities to right-size resources and save. Memory Machine automates the complex deployment of anywhere from a single job run to hundreds, and it decouples workloads from VMs so they can surf to larger and smaller resources based on the needs of the workload, while providing observability into resource usage. The software runs in a VM in the user’s account, acts on containerized workloads, and consists of a scheduler module, an operations center, and an app library. WaveWatcher can be used to profile an app’s need for resources before a production run, and to watch the workload surf resources after a run is started. Users can use the Memory Machine scheduler, which is integrated with many Memory Machine services, or existing schedulers such as Nextflow. The Operations Center is where policies for cloud surfing, performance, and availability are configured. The AppLibrary is a registry pre-populated with popular bioscience applications, to which new apps can be added. Once an app is in the registry, it can be launched with a few clicks from the GUI or a few commands from the CLI.


MilliporeSigma
SYNTHIA Retrosynthesis Software v.6.5
Booth: 218

SYNTHIA delivers complete step-by-step pathways for target molecules, highly customizable searches to tailor results to chemists’ needs, and intuitive navigation of results. New in SYNTHIA 6.5:

  • Inclusion of published reactions in result pathways
  • Creation of lists of commercially available starting materials from results, which can be easily exported to procurement or a Sigma-Aldrich account
  • Addition of classic enantiomeric resolution as a potential integrated step in chiral compound synthesis
  • Tags and filters for search analyses to allow easy retrieval


Monomer Bio
Monomer Cell Culture Platform
Booth: 518

Monomer provides a modern software solution that powers fully automated cell culture workcells. We combine a best-in-class experiment execution platform with a purpose-built cell culture data management platform that makes it easy to manage culture data, view images, track progress, and make decisions about how to proceed. The execution layer features instrument drivers, a dynamic scheduler, an experiment orchestration layer, and a domain-specific protocol language for translating cell culture protocols into automation steps. Monomer’s execution layer turns readily available hardware into fully integrated workcells that can operate continuously with minimal intervention. The data management layer automatically ingests microscope images, computes confluence and growth curves, and flags and surfaces process anomalies, helping scientists make decisions about how to proceed with experimentation. Monomer provides an AI-ready framework for managing and processing thousands of culture images, and customers can easily integrate ML image analysis pipelines to help streamline decisions such as when to passage. With this innovation, Monomer customers report a dramatic reduction in manual work, including weekend shifts, along with an increase in experiment capacity and reproducibility. Customers can culture hundreds of concurrent plates, 24/7, while capturing high-quality, curated datasets.


Mresult Corp
DecisionStream v8
Booth: 419

DecisionStream is a comprehensive and advanced platform that empowers organizations with a wide range of capabilities for data-driven decision-making. With its robust features and technical specifications, DecisionStream provides a cutting-edge platform for businesses to harness the power of data effectively.

  • Robust data pipeline: Enables efficient DataOps & MLOps with data ingestion, cleansing, enrichment, aggregation, and normalization.
  • Data science lab: Collaborative environment for building, training, and deploying machine learning models. It also provides models as APIs.
  • AutoML feature: Automates model development without extensive coding or data science expertise.
  • Data virtualization: Real-time access to databases without duplication, reducing storage costs and ensuring consistency. Users can create data sets and publish data as APIs.
  • Data observability: Auditing, monitoring, and tracking of data quality, integrity, and accuracy.
  • Robust data security: Encryption, access controls, and user authentication ensure compliance with privacy regulations.
  • Powerful dashboard: Interactive visualizations, real-time KPI monitoring, and data analytics.
  • Self-service reporting: Ad-hoc report generation and sharing for self-service analytics.
  • Seamless migration & version control: Facilitates content migration between environments, enables team collaboration and version maintenance.
  • Extensive customization: Custom data connectors, workflows, analytics, and visualizations for unique requirements.
  • Flexible deployment: On-premises, cloud, or hybrid deployments for security and adaptability.
  • Scalability: Designed to handle large data volumes and growing analytics demands.
  • High performance: Empowers organizations to make effective and efficient data-driven decisions.

Booth: 219

DISQOVER is a powerful data and knowledge discovery platform for pharma and life science organizations. DISQOVER enables the consolidation of information across public, third party, licensed, and internal data sources. It solves complex use cases over a multitude of heterogeneous data sources with simple, intuitive, and customizable dashboards. DISQOVER filters results in real-time and visualizes data using responsive and interconnected charts. The data ingestion engine, with the use of the pipeline editor, efficiently merges and links structured and semi-structured data from siloed data sources. When configuring the integration of source data via the data ingestion engine, users can manage the data ingestion process by building a visual pipeline, using a wide range of powerful reusable components, making the process easy, flexible, and efficient. The consolidated knowledge is an excellent framework for reasoning for scientists and applications. DISQOVER powers up data democratization, making data exploration achievable and successful for every type of user. ONTOFORCE is now providing a new AI module to the DISQOVER platform, based on text mining and a machine learning engine. It allows large amounts of text to be analyzed for insight extraction from unstructured data. These insights will complement information available in DISQOVER’s knowledge graph. This new module will allow the augmentation of the knowledge graph comprised of the 140+ public data assets processed by ONTOFORCE and will also enrich a company’s internal and external data repositories, leading to more impactful insight generation to support data-driven decision making.


Target Discovery v1.1
Booth: 527

The AI-based solution provided by Ontotext enables scientists to access information about biomedical entities (such as targets, diseases, drugs, and many others) easily in one central knowledge base. Target Discovery empowers researchers to stay up to date with the newest discoveries by automatically extracting insights from more than 80 million documents, including patents and clinical trials. It’s easy to discover new insights, analyze data visually or with powerful algorithms, and leverage a vast network of knowledge for improved decision-making. Researchers are also empowered to provide more context to their proprietary data, such as omics or sequencing data, by integrating it into the tailored network of knowledge, making all data discoverable and analytics-ready. Any scientist can quickly gain an overview of a disease or target with customizable visual analytics and dashboards over any type of data and source. With the new release, the application also provides advanced analytics and a target selection methodology, which leverages a library of over 60 different approaches for multicriteria decision-making.

The analytics and insight generation are transparent, backed by comprehensive provenance and confidence metrics. Target Discovery can be easily tailored to customers’ needs, based on their therapeutic area or modality, and equips them with the necessary tooling to efficiently analyze and extract insights from all public and proprietary data, be it structured or derived from text via AI. Biomedical experts can find new discoveries with transparent evidence and utilize advanced analytics and AI-based predictions, all from day 1 of onboarding to the system and without any technical skills.


Starfish Storage Inc
Starfish 6.5
Booth: 6

Starfish: Empowering Users to Participate in Unstructured Data Management and Curation at Scale. Starfish is a software platform with a unique approach to unstructured data management, addressing both the technical and the human challenges associated with housekeeping and curation. The Technical: Starfish is the most flexible and powerful solution for managing unstructured data at scale. We conquer the world's most demanding file environments by combining DISCOVERY with EXECUTION. The DISCOVERY side of Starfish is a SQL-based data catalog that enumerates files and directories across any number of file systems (HPC, scale-out NAS, tape, etc.) and buckets. The catalog is enhanced with metadata in the form of tags and key-value pairs. It is used for reporting, analytics, search, a user portal, workflow, and policy enforcement. Most metadata operations are automated or delegated to end users. The EXECUTION side of Starfish is a scale-out batch processor and data mover that acts on the result set of a query to the catalog. In other words, you make a DISCOVERY and you EXECUTE a job to do something about it. Jobs are parallelized across multiple servers in multiple geographies. For the Humans: Starfish presents relevant sections of file system contents to designated users, so that they can visualize their collections and provide simple inputs to guide data deletion, archiving, and any other automations. Starfish provides users with essential tools that enable them to be the stewards of their own data. Starfish is Linux software with Linux and Windows agents. Version 6.5 added research data workflow management.
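
The DISCOVERY-then-EXECUTE pattern can be sketched in miniature. The catalog schema, paths, and tags below are invented for illustration; Starfish's actual catalog schema is not described here.

```python
import sqlite3

# Hypothetical miniature file catalog (not Starfish's real schema).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE files (path TEXT, size_bytes INTEGER, age_days INTEGER, tag TEXT)"
)
con.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", [
    ("/hpc/scratch/run1.dat", 10**9, 400, "scratch"),
    ("/hpc/results/final.csv", 2048, 30, "keep"),
])

# DISCOVERY: query the catalog for archive candidates...
candidates = con.execute(
    "SELECT path FROM files WHERE tag = 'scratch' AND age_days > 365"
).fetchall()

# ...EXECUTE: hand the result set to a job. Here we just print; a real job
# would archive or delete the files, parallelized across servers.
for (path,) in candidates:
    print("archive:", path)  # archive: /hpc/scratch/run1.dat
```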
Product: Data Science Platform, Version: 1.1.0
Booth: 712 enables organizations to securely derive insights using data-as-products in a data mesh architecture. It consists of a data hypervisor with a data mapping layer, an embedded algorithm interface, and an application layer (see:'s data products combine data sets, smart APIs, and statistical and machine learning algorithms into decentralized data products to discover insights following FAIR principles.

Platform Capabilities: The platform provides a search and discovery catalog, a CI/CD framework, horizontal and vertical scaling using containers, domain-centric apps, a cohort builder, and a developer studio. It supports data product lifecycle management policies.

 supports seamless integration of data from diverse sources, including genomic, proteomic, metabolomic, and clinical data, into a scalable in-memory data model orchestrated as microservices. The platform provides pre-built connectors to common data sources such as FHIR, OMOP, CDISC, mBIO, and SAS formats.
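
As a loose illustration of the kind of mapping such connectors perform, the sketch below normalizes two hypothetical source record shapes into one common schema. All field names here are invented for the example and are not's actual data model.

```python
# Hedged sketch of data harmonization: map records from two hypothetical
# source shapes (a FHIR-like resource and a CSV-like row) into one schema.
def from_fhir(resource: dict) -> dict:
    return {"patient_id": resource["id"], "sex": resource["gender"]}

def from_csv_row(row: dict) -> dict:
    return {"patient_id": row["SUBJID"], "sex": row["SEX"].lower()}

records = [
    from_fhir({"id": "p1", "gender": "female"}),
    from_csv_row({"SUBJID": "p2", "SEX": "MALE"}),
]
print(records)
# [{'patient_id': 'p1', 'sex': 'female'}, {'patient_id': 'p2', 'sex': 'male'}]
```

Once every source is mapped to the common schema, downstream apps and algorithms can operate on one uniform record shape.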

Machine Learning: provides an end-to-end machine learning interface through its developer studio.

Privacy and Security: provides a range of features including SSO, AuthN/AuthZ, encrypted communications, private registries and strict access controls for data products and apps.

Cloud-based: is a cloud-based, microservices-based platform deployed on AWS, Google Cloud Platform, and Microsoft Azure.

Open-Access: has partnerships with several academic institutions and research organizations under an open-access policy to support scientific research and discovery via public data products such as TCGA, the Phase III JAVELIN Renal 101 trial (published by Pfizer), and a Phase II urothelial clinical trial (published by Roche), as well as PD GENEration and PD Clinical Exome with the Parkinson's Foundation.


Tetra Scientific Data Cloud
Booth: 322

The Tetra Scientific Data Cloud is the first and only scientific data cloud built for biopharmaceutical companies. It connects life science instruments and data applications, automates data collection and storage, and engineers scientific data so it can be used by analytics applications and AI/ML programs. With this technology, companies can eliminate manual data handling, centralize their storage in the cloud, and harmonize their data, creating generationally significant advances in data workflow efficiency, accelerating discovery, and improving scientific output. Connect: The Tetra Scientific Data Cloud connects instruments, informatics applications, and analytics applications using secure, validation-ready integrations. This eliminates costly and error-prone manual data migrations and allows data to automatically move throughout workflows, providing scientists and data scientists near-real-time data access. Collect: The Tetra Scientific Data Cloud automates data collection and metadata enrichment. Companies can completely eliminate ad-hoc file sharing and replace it with centralized cloud storage that provides a searchable (SQL, API, or UI) single source of truth. No more lost data, repeated experiments, or lost scientific context. Additionally, with features such as automatically generated audit trails, assured accessibility, and rigorous controls, adhering to GxP guidelines becomes considerably more straightforward. Engineer: The Tetra Scientific Data Cloud automatically engineers raw, disparate data formats into harmonized, compliant, liquid, and actionable Tetra Data. Tetra Data provides companies with vendor-agnostic data that flows seamlessly between instruments, applications, and departments, and provides data analytics and AI/ML programs the clean, comparable data they require to provide insight and drive decisions.


The Hyve
Booth: 720

Fairspace is an open source research data management platform that adheres to FAIR principles. It offers a collaborative environment to manage any type of research data and additionally serves as a metadata repository. Several characteristics of Fairspace make it the right solution for the challenges of research data management. Firstly, the tool uses a (semantic) metadata model developed by an organization or by The Hyve to facilitate semantic data integration. This can be used to map source-specific fields and entities to classes and attributes in the ontology. In addition, The Hyve developed a SHACL data model based on user requirements. Secondly, The Hyve can implement a set of ETL processes to fetch data from several public sources and load it into Fairspace. The data is mapped to the common data model as defined in Fairspace, using selected ontologies and vocabularies. Lastly, a JupyterHub environment integrated with Fairspace allows users to directly analyze their data and metadata using R, Python, or Julia without logging out of Fairspace. After selecting metadata, users can move to this environment and run predefined scripts for data pre-processing and analysis. Users can access the harmonized metadata through SPARQL and custom APIs, as well as the (file) data itself.
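
As a toy illustration of querying harmonized metadata: Fairspace exposes SPARQL over its triple-based metadata, and the pattern can be mimicked with an in-memory triple store. The triples and names below are invented for the sketch and do not reflect Fairspace's actual SHACL model.

```python
# Minimal in-memory triple store sketch of a metadata query.
triples = {
    ("ex:sample42", "rdf:type", "ex:Sample"),
    ("ex:sample42", "ex:organism", "Homo sapiens"),
    ("ex:file7", "rdf:type", "ex:File"),
}

def subjects_with(predicate: str, obj: str) -> set:
    """Return all subjects that carry the given predicate/object pair."""
    return {s for (s, p, o) in triples if p == predicate and o == obj}

# Roughly equivalent to: SELECT ?s WHERE { ?s ex:organism "Homo sapiens" }
print(subjects_with("ex:organism", "Homo sapiens"))  # {'ex:sample42'}
```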


Vivpro Corp
Booth: 609

ChatRIA is an innovative chatbot designed for R&D purposes that uses recent advancements in language models to provide high-accuracy answers to complex questions. The latest release has the following key features. Increased accuracy: conversational chatbots are known to hallucinate and give inaccurate information with confidence; ChatRIA's RIA and RIA+ modes make it more responsive and intuitive for users, and RIA mode is designed for the highest accuracy, removing hallucinations. Superior natural language understanding: ChatRIA can perform complex calculations, generate tables, summarize documents with ease, and translate documents and answer questions in multiple languages. Professional-quality reports: at the end of the session, ChatRIA provides presentation-ready reports in PDF or Word formats. Latest technology: ChatRIA uses a GPT-4 language model, which has been optimized for accuracy and incorporates tweaks to ensure responses are based on provided documents. ChatRIA also has access to Vivpro's proprietary data model for external data, including 1.6M documents with 13M pages, for enhanced learning. ChatRIA uses React.js for its front-end and Python for its back-end, providing a seamless user experience. In conclusion, ChatRIA is a powerful and efficient chatbot that streamlines the research and development process. Its advanced language model, combined with its intuitive user interface and new features, makes it an essential tool for researchers and developers worldwide.


WEKA Data Platform (Fourth Generation)
Booth: 402

The WEKA Data Platform provides a new approach that enables a level of file storage performance and scalability that was not previously possible, both on-premises and in any cloud. While WEKA is well known for incredible performance, with the release of the fourth generation of the platform, customers can now have multi-protocol access across POSIX, NFS, SMB, S3, and NVIDIA GDS to the same sets of files, which eliminates the need to move multiple copies of data across different portions of a data pipeline. In addition, this capability is now available across all major cloud providers: AWS, Azure, GCP, and OCI. Greater cost efficiencies are also gained by integrating transparent tiering of filesystems to an S3-compatible object store, with the data still presented as a single namespace.


Xybion Digital
Labwise XD
Booth: 320

Xybion Labwise XD is a cutting-edge digital laboratory information management system (LIMS) that unifies QMS, ELN, DMS, lab safety, and data analytics into one integrated digital platform. It is designed to streamline laboratory operations, from sample management to data analysis, while ensuring compliance with regulatory standards. One key feature that sets Labwise XD apart is its all-in-one total lab operations management system. Leveraging a highly configurable low-code technology stack, Labwise XD can manage all aspects of laboratory operations in a unified platform, providing greater efficiency and visibility throughout the laboratory while facilitating full 21 CFR Part 11 compliance. It is also highly configurable, allowing users to tailor the system to their specific needs. Labwise XD offers a range of technical specifications to support laboratory operations, including sample tracking, data management, reporting, and analysis capabilities. The system integrates with laboratory instruments and other laboratory information systems, enabling seamless data exchange and real-time monitoring. Labwise XD also includes a robust set of safety features, such as electronic safety data sheet (SDS) management, chemical inventory tracking, and safety incident reporting. These features help laboratories ensure compliance with safety regulations, reduce the risk of accidents, and improve overall safety performance. Labwise XD is a comprehensive digital laboratory management system offering various technical specifications and safety features. It is designed to help laboratories streamline operations, enhance productivity, ensure the highest quality and safety, and maintain compliance in one unified platform.