How HPC Systems Are Making Things Easier

By Bio-IT World News Staff 

March 10, 2026 | High-performance computing (HPC) systems are now essential research tools, which means institutions need ways to make them user-friendly for researchers of all backgrounds. In the newest episode of Trends from the Trenches, guest host Jessica StLouis, senior scientific consultant at BioTeam, speaks with Jarett DeAngelis, director of scientific computing, and Shane Corder, senior HPC engineer, both at the Moffitt Cancer Center. Their conversation centers on how Moffitt is redesigning its HPC infrastructure to reduce friction in research collaboration, streamline data sharing, and expand access to advanced computational resources.

Chief among the challenges Moffitt faces are obstacles to collaboration. Onboarding external collaborators requires lengthy, complex administrative processes that can cause delays. Data sharing poses additional hurdles: limited network bandwidth, synchronization problems, and compliance requirements around protected health information (PHI). Balancing security, permission management, compliance, and technical constraints creates friction that slows research and complicates collaboration.

To address these barriers, Moffitt developed a new system called the Collaborative Computing Center (CCC), funded through a $2 million NIH S10 grant. Unlike traditional on-premises HPC clusters tightly integrated with institutional IT policies, the CCC is architected as a science-focused DMZ, a network enclave kept separate from the institution’s general-purpose security perimeter. It consists of approximately 30 nodes and 1.3 petabytes of raw, high-speed Hammerspace storage. Crucially, it operates on its own dedicated internet connection, separate from the core institutional network.

Functionally, the system delivers what organizations often seek in public cloud environments (flexibility, collaboration, and scalable computing) but does so on-premises. The new system is also designed as a secure multi-tenant environment where multiple collaborators can work simultaneously while maintaining data security, access controls, and data lifecycle management. In essence, the team is building a private, research-focused cloud in-house, optimized for secure collaboration and more predictable long-term costs.

Looking ahead, DeAngelis and Corder express enthusiasm about expanding local artificial intelligence and machine learning inference capacity. Running large language models, vision-language models, and other foundation models within institutional data centers reduces PHI exposure and cloud-related friction. Corder emphasizes making full use of existing infrastructure; his priority is improving efficiency and accessibility so that more departments benefit from HPC capabilities.
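To make the local-inference idea concrete, here is a minimal sketch using the open-source Hugging Face Transformers library. The model name and prompt are illustrative assumptions; the episode does not describe Moffitt’s actual models or serving stack.

```python
# Minimal sketch of on-premises LLM inference with Hugging Face Transformers.
# Once the open-weights model is downloaded (or mirrored to an internal model
# registry), generation runs entirely inside the institutional data center:
# no prompt text ever leaves the local network.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative open-weights model
    device_map="auto",  # spread model layers across available local GPUs
)

# Prompts may contain protected health information (PHI); because inference
# runs locally, this text is never sent to an external API.
prompt = "Summarize the following clinical note: ..."
result = generator(prompt, max_new_tokens=256)
print(result[0]["generated_text"])
```

The same pattern extends to vision-language and other foundation models; the trade-off is that the institution, rather than a cloud vendor, provisions and schedules the GPUs.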

To learn more about the Moffitt Cancer Center’s projects, how they’re leveraging Globus for their work, and how Open OnDemand is helping, listen to the Trends from the Trenches podcast.