Pistoia Alliance Tackles Agentic AI in Pharmaceutical Research

By Allison Proffitt 

October 2, 2025 | The Pistoia Alliance has launched a new initiative to advance the safe adoption of agentic AI. The initiative sits under the Alliance’s strategic priority to Harness AI to Expedite R&D and alongside its AI/ML Community of Experts, which is actively seeking further funding to continue supporting responsible AI adoption across the industry. 

At the Alliance’s European conference in March, surveyed life science professionals said they expect agentic AI to be among the most disruptive technologies of the next two to three years, and they ranked multimodal AI as a top opportunity for cross-industry collaboration. Detailed survey results will be shared in November, but the Pistoia Alliance saw no time to lose. Robert Gill, agentic AI program lead for the Pistoia Alliance, is spearheading the project, and Genentech has volunteered seed funding.

The agentic AI project aims to establish standards and frameworks for agentic AI systems that can break down traditional data silos and enable more intelligent, autonomous research workflows. “We’re trying to say, there is going to be a common language here, common standards that [everyone] should use,” Gill told Bio-IT World.  

The project will initially focus on two key deliverables: an agent-to-agent communication protocol and an AI agent standard. Outputs will include whitepapers, guidelines, reference implementations, and scientific publications to support adoption. By joining as sponsors, members gain early access to draft frameworks, the opportunity to shape project direction, and the ability to co-author outputs and promote results.
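
Neither deliverable has been drafted yet, so any concrete example is necessarily speculative. Still, as a rough illustration of what a common agent-to-agent envelope might enable, here is a minimal Python sketch; every field name and value is hypothetical, not a preview of the Alliance's protocol.

```python
# A hypothetical sketch of an agent-to-agent message envelope.
# All field names are illustrative; the actual protocol is still
# to be defined by the Pistoia Alliance project.
import json
import uuid
from datetime import datetime, timezone

def make_agent_message(sender: str, recipient: str, task: str, payload: dict) -> str:
    """Wrap a task request in a common envelope any compliant agent could parse."""
    envelope = {
        "message_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sender": sender,        # e.g., an agent's registered identifier
        "recipient": recipient,  # the agent being asked to do the work
        "task": task,            # a verb a standard might enumerate
        "payload": payload,      # task-specific parameters
    }
    return json.dumps(envelope)

# Example: a genomics agent asking a literature agent to run a search.
print(make_agent_message(
    sender="genomics-query-agent",
    recipient="literature-search-agent",
    task="search",
    payload={"source": "PubMed", "query": "BRCA1 variants"},
))
```

The point of such an envelope is not the specific fields but the agreement itself: if every vendor's agents speak the same wrapper, a request can be routed to any compliant system without bespoke integration work.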

The Evolution from LLMs to Agentic AI 

While large language models (LLMs) have already transformed how researchers interact with data, the next wave—agentic AI—promises to fundamentally reshape drug discovery and development processes. 

"We've had this huge hype and excitement around AI in the pharmaceutical industry that started with LLMs," Gill said. "Now you can have really interesting discussions with your software rather than the old click-respond, click-respond paradigm. But we're still dealing with that one-size-fits-all concept around LLMs."  

While Retrieval-Augmented Generation (RAG) systems have provided some relief, agentic AI breaks complex tasks into component parts, each handled by a specialized agent. These agents can independently query different data sources—genomics databases, PubMed, clinical trial records—and then synthesize the results in ways that mirror how human researchers collaborate across disciplines.
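
To make that pattern concrete, here is a toy Python sketch of the fan-out-and-synthesize flow described above. The agent functions and their canned responses are invented stand-ins; in a real system each would wrap an LLM plus a live data source such as a genomics database, PubMed, or a clinical trial registry.

```python
# A toy sketch of the agentic pattern: an orchestrator routes a research
# question to specialized agents, then merges their answers. The agents
# below are stubs that return placeholder strings.
from typing import Callable

def genomics_agent(question: str) -> str:
    return f"[genomics] variants relevant to: {question}"

def literature_agent(question: str) -> str:
    return f"[PubMed] recent papers on: {question}"

def trials_agent(question: str) -> str:
    return f"[trials] active studies matching: {question}"

AGENTS: dict[str, Callable[[str], str]] = {
    "genomics": genomics_agent,
    "literature": literature_agent,
    "trials": trials_agent,
}

def orchestrate(question: str) -> str:
    """Fan the question out to every specialist and combine the findings."""
    findings = [agent(question) for agent in AGENTS.values()]
    # A real orchestrator would synthesize across sources, not concatenate.
    return "\n".join(findings)

print(orchestrate("EGFR resistance in non-small cell lung cancer"))
```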

Breaking Down Vendor Silos 

The shift toward agentic AI is forcing a fundamental rethinking of how pharmaceutical software vendors operate. Traditionally, vendors have built monolithic platforms designed to keep users within their ecosystem, maintaining control over both the platform and the data. 

"Most vendors want to bring you into their platform because they have control," noted Gill. "But senior scientists consistently tell me they only want specific pieces—that particular analysis tool or specialist function." 

This creates tension between vendor business models and user needs. However, some forward-thinking companies are already adapting, developing agent-based services that can operate independently while still generating revenue through scale rather than platform lock-in. 

The challenge remains convincing vendors to embrace standards that allow their agents to communicate effectively with competitors' systems. The Alliance is drawing on its historical success in influencing vendor adoption of interoperability standards but acknowledges this represents a significant shift in thinking. 

The Data Quality Conundrum 

For decades, pharmaceutical organizations have struggled with data quality issues—inconsistent standards, varying formats, and poor metadata that hamper analysis efforts. Agentic AI presents both a challenge and an opportunity in this regard. 

"Very few people have been saying, why don't we use these tools to make our data better?" Gill observed. Now, several companies are leveraging LLMs and network graphs to automatically identify key terms, genes, proteins, and other metadata within existing datasets. 

These systems operate with humans in the loop, surfacing potential improvements and allowing researchers to validate automated suggestions. This approach promises to address the fundamental data quality issues that have plagued the industry while simultaneously making systems more intelligent. 
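
As a rough illustration of that human-in-the-loop pattern, the Python sketch below has an automated extractor propose metadata tags and a curator approve each one. The vocabulary, record, and function names are invented, and a simple keyword match stands in for the LLM or network-graph extractor.

```python
# A hypothetical human-in-the-loop metadata cleanup flow: an automated
# extractor proposes tags, and a human curator approves or rejects each
# suggestion before it is written back to the record.
KNOWN_TERMS = {"BRCA1", "TP53", "kinase", "antibody"}  # illustrative vocabulary

def propose_metadata(text: str) -> set[str]:
    """Stand-in for an LLM/graph extractor: flag known terms in free text."""
    return {term for term in KNOWN_TERMS if term.lower() in text.lower()}

def curate(suggestions: set[str], approve) -> set[str]:
    """Keep only the suggestions the human reviewer signs off on."""
    return {term for term in suggestions if approve(term)}

record = "Assay notes: TP53 knockdown reduced kinase activity."
suggested = propose_metadata(record)
accepted = curate(suggested, approve=lambda term: input(f"Tag '{term}'? [y/n] ") == "y")
print("Approved metadata:", sorted(accepted))
```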

The Shadow IT Risks 

Perhaps the most concerning challenge surrounding agentic AI isn't technical but cultural. The democratization of AI development tools means individual researchers can now create sophisticated software applications using tools like ChatGPT, much as they once wrote Excel macros or Perl scripts. 

“Those little scripts are brilliant, and if you're sitting at your desktop as a lab scientist, they're fantastic. Go ahead and do it,” Gill said. But sometimes those user-defined applications (UDAs) become part of the formal process of doing discovery. “If that gets out of control, it's going to make life very difficult.”

The solution lies in education, organizational culture change, and—likely—some standards. Researchers need training not just in how to build AI-powered tools, but in recognizing when their innovations should be elevated from personal utilities to enterprise-grade systems with proper security, maintenance, and validation protocols. 

Gill believes the opportunities are rich. Agentic AI without standards could create a fragmented landscape of incompatible systems that actually slows progress. But done well, agentic AI could accelerate drug discovery, reduce costs, and ultimately bring life-saving treatments to patients faster.