July 25, 2012 | It wasn’t just the lure of a 10-minute commute from his suburban Boston home that prompted John Reynders to join AstraZeneca (AZ) as head of R&D Information some 18 months ago, after stints at Celera (Maryland), Lilly (Indianapolis), and J&J (New Jersey). Rather, it was the diversity of skills among his team of some 400 people, with expertise ranging from clinical informatics and knowledge engineering to software architecture and business intelligence, and the company’s belief in innovation and long-range strategic planning.
Reynders spoke recently with Bio-IT World editor Kevin Davies about his new responsibilities and a number of key initiatives that should bode well for AstraZeneca’s future.
Bio-IT WORLD: John, in terms of the traditional pharma pipeline -- from basic target discovery to preclinical to clinical trials -- where does your responsibility lie?
John Reynders, AstraZeneca
REYNDERS: End to end and beyond! From target identification, target validation, lead generation, preclinical, translational, phase I-III, regulatory launch, and a fair bit of real-world evidence and payer work – as we have medicines on the market, how do we understand how that drug is behaving, and from a pharmacoeconomic point of view, how do we bring that insight back to the pipeline?
I'm not sure your job title does you justice!
I actually asked the group what they wanted to be called, and we used Yammer, an internal social network, as a way to vote. We had R&D Informatics, R&D IS, but the majority felt R&D Information was the best superset title.
What is unique or different about AZ strategically, culturally or technologically, compared to the other major pharmas you’ve worked with, Lilly and J&J?
Several things attracted me to AZ. First, it is an innovation-focused pharmaceutical company -- one solely focused on pharmaceuticals. AZ is also looking at innovation at the core – how do we invest in innovation and have R&D as the engine for that model? I find that attractive. Couple that with a very broad focus across multiple therapeutic areas and a strong global footprint – these are areas I find essential. Another compelling aspect is a long-range focus on strategy and planning. I was very impressed. It’s not a 1-2 year timeframe but a 5-10 year timeframe.
On the informatics/information front, as part of a strategy set in motion in 2009, AZ made some very big strategic bets in areas that were crucial to accelerating R&D productivity: predictive sciences (in silico modeling); personalized healthcare/patient stratification; clinical trial design and interpretation; and finally, payer (real-world) evidence. In each of those areas, the role of informatics, IS, and information is critical. This organization is making exactly the sort of bets that need to be made to accelerate R&D productivity.
Wouldn’t most pharmas, if asked, say they really valued innovation? What’s special about AZ?
I’d refer to those R&D capabilities. It can become very tempting to have an engine where [the company says] let’s acquire molecules to put more gas into our engine – maybe the car will go faster. AZ’s approach is: let’s also tune the engine in terms of explicit investment in productivity. No matter how much gas you put into it, the engine has to be continually tuned to optimize R&D productivity.
Tell us about the size and core strengths of your group.
I have a total headcount of 400 globally. We have MDs, PhDs, and experts in architecture, software development, project management, business analysis, clinical and discovery informatics, and knowledge engineering, in addition to operations functions such as strategy, quality, and continuous improvement. We have many teams embedded directly in the drug-hunting teams – experts in oncology or infectious disease or in PharmDev, working to ensure all these skills are being brought together, fit for purpose for a given need.
There are seven sites where we have R&D Information teams across North America, Europe, Russia, and AsiaPac. We have full deployment into the pipeline – rather than a team building something and lobbing it over the fence, we’re sitting right next to the customer, doing iterative development. Even huge projects, $10-20 million capital programs, we can scale up some of our scrum and agile methodologies pretty big – but you have to be sitting next to the customer.
How much of your responsibility is to implement a top-down strategy versus a more bottom-up approach where you’re responding to the needs of the research teams?
A good question that probes a perennial challenge. I firmly believe that you have to start with the needs. There’s a temptation to take a standard approach, but therapeutic areas are different, functions are different, and there must be clarity as to what the needs are. But there must be balance. As Clayton Christensen wrote in The Innovator’s Dilemma, if you only provide what’s being asked for, you’ll provide what’s wanted but not necessarily what’s needed. So you have to have some innovation overhead where you say, by the way, we’re piloting something interesting over here. That moves people onto a different trajectory when they see what’s possible.
From your perspective, is there a key bottleneck or pain point in the pipeline where you really need to improve things?
I think the biggest opportunity is in improving the probability of success in Phase II/III trials. We’ve put in place a ‘5R’ framework, which brings critical decisions earlier in the pipeline: is it the right molecule, the right target, the right exposure, the right patient population, the right payer/commercial opportunity? Other folks might apply those in a serial fashion. We’re looking at them all in parallel and early. We’re seeing the first fruits in the proof-of-concept (PoC) studies we’re bringing forward, raising our confidence. It’s important before Phase II to have a good think about your patient population. If we can identify our patient populations earlier and more effectively, that can be transformative for probability of success.
We’re also looking at translational informatics, segmentation analytics, and biomarker platforms – how we can bring all that insight to bear on patient populations.
How important is building a full genome profile for patient stratification?
It’s more than just genomics; it’s -omics and a full gamut of biomarkers. Genomics will tell you something about risk factors, but the kinds of biomarkers we need to bring to bear go further – some can be as simple as a single genetic marker, but these diseases are very complex. Circulating proteomic markers can tell you about disease state and progression. Imaging biomarkers can tell you about hippocampal shrinkage in Alzheimer’s or deterioration of the substantia nigra in Parkinson’s disease. Bringing such diverse classes of biomarkers into a composite [analysis] and tracking their progression over time creates a lot of informatics complexity in identifying these signatures in patient sub-populations.
What other new technologies are you excited about?
There’s quite a bit I’m excited about, so it’s hard to choose. One area of focus at AZ is in what we’d call semantics, or knowledge engineering. AZ started making a bet in this area about a decade ago. Our challenges include the scale of data to navigate, high-performance computing and clouds, the complexity of data, and mining and analytics. The scale is tough enough as it is, but now you have to face the semantic problem: how do you pull together very heterogeneous data and find insight? The team we’re building here is world class in this area. However, it’s going to take a village; we’ll have part of the puzzle, but managing data at scale will probably be an effort across multiple pharma companies and academia.
Let’s talk about your virtual framework and the FIPnet initiative.
In one area, Neuroscience, we’ve decided to create a virtual iMed (Innovative Medicines) unit. We have six iMed units, each responsible for a specific therapeutic area up to proof-of-concept. At that point, the molecule goes to our Global Medicines Development organization then on to Regulatory. Neuroscience is a very tough research area. The mechanisms in psychiatry, in Alzheimer’s disease, are still largely hypotheses. The success rates across the industry are some of the lowest. This is an area where we decided to re-engineer the iMed and bring AZ to the science, not the other way around.
We created a small team, 40 people, based in Cambridge UK and Cambridge US, which we identified as the neuroscience hotspots. We also decided that, from target to PoC [proof of concept], these teams needed to manage an entire pipeline – with no wet labs! They’re working with CROs, academics, partners, orchestrating deals, to progress the pipeline, all fly-by-wire.
If you’re a pilot flying through rain and turbulence, your best friend is the autopilot; while you’re trying to navigate, you want the autopilot to maintain altitude, course, and trim. We wanted to provide this team with a set of automation processes: how do you identify and progress the molecules you want synthesized, or a set of assays to be run? How do we enable that network to be orchestrated? Almost like a flow-scheme tool on your laptop – please run these three experiments, hit send, off it goes. That way, the team can focus on the critical decisions that need to be made on each molecule.
We also needed to deploy this fast. We’re taking all the molecules in flight in the neuroscience pipeline and saying ‘catch.’ We found some key partners – Deloitte was involved in some earlier builds of clinical systems. We’re working with Assay Depot, which has done some great work creating an Amazon-like environment. We want to build workflow bundles on top of that. On the front end, we’re partnering with Knode, a subsidiary of Enlight Biosciences, to identify key opinion leaders.
With each partner covering critical components – clinical management, preclinical, lead-generation and lead-optimization orchestration, and front-end discovery – we have the main elements in place. Now we can focus on integration and building our end-to-end solution on top of these core components – the FIPnet.
And what is FIPnet?
It’s an industry term -- Fully Integrated Pharmaceutical Network. What better name for the project?
The head of the Neuroscience iMed is Mike Poole. It’s his pipeline, his decision as to which molecules to take on board. At the same time, he’s done a great job focused on the discovery front end and securing deals to strengthen the pipeline. And all this with no wet labs! Go visit their offices in Cambridge -- the only thing wet is the coffee! It has been a great partnership with the team bringing FIPnet online to help manage the pipeline through a network of partners.
We’ve made some pretty hard decisions, closing our pain research site in Montreal and exiting our R&D capabilities in Södertälje, Sweden. But all the compounds from these sites are being caught by the Neuroscience iMed team – providing something of a “live-fire” roll-out environment for FIPnet as we deploy the initial system iterations.
We’ve been pretty impressed with the progress so far – this was only announced in February 2012. At this point we have no plan to apply this virtual model more broadly in R&D, but we can apply things we learn to other groups, such as leveraging new ways of working and components of FIPnet.
Your first love is supercomputing. Are you still involved in that aspect?
We have quite a bit of HPC capability and a very good understanding of the data-centric challenges ahead. We have clusters at multiple sites, cloud expertise, and a good understanding of the scale we need to navigate and the need for semantics to pull this all together. We’re going to need all these pieces to crack the nut -- predictive science, payer evidence, patient stratification; this all needs to come together.
A funny story: I have this artifact in my office – a board from a CM-5 Connection Machine from Los Alamos, which was a going-away present when the machine was decommissioned. What made me feel right at home here was when a colleague walking by my office said, “Hey, that’s a CM-5 board!” That was a very promising biomarker!