
Merck's Informatics Mission


Martin Leach and Ingrid Akerblom bring a wealth of complementary expertise to their new roles, leading - and integrating - basic research and clinical IT.

By Kevin Davies

May 12, 2008 | After five years at the helm of Merck's basic research IT group, Ingrid Akerblom calls her move to the clinical side "quite an eye-opening experience." Akerblom has a Ph.D. in biology from the University of California, San Diego, and the Salk Institute, and later joined Incyte Pharmaceuticals as its 50th employee and "annotation guru," eventually leading informatics. She was then recruited by Merck - ironically the only pharma not to buy the Incyte database. She joined Merck in November 2002, just over a year after the acquisition of Rosetta Inpharmatics, and worked extensively with the Rosetta IT leaders, helping to integrate systems around target identification and chemistry.

Assuming her former role is Martin Leach, a Brit who spent nine years leading IT and informatics at CuraGen, spanning corporate IT as well as basic, pre-clinical, and regulatory informatics. He also brings clinical and regulatory IT experience gained during a two-year stint at Booz Allen before joining Merck last year. That clinical insight could prove useful even as he refocuses on basic research, and it complements Akerblom's basic research background as she transitions to the clinical side.

Kevin Davies spoke to Akerblom and Leach about their complementary roles and mutual understanding of the needs of both basic research and clinical teams, which could pay big dividends for Merck.

Bio•IT World: Ingrid, how did your move to clinical IT come about?

Ingrid: After five years in [research IT], with the last few years including leadership of Merck's Biomarker IT, it made sense to bring some of this expertise into the clinical IT area to meet the growing need to marry up clinical and discovery research information. In terms of how we operate, there has also been a significant evolution of the IT teams and operating model. At that time it was fully vertically integrated: I had all the developers on my team; we had all the support on my team. Now, Martin and I really lead more of a client services team, where we have account managers, program managers, and business analysts - people with business expertise and technology expertise. Most of the delivery is done through shared services in the corporate area, and some remains within Merck Research Labs IT for the more innovative work that goes on in research and isn't found in other divisions like manufacturing and marketing. But we're evolving to a fully shared services model, which has its benefits, especially in clinical, with large projects.

Martin: For my groups, I have scientific computing, which is predominantly in the bio space, some cheminformatics, and biomarker IT, where there's a lot of collaboration with Ingrid. And drug discovery services, focused on lab automation and the capture and management of biologic and chemical data and information - all the things that make the basic research lab tick.

How useful will your mutual cross-training prove, do you think?

Martin: One thing important to note is the new head at [Merck subsidiary] Rosetta. Kathleen Metters, the worldwide head of basic research, recently appointed Rupert Vessey to be the new head of Rosetta ... Rupert is a former clinical therapeutic area head, [so] basically we have a clinical leader heading up Rosetta, which is predominantly working on genomics, proteomics, genetics, and causal networks. So it's not just the IT with that cross-pollination - we've got two people from IT, from a clinical point of view and a basic research point of view, interacting with a former clinical person who is heading up the genomics space.

Ingrid: We've invested a lot in some core platforms; we need to start translating that into results in the clinic at some point. So having people who understand what it really takes to help inform the earlier research directions, the platform directions, is a key theme...
When I was in Martin's position, it was very difficult to get the clinical IT teams to focus on longer-term strategic projects, even short-term partnerships that weren't about a late-stage trial. Because at the end of the day, that's what they work for, right? They've got to get those trials filed. But when we think about the future, we want to have our data more integrated, and we weren't really getting a lot of traction. So one of the attractions of moving to this position was that someone with that background will keep their eye on that ball, and it won't be all about late stage. That's already proving true - there were a couple of times this year when there were scheduling conflicts between critical projects on both sides. And in the past, I know which one would have been dropped - the basic research project.

How is Rosetta working at Merck today?

Martin: There's Rosetta Inpharmatics, which is the part of Merck that does molecular profiling and genetics research, and then there's Rosetta Biosoftware, the part of Rosetta Inpharmatics that makes and sells software products such as Resolver, Elucidator, and Syllego, which we also use internally at Merck.

At Rosetta Inpharmatics, I work closely with scientists working in the bioinformatics and pathways space, who have taken a biological point of view to integrating information. One approach is trying to integrate as much information as is accessible - assay data and so on - for when scientists pull up a gene or target. They have developed a target gene index (TGI)... With a given gene, you can see all the relevant information. I think most pharmas have attempted that; I'd say it's the depth of the information, and the integration with some of the chemical space, that is different from what I have seen at other pharmas. This depth of integration within TGI is still growing... We do data integration and data management within basic research IT, and we provide some of the core services needed to do it from a research point of view.

How do you interact on a more day-to-day level?

Martin: We have some very high-level, strategic, long-term projects that we're working on. We have a large number of folks from my camp working with a large number from Ingrid's camp on the IT needs and implications of all the different clinical data, as well as sample data, and the access to this information that's needed to enable translational research. So we have joint projects - very strategic, with visibility all the way up to the MRL leadership.

In terms of some of the things being done at Rosetta, again, it crosses into the basic and clinical space, and we work together on making sure the right people are engaged in either basic or clinical IT. [Between the basic and clinical IT teams] we are very collaborative in terms of key strategic hires.

Ingrid: We're getting much closer to actually using genetics in our trials, based on the technology set up by our Seattle genetics group and the whole-genome analysis group (see "Merck Ties Gene Networks to Obesity"). We have a project team - Martin, our business and information architects, and Rosetta Biosoftware, together with clinical franchise and regulatory leaders - meeting to talk about the actual proposed data flow and architecture for moving genetics data from research systems into the clinical systems. Having formerly been in basic [research], it's a lot easier to see how that all fits together and how to move this data into the clinical systems now.

The Rosetta Biosoftware Syllego system, which is being used by the FDA, is something we're looking at: how does that fit into the clinical architecture? We have a clinical warehouse - where should the genetic data go? Should it be Syllego for raw data and the CDR for metadata? Again, it's moving into reality now, so understanding what that means, and being on the clinical side, is I think going to make it a lot easier to assimilate that type of data into the mainstream clinical systems.

My basic IT team worked with the imaging research team to put in place an imaging platform with IBM - Martin's team is continuing this work - and that's working well in the early development and research space. Now I want to say, look, we could save a tremendous amount of money if we move that into the late stage. But how do we do that when every investigator now has to learn that system?... Do we show it through our portal, or does it come in through EDC, or on its own? So there are all these support issues once you start thinking about really getting out into the clinic with some of these newer things.

How are you handling the surge of data, especially related to genomics?

Martin: Where we are doing work on pharmacogenomics and genetics in the clinical space, there is so much data. For example, one of my team members had to secure an additional 100 terabytes (TB) on the East Coast just to accommodate one experiment they were doing! Soon, I'm going to be playing around in the petabytes... At the moment, we need to keep the raw data because there's no clear guidance from the FDA as to what you need to keep. It's literally going to swamp us working in this space until we get better guidance around what data we need to keep versus what we could keep. One of my [team's] projects this year is basically a storage strategy, because if it's 100 TB this year, it's probably going to be a couple of hundred TB next year...

We all [in the industry] have data and document retention policies, but what tools do we have to really monitor and manage that? If I've got a couple of hundred TB coming in over the next couple of years, how do I know what to purge five years from now? Where are the tools to do that really large-scale data management and purging? In the current file-sharing landscape we have millions of files that normally have to be managed through retention policies; that's a challenge in itself. What's developing now is managing fewer files, but with a much larger overall volume.
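Merck's internal retention tooling isn't described in the interview, but the kind of purge-candidate scan Leach is asking for can be sketched in a few lines of Python. This is a minimal sketch, assuming a hypothetical five-year window and a hypothetical storage mount; real policies vary by data class and, as Leach notes, may hinge on pending FDA guidance.

```python
import os
import time

# Hypothetical retention window; real policies differ by data class.
RETENTION_DAYS = 5 * 365
CUTOFF = time.time() - RETENTION_DAYS * 86400

def purge_candidates(root):
    """Yield (path, size_bytes) for files older than the retention cutoff."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_mtime < CUTOFF:
                yield path, st.st_size

# Report rather than delete: purging regulated data needs human sign-off.
total = 0
for path, size in purge_candidates("/data/expression_runs"):  # hypothetical mount
    total += size
    print(path)
print(f"{total / 1e12:.2f} TB past retention window")
```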

How do you view translational medicine?

Martin: In two parts. The first part is increasing the clinical context of basic research experiments - using clinically relevant samples with their clinical information, allowing you to "translate" additional research measurements on the samples within a clinical context. So that enables the research. But then as you get into the pharmacogenomics space, where you're looking at genetic information to segregate populations into responders and non-responders, that's taking basic research discoveries and really applying them in the clinical space. So I see translational medicine as that mix of pharmacogenomics and biomarkers and everything rolled into one.

Ingrid: I agree. One of the key areas is clearly samples, whether you're doing proteomics, gene expression, genetics, or potentially looking at what populations could eventually respond to your drug. Samples are at the center of that, so we have been actively pursuing better informatics around them to make it clear what samples are available from what trials, which are consented, and what we can use them for. We already have siloed platforms to show that data; we need to integrate it more than it is... We have a new standards-based clinical warehouse that went into production last year, where we're really planning to have all the patient data - whether it's through collaborations or Merck trials - in one place, so that it's more available for future data mining and for understanding what types of patients and associated samples we have.
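As an illustration of the question Akerblom wants sample informatics to answer - which consented samples are available from which trials - here is a toy pandas query over a hypothetical sample inventory. The schema and consent flag are invented for the example; a real system would query a biobank or the clinical warehouse.

```python
import pandas as pd

# Toy sample inventory; the columns are invented for this sketch.
samples = pd.DataFrame({
    "sample_id": ["S001", "S002", "S003", "S004"],
    "trial_id":  ["T100", "T100", "T205", "T205"],
    "tissue":    ["plasma", "tumor", "plasma", "tumor"],
    "consented_for_genetics": [True, False, True, True],
})

# Which consented samples are available, per trial?
available = samples[samples["consented_for_genetics"]]
print(available.groupby("trial_id")["sample_id"].count())
```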

Martin: We have a major strategic collaboration with the H. Lee Moffitt Cancer Center [Tampa, FL] (see "Cancer Center Builds Gene Biobank," Bio•IT World, June 2007). We get different types of cancer samples, and those samples go to Rosetta [for] expression profiling. Moffitt uses that expression data internally for its research, and we get the clinical data associated with the samples, as well as the expression profiling data, to use at Merck... This is a major collaboration driven by Stephen Friend [senior VP of Oncology] and the Oncology franchise. I think it's a landmark in how we approach translational medicine at Merck... Data from this collaboration was the first clinical data from oncology to make its way into Merck's clinical data repository (CDR).

So we have clinical data securely flowing directly from Moffitt, through the firewalls and so on, into Merck's CDR, meeting all compliance needs. And that data is then shared through web services with Rosetta and other places so that it can be integrated with expression profiling data. We've really embraced industry standards to make that happen. This really has been breaking down silos - it's very hard to find a clinical group that opens up web services so that information is accessible to basic research. I think that in itself was groundbreaking at Merck. We've tried looking around to other pharmas - are you guys doing this sort of thing? Everyone is talking to the standards boards, but I think we've really [made] an investment by implementing some of this work in a real, active strategic collaboration...
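The actual CDR services, authentication, and payload schema aren't public, but the consumer side of the pattern Leach describes - a basic research client pulling de-identified clinical records over a web service for integration with expression data - might look like the following standard-library sketch. The endpoint URL and field names here are hypothetical.

```python
import json
import urllib.request

# Hypothetical internal endpoint; the real services and schema are not
# described in the article.
ENDPOINT = "https://cdr.example.merck.com/api/trials/MOFFITT-ONC/subjects"

def fetch_clinical_records(endpoint):
    """Fetch de-identified subject records as a list of dicts."""
    req = urllib.request.Request(endpoint, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

records = fetch_clinical_records(ENDPOINT)
# The join key would be a de-identified subject ID that the expression
# profiling data also carries (field name assumed here).
subjects = {r["subject_id"] for r in records}
print(f"{len(subjects)} subjects available for integration")
```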

Ingrid: The other important piece in that project that addresses the translational medicine question is that there are joint project teams between Merck and Moffitt clinical and basic researchers, all trying to mine and look at field experiments, build trials, identify new mechanisms, think about the future together. It's a very powerful collaboration, and IT has a seat at that table and is an active participant in those conversations. So I think it's a great area of translation where we really are leveraging clinical data to drive research.

How has informatics evolved at Merck? With budget tightening everywhere, does that impact the build-buy decision?

Ingrid: Ever since I joined, we've been primarily a buy shop, even in basic research. We mostly buy, and we try not to customize too much, but you still end up in that space. The clinical systems were primarily built internally; now they are mostly purchased, with the exception of the data warehouse, which is based on the Janus data model but was still built in-house with outsourcing. Where we're trying to find cost savings is in sharing services, particularly around support, maintenance of applications, and infrastructure - trying to drive down cost on the maintenance and operations side in order to continue to invest in the new development of strategic applications.

There are innovative areas, many of them in the emerging research technologies, where you're doing things that you just can't buy and where faster, iterative in-house development is needed; for example, we developed MouseTrap to support the management and display of animal phenotypic data. Generally speaking, it's quite a challenge. You've got to really be focused, and the business has to partner with you to prioritize... it's critical to have a strong partnership and governance with scientific leaders to ensure we are focusing IT resources on the right projects. The other thing is that they're also feeling the money pressure. So it's not just IT, it's not just the services anymore - it's everybody really looking at how we are going to contribute to optimizing the bottom line and growing the top line, and prioritizing those initiatives together.

What are some projects where you think you're really going to be able to expedite or make better decisions? And what outstanding challenges remain?

Martin: We're in a position now where we know how to generate information for biomarkers, and we know how to collect clinical information. So at least one project this year is: what is that killer application that you need to integrate the clinical information with biomarker information, so that we really do enable our scientists in their biomarker discovery or validation experiments? At the moment, we've got bits of the puzzle - genetics being managed in Syllego, expression in Resolver, proteomics in Elucidator - all separate applications and repositories. But what is that killer application that brings it all together and integrates it with clinical, so that you can do some meaningful mining and analysis? That's one of my goals, and I've got some exciting challenges to work through there.
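That killer application doesn't exist yet, but the core step Leach describes - lining up genetics, expression, and proteomics results with clinical data on a shared subject key - can be sketched with pandas. The file names and columns below are placeholders, not the actual Syllego, Resolver, or Elucidator export formats.

```python
import pandas as pd

# Placeholder extracts; the real repository exports and the CDR schema
# are not described in the article.
genetics   = pd.read_csv("syllego_genotypes.csv")    # subject_id, snp, genotype
expression = pd.read_csv("resolver_expression.csv")  # subject_id, gene, log_ratio
clinical   = pd.read_csv("cdr_clinical.csv")         # subject_id, arm, response

# One table keyed by subject: the precondition for any joint mining.
merged = (clinical
          .merge(genetics, on="subject_id", how="inner")
          .merge(expression, on="subject_id", how="inner"))

# e.g., does a gene's expression differ between responders and non-responders?
print(merged.groupby(["response", "gene"])["log_ratio"].mean())
```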

Ingrid: I think that's a shared one, because in the clinical sample area, combining the results data from clinical samples with the associated patient data - what's that platform? I know there are new commercially available things coming out, like Azyxxi from Microsoft. So we need to be looking at what's out there, where the gaps are, and whether we put something together ourselves. We did a pilot last year with an enterprise information integration (EII) platform, collaborating with IBM. There was enough productivity gain from that to justify taking EII to the next level, which our Innovation IT team is doing in 2008. The whole integration space, and then the actual viewing of integrated data in a meaningful way, continues to be a major focus.

We're embarking on an electronic medical records (EMR) strategy, looking at signal detection among other uses. We're redoing our pharmacovigilance system and approaches. Those are things that are just starting to be reinvested in - figuring out how we leverage that information and how we get it connected. There's also appetite for clinical trial simulation across a number of dimensions, including enrollment and operations optimization. We just overhauled our entire late-stage development systems in 18 months, so right now we're focused on ensuring that that work gets optimized and the value from the investment gets realized.

Martin: Who is going to be the health partner with Merck? Where do we place our bets in key strategic partnerships around EMR data or personal medical record data, and how do we find the best partners to enable translational research? From there it's doing the analysis of who will be the best partner, and when they will be mature enough - or Merck mature enough - to work with them.

Another exciting challenge is working with the external basic research team under [Catherine Strader, former head of research at Schering-Plough]. I'm working with her so that we really leverage information from our collaborations. In the past, the way information flows in a collaboration has been managed ad hoc. Moving forward, we really want to leverage and integrate this information more strategically.

What roles do the senior executives such as Peter Kim and Stephen Friend play?

Ingrid: Peter has a vision. He focuses us all on recognizing that the vast majority of information and innovation is happening outside the walls of Merck. We need to leverage it more by providing platforms that allow deep collaboration with external partners. There is also a focus on combining our own data with publicly generated data for competitive advantage - while holding the line to work pre-competitively where it makes sense. You get that vision through the research strategy meetings.

I think Stephen Friend is clearly a visionary who inspires many individuals at Merck, both on the science side and the IT side - a very forward thinker pushing all the teams, Rosetta as well as myself and Martin, to think out of the box.

Merck Ties Gene Networks to Obesity

In March, scientists from Rosetta and Merck published a pair of papers in Nature identifying changes in gene networks associated with obesity. The team, led by Eric Schadt, scientific executive director of genetics, is deploying a more holistic approach to the pathogenesis of common diseases - not merely searching for gene variants, but measuring gene expression in tissues from obese humans as well as mouse models, coupled with information on DNA variation and clinical data. Massive computational analysis - the equivalent of 7,000 CPUs - pinpointed entire gene networks perturbed in obesity.
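The Rosetta reconstruction is far richer than any toy example - it uses DNA variation and clinical data to orient and validate edges - but the first step, building a co-expression network from an expression matrix, can be sketched in a few lines of numpy. The data here are random and the correlation threshold is arbitrary, purely to show the shape of the computation.

```python
import numpy as np

# Toy expression matrix: rows = genes, columns = tissue samples.
rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 50))

# Pairwise Pearson correlation between gene expression profiles.
corr = np.corrcoef(expr)

# Simple co-expression network: connect genes whose |r| clears a cutoff.
# (The published approach additionally uses DNA variation to orient edges.)
threshold = 0.5  # arbitrary for this sketch
adjacency = (np.abs(corr) > threshold) & ~np.eye(len(corr), dtype=bool)
print(f"{adjacency.sum() // 2} co-expression edges among {len(corr)} genes")
```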

"Common diseases such as obesity result from genetic and environmental disturbances in entire networks of genes rather than in a handful of genes," says Schadt. "The accurate reconstruction of these networks will be critical to identifying the best therapeutic targets."

In one study, Merck researchers and scientists from UCLA identified DNA variations in mouse tissues associated with obesity, diabetes, and atherosclerosis. Schadt and colleagues built gene networks and identified the constituent genes implicated in the various diseases, notably three specific genes: Lpl, Ppm1l, and Lactb. In a separate paper, Merck scientists collaborated with deCODE Genetics and Iceland's National University to construct obesity expression networks using tissue and clinical data from more than 1,000 Icelanders, largely in agreement with the mouse work.

Further Reading:
Chen, Y. et al. Variations in DNA elucidate molecular networks that cause disease. Nature, published online March 16, 2008.

Emilsson, V. et al. Genetics of gene expression and its effect on disease. Nature, published online March 16, 2008.
