
Richard Resnick’s Quest for Genome Governance


Resnick sees opportunities for content over algorithms and clinical integration.

September 27, 2011 | For the past two years, Westborough, Mass.-based GenomeQuest has been headed by Richard Resnick, a bioinformatician who trained with Eric Lander at the Whitehead Institute Genome Center in the late 1990s. GenomeQuest offers a series of cloud-based genome informatics solutions that have been gaining traction, particularly for handling large datasets in industry. Resnick spoke to Bio•IT World about his company’s progress, the ongoing challenges of interfacing with pharma, and the opportunities presented by clinical genome informatics.

Bio•IT World: Richard, could you give us a quick overview of GenomeQuest’s offerings and service?

RESNICK: Our IP database, which allows sequence searching, has been our core business for 11-12 years. After I joined, we added capabilities in next-generation sequencing (NGS) for genomics research. What we had is this engine that allows us to do things at ridiculous scales. Using that engine, we’ve built out a healthy franchise in research-oriented genomics.

Our service is cloud-based. We break that down into infrastructure, platform, and software. We have all three: infrastructure-as-a-service, platform-as-a-service, and software-as-a-service, because we have an end product application. We can deploy it on Amazon, behind a firewall, in a CLIA lab, or in an enterprise organization. So it’s a highly parallel, distributed system that happens to run in the cloud.

Are you primarily targeting clients in industry?

Our customers include big pharma, big agriculture, smaller biotechs, and some academics. Quite bluntly, we haven’t performed as well on the academic side; we’ve favored the enterprise level because we can generally do bigger projects and the buying patterns are a little easier. It’s easier to support an agbio customer with a few thousand exomes of maize than, say, a researcher who has a fancy new method of ChIP-Seq!

We generally don’t lose in the enterprise accounts to any commercial software competitor. We’ll go up against all the likely suspects and nine times out of ten win those deals, because of scale and the openness of the system. Where we do fight and lose sometimes is competition against internal informatics teams. That’s a natural reaction when you’ve got a market full of sharp people all looking at open-source solutions.

What trends do you see in big pharma with regard to building internal bioinformatics resources versus outsourcing the analysis component?

Broadly, pharma is really struggling to figure this out. Frankly, every account is doing exactly the same thing as their competitors, which suggests an opportunity to focus on things of higher value. The typical trends are: no, there isn’t a concerted effort to bring NGS instruments inside the company—sequencing is largely being done by outside organizations. BGI is winning that battle, and Complete Genomics is second.

The informatics focus inside pharma is largely a do-it-yourself mentality, with a willingness to explore solutions on the outside. What’s happening broadly inside pharma is that there’s a disconnect between management and the informaticists. I can have a meeting with a pharma CEO who says, we want to be innovative, and that means outsourcing things that are pre-competitive. The VP of research/oncology might say, our strategy is to bring in things we think are competitive and outsource the rest. But the director of bioinformatics at the same company says, we don’t have a strategy! I’ve got countless examples of this.

When we come in, we get some excitement, and questions such as: how much do you expect this to cost? A company might spend $1 million on sequencing, but assign $15,000 for a software license. There’s that disconnect; pharma is unable to properly cost what it does. There’s a budget for an informatics team and a budget for licensing software, and no connection between the ROI of licensing software and that of building up staff. Only a few pharmas have optimized this.

Moving research forward into biomarker discovery and clinical trials, I think there are people in the industry that have a vision, but they’re the exception.

We know one customer willing to take a drug and do genomics on strong responders and adverse events, using exon sequencing and hypothesis building using our technology. But largely, this is not something that pharma knows how to do yet.

What provides GenomeQuest’s competitive advantage?

One thing is scale and our ability to scale arbitrarily. Another is our openness and APIs, and the ability to customize. Third is interoperability. In some cases, we share accounts with our competitors: some users like software A and others prefer software B. But to manage and analyze big data, in the context of 1000 Genomes data, genetic tests, and even patents on the genome… building data integration on a massive scale, that’s what we’re trying to do.

On the patent front, particularly in diagnostics, there’s plenty of crosstalk: the sequencing core is both a producer and a consumer of information going to and coming from the patent group. With the same interface, you can share data and reports.

Moving into diagnostics research, as some pharma and LDT companies are doing now, there are a bunch of problems to solve.

In CLIA mode, diagnostics must be analytically and clinically valid and useful, so we can help with that because of our capabilities around testing diagnostics against, say, Sanger sequencing. You’ve also got to get clearance on the IP side. It’s a part of the service.

How are you trying to attract more interest from academia?

Well, we have a free basic account so they [academics] can try it, but we don’t have an active sales force selling there. We don’t make a lot of money there.

But where we will focus is in two areas, both centered on translational research. First, take our announcement earlier this year with the University of Iowa around gene panel-based diagnostic testing [for inherited deafness]. These academic hospitals are an area where we are growing out, and there’s more to follow. The dollars aren’t that big there either, but this is the beginning of a complete revolution in health care.

We just hired Gerry Higgins, formerly at NHGRI, who wrote the roadmap for biomedical informatics. We’ll also focus on large academic projects and CTSA (Clinical and Translational Science Award) sites.

Tell us more about the Iowa deafness example.

None of this was happening 6-9 months ago. Now there’s a new announcement every week. We just integrated the GeneTests database into GenomeQuest. You can run every test on GeneTests.org…

Iowa is an interesting and representative case. They had an old Sanger sequencing test for inherited deafness covering 66 genes. It was very expensive and time consuming: $75,000 if you test all 66 genes. They’d sequence one gene, then a few others. In the meantime, the patient is 9 months old and deaf, and the diagnosis drives whether that child gets a cochlear implant. Obviously you want to do that early, because they’ve got a developing brain…

The Iowa team played around with one of our free basic accounts, and they recognized that we had all the pieces to automate an NGS-based diagnostic that targets their 66 genes.

So we took the GeneTests capability and shrunk it down to focus on the genes and variants they cared about. We built their medical logic into a workflow, and now they upload sequence and get a printable diagnostic report.

It’s an example of a diagnostic that drives patient care, the first of its kind. Many other folks are moving in this direction, whether it be cardiomyopathy or cancer biomarkers or X-linked mental retardation.

So we have a commercial company providing a diagnostic backbone in software—and it won’t be the last—for about $2,000, physician friendly, in about a week.

We’ve seen several exciting clinical genomics stories this year. What are you doing with regard to clinical genome interpretation?

There’s no doubt we see the value in integration—the more we do whole-genome sequencing, the more our engine has an ability to shine, because the scale is so big. When you get into whole-genome sequencing, the cost of informatics is the same as the sequencing. Driving cost advantages because of scale is an important proposition.

The successes you cited, that’s not automated, universal-diagnostic, whole-genome stuff. That’s really translational research for desperate patients. There you absolutely do need a platform that can do whole-genome sequencing analysis, like GenomeQuest, but you also need really well trained doctors, pathologists, and oncologists. It’s a labor for a team of 10-30 people to do that.

We want to do that. For example, Rick Wilson (The Genome Institute, Washington University) is on our scientific advisory board, and we work closely with the Beth Israel Deaconess Medical Center...

However, I think it’s dangerous for us as a community to make the leap from there to say, guess what, whole-genome sequencing is a universal diagnostic and we can tell you everything—that’s definitely not the case.

While I’m a proponent of whole-genome sequencing when a patient presents with certain disorders, so we can look at what in the genome is clinically actionable, we don’t yet have an algorithm for interpreting a whole genome that has been shown to be clinically useful. I think the way to get there is by doing gene panels, as we’ve announced with Iowa, going to exomes over the next 18-36 months, and from there going to whole genomes.

Looking ahead a few years, what role might you be playing in genome interpretation?

I don’t think we’ll be in the business of developing new algorithms to compete with guys like Martin Reese [Omicia]. He sees algorithms such as VAAST as the competitive secret sauce and has the exclusive license.

What we do is integrate what is best of breed, what the Broad Institute says is great, what peer review says is great. Then we overlay the world’s data and our ability to do computation, and produce a physician-friendly report.

In the future, we think it will be less about algorithms and more about content.

Take Iowa: we’ll support some hundreds or thousands of diagnostics in the next year. Think of a CBC with low hematocrit. If I’ve got a de-identified diagnostic running in software, and over a period of a year I’ve generated a thousand of these, someone will want that content, right? Likely suspects include cochlear implant manufacturers, pharma, payers, and patient advocacy groups.

So we think the value ultimately is in the aggregation of diagnostics that can be redone over time; for example, Myriad knows more about BRCA1 than anyone else…

Are you talking to clinical and medical centers already?

We’re targeting and in advanced conversations with a lot of them. The ones that are leading recognize that this isn’t just keeping up with science; the guys running diagnostics labs are running a business. One motivation is health care—the quality of care—and the other is costs. This technology solves both of those problems. They may stand behind a diagnostics test now, but in a year it’ll change—and they’re not waiting.  

This article also appeared in the 2011 September-October issue of Bio-IT World magazine.



For reprints and/or copyright permission, please contact  Terry Manning, 781.972.1349 , tmanning@healthtech.com.