
State of the Third Generation


June 8, 2011 | Insights Outlook | Jeffery Schloss, Program Director, has been at the National Human Genome Research Institute for nearly two decades. He is now responsible for technology development, soliciting grant applications for next-generation sequencing and fostering many of the current second-generation sequencing technologies. Schloss spoke to Insight Pharma Reports for its latest report on third-generation sequencing. Here are some extracts from that interview.

On second-generation systems compared to third-generation systems: I’ve been amazed and pleasantly surprised by how far the [NGS] companies have been able to drive down the costs of second-generation sequencing, continually improving throughput while keeping the quality of the reads reasonably high. I think that makes it more difficult for the next rounds of technology, whatever they are, to compete in the marketplace. Some people are saying that they’ll have to be more niche-type players in the market because, for raw power sequencing production, the current next-gen technologies with their improvements are going to be really hard to beat. That said, we know that current next-gen technologies aren’t seeing some important parts of the genome because of factors that are inherent to their process…

If you could read a few thousand bases, then you could assemble regions that right now you can’t. You could assign reads to unique locations for resequencing studies, and this would certainly help a lot for de novo assembly. So that’s where I expect the next rounds of technology to fit in. Whether it’s something like PacBio, Starlight, nanopore, or electron microscopy, any of these should be able to provide longer reads than you can get with any of today’s technologies. Even 454’s longer reads, of 400 to 500 bases, aren’t long enough to jump across some of these complexities in the structure of the genome, which you need to do to assemble it properly.

On progress in nanopore sequencing: All of the pieces of the technology have been published: the ability to step DNA one base at a time under control, and the ability to read bases when they’re held for long enough in particular positions in the reader. Now those groups are actively working to put their pieces together. There are a couple of different protein pores on which people are publishing, and there are other developments in protein nanopore sequencing that haven’t been published yet. (The groups working on these projects with support from NHGRI are all listed on our website.) Regarding the published work, alpha-hemolysin and MspA pores have both been shown to be able to read and distinguish all four bases... [as well as] six or seven bases, including various types of methylation.

There are several ways being developed to step DNA through nanopores. The most promising is to use DNA polymerase to step an intact DNA molecule one base at a time through the nanopore. So that gives you a way to hold the bases in reading position. We had a grantee meeting last week, where people were talking about these results and collaborations. It’s predicting the future, but I would not be at all surprised if, a year from now, we’ve seen publications where people are reading bases. There won’t be a commercial sequencing machine yet, but I think we’re going to see the ability to read bases from intact oligos at least, if not longer strands of DNA, moving through nanopores. The potential for speed, accuracy, and long reads is why we keep investing in nanopores.

On solid-state nanopores: The work there, from the US and Japan particularly, shows that you can extract signals by tunneling, a different mechanism from how signal is detected with the ion pores. With tunneling, an electrical current tunnels through the nucleotide base that’s positioned between electrodes in the plane of the pore. So far the tunneling measurements haven’t been done using pores but instead with atomic force microscopy setups designed to demonstrate technically that it can work. Now people are trying to move that into a nanopore configuration so that you have a way to thread the DNA past the sensors. In the proof-of-principle experiments on surfaces, it’s hard to move the AFM tip along the DNA molecule. But that approach is also looking promising.

On other approaches (optical reads, electron microscopy): Optipore sequencing is very interesting. The fact that you have to convert the DNA to another form loses a significant advantage of the nanopore potential, which is that you can use genomic DNA right out of the cell. On the other hand, the idea is that you would prepare a sample from whole genomic DNA in a single test tube, and it would go very fast. There’s also the question of whether the conversion chemistry retains the sequence fidelity. Yet most of the other technologies require enzymes, which would raise similar fidelity questions, and they work pretty well. So as long as any conversion error is random and not systematic, then you should be able to get around it by read redundancy.
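To put a rough number on the redundancy argument above (a minimal sketch, not something from the interview): if conversion or read errors really are independent and random, a simple majority-vote consensus drives the error rate down quickly as coverage increases. The 15% per-read error rate, the 15-fold coverage, and the majority-vote consensus below are illustrative assumptions only.

    from math import comb

    # Minimal sketch: each read is assumed to err independently at a given
    # position with probability p, and the consensus is a simple majority vote.
    def consensus_error(p, coverage):
        """Probability that more than half of `coverage` independent reads are wrong."""
        return sum(comb(coverage, k) * p**k * (1 - p)**(coverage - k)
                   for k in range(coverage // 2 + 1, coverage + 1))

    # Illustrative numbers, not figures from the report: a 15% per-read error
    # rate with 15-fold redundancy gives a consensus error of roughly 6e-4.
    print(consensus_error(0.15, 15))

Systematic errors, by contrast, repeat in every read and do not average out, which is why the distinction Schloss draws matters.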

Further reading: “Third Generation Sequencing Technologies” will be published in June 2011: www.insightpharmareports.com

 

