
DREAM6 Breaks New Ground


By John Russell

August 2, 2011 | The Russell Transcript | Roughly five years ago the organizers of DREAM—Dialogue for Reverse Engineering Assessment and Methods—set out to find the best algorithms for inferring biological networks from blinded data sets. Emulating the CASP* program, they created an annual competition in which researchers downloaded data for a set of challenges and used their favorite algorithms to solve them. Winners were announced at the annual DREAM conference, and the results were published in an effort to create a valuable resource.

A funny thing happened along the way. Actually two important things. First, it turns out there is no such critter as the perfect algorithm. “Data itself is so high dimensional, the biology itself is so complex, that probably the notion of finding the best algorithm to analyze a data set was a little too simplistic,” says Gustavo Stolovitzky, DREAM chair and manager of functional genomics and systems biology at IBM’s Computational Biology Center.

Potentially much more important, it also turned out that the aggregate prediction of the competing groups was nearly always the best prediction, or among the top three. This unexpected result, which highlights the wisdom of the crowd, is prompting DREAM to rethink its mission and to put “collaboration by competition” to work on real-world problems rather than chasing down better algorithms.

It’s a fascinating finding, which might have great utility in unraveling thorny basic biology questions as well as use in early drug discovery.

“The big encompassing lesson is that without a doubt there is wisdom of crowds. Consistently the aggregate of the predictions is really robust with respect to any of the other individual predictions; very often it is better than the best, and when it’s not, it is among the best,” explains Stolovitzky.

Stolovitzky says, “There are many ways of aggregating predictions. The one we have been using is very robust.” It’s a little complicated, so with apologies to Gustavo, here’s an attempt to simplify and summarize it: Each competing team produces an overall solution to a challenge (e.g., a fairly granular description of a particular signaling pathway or gene regulatory network). The team ranks each component of its solution (e.g., an interaction between two genes) according to its confidence that the component is correct. One interaction may be ranked high while another may be ranked low. A team’s overall solution may or may not perform well.

Stolovitzky averages the confidence rankings for each interaction from all the teams and produces a new aggregate solution to the challenge by re-ranking the interactions according to their average rank. Obviously there’s a bit more to it, but you get the broad picture.
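For readers who want the mechanics spelled out, here is a minimal sketch of that rank-averaging step in Python. The function name, the team data, and the gene pairs are illustrative assumptions for this column, not DREAM’s actual scoring code.

# A minimal sketch of the rank-averaging idea described above.
# Data and names are illustrative, not DREAM's actual scoring code.

def aggregate_predictions(team_rankings):
    """Combine per-team confidence rankings into one aggregate ranking.

    team_rankings: a list of dicts, one per team, mapping each candidate
    interaction (e.g. a gene pair) to that team's rank for it
    (1 = most confident). Returns the interactions re-ranked by average rank.
    """
    interactions = team_rankings[0].keys()
    avg_rank = {
        interaction: sum(r[interaction] for r in team_rankings) / len(team_rankings)
        for interaction in interactions
    }
    # A lower average rank means higher aggregate confidence.
    return sorted(avg_rank, key=avg_rank.get)

# Three hypothetical teams ranking the same three candidate interactions.
teams = [
    {("geneA", "geneB"): 1, ("geneA", "geneC"): 2, ("geneB", "geneC"): 3},
    {("geneA", "geneB"): 2, ("geneA", "geneC"): 1, ("geneB", "geneC"): 3},
    {("geneA", "geneB"): 1, ("geneA", "geneC"): 3, ("geneB", "geneC"): 2},
]
print(aggregate_predictions(teams))
# [('geneA', 'geneB'), ('geneA', 'geneC'), ('geneB', 'geneC')]

In this toy example the geneA–geneB interaction comes out on top of the aggregate because every team placed it near the top of its own ranking, even though no single team’s full ranking matches the final order.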

Advanced Aggregate

One interesting aspect of this aggregation approach is that even an algorithm that performs poorly overall may get a particular interaction right, and that interaction can be captured in the aggregate prediction. In one DREAM4 challenge, 11 of 12 groups identified a new interaction inferred from the data, and it was included in the aggregate prediction even though many of the teams’ overall predictions were poor. “Even suboptimal algorithms have a place in the zoo of algorithms,” he says.

It’s collaboration by competition, says Stolovitzky. “So people try to do something against each other but unbeknownst to them, when you aggregate their predictions on exactly the same data you are really making them collaborate. In that aggregate prediction is where the wisdom of the crowd can emerge.” All of a sudden, “It is not worth it to try to develop the best algorithm if the aggregate is always the best.”

Instead, perhaps, problem selection becomes more important. This doesn’t mean work on developing great algorithms is worthless; it’s not, emphasizes Stolovitzky, but it becomes secondary. Tackling real-world problems becomes more enticing and impactful. One issue is incentivizing the activity. “People like to be the best at something,” notes Stolovitzky.

Recently, the challenges for DREAM6 were posted (http://the-dream-project.org; the submission deadline is August 22, and winners will be announced October 14). This year, Stolovitzky says, there is no specific network inference challenge while the DREAM team mulls over its future direction. However, there is one on diagnosing acute myeloid leukemia from patient samples using flow cytometry data.

 Change is in the wind for DREAM.

* Critical Assessment of Techniques for Protein Structure Prediction

This article also appeared in the 2011 July-August issue of Bio-IT World.  


