


By Malorye Branca


 Toxic avenger: Brent Stockwell's team screened 23,000 compounds; only nine killed cancer cells exclusively. 
August 13, 2003 | In the wake of the genomic technology boom, many groups are now concentrating on "getting it right" by refining parts and processes in their platforms.

"I think we are just cutting our teeth on the technology," says Clarence Wang, senior scientist and group leader in bioinformatics at Genzyme. "People are getting tons of data, but answers are not falling out." Novartis' Paul Herrling concurs. "The more high-throughput your system is, the more you have to use your brain," he says.

That realization has led to a new emphasis on systems biology. "You just can't deny the inter-relatedness of pathways," Wang says. "You need better ways to pull out the causative relationships."

Exelixis has pioneered this approach, integrating multiple high-throughput genomic tools. The company has just filed an IND for its first homegrown cancer drug, XL784. The drug's target was identified through comparative genomics: In the fruit fly, the gene drives formation of the insect's trachea, a branching system. That gene turned out to have a similar function in humans, where it promotes angiogenesis (blood vessel formation). What's most intriguing is that the compound is remarkably safe, an unusual feature for an anticancer drug.

Angiogenesis is one of the hottest emerging target areas, and Genzyme is also panning for targets in that field. Researchers there rely heavily on microarrays and serial analysis of gene expression (SAGE), an alternative approach to measuring gene-expression levels. At first, Wang says, they aimed to build an enterprise software system but later "backed away" from that idea, concentrating instead on training scientists on the tools and on thinking through experimental design and statistical issues. The results are starting to come in. "We are finding a bunch of new players in angiogenesis," Wang says. "And they seem to be making sense."
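The core idea behind SAGE is simple: expression is measured by counting short sequence tags, one per transcript, so a tag's relative abundance estimates its gene's expression level. A minimal sketch of that counting step, with invented tags and counts:

```python
# Sketch of SAGE-style expression estimation: count short sequence tags
# and report each tag's relative abundance. Tags below are invented.
from collections import Counter

# stream of 10-bp tags extracted from sequenced concatemers (hypothetical)
tags = ["GATCACGTAA", "GATCACGTAA", "TTAGGCCATA", "GATCACGTAA", "TTAGGCCATA"]

counts = Counter(tags)
total = sum(counts.values())

# relative abundance per tag; real studies map tags back to genes
abundance = {tag: n / total for tag, n in counts.items()}
print(abundance["GATCACGTAA"])  # → 0.6
```

Unlike microarrays, which hybridize against a fixed set of probes, this counting approach needs no prior knowledge of which genes to look for.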

The National Cancer Institute serves a much larger, dispersed, and diverse group of scientists, and an enterprise system for microarray users was a priority there. "We wanted to make microarrays a package that anyone could easily learn to use," says Lisa Gangi, head of the Laboratory of Molecular Technology at NCI. The group makes an average of 300 arrays a month.

To manage all the data coming off those arrays, the National Institutes of Health's Center for Information Technology (CIT) developed the mAdb (microArray database) System. Bioinformatics firm SRA collaborated on the system design, implementation, and training. Currently, the system lets investigators store their data privately and securely and provides access to a wide range of analytical tools. "The intention is that all these data will become public as they are published, and be shared," Gangi says.

The system is so successful that other groups are adopting it. The NCI software is up and running at the Netherlands Cancer Institute, the Genome Institute of Singapore, and the Centers for Disease Control and Prevention.

The next level of data that scientists must address relates to proteins. Hybrigenics, for example, uses protein-protein interaction data, in conjunction with gene expression, to validate new targets. The company is particularly interested in the Wnt pathway "because it is related to a morphological difference [between cancer and noncancer cells]," says Donny Strosberg, Hybrigenics president and CEO. The company has about 30 potential targets for colon cancer compounds. "We go from the pathway to the cells," Strosberg says.

A new twist is using gene markers of compound characteristics to guide drug development. Avalon Pharmaceuticals uses a focused library of about 100,000 compounds in combination with gene-expression signatures and other data to do cancer-gene-specific screens. The company uses the genomic data to search for compounds with the right combination of features. "We hope to get a new class of drugs that will attack cancer but spare normal cells," says Meena Augustus, senior scientific director at Avalon.
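One common way to search for compounds "with the right combination of features" is to rank them by how closely the expression signature each compound induces matches a desired profile. The sketch below uses cosine similarity; the gene panel, signatures, and compound names are all hypothetical, not Avalon's actual method:

```python
# Sketch of signature-based compound triage: rank compounds by cosine
# similarity between their induced expression signature and a target
# profile. All compounds and values below are invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length expression vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# desired profile over a 4-gene panel: up-regulate the first two genes,
# leave the last two (e.g., housekeeping genes) flat
target = [1.0, 1.0, 0.0, 0.0]

# expression changes each compound induces (hypothetical screen readout)
induced = {
    "CPD-A": [0.9, 0.8, 0.1, 0.0],   # close match to the target profile
    "CPD-B": [0.1, 0.0, 0.9, 1.0],   # wrong genes respond
}

ranked = sorted(induced, key=lambda c: cosine(induced[c], target), reverse=True)
print(ranked[0])  # → CPD-A
```

Ranking rather than thresholding lets chemists take the top handful of compounds forward regardless of how strong the best match is.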

In a similar vein, Whitehead Institute scientist Brent Stockwell and colleagues set out to "look for small molecules that only become toxic in the presence of specific oncogenes." They were searching for molecules that kill cancer cells exclusively. This type of study became possible when scientists could "create genetically well-defined tumor cell lines," Stockwell says. They screened 23,000 compounds and found just nine that killed tumor cells only. One turned out to be a previously unknown compound, which they dubbed erastin and are following up on.
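The selection logic of such a differential screen is a simple filter: keep only compounds that kill the oncogene-expressing line while sparing its genetically matched normal counterpart. A minimal sketch, with invented compound IDs, viability values, and cutoffs:

```python
# Sketch of a differential cytotoxicity screen: keep only compounds toxic
# to the engineered (oncogene-expressing) line but not to its isogenic
# normal counterpart. All IDs and viability numbers are hypothetical.

# viability fraction after treatment: 1.0 = fully viable, 0.0 = all dead
screen = {
    "CPD-00017": {"tumor": 0.05, "normal": 0.92},   # selective hit
    "CPD-04211": {"tumor": 0.04, "normal": 0.08},   # toxic to both lines
    "CPD-09330": {"tumor": 0.95, "normal": 0.97},   # inactive
}

TOXIC = 0.20    # viability below this counts as "killed"
SPARED = 0.80   # viability above this counts as "unharmed"

def selective_hits(results):
    """Return compounds that kill tumor cells but spare normal cells."""
    return [cpd for cpd, v in results.items()
            if v["tumor"] < TOXIC and v["normal"] > SPARED]

print(selective_hits(screen))  # → ['CPD-00017']
```

The two-sided filter is the point: generally cytotoxic compounds, which dominate raw screening hits, are discarded along with the inactives.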

One thing is clear: This work will involve more, and increasingly complex, data, and companies are having to beef up their systems. Aventis, for example, uses a 34-node PSSC cluster running AMD Athlon MP processors for the computationally intensive work of virtually screening small-molecule structures. The group plans to double the cluster's size within the year.

There may not be a single best way to use genomic data for cancer drug discovery, but as the tools are sharpened, it may turn out that multiple approaches can be used.







For reprints and/or copyright permission, please contact Terry Manning, 781.972.1349, tmanning@healthtech.com.