
Open Source Solutions for Image Data Analysis


From neurons to nematodes, the challenges in data analysis remain pervasive.

By Olivier Morteau

August 2, 2011 | Ron Kikinis, who runs the Surgical Planning Laboratory (SPL) at Boston’s Brigham and Women’s Hospital, admits that his research field is privileged when it comes to the tools that have been developed in the past two decades for neuroimage analysis. “It’s probably the most advanced area in terms of image data analysis,” he says, which he attributes to a long-term effort by the NIH to fund projects in neuroimaging.

Image analysis exists for clinical applications in many other organs, but there has not been as much funding to develop post-processing for those applications, which consequently tend to lag behind neuroimaging. But that doesn’t mean that neuronal image data analysis technologies cannot be improved. For example, most technologies have focused on group comparisons of healthy brains. “A lot of tools can do an incredible job in analyzing this type of data. However, as soon as you go into brain pathologies, the technology available is significantly less robust,” says Kikinis.

Advances in bioimaging devices, which are producing larger volumes of data of ever greater complexity, mean “we’re drowning in data,” he says. Images generated by magnetic resonance imaging (MRI), CT scans, and positron emission tomography (PET) are typically 3-D or 4-D, where the fourth dimension is time, contrast uptake, or some chemical parameter.

“How do you process and analyze data to the point where you see the information that you are interested in?” he asks. “That usually means some form of processing that consists of throwing away a lot of data, until the only data left are what you are interested in.” The key is a combination of acquiring high quality data by expert scientists and post-processing using relevant algorithms. “The point of post-processing is not to decrease the storage requirements—although it typically reduces data files of several gigabytes to just a few kilobytes—but to expose the relevant information in the context of a particular task.”
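
To make the idea concrete, here is a minimal sketch of how a multi-gigabyte volume might be reduced to a few task-relevant numbers. This is not code from Kikinis’s lab; the file name and the intensity threshold are hypothetical.

```python
# A minimal sketch (not from 3D Slicer): collapse a multi-gigabyte 3-D volume
# into a few task-relevant numbers. File name and threshold are hypothetical.
import nibabel as nib
import numpy as np

img = nib.load("brain_scan.nii.gz")                      # gigabytes of voxel data
data = img.get_fdata()
voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))   # mm^3 per voxel

mask = data > 300                                        # crude intensity threshold
summary = {
    "structure_volume_mm3": float(mask.sum() * voxel_mm3),
    "mean_intensity_in_structure": float(data[mask].mean()),
}
print(summary)                                           # kilobytes of information
```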

High-Throughput Imaging

Anne Carpenter, who directs the imaging platform at the Broad Institute, says that extracting key information is a task inherent to bioimage analysis. “That is just what image analysis is—converting a large amount of digital information into a more manageable amount of the most critical information,” says Carpenter.

Because her focus is high-throughput screening (HTS), she uses microscopes that generate static 2-D high-throughput images. The data are usually less complex than those generated by medical imaging devices like MRI or CT-scans. “In HTS, the goal is to take millions or hundreds of thousands of images and identify the small percentage of them that has the characteristics of interest. Conceptually, that’s very simple, but the challenge is actually in doing it,” says Carpenter.
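
A minimal sketch of that idea follows, assuming single-channel (grayscale) well images and a deliberately naive “bright object count” feature; none of this is CellProfiler code, and the file pattern and threshold are invented.

```python
# Minimal sketch of the HTS idea: score every image with a simple feature and
# keep the small fraction with the highest scores. The glob pattern, the
# threshold, and the "bright object count" feature are illustrative only.
from glob import glob
import numpy as np
from scipy import ndimage
from skimage.io import imread

def bright_object_count(path, threshold=0.5):
    img = imread(path).astype(float)
    img /= img.max() or 1.0                    # normalize to [0, 1]
    _, n_objects = ndimage.label(img > threshold)
    return n_objects

scores = {p: bright_object_count(p) for p in glob("plate_*/well_*.tif")}
top = sorted(scores, key=scores.get, reverse=True)[: max(1, len(scores) // 100)]
print(f"{len(top)} candidate hits out of {len(scores)} images")
```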

Bioimaging and medical imaging pose distinct challenges. The structure of the human brain doesn’t vary much from patient to patient. But studies of the nematode (Caenorhabditis elegans), for example, might involve organisms that can curve upside down or backwards. The cardinal features in one image analysis project can vary from one experiment to another, says Carpenter. The same is true with cultured cells. “You can’t align them to each other in the same way that you can align a brain to another brain,” says Carpenter.

From her viewpoint as a cell and computational biologist, the challenge of bioimaging simply reflects the physiological complexity of the biological system being studied. Biologists are gravitating toward much more physiological systems than before, she says, preferring to work with whole organisms rather than cultured cells. “However, many organisms do not yet have their own image analysis algorithms. C. elegans and zebrafish are two organisms we’ve been working on.”

And cell biologists, who often culture two different types of cells together (because it keeps the cells in a more physiological environment), present challenges of their own. “Whenever you mix two cell types together, not only is it challenging to get the cells to grow happily, but it also presents image analysis challenges, because you are not tuning the algorithm just to fit one cell type,” she says.

Seeing Solutions

Kikinis and his colleagues at the SPL have been developing the 3D Slicer software package as a tool for medical image analysis and post-processing. “I’m a medical doctor, so I don’t write code myself, but I’ve been working in interdisciplinary research with computer scientists for a quarter of a century,” says Kikinis.

3D Slicer has been developed over the past several years with unrestricted NIH funding. “NIH wanted us to make this software available in a meaningful way, and from our point of view the most meaningful way was to go completely open source,” says Kikinis.

“Think of 3D Slicer as a big chest of tools,” says Kikinis. For example, Kikinis and his colleagues rely on a proven imaging method called diffusion-weighted imaging (based on the local microstructural characteristics of water diffusion) that is used to study the organization of the brain’s white matter. 3D Slicer offers a suite of tools to do rapid post-processing of these images.

“You would first filter for noise reduction,” he says, “then estimate the diffusion tensor from the diffusion-weighted images, and finally do some form of fiber streamline analysis inside the diffusion tensor field.”
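
The middle step of that pipeline can be sketched with a generic log-linear least-squares fit. This is illustrative only and is not 3D Slicer’s implementation; the b-values, gradient directions, and signals are assumed inputs.

```python
# Hedged sketch of diffusion tensor estimation at a single voxel (a generic
# log-linear least-squares fit, not 3D Slicer's code).
# Model: S_i = S0 * exp(-b_i * g_i^T D g_i), so ln(S_i/S0) is linear in the
# six unique tensor elements.
import numpy as np

def fit_tensor(signals, s0, bvals, bvecs):
    """Return the symmetric 3x3 diffusion tensor for one voxel."""
    g = np.asarray(bvecs, dtype=float)      # (N, 3) unit gradient directions
    b = np.asarray(bvals, dtype=float)      # (N,) b-values, s/mm^2
    # Design matrix for [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]:
    B = b[:, None] * np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ])
    y = -np.log(np.asarray(signals, dtype=float) / s0)
    d, *_ = np.linalg.lstsq(B, y, rcond=None)
    dxx, dyy, dzz, dxy, dxz, dyz = d
    return np.array([[dxx, dxy, dxz],
                     [dxy, dyy, dyz],
                     [dxz, dyz, dzz]])

# The tensor's principal eigenvector (via np.linalg.eigh) gives the local
# fiber direction that streamline tractography follows from voxel to voxel.
```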

3D Slicer offers a versatile solution for biomedical image analysis. Many software packages overlap with various aspects of 3D Slicer, but none cover all of its applications, says Kikinis, and none are compatible with both Mac and PC. One offering, developed at the Digital Imaging Unit of the University Hospitals of Geneva, Switzerland, is OsiriX, the successor of Osiris on the Mac platform (Osiris for PC, still available for free, is no longer supported). Another software product, ClearCanvas PACS, was recently released by ClearCanvas.

The 3D Slicer software package comes with a set of tutorials to make it as user-friendly as possible. But 3D Slicer also targets developers through a plug-in architecture. “We want to encourage people to develop their own things,” says Kikinis.

Although the software was designed for basic research applications, another interesting feature is its ability to communicate with clinical devices via the Open Image Guided Therapy (IGT) Link. The connection enables 3D Slicer to send information to and receive information from a medical device, allowing it to control a scanner or a robot, for example. Specific clinical devices produced by companies such as BrainLab come with the Open IGT Link.

Carpenter’s team built CellProfiler, a successful open-source software package that won a Bio•IT World Best Practices Award in 2009 (see “Carpenter Builds Open Source Imaging Software,” Bio•IT World, Jul 2009). The goal was to find an alternative to custom programs, such as MetaMorph (Molecular Devices) and Image-Pro Plus (Media Cybernetics), which can be challenging to adapt to a specific experiment, and to commercial software that is useful for screens in certain cells but otherwise limiting.

“CellProfiler is the only high-throughput cell image analysis software in existence that is open source,” Carpenter says. Not only is it modular and therefore quite flexible for complicated assays, but it is also user-friendly; a beginner can mix and match modules and different image analysis functions. “We have users who do low-throughput experiments where they just count cells in a dozen or so images, and users who look for a very complicated phenotype and need to process images in a cluster and measure hundreds of thousands of images in a round-the-clock manner,” says Carpenter.
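
A hedged sketch of what “mix and match” means in practice follows, using invented module names rather than CellProfiler’s actual modules: a pipeline is simply an ordered list of small processing functions that can be rearranged.

```python
# Conceptual sketch of a modular pipeline (invented modules, not
# CellProfiler's API): each module is a small function, and a pipeline is an
# ordered list of modules applied in sequence.
import numpy as np
from scipy import ndimage

def smooth(image):
    return ndimage.gaussian_filter(image, sigma=2)

def identify_objects(image):
    return ndimage.label(image > image.mean())[0]

def count_objects(labels):
    return {"cell_count": int(labels.max())}

pipeline = [smooth, identify_objects, count_objects]   # mix and match here

def run(image, modules):
    result = image
    for module in modules:
        result = module(result)
    return result

print(run(np.random.rand(256, 256), pipeline))
```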

Working with a number of nematode research groups, Carpenter is about to release a toolbox of robust algorithms for C. elegans analysis, and aims to do the same for the zebrafish. Her group has also completed a couple of screens in co-cultured cells, using machine-learning to accomplish those projects.

With two different cell types of different textures or size, it is easy to tune one algorithm to one cell type and a different algorithm to the other cell type. “But when you mix them together, both algorithms would have to work on the entire image, and an algorithm that’s very well fitted to one cell type might chop the other cell type into bits, and think that a portion of the large cell type might be a clump of a number of the other very small cell type,” Carpenter says.

The group has developed an algorithm that “intentionally chops the cells into bits and then uses machine-learning algorithms to allow the biologist to train the computer to learn which pieces belong to which cell type. Then, optionally, you can piece the cells back together again using machine-learning.”  
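
A minimal sketch of that general idea, assuming simple per-fragment features and placeholder training labels (this is not the group’s published algorithm):

```python
# Hedged sketch of the approach described: over-segment the image into
# fragments, measure each fragment, train a classifier on fragments the
# biologist has labeled by cell type, then predict a type for every fragment.
# The features, the over-segmentation, and the labels are placeholders.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def fragment_features(image, fragments):
    """One row of (area, mean intensity, intensity s.d.) per fragment."""
    ids = np.arange(1, fragments.max() + 1)
    area = ndimage.sum(np.ones_like(image), fragments, ids)
    mean = ndimage.mean(image, fragments, ids)
    std = ndimage.standard_deviation(image, fragments, ids)
    return np.column_stack([area, mean, std])

image = np.random.rand(256, 256)
fragments = ndimage.label(image > 0.5)[0]           # deliberately "chopped" bits
features = fragment_features(image, fragments)
biologist_labels = np.random.randint(0, 2, len(features))   # placeholder calls

clf = RandomForestClassifier(n_estimators=100).fit(features, biologist_labels)
fragment_type = clf.predict(features)               # predicted type per fragment
# Adjacent fragments of the same predicted type can then be merged back
# together to reconstruct whole cells.
```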

Olivier Morteau is a communication scientist at a Boston-based biopharma company.

This article also appeared in the 2011 July-August issue of Bio-IT World.

 
