
Visualize This


By Salvatore Salamone

July 20, 2005 | At the Cleveland Clinic's Lerner Research Institute, about 200 patients a year undergo a procedure called deep brain stimulation (DBS) to help alleviate the symptoms of Parkinson's disease, essential tremor, and other debilitating neurological disorders.

The implanted DBS electrode acts as a sort of neural pacemaker, generating electrical stimulations in the brain. Positioning the electrode is, of course, critical, and understanding how the device’s electrical impulses impact neighboring neurons is of paramount importance to physicians and researchers.

To understand the interplay of the DBS device with the brain, Cameron McIntyre, an assistant professor in the institute’s Department of Biomedical Engineering, and his research associate Chris Butson are developing a 3-D model that takes into account the properties of brain tissue and the electric field generated during DBS.

Understanding the interplay of the electrode with the brain is a complex problem given the large number of variables — electrode position and orientation, voltage, stimulation frequency, and so on. But neurologists performing the implants currently have limited insight into those interactions.

“We need a visualization technique that provides feedback to the clinicians so they can better understand how the stimulation is [interacting] with the nervous system,” McIntyre says. A model would give his colleagues much more information to work with.

McIntyre and Butson’s initial modeling research employed a high-end desktop computer — with limited success. “We could only model a portion of the brain due to memory limitations of the desktop system,” McIntyre recalls. A Linux cluster would provide more computational power, but the modeling requires an enormous amount of shared memory — something clusters do not offer.
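To see why shared memory mattered, consider a rough back-of-the-envelope estimate. The numbers below are illustrative assumptions for the sketch, not figures from the researchers:

```python
# Illustrative only: rough RAM estimate for holding a volumetric field model
# entirely in memory. All parameter values are assumptions, not article data.

def model_memory_gb(grid_points, values_per_point=4, bytes_per_value=8):
    """Estimate gigabytes needed to keep a field model fully in memory.

    grid_points:      number of mesh/grid nodes in the model
    values_per_point: e.g., electric potential plus a 3-component field vector
    bytes_per_value:  8 for double-precision floats
    """
    return grid_points * values_per_point * bytes_per_value / 2**30

# A 512^3 grid in double precision already needs about 4 GB, at or beyond
# what a 2005 desktop could hold, and a cluster splits rather than shares
# its memory across nodes.
print(round(model_memory_gb(512**3), 1))  # → 4.0
```

Finer grids or more field components per node scale the requirement linearly, which is why a single large shared-memory system was attractive for this kind of model.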

The Cleveland Clinic group’s dilemma is increasingly shared among biomedical researchers who need a combination of high-performance computing (HPC) for data analysis or modeling, the ability to support a large amount of data in memory, and some way to visualize the results.

McIntyre and Butson opted for a new visualization system from Silicon Graphics (SGI) called Prism, which includes eight Itanium processors, 16 GB of memory, and graphics processors to handle visualization-specific computational tasks.

Clinicians at the Cleveland Clinic hope to use the brain interaction model as a pre- and postoperative tool that will allow them to position the DBS electrode and fine-tune adjustments once it is implanted. For now, the work gives researchers insight into the interplay between the implanted device and the brain; it is not yet far enough along to be used as a clinical tool.

Driving New Research

The experience of the Cleveland Clinic researchers is being played out at many other life sciences organizations, where researchers are embracing a relatively new generation of visualization systems (see “A History of Architecture,” page 32).

An attractive combination of features — high performance, component standardization, and the ability to access large amounts of memory — is making new visualization systems appealing for many computationally intensive scientific applications.

For example, the BioWall project at the University of California, San Diego (UCSD), uses Sun Java Workstations to help researchers see more of their data at higher resolution. The BioWall comprises 20 high-resolution flat-panel displays arranged in a grid, five across and four high, mounted flush on a wall. It is used to display very high-resolution data sets generated by electron and multi-photon light microscopes at the National Center for Microscopy and Imaging Research (NCMIR). The data associated with the NCMIR images are very large: up to several gigabytes for 2-D images and several hundred gigabytes for 3-D data sets.

At the time of its deployment last fall, David Lee, UCSD’s BioWall Application Engineer, said: “We chose [Opteron-based] Sun Java Workstations because of their performance, ability to handle large amounts of memory per node, stability, and compatibility with QuadroFX graphics.”

This combination of increased computing power with visualization capabilities is the key to new visualization research deployments. The University of Wales, Swansea, working with technology partner IBM, has created a new Institute of Life Science (ILS) European Deep Computing Visualization Centre for Medical Applications.

The center’s research will focus on personalized medicine, disease control, and healthcare treatments. “This purpose-built institute will bring together medical and health scientific expertise with other key sciences such as bio- and nanotechnology, deep computing, and informatics,” says Julian Hopkin, head of the new School of Medicine and the ILS. The university will use a version of IBM’s Blue Gene supercomputer, dubbed “Blue C,” to conduct its research.

At the other end of the spectrum, new systems can bring visualization technology to a single scientist who may lack the expertise to develop 3-D or other complex visual representations of his or her data.

Sharp Systems of America recently announced the Sharp Actius AL3D, a high-performance notebook computer featuring Sharp’s 3-D LCD screen technology, which lets researchers view objects in 3-D without goggles or 3-D glasses. The Actius AL3D includes an Intel Pentium M 750 processor, more than 1 GB of RAM, and the Nvidia GeForce Go 6600 graphics processor. The notebook’s 3-D feature works with life sciences software packages from Accelrys, Tripos, and Wavefunction.

Open Systems a Key

The slew of new visualization systems differs from past systems in one major way: Because they use an open-systems approach — commodity chips and familiar operating systems — they have the potential to bring visualization to a broader audience. Researchers do not need to learn Unix to run a workstation. And as the systems use common processors and Linux, Mac OS X, or Windows as their operating system, more software vendors are writing applications to visualize data.

On the hardware side, life scientists have many choices. For instance, Sun Microsystems and SGI, companies that enjoy a strong reputation in visualization systems, have added open-systems product lines to complement their existing proprietary systems.

SGI is approaching visualization much as it targeted the high-end HPC market with its Altix line of computers. Last fall, SGI introduced the Prism line of visualization systems based on the Itanium processor, Linux, and SGI’s NUMAflex shared-memory architecture. Like the Altix, the Prism line scales from very large systems for major institutions, such as the Cleveland Clinic, down to a powerful deskside system for a small lab or individual researcher.

Most HPC desktop and cluster vendors, including Apple, Dell, Hewlett-Packard, IBM, SGI, and Sun Microsystems, have also introduced new systems that combine 64-bit processors, increased memory support, and powerful graphics cards.

Microway recently introduced the WhisperStation, which uses either dual AMD Opteron or Intel Xeon EM64T processors and up to 2 GB of memory. The system also includes an Nvidia FX1300 PCI Express graphics engine and a ViewSonic 20-inch LCD monitor. It is designed for compute-intensive applications and has the graphics support to display the results of such computations.

And IBM has taken an entirely new approach to visualization with its February introduction of deep computing visualization (DCV), which combines IBM IntelliStation workstations and middleware to support high-performance visualization.

The idea behind DCV is to adapt visualization to the way life scientists work. Previously, many high-performance visualization efforts focused on immersive environments such as 3-D caves or walls of tiled monitors. “These were great when you could bring everyone together,” says Becky Austen, director of deep computing marketing. But she notes that today, many life science organizations rely on collaboration of geographically dispersed groups, each with different areas of expertise.

To address this issue, the core features of the DCV offering are secure remote access to visualization and collaboration tools. At the same time, IBM is addressing the shift away from Unix visualization systems: much as computational biology has moved away from proprietary systems, the DCV approach uses commodity components (e.g., central and graphics processing units) and open-source graphics applications to support high-performance visualization.

DCV provides a middleware infrastructure that supports and enhances the graphics functions of OpenGL applications running on Linux-based IntelliStation workstations. According to IDC, the goal of DCV is to leverage the price/performance advantage of commodity graphics components and InfiniBand or Gigabit Ethernet adapters without sacrificing the needs of high-end users.

To support visualization in a distributed research environment, DCV offers two modes for delivering high-end images. The scalable visual networking (SVN) mode lets researchers increase the screen resolution and size of a displayed image.

Alternatively, the remote visual networking (RVN) mode allows remote use of a visualization application. In general, SVN supports the larger, higher-resolution images found in immersive environments, while RVN lets researchers collaborate over low-bandwidth networks.

One consequence of this dual mode of operation is that it can help organizations build visualization into complex workflows. Visualization projects are typically designed to run on a specific system, with results optimized for the display hardware the primary researchers use. By quickly providing access to a graphic image regardless of the display hardware, visualization can be incorporated into a normal workflow.

Challenges Remain

The slew of new visualization systems and technologies has the potential to advance current research and open up new arenas in the life sciences. While new applications can certainly benefit from the explosion in hardware supporting visualization alongside HPC, many older applications will need to be ported from the numerous Unix variants that vendors shipped with their older systems.

Fortunately, vendors are starting to understand the need to support legacy applications. SGI, for instance, offers QuickTransit, software that translates visualization applications written for its older IRIX systems so they can run on Prism.





