
A History of Architecture

For years, visualization required proprietary technology. Most visualization systems were built on the Unix operating system, customized graphics processors, and 64-bit reduced instruction set computer (RISC) processors (each vendor typically had its own version of a RISC chip and of Unix).

Until a couple of years ago, the 64-bit RISC chips were essential components, as they supported much more memory than the 32-bit processors found in most servers and desktop computers. This meant very large data sets could be loaded into memory for visualization. Similarly, the optimized graphical processors were needed to speed up many computational tasks required to present data on a computer screen.

But while such systems were great for visualization, offering very high performance to quickly render and draw images of complex data sets, they had their own limitations. For one thing, the systems were expensive — anywhere from 4 to 10 times the price of a common server or desktop computer. And most applications had to be custom written, in some cases requiring special coding to use the graphical processing units (GPUs). As many of the vendors in this market used different GPUs, a program written for one vendor’s system would often need to be rewritten to work on a rival system.

Because of the high costs and complexity, such systems were often used only for visualization and not for large computational tasks. So a lab might first use another system to run an analysis routine on a data set or to perform an in silico simulation. The output of such computations would then be stored in a large file, which would then be moved to the graphical workstation to display the results.

The combination of the high price and optimization for visualization meant that visualization systems tended to be shared devices that entire labs or departments would use for the specific task of viewing large, complex data sets.

But over the past few years, high-end desktop and server systems based on commodity Intel, AMD, and Apple/IBM processors have become more powerful and have begun to encroach, albeit at the low end, on the graphical workstation market. Lately, the performance gap between proprietary RISC-based systems and systems built on commodity chips has shrunk even further. Systems based on newly available 64-bit processors such as the AMD Opteron, Apple/IBM PowerPC G5, Intel Itanium, and Intel Xeon Extended Memory 64 Technology (EM64T) offer comparable performance at a much lower cost than some RISC-based visualization systems. And thanks primarily to computer gaming, commodity graphics processors have become much more powerful and easier to program.

Market research documents the shift from established proprietary systems to the newer alternatives. For example, the worldwide market for Unix visualization workstations dropped from $4.5 billion in 2000 to $1.3 billion in 2003, according to IDC. The shift is due in large part to commodity systems now being able to meet a substantial portion of users' visualization needs. S.S.

