High Performance Computing Has a Heterogeneous Future

By Allison Proffitt 

November 11, 2011 | A new IDC report, sponsored by NVIDIA, examines the coming era of exascale computing. IDC sees "heterogeneous computing" as the essential trend for breaking the next speed barrier.

The systems topping the Top500 list are already pairing x86 processors with accelerators in the form of general-purpose GPUs (GPGPUs). In June 2011, three of the top 10 systems used GPUs. Last month, Oak Ridge National Laboratory unveiled plans to upgrade the number one U.S. supercomputer to a successor system ("Titan") with a planned peak performance of 20–30 petaflops by complementing more than 18,000 x86 CPUs with an equal number of GPUs.

Reaching exascale computing means overcoming several current industry challenges, IDC reports, including system cost (flops per dollar), compute density (flops per square foot), and energy cost. GPUs address each of these issues. China's Tianhe-1A supercomputer took the number one spot on the Top500 list in November 2010 (see "Top Spot Goes to China"), achieving 2.57 petaflops with 14,336 CPUs and 7,168 GPUs. NVIDIA says that to achieve the same performance with CPUs alone, the system would have required 50,000 CPUs and twice as much floor space.

IDC does identify some challenges to extensive GPU deployment. Programming GPUs remains more difficult than programming standard x86 processors. There are also communication bottlenecks between the GPUs and the base processors. And some buyers still hope that new generations of x86 processors will close the gap, letting them bypass GPUs and the associated learning curve.
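
To illustrate the learning curve IDC describes, consider a minimal sketch (a hypothetical example, not taken from the report): on a CPU, summing two arrays is a single loop, while an equivalent CUDA program must also define a kernel, allocate device memory, and copy data between host and device.

    // Minimal CUDA sketch (illustrative only): adding two vectors.
    // A CPU-only version is one loop; the GPU version must also manage
    // device memory, host-device copies, and a kernel launch.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // GPU kernel: each thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host arrays (a CPU-only program would stop here and loop).
        float *ha = (float*)malloc(bytes);
        float *hb = (float*)malloc(bytes);
        float *hc = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Extra GPU steps: allocate device memory and copy inputs over.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough thread blocks to cover all n elements.
        int threads = 256, blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back to the host.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);  // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Even this toy case adds memory allocation, two host-device copies, and a launch configuration on top of the arithmetic itself; managing those transfers efficiently at supercomputer scale is a large part of the programming burden IDC flags.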

But even so, IDC believes that GPUs are moving past the experimental phase, and will play an increasingly important role in the global HPC market.  

Read IDC's full report (PDF).  

Read NVIDIA's blog on the report.