Exascale Challenges, Opportunities

By Allison Proffitt 
 
June 14, 2013 | Exascale computing has garnered some attention lately. President Obama requested funding for an exascale computing system in his 2012 budget, but by the time the budget was approved, the Department of Energy’s funding for supercomputing had been trimmed. In April, some sources proposed 2020 as a more realistic target for exascale. 
 
The limits to exascale lie mainly in power and memory issues, said George Michaels, director of life sciences programs at Intel. Speaking at the Genome Informatics Alliance event last week, Michaels laid out the roadblocks to exascale as Intel sees them.  
 
The rapid global scale-out of genomics informatics will need to leverage advances in exascale computing technology. The primary challenge is power, Michaels said. It’s really a physics problem: transistor counts need to keep increasing, and the technologies that got us to petascale computing won’t get us to exascale. Power efficiency is the central challenge, with data movement the dominant contributor at every level of the system.
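
To put the power wall in perspective, here is a back-of-envelope sketch using the commonly cited 20 MW exascale power target and assumed, order-of-magnitude energy costs per operation. These are illustrative circa-2013 estimates, not numbers from Michaels’ talk:

```python
# Back-of-envelope energy budget for an exascale machine.
# All figures are assumed, order-of-magnitude estimates for illustration.

EXAFLOPS = 1e18          # target: 10^18 floating-point ops per second
POWER_BUDGET_W = 20e6    # commonly cited exascale power cap: 20 MW

# Energy available per operation if the entire budget went to compute:
joules_per_flop = POWER_BUDGET_W / EXAFLOPS
print(f"Budget per flop: {joules_per_flop * 1e12:.0f} pJ")  # ~20 pJ

# Assumed energy costs (illustrative estimates, in picojoules):
FLOP_PJ = 10           # double-precision floating-point operation
ON_CHIP_MOVE_PJ = 50   # moving an operand across the chip
DRAM_ACCESS_PJ = 2000  # fetching an operand from off-chip DRAM

# Fetching one operand from DRAM costs far more than computing with it:
print(f"DRAM fetch vs. flop energy: {DRAM_ACCESS_PJ / FLOP_PJ:.0f}x")
```

Under these assumptions, a single off-chip fetch burns the energy budget of hundreds of arithmetic operations, which is why data movement, not arithmetic, dominates the power equation.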
 
To move forward, Michaels called for a shift in priorities. In the past, designers pursued performance through frequency and architectural features, optimized for programmer productivity, and were constrained by cost and reasonable energy use. Future priorities should be parallelism and simple architectures designed around energy efficiency, with programming productivity and cost as the constraints. Much of the needed power efficiency will come from new programming models that optimize performance per watt across both compute and data movement.
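
Loop tiling is one familiar example of a data-movement-conscious programming pattern: restructure a computation so data is reused while it is still close to the cores. The sketch below illustrates the general idea, not the specific programming models Michaels described; the function name and parameters are hypothetical:

```python
# Tiled matrix multiply: a minimal sketch of trading redundant data
# movement for locality. Not Intel's proposed programming model.

def matmul_tiled(A, B, n, tile=32):
    """Multiply two n x n matrices (lists of lists), visiting data in
    tile-sized blocks so each block is reused while it is likely still
    in cache, rather than re-streamed from memory for every row."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]  # loaded once, reused across the j-tile
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C
```

The design point is that each tile of B is reused across all rows of the current A tile while it is cache-resident, instead of being re-fetched from memory for every row, cutting off-chip traffic and, with it, energy.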
 
There are also memory issues, and exascale computing will require re-thinking memory architecture on a system level—“how to get more useful memory as close to the cores as possible,” Michaels said. That’s going to take emerging memory technologies, new levels of memory hierarchy, and innovative packaging and I/O solutions, he said. 
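
As a rough sketch of why proximity matters, the figures below assign assumed order-of-magnitude energy costs to each level of a hypothetical hierarchy, including a new on-package level of the kind Michaels alluded to. All values are illustrative assumptions, not figures from the talk:

```python
# Assumed energy per 64-bit access at each hierarchy level (illustrative
# circa-2013 order-of-magnitude estimates). Inserting a level between
# SRAM caches and off-chip DRAM -- e.g. on-package stacked memory --
# aims to intercept accesses before they pay the full DRAM price.

hierarchy_pj = {
    "register":            1,
    "L1 cache":            5,
    "last-level cache":   50,
    "on-package memory": 300,   # hypothetical new hierarchy level
    "off-chip DRAM":    2000,
}

for level, pj in hierarchy_pj.items():
    print(f"{level:>18}: ~{pj} pJ per 64-bit access")
```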
 
Michaels also proposed changes to dynamic random-access memory architecture. Today’s DRAM performs many reads and writes and wastes power maintaining and accessing the array. DRAM needs to activate smaller pages, Michaels said, so that fewer refreshes are required, most of the data read is actually used, and I/O can be widened to increase bandwidth. 
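
The overfetch problem is easy to quantify with assumed typical sizes (again, illustrative numbers rather than figures from the talk): a row activation pulls an entire DRAM page into the row buffer even when only one cache line is wanted.

```python
# DRAM row-activation overfetch, with assumed typical sizes.

PAGE_BYTES = 8 * 1024   # assumed DRAM page (row) size
LINE_BYTES = 64         # one CPU cache line

used_fraction = LINE_BYTES / PAGE_BYTES
print(f"Useful data per activation: {used_fraction:.1%}")  # ~0.8%

# Shrinking the page (for the same access pattern) raises the fraction
# of activated bits actually read, reducing energy wasted per useful byte:
for page in (8192, 4096, 2048, 1024):
    print(f"{page:>5}-byte page -> {LINE_BYTES / page:.2%} of bits used")
```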
 
Future systems must be self-aware, Michaels contended. He defined self-aware systems as those that are introspective (observing their own behavior and learning from it), goal-oriented, adaptive (taking action to achieve their goals), self-healing, and approximate (expending no more effort than necessary to meet their goals). 
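
As a toy illustration of that definition (not an Intel design, and every name and number below is hypothetical), the sketch tunes a thread count against a synthetic throughput measurement: it observes its own behavior, adapts toward a goal, and stops once the goal is approximately met.

```python
# Toy self-aware control loop: introspective, goal-oriented, adaptive,
# and approximate. Purely illustrative; not an Intel design.

import random

def measure_throughput(threads):
    """Stand-in for introspection: observe current performance.
    (A synthetic function with noise, peaking near 8 threads.)"""
    return 100 - (threads - 8) ** 2 + random.uniform(-1, 1)

def self_tune(goal=95.0, threads=1, max_steps=20):
    for _ in range(max_steps):
        perf = measure_throughput(threads)   # introspective
        if perf >= goal:                     # approximate: good enough, stop
            return threads, perf
        # adaptive: try a neighboring configuration; keep it if it's better
        candidate = max(1, threads + random.choice((-1, 1)))
        if measure_throughput(candidate) > perf:
            threads = candidate
    return threads, perf

print(self_tune())
```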
 
Hardware and software must work together, Michaels said, for an energy-efficient platform.