By Martin Gollery
January 14, 2013 | GUEST COMMENTARY | Two years ago, venture capitalist Hermann Hauser (Amadeus Capital Partners, UK) boldly predicted the fall of Intel in an interview with the Wall Street Journal. The tech giant would be crushed, he said, by the mighty ARM (Advanced RISC Machines) Holdings—a British company that Hauser, not coincidentally, helped spin out from Acorn in 1990.
Such posturing is not uncommon in the world of technology, where fresh upstarts often smugly predict that they will use superior technology to defeat their older and slower competitors. But the difference in this case was that Hauser was pointing to the business models of the two companies, not the technology per se.
Intel does it all—designing and manufacturing everything from small cell phone chips to powerful server CPUs. But these chips are discrete units that customers have no control over. ARM, on the other hand, licenses its core designs to chip designers and manufacturers, who can then customize them for their own needs.
Does this sound a bit like the Open Source vs. proprietary software choice, or the debate over open access research journals? The comparison is apt, though perhaps not perfect. After all, while these designs are customizable, they are not free. In any event, it is unusual to find a difference based on business model rather than technology.
So, what has happened over the past two years? Intel is certainly not yet dead, but Hauser’s prediction is starting to come true in some respects. He claims that more ARM chips are now shipping than Intel chips: last year, nearly 8 billion ARM-based chips shipped inside various devices—a number, Hauser asserts, greater than the total Intel has sold in its entire history. Today, ARM technology is used in 90 percent of smart phones, 80 percent of digital cameras, and 28 percent of all electronic devices, primarily because of the superior power efficiency these designs provide. Better efficiency means cooler-running devices with longer battery life—highly desirable characteristics in the mobile world. Intel, by contrast, has a negligible (about 0.2 percent) share of these markets.
As a result, ARM has continued to make some bullish statements. At the recent Consumer Electronics Show in Las Vegas, James Bruce, ARM’s lead mobile strategist, even dared to suggest that Intel would do better to abandon the x86 architecture and start licensing ARM designs.
But do we care? After all, most of these chips are going to be used for smart phones and tablets. Hardly the sort of thing that we want to acquire for large-scale metagenomic assembly!
Or is it? Remember that all of the technologies that are used to crunch big data can be scaled in some way, whether CPUs, FPGAs, GPGPUs or ASICs. (I reviewed trends in hybrid computing in an earlier commentary for Bio-IT World.) For many years, the question has been, ‘How much bang for the buck?’ Comparing the different technologies is a good way to start arguments among techies. (The virtues of GPGPUs, for example, were recently on display in Titan, currently the world’s top-ranked supercomputer.) More recently, the question has turned to ‘How many FLOPS per Watt?’ as power consumption and the resulting heat dissipation have taken center stage.
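To make the FLOPS-per-Watt metric concrete, here is a minimal sketch of the comparison. All of the node figures below are hypothetical, chosen only to illustrate why a slower but far more frugal chip can win on this metric:

```python
# Illustrative FLOPS-per-Watt comparison. The node specs below are
# hypothetical examples, not measurements of any real system.

def flops_per_watt(gflops, watts):
    """Energy efficiency: compute throughput (GFLOPS) per Watt of power draw."""
    return gflops / watts

# A hypothetical conventional x86 server node vs. a hypothetical
# low-power ARM node: the x86 box is faster in absolute terms,
# but the ARM node does more work per Watt.
x86_node = flops_per_watt(gflops=500.0, watts=400.0)  # 1.25 GFLOPS/W
arm_node = flops_per_watt(gflops=40.0, watts=10.0)    # 4.0 GFLOPS/W

print(f"x86 node: {x86_node:.2f} GFLOPS/W")
print(f"ARM node: {arm_node:.2f} GFLOPS/W")
```

With these made-up numbers, the ARM node delivers about three times the work per Watt despite having roughly a tenth of the raw throughput—which is exactly the trade-off that makes it interesting for large, power-constrained clusters.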
When it comes to energy efficiency, ARM chips have a distinct advantage, but once we start talking about linking thousands of nodes together, the question of interconnects must be addressed. One company addressing this issue while building ARM-based servers is the start-up Calxeda. Calxeda’s server offerings include a fabric with 80 gigabits per second of switching bandwidth, and its four-node design consumes only 20 Watts! Having spent far too much time in server rooms with hundreds of noisy little cooling fans, I find this awfully appealing. Overall, Calxeda claims it can save 89 percent of the power used by a conventional cluster. Penguin Computing now offers a system based on Calxeda nodes, with 192 cores requiring a mere 240 Watts.
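Some back-of-the-envelope arithmetic puts those vendor figures in perspective. Taking the quoted numbers at face value (real workloads will of course vary):

```python
# Back-of-the-envelope arithmetic from the vendor figures quoted above.
# These are marketing claims taken at face value, not benchmarks.

calxeda_quad_watts = 20.0                        # four-node Calxeda design
watts_per_node = calxeda_quad_watts / 4          # 5.0 W per node

penguin_cores = 192                              # Penguin system on Calxeda nodes
penguin_watts = 240.0
watts_per_core = penguin_watts / penguin_cores   # 1.25 W per core

# An 89 percent saving implies roughly a 9x reduction in power draw
# relative to a conventional cluster doing the same work.
savings = 0.89
reduction_factor = 1 / (1 - savings)

print(watts_per_node, watts_per_core, round(reduction_factor, 1))
```

At 1.25 Watts per core, a rack of such nodes sits in a very different thermal regime from a conventional x86 cluster, which is where the appeal for dense, always-on bioinformatics workloads comes from.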
Perhaps the most convincing argument that ARM servers will have a big impact on future server decisions is the news that Intel’s chief rival, AMD, will begin making ARM-based server chips in 2014. AMD has licensed a 64-bit design from ARM and will combine it with the ‘Freedom fabric’ interconnect technology that it acquired when it bought SeaMicro. The Freedom fabric promises the capability to connect thousands of servers in a cluster efficiently and at low cost.
So will ARM kill off Intel? I highly doubt it. But would it be nice to have a server cluster that uses far less electricity to provide a given amount of computing power? Absolutely. And would it be useful to have a biocomputing cluster that combines thousands of ARM cores with some GPGPU chips and FPGA accelerators? Most definitely.
Martin Gollery is a bioinformatics consultant who works with industry and academic researchers to provide testing, training, writing, analysis, and support. He can be reached at Martin.firstname.lastname@example.org.