On ARM versus x86

It seems like everyone in the industry wants to compare ARM (and other variants of RISC processors) to x86 (and other variants of CISC processors).

For example, analysts ask: can Intel muscle its way into the mobile market? Can ARM sneak in the back door of the tablet, laptop, or even desktop markets? Which is faster, more powerful, and cheaper? Which uses less power, takes up less space, and generates less heat?

These are not easy questions to answer, and the major players spread misinformation by the shovelful. So let’s take a step back and look at what makes each technology tick.

RISC (reduced instruction set computing) technology has been around since before the acronym was even invented.

It goes back as early as the 1960s (it could be argued that the CDC 6600, an early supercomputer designed by Seymour Cray in 1964, was a RISC machine).

The term ‘reduced instruction set’ is a bit misleading, since modern RISC processors can have just as many instructions as CISC (complex instruction set computing) processors. The primary difference is that in a RISC processor every instruction uses the same fixed-length format and takes a predictable amount of time to execute – typically one cycle per instruction.
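To see why a uniform format matters, picture what the decode stage (or a software emulator) has to do. The sketch below uses a made-up 32-bit encoding – not any real ARM or MIPS layout – in which the opcode and register fields always sit at the same bit positions, so every instruction is taken apart with the same handful of shifts and masks:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical fixed 32-bit instruction layout (illustration only):
     *   bits 31-26: opcode          bits 25-21: destination register
     *   bits 20-16: source reg 1    bits 15-11: source reg 2
     * Because every instruction shares this layout, the decoder never has
     * to first figure out how long the instruction is, as a variable-length
     * CISC decoder must. */
    typedef struct {
        uint8_t opcode, rd, rs1, rs2;
    } decoded_insn;

    static decoded_insn decode(uint32_t word)
    {
        decoded_insn d;
        d.opcode = (word >> 26) & 0x3Fu;
        d.rd     = (word >> 21) & 0x1Fu;
        d.rs1    = (word >> 16) & 0x1Fu;
        d.rs2    = (word >> 11) & 0x1Fu;
        return d;
    }

    int main(void)
    {
        decoded_insn d = decode(0x08A62000u);  /* arbitrary example word */
        printf("opcode=%u rd=%u rs1=%u rs2=%u\n", d.opcode, d.rd, d.rs1, d.rs2);
        return 0;
    }

An x86-style decoder, by contrast, has to read prefix and opcode bytes one at a time just to learn how many more bytes the instruction occupies before it can do anything else.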

To achieve this consistent, compact design many instructions and addressing modes that were built into CISC processors – such as floating point operations and division – were not included in RISC processors. These operations were usually offloaded to coprocessors.
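As a concrete example, many early RISC cores – the original ARM designs among them – had no divide instruction at all: when a program divided two integers, the compiler emitted a call to a small runtime routine (or handed the work to a coprocessor). The routine below is a simplified, illustrative version of such software division using classic shift-and-subtract long division; it is not the actual code any particular toolchain ships:

    #include <stdint.h>

    /* Illustrative software unsigned division (restoring shift-and-subtract).
     * On a RISC core with no divide instruction, the compiler calls a
     * library routine along these lines; real runtime libraries are far
     * more heavily optimized. */
    static uint32_t soft_udiv(uint32_t num, uint32_t den)
    {
        uint32_t quotient = 0;
        uint32_t remainder = 0;

        if (den == 0)
            return 0xFFFFFFFFu;  /* simplistic divide-by-zero handling */

        for (int bit = 31; bit >= 0; bit--) {
            remainder = (remainder << 1) | ((num >> bit) & 1u);  /* bring down next bit */
            if (remainder >= den) {
                remainder -= den;
                quotient |= (1u << bit);
            }
        }
        return quotient;
    }

A hardware divider does the same work in dedicated logic; leaving it out saves transistors at the cost of running a loop like this in software whenever division is actually needed.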

This approach meant that instructions could be executed very, very quickly and the chip architecture could be simplified. RISC processors tend to have far fewer transistors than CISC processors – as few as half the number in many cases. And fewer transistors mean less power, less heat, and a smaller footprint – features that make them ideal for small devices with limited power.

Also, because the architecture is simplified and performance is so predictable, compilers can be optimized to eke out every possible iota of performance. Of course, RISC processors have grown progressively more complex over the years, and as each licensee adds its own twists to the original architecture, the stack of developer manuals gets taller and taller.

The main reason that game consoles (which use RISC processors) perform so well is that there are no surprises. Developers know exactly what the hardware is capable of and can optimize their games to use every ounce of power. They don’t have to worry about different screen resolutions or vagaries in graphics cards or unpredictable performance hits if the user has multiple applications running – the environment is always the same.

You could also argue that the performance of all things Apple is likewise the result of carefully controlled, very predictable hardware and software environments – develop for a Mac or an iPad or an iPhone and you know exactly what you are getting into.

To use an Olympic Games metaphor, the RISC architecture is like an Olympic pool. It’s exactly fifty meters long. Not forty-five meters or fifty-two-and-a-half meters or whatever the pool just happens to be. It’s very specific, and since it’s so specific swimmers can hone their technique to fit the pool.

They know, with a good amount of certainty, exactly how many strokes it will take them to get to the other side, turn, and head back. Toss those same swimmers in the ocean and tell them to race out to that piece of driftwood and back and you might have different winners standing on the podium.

Now, of course, there are also downsides to RISC processors…

This is Part I of a two-part article. Part II will run on Thursday, August 2, 2012.