On ARM versus x86: Part II



Note: This is part two of a two-part article. Part I can be read here.



Yes, there are some downsides to RISC processors. For one, as I mentioned earlier, certain tasks (like floating-point operations or division) can't be performed with a single instruction, except on some of the newer RISC designs.

So even if each individual instruction runs faster on a RISC processor than on a CISC processor, it may take dozens of RISC instructions to perform an operation that a CISC processor can do with one.

And there is a double-edged sword with RISC technology. The companies that develop the designs don't actually manufacture the chips – they license the technology (referred to as cores: reusable, and sometimes customizable, blocks of logic or chip-layout designs) to other corporations.

And like any good department store would do, they lower the licensing fees on older versions of the design. 

So if a manufacturer wants to use the latest and greatest version of a RISC design, they pay a premium, but if they don’t need the latest and greatest features for a particular application they can license an older version. Nearly every RISC architecture design ever developed is still available if you want it.

Now for some manufacturers this is a good thing. If you need a chip to run a waffle maker, you probably don't need the newest RISC cores – you can get away with an older design. But it also means that if you're developing a chip for a new smart phone and just want to save a few bucks, you can license a design that's a few revs older than the newest one. (Granted, you can buy older versions of CISC chips too, but companies like Intel don't manufacture 286 or 386 chips anymore.)

Also, companies that license cores can modify them to suit their specific needs. So, for example, while the Nvidia Tegra Kai processor slated to power the Microsoft Surface tablet may be based on an ARM Cortex core, Nvidia has already stated that the Tegra line comes in a variety of configurations, with and without certain features.

The CISC architecture, as the name implies, has a richer and more complicated set of instructions built into the main processor. This complexity sometimes comes at the expense of performance, but it can also provide more muscle when performing certain tasks. It also makes it easier to program complicated tasks (although achieving high levels of optimization can be more problematic).

The CISC approach began in the days when RAM was expensive (and slow by today's standards) and programming was done at very low levels (programs were written in assembly code rather than in higher-level interpreted or compiled languages). The more functionality built into the main processor, the easier it was to program and, in most cases, the faster it was compared with moving data back and forth from memory (or, even worse, from hard-disk-based 'virtual memory'). A divide instruction might take dozens of clock cycles to complete, but it was a damned sight easier than building a divide out of a sequence of instructions when all you had was add, subtract, multiply, and compare.
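To make that concrete, here's a minimal sketch in C of the classic shift-and-subtract (restoring) division algorithm, the sort of routine a programmer or compiler had to supply on a machine with no divide instruction. (The function name and the test values are purely illustrative; this isn't any particular chip's library code, and it assumes a non-zero divisor.)

#include <stdint.h>
#include <stdio.h>

/* Software unsigned division by shift-and-subtract (restoring division).
 * Each iteration uses only the primitive operations mentioned above:
 * shift, compare, and subtract. Assumes divisor != 0. */
static uint32_t soft_divide(uint32_t dividend, uint32_t divisor,
                            uint32_t *remainder)
{
    uint32_t quotient = 0;
    uint32_t rem = 0;

    for (int bit = 31; bit >= 0; bit--) {
        /* Shift the next dividend bit into the running remainder. */
        rem = (rem << 1) | ((dividend >> bit) & 1u);
        if (rem >= divisor) {                 /* compare...           */
            rem -= divisor;                   /* ...subtract...       */
            quotient |= (uint32_t)1 << bit;   /* ...set quotient bit. */
        }
    }
    *remainder = rem;
    return quotient;
}

int main(void)
{
    uint32_t rem;
    uint32_t q = soft_divide(1000, 7, &rem);
    printf("1000 / 7 = %u remainder %u\n", (unsigned)q, (unsigned)rem);  /* 142 remainder 6 */
    return 0;
}

Every trip through that loop costs a few of those cheap operations, and the loop runs 32 times; a CISC chip collapses the entire routine into a single DIV instruction.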

One disadvantage of the CISC approach was that designers had to account for a lot of different possibilities, which meant there was sometimes functionality built in that wasn't actually used. It might take as many clock cycles to perform a simple operation as a complex one. It also meant that performance was harder to predict, because different instructions took different amounts of time to complete.

Another disadvantage is that all this functionality comes at a cost: the more functionality you build into the processor, the more transistors you need, and more transistors require more power, which means more heat, higher costs, and sometimes a larger chip. (You can get around the size problem by making the circuitry smaller, but that raises the cost. You can reduce the number of transistors, but that means less functionality. You can get around the power problem by running the chip at a slower speed, but that makes it, well, slower.)

Of course, there are two non-hardware-specific advantages to CISC. First, Intel is a mega-giant in the chip design and manufacturing business, with lots of money and some very big guns, so they can devote enormous resources to customizing the x86 line – resources that most RISC developers can only dream about. Second, there are roughly a zillion applications tailored to run on x86 processors.

Now, that second advantage doesn't hold true in the mobile space, where there are roughly a zillion apps tailored to run on Android and iOS. Granted, a high percentage of those mobile apps could be considered trivial (there aren't a lot of acoustics-modeling, 3D-rendering, or CAD apps written for smart phones… yet).

So which processors are better? Depends on what you are trying to do. If you’re building a mobile device and don’t care about some of the more advanced features (or can live with stripped down versions of them) then RISC is a good choice. If, on the other hand, you want to do some serious hard-core processor-intensive tasks then CISC processors are the way to go.

Try as they might (and they've been trying for years), Intel just can't quite get the hang of developing embedded processors. And while they have tried (and failed) on various occasions, RISC processors just can't seem to crack the desktop market. Maybe that will change in the coming years as the lines continue to blur, but it probably won't happen overnight.

And a quick historical footnote: ironically, Intel's first microprocessor, the 4004 – the chip that started the lineage leading to the x86 – was developed back in 1971 as a low-power processor for a calculator. ARM processors, on the other hand, began life in 1985, when Acorn Computers, whose earlier machines were built around the MOS Technology 6502 processor, produced a new RISC chip and called it, logically enough, the ARM1 (Acorn RISC Machine).