Intel aims to take the pain out of programming future multi-core processors

Posted by Wolfgang Gruener

Santa Clara (CA) – The switch from single-threaded to multi-threaded applications to take advantage of the capabilities of multi-core processors is taking much longer than initially expected. Now concepts for much more advanced multi-core designs, such as heterogeneous processors, are surfacing – which may force developers to rethink once again how they program applications. Intel, however, says that programming these new processors will require only a “minimal” learning curve.

As promising as future microprocessors with perhaps dozens of cores sound, there appears to be a huge challenge for developers to actually take advantage of the capabilities of these CPUs. Both AMD and Intel believe that we will be using highly integrated processors, combining traditional CPUs with graphics processors, general-purpose graphics processors and other types of accelerators that may open up a whole new world of performance for the PC on your desk.

AMD recently told us that it will take several years for programmers to exploit those new features. While Fusion – a processor that combines a regular CPU and a graphics core – is expected to launch late in 2009 or early in 2010, users aren’t likely to see functionality that differs from a processor paired with an integrated graphics chipset. AMD believes it will take about two years, or until 2011, before software developers exploit the acceleration features of a general-purpose GPU.

Intel told us today that the company will be taking an approach that should make it relatively easy for developers to take advantage of this next generation of processors. The company aims to “hide” the complexity of a heterogeneous processor and give the environment an IA-like look and feel. Accelerators integrated on the chip are treated as functional units of the processor that can be addressed through ISA extensions and a runtime library. Intel compares this approach to the way the MMX multimedia extensions were integrated into Intel’s instruction set back in 1996.
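Intel has not detailed what those new ISA extensions will look like, but the MMX comparison is instructive: today, MMX surfaces in ordinary C code as compiler intrinsics – a header plus functions that map directly onto the new instructions, leaving the rest of the program untouched. A minimal sketch using the standard MMX intrinsics (compile with gcc -mmmx on x86):

    /* MMX exposed through compiler intrinsics: the "ISA extension
       plus library" model Intel is alluding to. */
    #include <mmintrin.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Pack four 16-bit values into each 64-bit MMX register. */
        __m64 a = _mm_set_pi16(1, 2, 3, 4);
        __m64 b = _mm_set_pi16(10, 20, 30, 40);

        /* One instruction (PADDW) adds all four lanes at once. */
        __m64 sum = _mm_add_pi16(a, b);

        short out[4];
        memcpy(out, &sum, sizeof out);
        _mm_empty();  /* release the x87 state that MMX shares */

        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
        /* prints: 44 33 22 11 */
        return 0;
    }

The point of the model: the developer stays in a familiar C environment, and the new hardware capability appears as a handful of extra functions rather than a new programming paradigm.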

As a result, Intel hopes that developers will be able to understand these new processors quickly and develop applications almost immediately. “It is a very small learning curve,” a representative told us today. “We are talking about weeks, rather than years.”

Nvidia, which is also intensifying its efforts in the massively parallel computing space, is pursuing a similar idea with its CUDA architecture, which allows developers to run certain applications – or portions of them – on a graphics card: Instead of requiring a whole new programming model, CUDA builds on C with a few extensions that let programmers tap the horsepower of a GeForce 8-series GPU.
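Those extensions are small enough to show in full. The illustrative program below uses CUDA’s documented additions to C – the __global__ kernel qualifier, the built-in thread indices and the <<<blocks, threads>>> launch syntax – to scale an array on the GPU:

    #include <stdio.h>

    /* Kernel: each GPU thread scales one array element. */
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    int main(void)
    {
        const int n = 1024;
        float host[n], *dev;
        for (int i = 0; i < n; i++) host[i] = (float)i;

        /* Copy input to the card, launch 4 blocks of 256 threads,
           copy the result back. */
        cudaMalloc((void **)&dev, n * sizeof(float));
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

        scale<<<n / 256, 256>>>(dev, 2.0f, n);

        cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev);

        printf("host[10] = %f\n", host[10]);  /* expect 20.0 */
        return 0;
    }

Everything outside the kernel and the launch line is plain C – which is exactly the pitch both Nvidia and Intel are making: new silicon, minimal new learning.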