Palo Alto (CA) – Stanford University’s Folding@home project today announced new software that will enable the use of graphics cards within the distributed computing project. Project leaders will tap into the floating point horsepower of graphics chips and hope for a massive jump in processing power that could lead to more research results in less time.
Whenever there is a need for enormous processing power, scientists typically make use of supercomputers, if available. And when even supercomputers aren’t enough, projects have expanded into distributed computing in the past – for example, the SETI@home (“Search for Extraterrestrial Intelligence”) program of the University of California, Berkeley, and the Folding@home project of Stanford University.
Folding@home, launched in 2000 by Stanford Associate Professor Vijay Pande, researches the “folding” process of proteins – a term that refers to the assembling and reassembling of proteins. This biological process does not always happen in a perfect way, and if “proteins get screwed up,” said Pande, biomedical problems and diseases can develop – such as Alzheimer’s disease, one of Pande’s research interests.
Computer simulation of these folding processes can help researchers learn more about them and find cures for, or even prevent, certain diseases. The problem is that these simulations consume huge amounts of processing power. “This scenario has become a 25-year nightmare,” Pande said. Researchers are far from being able to calculate folding processes in real time: In fact, 1 ns of folding currently takes about 1 day to calculate – meaning that simulating a single second would take on the order of 2.7 million years on “one fast processor,” he said.
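That figure follows directly from the stated day-per-nanosecond rate. A quick back-of-the-envelope check (plain arithmetic, not project code; the day-per-nanosecond rate is the article’s figure):

```python
# Back-of-the-envelope check of the simulation-time figure in the article.
NS_PER_SECOND = 1_000_000_000  # 1 second = 10^9 nanoseconds
DAYS_PER_NS = 1                # article: ~1 day of computation per 1 ns simulated

days = NS_PER_SECOND * DAYS_PER_NS
years = days / 365.25          # convert days to years
print(f"{years:,.0f} years")   # roughly 2.7 million years for one simulated second
```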
“Even if the government would give us everything they have in computing power, it wouldn’t be enough,” he explained. Some simulations could still take up to 40 years to yield results. Folding@home was conceived as a way to reach many more machines that would run calculations and, according to Pande, it has been a great success – with more than 200,000 active computers worldwide crunching numbers at this time.
Dual-core processors promised to bring another increase in horsepower, but Pande said that “twice” the performance doesn’t cut it: “We need 30 or 40 times the speed to turn months into days,” he said.
The project members looked into options to increase processing speed and ended up at solutions such as Clearspeed’s accelerator card, which provides about 100 GFlops, or roughly four times the performance of a current Core 2 Duo chip. But the cards are priced at $5000 even in volume and aren’t what one would call affordable. Pande now believes he has found a solution by tapping into the capabilities of modern graphics cards, which are monsters in terms of floating point performance: A new client released today supports ATI’s X1900 and X1950 graphics cards, which can unleash about 375 GFlops – roughly 20 to 40 times the speed the project has seen so far. The group has also improved the Folding@home software algorithm, which Pande expects will bring another 10 – 15x improvement, for a total maximum performance increase of about 500x when ATI’s graphics cards are used. However, Pande conceded that the graphics cards may only be able to deliver a sustained 100 GFlops.
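Multiplying the two claimed speedup ranges shows where the “about 500x” ceiling comes from (a sketch using the figures quoted in the article, not numbers from the project itself):

```python
# Combine the two speedup ranges claimed in the article.
gpu_low, gpu_high = 20, 40     # GPU client vs. the existing CPU client
algo_low, algo_high = 10, 15   # improved Folding@home algorithm

total_low = gpu_low * algo_low     # combined lower bound: 200x
total_high = gpu_high * algo_high  # combined upper bound: 600x
print(total_low, total_high)       # the article's "about 500x" falls in this range
```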
At this time, the beta client is limited to the X1900 series of graphics cards; the researcher said that X1800 cards will be supported soon. The group will also be leveraging the PlayStation 3 with its powerful Cell processor. A client for the PS3 was already shown in August, but, according to Pande, applications for the Cell aren’t easy to program.
So, what about Nvidia cards? According to Pande, the group has not been able to get the software to work on Nvidia chips.
Andy Keane, general manager of visualization applications at Nvidia, said in response to the ATI/Stanford announcement that general-purpose graphics processing units (GPGPUs) have so far been “fundamentally flawed” in the sense that there has not been a lot of “commercial exploitation with GPUs as a processor.”
He mentioned that Nvidia wants to change this situation and considers the GPGPU market “exciting” and something that “the company has been looking at for years.” He stated that he had no personal knowledge of the development of a Folding@home client for the Nvidia platform, but stressed that the company has a “long-standing relationship with Stanford.”
At least as far as we know, Nvidia cards were in fact used for general processing projects before ATI came into the picture. One of the early projects was the now defunct BionicFX, which used GeForce 6800 processors to accelerate audio processing. ATI publicly mentioned the possibility of using graphics processors for applications other than graphics shortly before the launch of the X1800 graphics card series. Such an approach, which ATI called “load balancing,” could one day run, for example, physics effects on consumer and enthusiast PCs. Nvidia outlined a similar approach earlier this year for future SLI systems.
However, the fact that graphics chips excel particularly in floating point performance currently limits their general purpose use largely to scientific applications. More and more companies are entering this lucrative field, trying to answer the need for more processing power. For example, startup Peakstream last week announced an application interface layer that enables developers to add graphics cards to computer systems as a way to create “cheap” supercomputers.