Researchers at North Carolina State University have developed a technique that allows graphics processing units (GPUs) and central processing units (CPUs) on a single chip to collaborate more effectively.
“Chip manufacturers are now creating processors that have a ‘fused architecture,’ meaning that they include CPUs and GPUs on a single chip,” explained Dr. Huiyang Zhou, an associate professor of electrical and computer engineering.
“This approach decreases manufacturing costs and makes computers more energy efficient. However, the CPU cores and GPU cores still work almost exclusively on separate functions. They rarely collaborate to execute any given program, so they aren’t as efficient as they could be. That’s the issue we’re trying to resolve.”
To be sure, GPUs – initially designed to run graphics programs – can execute a large number of individual functions in parallel very quickly. CPUs, the “brains” of a computer, have less raw computational power but are better suited to performing more complex tasks.
“So our approach is to allow the GPU cores to execute computational functions, and have CPU cores pre-fetch the data the GPUs will need from off-chip main memory,” Zhou said.
“This is more efficient because it allows CPUs and GPUs to do what they are good at. GPUs are good at performing computations. CPUs are good at making decisions and flexible data retrieval.”
Essentially, CPUs and GPUs fetch data from off-chip main memory at roughly the same speed, but GPUs can execute the functions that use that data much more quickly. If the CPU determines in advance what data the GPU will need and fetches it from off-chip main memory, the GPU can focus on executing the functions themselves – significantly reducing overall execution time.
In preliminary testing, Zhou’s team determined the new approach improved fused processor performance by an average of 21.4 percent. This technique has not been possible in the past, Zhou noted, because CPUs and GPUs were located on separate chips.