These chips are mini Internets


Although transistor counts continue to grow, the clock speeds of computer chips have stagnated in recent years, so manufacturers now routinely add more cores to increase processing power.



Indeed, a typical chip might boast six or eight cores, all communicating with each other over a single bundle of wires – known as a bus – which allows only one pair of cores to “talk” at a time.

This is a serious limitation for chips with hundreds or even thousands of cores, which many electrical engineers envision as the future of computing.



That is why Li-Shiuan Peh, an associate professor of electrical engineering and computer science at MIT, wants cores to communicate the same way computers hooked to the Internet do: by bundling the information they transmit into “packets.”

Essentially, each core would have its own router, which could send a packet down any of several paths, depending on the condition of the network as a whole.
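As a rough illustration of that idea — a hypothetical sketch, not the MIT group's actual router design — a packet carries its destination in a header, and each router picks among its output ports hop by hop, preferring the less congested of the directions that move the packet closer to its destination. All names and data structures below are made up for the example.

```python
# Hypothetical sketch of per-core, per-hop packet routing; illustrative only,
# not the actual router microarchitecture described in the article.
from dataclasses import dataclass

@dataclass
class Packet:
    dest: tuple          # (x, y) grid position of the destination core
    payload: bytes

def pick_output_port(router_pos, packet, congestion):
    """Pick an output direction that moves the packet toward its destination,
    preferring the less congested of the productive directions."""
    x, y = router_pos
    dx = packet.dest[0] - x
    dy = packet.dest[1] - y
    options = []
    if dx != 0:
        options.append("E" if dx > 0 else "W")
    if dy != 0:
        options.append("N" if dy > 0 else "S")
    if not options:
        return "LOCAL"   # the packet has reached its destination core
    # Adapt to the condition of the network: take the least backed-up path.
    return min(options, key=lambda port: congestion[port])

# e.g. pick_output_port((0, 0), Packet((2, 1), b""), {"E": 3, "N": 1}) -> "N"
```

A real on-chip router makes this decision in hardware within a clock cycle or two, but the principle is the same: each hop chooses a path based on the state of the network as a whole.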

In principle, multicore chips are faster than single-core chips because they are capable of splitting up computational tasks and running them on several cores simultaneously.

Cores working on the same task will occasionally need to share data, but until recently, the core count on commercial chips has been low enough that a single bus has been able to handle the extra communication load. 

That’s already changing, however: buses have hit their limit, typically scaling to only about eight cores. Indeed, the 10-core chips found in high-end servers frequently add a second bus, but that approach won’t work for chips with hundreds of cores.

For one thing, says Peh, buses consume a lot of power, because they have to drive long wires out to eight or 10 cores at once. Instead, Peh proposes a network in which each core communicates only with the four cores nearest it. 



“Here, you’re driving short segments of wires, so that allows you to go lower in voltage,” she explained.

In an on-chip network, however, a packet of data traveling from one core to another has to stop at every router in between. Moreover, if two packets arrive at a router at the same time, one of them has to be stored in memory while the router handles the other. 
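To make that buffering overhead concrete, here is a minimal, cycle-level toy model of one router output port — a hypothetical sketch, not the test chip's actual microarchitecture. Only one packet can leave the port per cycle, so a second packet arriving in the same cycle has to sit in a buffer.

```python
from collections import deque

class OutputPort:
    """Toy model of one router output port: one packet can be forwarded per
    clock cycle, so a second packet arriving in the same cycle must wait in a
    buffer -- the storage and delay overhead described above. Illustrative only."""
    def __init__(self):
        self.buffer = deque()

    def submit(self, packet):
        self.buffer.append(packet)     # packet waits here if the port is busy

    def tick(self):
        # Forward at most one packet this cycle; anything else stays buffered.
        return self.buffer.popleft() if self.buffer else None
```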

Unsurprisingly, many engineers worry that the added requirements will introduce enough delays and computational complexity to offset the advantages of packet switching.

“The biggest problem, I think, is that in industry right now, people don’t know how to build these networks, because it has been buses for decades,” Peh acknowledged.

Fortunately, Peh and her colleagues have developed two techniques to address these concerns. One is known as “virtual bypassing.” Typically, when a packet arrives at a router, the device inspects its addressing information before deciding which path to send it down. 

With virtual bypassing, however, each router sends an advance signal to the next, so that it can preset its switch, speeding the packet on with no additional computation. In her group’s test chips, Peh says, virtual bypassing allowed a very close approach to the maximum data-transmission rates predicted by theoretical analysis.
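One rough way to picture virtual bypassing — again a hypothetical sketch, not the group's circuit — is a router whose switch can be preset by a one-hop-ahead signal from the upstream router, so the next packet cuts straight through without header inspection or route computation.

```python
class LookaheadRouter:
    """Illustrative model of virtual bypassing: an advance signal from the
    upstream router presets the switch, letting the next packet pass without
    header inspection or route computation. Hypothetical sketch only."""

    def __init__(self):
        self.preset_port = None

    def receive_lookahead(self, output_port):
        # Advance signal from the upstream router: preset the switch.
        self.preset_port = output_port

    def forward(self, packet):
        if self.preset_port is not None:
            port, self.preset_port = self.preset_port, None
            return port                      # bypass path: no extra computation
        return self._compute_route(packet)   # ordinary, slower path

    def _compute_route(self, packet):
        # Placeholder for normal header inspection and route computation.
        return "E"
```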

The other technique is dubbed low-swing signaling. Digital data consists of ones and zeroes, which are transmitted over communications channels as high and low voltages. A circuit – designed by Sunghyun Park – reduces the swing between the high and low voltages from one volt to 300 millivolts. 
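To see why a smaller swing matters, here is a back-of-the-envelope estimate assuming the common first-order model in which the energy of driving a wire scales with the square of the voltage swing (which holds when the driver supply is scaled down along with the swing). The capacitance value is made up for illustration, and the resulting ratio is not a measurement from the MIT chip.

```python
# Back-of-the-envelope estimate for low-swing signaling, assuming the
# textbook model E = C * V_swing**2 (driver supply scaled with the swing).
# The capacitance is a made-up illustrative value, not test-chip data.
C_WIRE = 100e-15                     # hypothetical wire capacitance: 100 fF

def energy_per_transition(v_swing, c=C_WIRE):
    return c * v_swing ** 2          # joules per signal transition

full_swing = energy_per_transition(1.0)   # conventional 1 V swing
low_swing = energy_per_transition(0.3)    # 300 mV low-swing signaling
print(f"{full_swing / low_swing:.1f}x less energy per transition")  # ~11.1x
```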

With its combination of virtual bypassing and low-swing signaling, the test chip consumed 38 percent less energy than previous packet-switched test chips. 



The researchers have more work to do, Peh says, before their test chip’s power consumption gets as close to the theoretical limit as its data transmission rate does. But, she adds, “if we compare it against a bus, we get orders-of-magnitude savings.”