Chicago (IL) – In a recent guest article for the CERN Courier, Google vice president of operations Urs Hoelzle provided some insight into Google’s challenges and strategy for limiting the overall power consumption of the firm’s huge data center operations. According to Hoelzle, besides plain system power consumption, additional obstacles for data centers include cooling requirements, inefficiencies in power distribution and data center layout.
Google is known for mastering a model that gets the most performance out of a computer system for the least amount of money. Instead of running a few supercomputers, the company relies on thousands of fairly cheap entry-level systems with lots of system memory.
The reasoning behind this strategy is the simple fact that there is no linear relationship between more expensive computer systems and the performance gain they provide. As long as applications support a massively linked system, the cheapest available processor will deliver the most bang for the buck. But even with the money saved through this strategy, the staggering number of those computers racks up rapidly climbing operational costs such as power consumption.
Even with cheap commodity computers, the control of cost appears to be a growing challenge for Google. And this challenge is not just about hardware costs, but also about reducing energy consumption, Hoelzle writes. He estimates the system power consumption of a single dual-core processor system – which he described as a “successful attempt to reduce processors’ runaway energy consumption” – at around 265 watts, which requires another 135 watts of power to cool the system down within a data center. “Over four years, the power costs of running a PC can add up to half of the hardware cost,” he writes, and adds: “Saving power is still the name of the game, even to the extent that we shut off the lights in them when no-one is there.”
Looking at power inefficiencies, Hoelzle criticizes that the performance-per-watt ratio, a metric now touted heavily by Intel and promoted most prominently by Transmeta in the past, “is stagnant.” While performance increases, power consumption is rising as well, and “operational costs of commercial data centres are almost directly proportional to how much power is consumed by the PCs,” according to Hoelzle.
As one of the major inefficiency factors, Hoelzle points to DC power supplies that “are typically about 70 percent efficient,” but reach 90 percent at Google. Hoelzle says that Google is working with component makers to accelerate the time-to-market of more efficient devices, such as motherboards with a smaller number of DC voltage inputs. Other strategies for limiting power losses include more efficient software as well as an effort to improve the physical design layout of a data center: “We employ mechanical engineers at Google to help with this, and yes, the improvements they make in reducing energy costs amply justify their wages.”
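The efficiency gap between a typical 70-percent supply and Google’s 90-percent figure translates directly into wall-power savings. The sketch below illustrates this, using the article’s 265 W system figure as the DC load; treating the entire load as passing through the supply is a simplifying assumption.

```python
# Illustrative comparison of wall-power draw at the two power-supply
# efficiencies mentioned in the article (70% typical vs. 90% at Google).
# Assumption: the full 265 W DC load passes through the supply.

DC_LOAD_WATTS = 265

def wall_power(dc_watts, efficiency):
    """AC power drawn from the wall for a given DC load and PSU efficiency."""
    return dc_watts / efficiency

typical = wall_power(DC_LOAD_WATTS, 0.70)   # ~379 W drawn from the wall
google = wall_power(DC_LOAD_WATTS, 0.90)    # ~294 W drawn from the wall

print(f"Savings per machine: {typical - google:.0f} W")
```

Roughly 84 W saved per machine may sound modest, but multiplied across thousands of servers it becomes a substantial slice of a data center’s power budget.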
Hoelzle believes that “ultimately, power consumption is likely to become the most critical cost factor for data-centre budgets” in light of rising energy prices and “concerns about global warming.”