Generic CPU power efficiency claims are pointless, almost

Posted by Rick C. Hodgin

Opinion - After spending some time reviewing several power efficiency claims and reports, I have come to the conclusion that unless you're testing specific workloads that match your own data and needs, nearly everything you read about power efficiency claims can be thrown out the window.

Required workloads
How much power does your workload need?  Does it stress the SIMD engine?  Is it general-purpose compute intensive?  Are you reliant upon external I/O or memory bandwidth?  Is yours a balanced workload, or is it specialized, with particular stress on single components?  All of these questions are just the beginning.

In the x86 world today, AMD and Intel both claim their processors are the most power efficient on the market.  Both companies have issued direct statements to that effect at various times, and it's difficult to know at any given point in time which specific machine or processor they're talking about.  But the efficiency claims are there, and one message is being conveyed by all: data center power needs are moving front and center.  That means more efficiency, and that means performance per watt.

Power claims
Almost every x86 benchmark article I read these days includes power claims.  Power consumption is shown for a given workload, resulting in a relative metric for that task: over this many seconds, this much work was performed, and it used this much power.  The comparison might even span two different architectures, possibly with several models within each.  But how relevant are those reports to you, and what can you get out of them?  The answer: hardly anything.
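For what it's worth, the arithmetic behind a performance-per-watt figure is simple.  Here is a minimal sketch in Python of how such a metric is typically derived (all numbers are hypothetical, not drawn from any real benchmark):

    # A minimal sketch of how a perf-per-watt figure is derived.
    # All figures below are hypothetical examples.
    work_units = 5000        # work completed: transactions, frames, iterations
    elapsed_seconds = 120.0  # wall-clock time for the run
    avg_watts = 95.0         # average draw during the run, from a power meter

    energy_joules = avg_watts * elapsed_seconds  # energy = power x time
    perf_per_watt = work_units / energy_joules   # work per joule consumed

    print(f"Energy used: {energy_joules:.0f} J")            # 11400 J
    print(f"Efficiency: {perf_per_watt:.4f} units/joule")   # 0.4386

Note that the result is only meaningful relative to the same work units, measured the same way, on another machine.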

Efficiency, performance, and energy used all vary with the code being run.  AMD was recently able to demonstrate that Intel's SPEC numbers varied greatly just by changing compilers.  Differences in excess of 10% are not uncommon.  And what did the different compilers do to obtain those widely varying results?  They rearranged the internal workloads slightly, so that the same exact compute burden was carried out, albeit differently.  And the net result?  If something takes 10% longer to compute, it will use more energy.  Right?  Possibly, but not definitely.  It depends on which parts of the chip are being stressed, and on the system overall.  A binary might be drawing less power while taking longer to compute, and come out ahead on total energy.  It might be consuming more power and taking longer.  It might not.  There are just so many variables.
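To make the power-versus-energy point concrete, here is a back-of-the-envelope comparison (the figures are invented purely for illustration):

    # Hypothetical illustration: a slower binary can still use less energy.
    # Energy (joules) = average power (watts) x time (seconds).
    fast_watts, fast_seconds = 100.0, 100.0  # compiler A's binary
    slow_watts, slow_seconds = 85.0, 110.0   # compiler B's binary, 10% slower

    fast_joules = fast_watts * fast_seconds  # 10000 J
    slow_joules = slow_watts * slow_seconds  #  9350 J

    print(f"fast: {fast_joules:.0f} J, slow: {slow_joules:.0f} J")
    # The slower run finishes 10% later, yet consumes 6.5% less energy.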

Stated objectives
When we asked Intel for an official response following a recent efficiency report, a PR representative informed us that there are literally hundreds of independent benchmarks out there showing efficiency.  And while he did not give us a list of any benchmarks Intel specifically recommends, he did include this comment in his response:  "For those IT managers who don't do their own in-house testing, we recommend that each look at the 100s of independently verified benchmarks and reviews that exist for the most credible assessment."

In that sentence, the Intel PR representative makes the indirect claim that if you want it done right, you need to do it yourself.  And if you're not willing to do it yourself, there are hundreds of independent performance benchmarks to review.  The takeaway: the more of them you review, the better.

Conclusion
It does not appear to matter which benchmark you read.  Different workloads can swing the numbers dramatically.  Unless you are willing to take the time to get the candidate machines yourself, set them side by side with as many constants as possible (same hard drives, memory size, power supplies, operating system and application install, network, sound, and whatever else is specific to your needs), and then put them through their paces on your workload and measure the results, the truth is you'll never know which machine is more efficient (for you).  You'll only have other people's answers.
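If you do go down the do-it-yourself road, the measurement itself doesn't have to be elaborate.  Here is a minimal Python sketch of a harness that samples an external power meter while your workload runs; the read_meter_watts hook is hypothetical and would need to be wired to whatever meter you actually own:

    import threading
    import time

    def read_meter_watts():
        """Hypothetical hook: return the instantaneous draw in watts
        from an external power meter.  Swap in your meter's real API."""
        raise NotImplementedError("wire this to your power meter")

    def measure(workload, sample_interval=0.1):
        """Run workload() once while sampling wall power in the
        background; return (elapsed_seconds, energy_joules)."""
        samples = []
        done = threading.Event()

        def sampler():
            while not done.is_set():
                samples.append(read_meter_watts())
                time.sleep(sample_interval)

        t = threading.Thread(target=sampler)
        start = time.time()
        t.start()
        workload()  # your real workload, not a synthetic benchmark
        done.set()
        t.join()
        elapsed = time.time() - start
        avg_watts = sum(samples) / len(samples) if samples else 0.0
        return elapsed, avg_watts * elapsed

Run the same workload on each candidate machine and compare joules per task.  That single number, measured on your own code, will tell you more than any published chart.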

One final thought.  There's little doubt that large companies are going to do their own testing to find what's best; it just makes financial sense for them.  Still, for the rest of us, bear in mind that quad-core computing is available today at less than 100 watts per CPU, or 25 watts per core.  For the compute abilities we have today at 2.0 - 3.0 GHz, that's nothing to sneeze at.  All companies deserve their propers in this regard, and we here at TG Daily are more than happy to give all of them the same.

The low-power, multi-core future is here.  It brings with it many variables, and we all need to be aware that straightforward benchmarks won't always give us an accurate measure of either performance or power.  They are only guides, not definitive answers.