Forrester recently surveyed several thousand consumers and found that interest in Windows tablets has dropped precipitously. This apparently prompted analysts to conclude that Redmond is too little too late for the hyper-competitive tablet market.
However, I think if Forrester had conducted a survey about Apple phones years before there was an iPhone, or even any interest in putting a Mac in your pocket, the results probably would have been even lower. This illustrates why tech surveys, while interesting to read, might not be very accurate in terms of their predictions.
Now, to be clear, the current lineup of Windows tablets isn't selling very well. As such, a survey about consumer buying behavior that shows little interest in the product would likely accurately predict what vendors will report by the end of the quarter.
So why would one survey be accurate and the other inaccurate? Let’s explore this.
The Problem with Predictive Surveys and Focus Groups
Surveys are great at determining (if they ask the right questions) why someone did something (past tense), but they suck at predicting what someone will do. Recall that in the last US election, Hillary Clinton was, based on surveys early in the primaries, certain to win. We even had TV shows depicting female presidential characters in anticipation of this supposed outcome.
Obviously, Hillary didn't win the election, and the reason is simple: surveys don't place an individual in the exact same situation he or she will be in when making a future decision. True, focus groups do try to emulate or imagine future scenarios, but they generally fail at this task. It is almost impossible to precisely anticipate what choices buyers/voters will face in the future and what the overall environment will be like.
This point was illustrated for me a few years ago when I was part of a huge focus group for Chrysler, which was thinking about bringing out a new car. I was very impressed and told them I would absolutely buy the car at the price they suggested. Two years later they rolled the car out, yet it didn’t even make my short list.
Truthfully, my needs had changed. Plus, there was a crop of cars on the road that no one in the focus group knew would be tooling around when we convened two years earlier. This also showcases another problem with focus groups: looking back, Chrysler actually got me excited about the car at a conference that was very close to a sales event, which likely further corrupted my response. This is why I no longer really believe in predictive focus groups (I later found out that Steve Jobs had arrived at the same conclusion).
In short, focus groups and surveys often fail because they don't analyze the world as it will exist, but rather an alternative universe created by the survey or focus group facilitator. The greater the time and environmental differences, the less chance they have of being predictive.
The Danger of Predictive Surveys and Focus Groups
However, people will definitely make decisions based on surveys like the one conducted by Forrester about Windows 8 tablets. So, for example, if Microsoft concluded there was no way it could win the tablet war and pulled marketing and vendor support, the survey would turn out to be true. Of course, in this particular case, it would have caused, rather than just predicted, the result: a self-fulfilling prophecy of sorts.
Focus groups tend to be even more dangerous because, as the above-mentioned example illustrates, they can prompt companies to build products that have absolutely no chance of selling. Worse, they may create a false set of assumptions that suggests no marketing is needed because just about everyone will rush to buy the product.
In both cases, the danger is that if people believe and act on the survey (which has no real predictive accuracy), they could change the outcome of a future event from success to failure. The old computing axiom of "Garbage in/Garbage out" (GIGO) applies.
However, just because you can't trust the results doesn't necessarily mean you should assume they are patently false. For example, a survey conducted about the Zune long before it shipped would have shown little demand, which turned out to be a rather accurate prediction. True, the Zune failed because the product sucked and was severely under-marketed. Yet the survey would likely have been useful in showing that the market was not a "build it and they will come" space.
Wrapping Up: Avoiding the Prediction
The Forrester survey does indicate that flagging interest in Windows tablets will likely increase the marketing spend required, and it illustrates a potential name-equity problem (much like the one facing the Windows Phone platform) that will need to be overcome.
As such, if Microsoft underspends or under-executes (a recurring problem with products like Origami, Zune, Windows Vista, and Windows Home Server), failure will result. And, unlike those other products, Windows is a core offering from Redmond, meaning failure would likely be catastrophic.