Views from the Hills by R. E. Stevens, GENESIS II (The Second Beginning) E-Mail views@aol.com

False Alarms vs. Missing the Boat

In analyzing our research data, we are always faced with two types of risk, "Alpha" and "Beta." The "Alpha" risk is the risk of concluding that a particular observed difference is real when, in fact, there is no difference (False Alarm). The "Beta" risk is the risk of concluding that no real difference exists when, in fact, one does (Missing the Boat).
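To make the two risks concrete, here is a minimal Python sketch; it borrows the base of 300 and the 55/45 split that appear later in this column, along with the usual 5% cutoff, purely for illustration:

    import random
    from scipy.stats import binomtest

    random.seed(0)
    BASE, STUDIES, ALPHA = 300, 2000, 0.05

    def test_says_real(true_share):
        # One simulated study: does a two-sided test reject the 50/50 null?
        hits = sum(random.random() < true_share for _ in range(BASE))
        return binomtest(hits, BASE, 0.5).pvalue < ALPHA

    # Alpha risk (False Alarm): the truth really is 50/50, yet we call it "real."
    false_alarms = sum(test_says_real(0.50) for _ in range(STUDIES)) / STUDIES
    # Beta risk (Missing the Boat): a real 55/45 shift exists, yet we miss it.
    missed_boats = sum(not test_says_real(0.55) for _ in range(STUDIES)) / STUDIES

    print(f"False Alarm rate with no real difference: {false_alarms:.0%}")   # near 5%
    print(f"Missing-the-Boat rate with a real 55/45:  {missed_boats:.0%}")   # far higher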

This year we will be assessing over 275,000 new market ideas. From these ideas, about 23,000 will be put into the market. In less than 12 months, about 21,000 of the 23,000 will be deemed market failures. This failure rate is the result of our "Alpha" risk: we concluded that those ideas were real and important when, in fact, they were not. The level of our "Beta" risk failures will never be known, since the ideas we screen out never make it to the market, where we could learn whether they were "really good ideas."

It has been my experience that most companies set their "Alpha" and "Beta" risks without regard to where they are in the development of an idea. I believe that the "Beta" risk is a much greater threat to Product Development than the "Alpha" risk, and therefore we should operate with a high "Alpha" risk (an "Alpha" of .2 rather than the usual .05) while the idea is in the development stage. However, as we approach market introduction, we should tighten the "Alpha" risk (.05 rather than .2). Let me explain my thinking. In upstream work (product development), we cannot afford to throw out a good idea ("Beta" risk, Missing the Boat), but we can afford to move forward with an idea that may not be truly worthy of market introduction ("Alpha" risk, False Alarm), because the idea will be re-evaluated a number of times before market introduction.
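A rough sketch of that trade-off, under a simple normal approximation and reusing the illustrative 55/45 split on a base of 300 from the example below, shows how loosening the "Alpha" risk shrinks the "Beta" risk:

    from scipy.stats import norm

    def beta_risk(p_true, n, alpha, p_null=0.5):
        # Chance of Missing the Boat for a two-sided test of the 50/50 null,
        # normal approximation; the negligible lower rejection tail is ignored.
        se_null = (p_null * (1 - p_null) / n) ** 0.5
        se_true = (p_true * (1 - p_true) / n) ** 0.5
        cutoff = p_null + norm.ppf(1 - alpha / 2) * se_null
        return norm.cdf((cutoff - p_true) / se_true)

    for alpha in (0.05, 0.20):
        print(f"Alpha = {alpha:.2f}  ->  Beta = {beta_risk(0.55, 300, alpha):.2f}")
    # Loosening Alpha from .05 to .20 cuts Beta from roughly .59 to roughly .33
    # for a true 55/45 split on a base of 300.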

Significance tests, unfortunately, concentrate mainly on the "Alpha" risk (False Alarm), so the atmosphere of a go/no-go significance test actually encourages "Beta" errors (Missing the Boat). If, for example, we call a 55/45 split on a base of 300 "not statistically different at the 5% risk level" and therefore accept the Null Hypothesis (50/50), we run a good chance of discarding a real difference. An observed 55/45 is about as likely to arise from a true 60/40 split as from a true 50/50. Remember that 55/45 is our best estimate. In fact, the odds that our result is really a 56/44 (a significant difference) rather than a 50/50 are about 2:1.
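A back-of-the-envelope check of that argument, assuming a simple one-sample binomial model, is to compare how likely the observed 165 out of 300 is under a few candidate true splits:

    from scipy.stats import binom

    observed, base = 165, 300   # the 55/45 split on a base of 300

    for p in (0.50, 0.55, 0.60):
        likelihood = binom.pmf(observed, base, p)
        print(f"true split {p:.0%}/{1 - p:.0%}: chance of seeing 165/300 = {likelihood:.4f}")
    # The observed 55/45 is the single most likely truth, and a true 60/40
    # explains the data about as well as the 50/50 null we "accepted."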

Rather than concerning ourselves with "significance," we should be looking at the internals of the data, their continuity, and the statistical power of the study. Understand what the data are telling us. Compare the data with our expectations and with other research.
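On the power point, one rough way to frame the question (the 80% power target and the one-sample test against a 50/50 null are illustrative assumptions, not figures from this column) is to ask how large a base we would need before a real 55/45 split is usually detected:

    from math import ceil, sqrt
    from scipy.stats import norm

    def base_needed(p_true, alpha=0.05, power=0.80, p_null=0.5):
        # Sample size for a two-sided one-sample test of a proportion
        # (normal approximation).
        z_alpha = norm.ppf(1 - alpha / 2)
        z_power = norm.ppf(power)
        numer = z_alpha * sqrt(p_null * (1 - p_null)) + z_power * sqrt(p_true * (1 - p_true))
        return ceil((numer / (p_true - p_null)) ** 2)

    print(base_needed(0.55))   # roughly 780; a base of 300 is badly underpowered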

Your approach to how you conduct your research, and to how you analyze the data, should depend on where you are in the development cycle of the idea.

