Views from the Hills by R. E. Stevens, GENESIS II (The Second Beginning) E-Mail views@aol.com

Startling New Evidence

Recently, when presenting "Researching Research" results, I was asked how I handle unexpected test results. Rather than simply state how I would handle an incident of this type, I chose to give a real example.

A few years back, a company was working to improve the image and shelf visibility of one of its brands. To achieve this, it decided to change both the container and the graphics. To create the best possible execution, a series of tests was conducted, each building on the previous execution. After the studies were concluded, one last confirmatory study was conducted: a syndicated, simulated store-shelf, mall-intercept study. This particular study was noted for its ability to predict share results from spot test results, drawing on its extensive database of past test results. The results of the study were devastating; they contradicted all previous research.

The first reaction was to check the obvious: the product coding, for the possibility of a code reversal, and the data processing. When these checks did not uncover an error, it was decided to treat the new data as a new hypothesis, not a new truth. As a result of this thinking, we elected to do a follow-up confirmatory study. In the follow-up study, the exact same interview was used, but the location was changed. Instead of a mall-intercept approach, we used real stores, where we could place the products on the store shelves in exactly the positions they would occupy when the new execution was introduced into the market.

The results of this follow-up research were directly opposite to those of the mall-intercept study. On the strength of this new data, the company chose to proceed to market with the new execution, and it was a success. The company's reasoning was that the two studies were identical except for location, and intuitively it felt the more accurate results were those collected in the more natural environment. The in-store results were also in agreement with all previous testing except the simulated store-shelf test.

From my perspective, there were two learnings from this experience.

First, the test environment can have a major effect on your data. It is in your best interest to remove as many test variables as possible. When we are studying WHAT happens, we should be alert to the effects of the WHERE, WHEN, WHO, and HOW variables in our test.

Second, treat a new discovery as a new hypothesis, one that should be confirmed before action is taken. It has been my experience that startling new results most often turn out to be test errors.
