March 15, 2006
- by Robert E. Stevens, GENESIS II (The Second Beginning) E-Mail: firstname.lastname@example.org
All too often we spend all our design time on the questions to be asked and not on the Who, What, When, Where, and How factors of the research. Each can have an enormous effect on your data. Consider the following:
Would you expect a person recruited to "test" a product to use and observe it any differently than a person simply using the product with no expectation of an interview? There are differences, and they are big. I have conducted research to evaluate the size of those differences. In the planning stage of your project, decide whether it requires "Testers" or "Users." There is a time and place for each.
Would you expect differences in data between research conducted in-store, among a complete array of competitive brands with their prices, and research conducted in the back room of a mall? You should.
Would you expect the results of an interview obtained from shoppers in a store to be any different from results obtained from people pre-recruited to participate in a test? You should.
Do you remember the exact price you paid for your last purchase of a bag of frozen peas? If not, why would you ever insist that the concept you are evaluating contain the price unless you are conducting your research in the presence of the competition and their prices?
Does the taste of blind-test products differ from the taste of brand-identified products? I don't think so, but a beverage company's taste tests showed there can be very large differences.
Then there is the issue of directed interest testing vs. non-directed. Many researchers do not believe in directed interest testing. I have a problem with not informing the participant of the product's "reason for being." How would you ever evaluate a product such as a laundry detergent that contains a stain inhibitor? Participants do not expect this attribute, so why would they ever look to see whether it is there and how well it works?
There are times when the biases are injected internally, with little or no help from the test participants.
I was employed by a company (yes, P&G) that had an Invalid Interview rule: if a respondent did not use the test product for 50% of the test duration, they were not interviewed. They were not even asked why they did not use it for the duration of the test. I encountered this rule when I first moved into the MRD Department. In one test we had approximately 20% invalid interviews. I asked why those participants stopped using the product and was told we did not ask them. I went back, called the invalid participants, and found that of the 20%, 90% had stopped using the product because it left bleach spots on their clothes. Now why would the company not be interested in why a person receiving free product refused to continue using it? I don't know, but it took two years to get the rule changed to at least ask why usage was terminated. In point of fact, P&G is not the only one with such internal rules.
Do you conduct concept evaluation studies via a paper questionnaire? Do you include the contingency question associated with the Intent to Purchase rating, such as "If you did not vote 'definitely would buy,' write in why"? In a controlled study we found that "definitely would buy" scores increase by 20% to 25% with the addition of the contingency question. Who would want to write out a reason when all they had to do was change their vote to "definitely would buy"?
The above are just a few of the biases we must deal with in our research. But as Charlie Zitnik of the Kroger Company says, "it is all about the PIE factor": Planning Is Everything. In the design of your research, consider not only what you want to ask but also the Who, What, When, Where, and How of how you conduct it.
~ ~ Oregon: 800.542.0123 ~ ~ Minnesota: 888.616.0123 ~ ~ Pennsylvania: 866.993.0123 ~ ~