
Conducting the Paired Comparison Blind Test

April 12, 2006 - by Robert E. Stevens, GENESIS II (The Second Beginning) E-Mail: views@aol.com

I was recently asked to look at and comment on a parallel paired comparison blind test. The study was conducted by a very large international company, and many members of its upper management spent their early years at P&G. I had assumed that P&G's research principles would be practiced within that company (not that P&G's market research principles are without flaws). I was surprised, however, at how far the protocol deviated from my early training.


First, the study was conducted to determine whether consumer satisfaction with the brand had slipped. The documentation did not state whether the study was to address an absolute or a relative change. My indoctrination into brand maintenance was that an absolute change should have been caught at the manufacturing quality control stage, and, above all, that the comparison would have been made against a known benchmark, not against a competitive brand as in this study. A secondary point of change awareness would have been the performance testing of brands, both our own and the competition's. At P&G, samples are purchased bimonthly from stores throughout the market area and tested both analytically and in laboratory performance. Again, the comparison is made against a known benchmark; we would not try to detect a change by comparing the brand with a competitive brand. When you compare against a competitive brand, you cannot tell which brand is changing, only that some change did or did not occur.
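To make the benchmark point concrete, here is a minimal sketch, in Python, of how a paired comparison against a fixed benchmark might be read. The sample size, the historical preference share, and the use of a simple two-sided binomial test are illustrative assumptions on my part, not details taken from the study:

from scipy.stats import binomtest

# Hypothetical figures: in past paired tests against the same unchanged benchmark,
# 55% of respondents preferred our brand. In the current test, 86 of 200 do.
historical_share = 0.55
prefer_brand = 86
n_respondents = 200

# Because the benchmark is known and constant, a significant shift away from the
# historical share can only reflect a change in our own brand.
result = binomtest(prefer_brand, n_respondents, historical_share, alternative="two-sided")
print(f"Observed preference share: {prefer_brand / n_respondents:.2f}")
print(f"p-value against the historical norm of {historical_share:.2f}: {result.pvalue:.4f}")

# Against a competitive brand there is no such anchor: a shift in the split could
# come from a change in either product, so the same reading is not possible.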


The documentation did not reveal who was participating, how they were recruited, where the interviewing took place, or the method of interviewing, all of which can have major effects on the results and should be stated in the documentation.


According to the outline, the brand was compared with two competitive brands in a blind test format. The question order was to ask for the attribute ratings first and only then the overall rating and preference.


My training, right or wrong, says to ask the preference question first, followed by the reasons for that preference; only after these two questions are asked do we introduce attributes into the mix. The reasoning is that participants expect you to ask which product they prefer and why, so get it out of the way immediately. It is also felt, and there are data to substantiate it, that introducing attributes before the preference is stated can influence the preference and the overall ratings: you are telling the respondents what is important to you, and you may introduce an attribute they did not consider in their evaluation, causing them to re-evaluate their choice.
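Purely as an illustration of the two interview flows being contrasted here (the question wording and the attributes are hypothetical, not drawn from the study's questionnaire), a short Python sketch:

# Order my training calls for: preference first, unprompted reasons second,
# attributes only after the choice is already on record.
preference_first = [
    "Overall, which of the two products do you prefer?",
    "Why do you prefer that product?",                     # open-ended, unprompted
    "Please rate each product on cleaning power (1-10).",  # hypothetical attribute
    "Please rate each product on scent (1-10).",           # hypothetical attribute
    "Please rate each product overall (1-10).",
]

# Order used in the study reviewed here: attribute ratings first, which can cue
# respondents to re-weigh criteria before they state a preference.
attributes_first = [
    "Please rate each product on cleaning power (1-10).",
    "Please rate each product on scent (1-10).",
    "Please rate each product overall (1-10).",
    "Overall, which of the two products do you prefer?",
]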

