Views from the Hills by R. E. Stevens, GENESIS II (The Second Beginning) E-Mail views@aol.com

Product Ratings in a Single Product Test

Some of the very first "Researching Research" projects I was involved with focused on the order of questioning.  These projects involved both paired comparison and single product blind tests.  On average, the order of presentation did not greatly influence product ratings, preferences, or comments (likes/dislikes and reasons for preference).  However, there were times when the order did play an important part in the results.

In our early single product work, we experimented with the order of the Comparison with Own rating, the product rating, and the voluntaries (likes/dislikes).  While, as stated above, we did not see a great deal of difference on average, there were times when we obtained greater product rating differences.  The larger differences occurred when we gave the respondent more deliberation time.  That order was: first ask for likes and dislikes, then the Comparison with Own rating, and finally the Product rating.

The above order of questioning seemed logical to us based on the expectations of the respondents.  That is, the respondent comes to us ready to report on their experience.  Their prime focus at the time is what they liked and did not like about the product.  It seemed logical to get that out of the way first; letting them enumerate their experiences also aided them in the overall evaluation of the product.

Our research pointed us in a slightly different direction for the paired comparison protocol.  In this case, respondents appear to be thinking about preference rather than reasons for preference.  Therefore, with paired comparison studies, we would ask preference first, followed by reasons for preference and then product ratings.

I was recently asked whether, if reflection by the participant was important, it would be advisable to ask preference and ratings at the end of the interview.  My thought was that it would be acceptable if you included only those attributes in the participant's equation of excellence and, importantly, no others.  All interviews would therefore be different, depending on the participant.  If you added something the respondent did not normally consider, you could easily change the respondent's response pattern.  If you left out something they felt was important, you might cause them to consider that factor unimportant, thereby reducing its weight in the overall assessment.

Basically, I look at up-stream blind testing in two lights:  1) perception and 2) performance.  In the first case, perception, you want to determine how the respondent assesses the product using their personal criteria of excellence.  In the second case, performance, you want to understand how the respondent evaluates performance on specific attributes regardless of the respondent's criteria of excellence or importance.  For performance-based research, I would propose giving the respondent a performance rating card in advance, so they can rate the product on each attribute during actual use.  The two approaches call for very different protocols.
