Product research addresses two areas of product evaluation: how the user perceives the performance of the product, and how the product actually performs.
In the first area, perception, we are interested in how the user sees the performance of the product, real or imagined. The user is free to assess the product's performance against personal standards of excellence while weighing the product's characteristics in his or her own order of importance.
In the second area of testing, performance, we are interested in determining what actually takes place. Of prime concern is what happened, not whether the user considered it important, liked it, or even noticed it. We are interested in factual data rather than attitudinal data.
Both perception and performance are extremely important in the development and improvement of a product. While both are valuable approaches, however, they are usually incompatible within the same test. When you ask a user to observe and record specific information about a particular event or group of events, you automatically distort the user's personal assessment. By the mere act of asking for the observation of a performance characteristic, you have signaled that it matters, and the user will revise the degree of importance he or she places on that event.
It is not unusual to see studies that attempt to collect both types of data. The user may be asked to keep a diary during use of the product, or to observe specific events and record the results. As a performance study this is acceptable. But when we then ask perception questions at the completion of the study, such as overall evaluation or intent to purchase, the resulting data are biased and unacceptable, because the observations and evaluations requested during testing have already shaped the user's attitudes.
It is imperative to know the source of your data and the circumstances surrounding its collection.