Views from the Hills by R. E. Stevens, GENESIS II (The Second Beginning) E-Mail views@aol.com

Do Our Clients Really Know the Risks and Limitations of Our Research?  Do We Know the Risks and Limitations of Our Own Research?

Recently I called the author of an article I had read in a leading market research magazine.  I was impressed with some of the work he was writing about.  In the course of the conversation, he stated that in 20 years of research he had never made a mistake.  At that point, his credibility went down the drain.  I have met people who have never made a mistake, but they were always in the hospital newborn nursery, in bassinets.  In my mind, the only person who never makes a mistake is someone who never does anything.  I can tell you this: if I were hiring a consultant, this is one person I definitely would not hire.

How does the above experience fit the title of this paper?  I think it fits in many ways.  Is there anyone out there in the real world of research who has ever conducted a perfect piece of research?  I don't think it is possible.  Actually, I have never conducted a research project where, when it was finished, I was perfectly satisfied with the study.  I have always found ways, after the fact, in which the study could have been improved.  If I ever designed one that turned out perfect, I would be concerned that the research was really unnecessary.  Research is about estimating, approximating, putting risks in perspective, and so on.  Risks and limitations are everywhere: in the test design, the sampling, the execution, and the analysis.  I have written many times about errors in the design, sample, and execution of a study.  I don't believe I have ever written about problems of analysis, except when they were related directly to the use of an improper design and therefore an inappropriate analysis.  The improper design/analysis problem appears most frequently in the use of the paired comparison design to assess the acceptability of a product.  Paired comparison studies are designed to evaluate choice (between two alternatives).  Single product tests are designed to measure acceptability.

The most common analysis error I have seen is in the interpretation of the reliability statistic, that is, the +/-3% or the alpha risk of .05.  While these are reliability statistics, they are most frequently treated as accuracy statistics.  These statistics address only the probability that replicate studies will yield similar results.  Specifically, if your study has a flawed sample, the replicate studies must have the same flaw for the statistic to be valid.  In other words, confidence statistics address Precision and not necessarily Accuracy.
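The precision/accuracy distinction above can be seen in a small simulation.  This is only an illustrative sketch, not part of the original column: the population values (a true proportion of 50%, a flawed sampling frame that yields 60%) and the sample sizes are hypothetical numbers chosen to make the point visible.  Every replicate study agrees closely with the others, yet every one of them misses the truth by the same margin.

```python
import random

random.seed(1)

# Hypothetical illustration: the true population proportion is 0.50,
# but the sampling frame is flawed and systematically over-represents
# favorable respondents, so every study actually samples from 0.60.
TRUE_P = 0.50
FLAWED_P = 0.60
N = 1000  # respondents per replicate study

def replicate_study():
    """One 'study' drawn from the flawed frame; returns the observed proportion."""
    return sum(random.random() < FLAWED_P for _ in range(N)) / N

estimates = [replicate_study() for _ in range(50)]
spread = max(estimates) - min(estimates)
bias = sum(estimates) / len(estimates) - TRUE_P

# Replicates agree with one another within a few points (high Precision)...
print(f"spread across 50 replicate studies: {spread:.3f}")
# ...yet on average they all miss the true value by about 10 points (poor Accuracy).
print(f"average error versus the truth:     {bias:.3f}")
```

The +/-3% statement describes only the spread in the first number; it says nothing about the systematic error in the second.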

The greatest area of limitation is in the sampling.  This is one area of research that can NEVER be perfect without 100% sampling.  I'm reminded of a friend who was going to field a nationally representative sample by using 20 malls from around the country.  While his sample was perhaps regionally representative, it was far from nationally representative.  His sample represented people who shop the malls in the selected cities and who agreed to participate in his research, a far cry from a nationally representative sample.

I have written many times about experiences in the real world of research.  All one has to do is get out from behind the desk and into the world of action to see the execution problems.  It is our responsibility to become aware of the biases and to work to minimize them in our research.

