Baseball and Marketing
October 25, 2005
- by Robert E. Stevens, GENESIS II (The Second Beginning) E-Mail: firstname.lastname@example.org
The criteria for success in baseball are quite different from those in market research. In baseball, we idolize hitters whose batting averages are above the .300 mark and reject those with averages below .200. It is a good thing that a success rate of one in five or better is not expected in the development of new consumer products.
Historically, only 8% of new products researched ever hit the market, and of those, 9 out of 10 disappear from the market within one year. In baseball, those numbers would be considered a failure, but not in the business world. Maybe we in business consider those numbers a sign of success, since we seem to tolerate these results year in and year out.
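The arithmetic behind those figures is worth spelling out. Taking the article's numbers at face value (8% of researched products launch; 9 of 10 launches disappear within a year), a quick sketch shows how the overall success rate compares to the baseball benchmarks above:

```python
# Compound the two failure statistics cited in the article.
# These rates are the article's figures, not independent data.
reach_market = 0.08        # share of researched products that ever launch
survive_first_year = 0.10  # share of launched products still sold after a year

overall_success = reach_market * survive_first_year
print(f"Overall success rate: {overall_success:.1%}")  # prints "Overall success rate: 0.8%"

# Compare with the batting averages mentioned in the opening paragraph.
rejected_average = 0.200   # below .200, a hitter is "rejected"
ratio = rejected_average / overall_success
print(f"A rejected .200 hitter succeeds {ratio:.0f}x as often")  # prints "... 25x as often"
```

In other words, even the hitter baseball rejects outright succeeds roughly 25 times as often as a researched consumer product, which is the contrast the article is drawing.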
I don’t have the figures, but it seems to me that the failure rates in the 1950s and 1960s were a lot lower. If so, what has changed? From my perspective, the major change involves time. I remember when the product development time for an idea averaged 24 months, with some projects taking up to 10 years. We took time to maximize the effectiveness of the product, enhance its acceptance through aesthetics, cultivate the positioning, develop the message, and polish the marketing plan. Today we seem to “Rush to Failure.”
Towards the end of my career, a client asked: if I were to risk my career on one test to establish the market potential of a new brand, what test protocol would I use? I told him that was an easy question to answer: there is none. The reason is that I had no confidence in the methods currently in use for that purpose. Nothing in our toolbox seems to be very reliable. While we collect numbers, most of the analysis appears to be art rather than science. As one analyst of simulated test-market data told me, “50% of the conclusion comes from the test data and 50% is from judgment.”
The question remains, however: what would I do when faced with that challenge? Where would I start, and what would I do? It so happened that a young man (Gary Walker) and I were faced with the challenge of putting a brand modification on the market after it had failed two test markets. We decided to start at the end of the process; that is, what did we really need to know to determine market success or failure? We decided that it was SALES data, not Liking Scores, Intent to Purchase Scores, or Appreciation Scores. We needed real sales data from consumers, not testers; we needed real people using real money while making their purchase decisions among all the available brands. As Gary said, “That is a Test Market, and we can’t afford one.”
Based on those needs and restrictions, we conducted a mini-test market: much smaller in scope and with tighter controls. We made a major packaging change and a minor communication change. The result was a successful mini-test market, followed by a very successful full-blown test market and the market introduction of the brand (Cheer Ultra and Tide Ultra).
The above study was conducted in the early 1980s. Following that experience, I used the protocol a number of times, both while at P&G and after my retirement, with excellent results. I believe methods such as this one can go a long way toward identifying market failures before significant investments of time and dollars are made. Think of the possibilities with today’s technology, where we can easily track household purchases as well as the consumer’s in-store activities.
If anyone is interested in this type of research, give me or James Sorensen a call. I know Sorensen Associates is experienced with this type of research.
Sponsor: Sorensen Associates Inc
~ ~ Oregon: 800.542.0123 ~ ~ Minnesota: 888.616.0123 ~ ~ Pennsylvania: 866.993.0123 ~ ~
the in-store research company™
-- Dedicated to the relentless pursuit of WHY?