MASB’s Game Changing Brand Investment and Valuation Project – Part III

October 13th, 2015

In Part I and Part II of this blog series, we discussed the empirical strengths and corporate needs driving the adoption of brand preference. But one aspect that pleasantly surprises those new to the technique is how easy it is to deploy relative to other measures.

Most common brand metrics are collected through a closed-ended question paired with a Likert or intention-style scale. An example is the familiar stated purchase intent question:

How likely are you to buy [INSERT BRAND] in the next [INSERT TIME PERIOD]?

  1. Definitely will buy
  2. Probably will buy
  3. Might or might not buy
  4. Probably will not buy
  5. Definitely will not buy

While on the surface this looks fairly simple, in practice it is difficult to extract meaningful, sales-calibrated information from it. Because this is a stated measure, it is subject to each respondent's subjective interpretation and cognitive bias. One respondent's understanding of "Definitely", "Probably", and "Might or might not" can vary dramatically from another's. While this effect can be averaged out across large samples, it makes subgroup comparisons very difficult, because psychographic and demographic groups often exhibit substantial mean differences. Without strong normative data (which is often difficult to obtain), this can lead to false relative conclusions.
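
To see how label interpretation alone can distort comparisons, here is a minimal simulation sketch in Python, using made-up parameters rather than data from any actual study. Two groups share the same underlying intent but draw the lines between "Definitely", "Probably", and "Might or might not" differently, and their stated means diverge:

```python
# Illustrative sketch (not the MSW•ARS methodology): two subgroups with identical
# underlying purchase intent report different means on the 5-point stated scale
# simply because they interpret the verbal anchors differently.
import random

def stated_response(true_intent: float, leniency: float) -> int:
    """Map latent intent (0..1) to a scale point (1 = Definitely will buy ... 5 = Definitely will not buy).
    'leniency' shifts where a respondent draws the line between anchor labels."""
    score = true_intent + leniency + random.gauss(0, 0.05)
    if score > 0.8:
        return 1
    if score > 0.6:
        return 2
    if score > 0.4:
        return 3
    if score > 0.2:
        return 4
    return 5

random.seed(1)
TRUE_INTENT = 0.55  # same underlying intent in both groups
group_a = [stated_response(TRUE_INTENT, leniency=0.10) for _ in range(2000)]   # lenient raters
group_b = [stated_response(TRUE_INTENT, leniency=-0.10) for _ in range(2000)]  # strict raters

print(f"Group A mean: {sum(group_a) / len(group_a):.2f}   "
      f"Group B mean: {sum(group_b) / len(group_b):.2f}")
# The stated means differ even though the groups' underlying intent is identical.
```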

Worse yet, differences can also arise from seemingly innocuous changes in survey deployment, such as question order or sample source. As demonstrated in the ARF Foundations of Quality project, different panels produce substantially different response levels even when great effort is applied to demographic balancing. This occurs even for the most straightforward stated questions, such as reported product usage, at rates that exceed what would be expected from sampling error.

[Figure: MASB-PART-III-FIG-001]
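
As a rough illustration of that last point, the back-of-the-envelope check below uses hypothetical usage rates and sample sizes (not figures from the ARF study) to ask whether an observed panel-to-panel gap in reported usage is larger than sampling error alone would allow:

```python
# Hypothetical numbers only (not the ARF data): is a gap in reported product usage
# between two demographically balanced panels larger than sampling error allows?
from math import sqrt

def two_sample_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z-statistic for the difference between two independent proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Say panel 1 reports 42% usage and panel 2 reports 36%, with 1,000 completes each.
z = two_sample_z(0.42, 1000, 0.36, 1000)
print(f"z = {z:.2f}")  # well beyond +/-1.96, so sampling error alone is an unlikely explanation
```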

Even when the above factors are rigorously controlled, stated questions still require a scale translation to calibrate the results against in-market performance. This translation, itself subject to estimation error, results in a 'black box' that slows the analytic process and can reduce end users' confidence in the results because the linkage is no longer intuitive.
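
A simplified sketch of what such a translation looks like appears below. The conversion weights are illustrative assumptions for this example only, not a published calibration and not the MSW•ARS model; in real use they would be estimated against in-market results:

```python
# A minimal sketch of a 'scale translation'. The conversion weights are illustrative
# assumptions for this example only, not a published or MSW•ARS calibration; in
# practice they would be estimated against in-market performance data.
from collections import Counter

# Assumed probability of actual purchase for each stated response (1 = "Definitely will buy").
ASSUMED_CONVERSION = {1: 0.75, 2: 0.35, 3: 0.10, 4: 0.03, 5: 0.01}

def projected_trial_rate(responses: list[int]) -> float:
    """Translate a distribution of stated-intent scores into a projected trial rate."""
    counts = Counter(responses)
    return sum(ASSUMED_CONVERSION[score] * count for score, count in counts.items()) / len(responses)

# Hypothetical sample of 1,000 stated-intent responses.
sample = [1] * 120 + [2] * 300 + [3] * 350 + [4] * 150 + [5] * 80
print(f"Projected trial rate: {projected_trial_rate(sample):.1%}")  # the 'black box' output end users see
```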

Brand preference, by comparison, is much more robust. The incentivized act of choosing from a competitive set replicates much of the dynamics of an actual purchase occasion, so respondents intuitively understand the exercise and the results calibrate naturally to sales performance. This makes it an ideal method for subgroup comparisons, as no norms or translations are needed for interpretation.

[Figure: MASB-PART-III-FIG-002]
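
The contrast shows up in how the data are read. In a choice exercise each incentivized pick is already a purchase-like outcome, so brand shares can be tallied and compared directly, as in this sketch with hypothetical brands and choices:

```python
# Hypothetical brands and choices: with an incentivized pick from a competitive set,
# preference shares are read directly from the choices, with no scale translation or
# norms needed, which is what makes subgroup comparisons straightforward.
from collections import Counter

def preference_shares(choices: list[str]) -> dict[str, float]:
    """Share of respondents choosing each brand from the competitive set."""
    counts = Counter(choices)
    return {brand: count / len(choices) for brand, count in counts.items()}

choices = ["Brand A"] * 430 + ["Brand B"] * 310 + ["Brand C"] * 260
for brand, share in sorted(preference_shares(choices).items()):
    print(f"{brand}: {share:.1%}")
```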

But perhaps most exciting is how respondents react to the brand preference exercise. Surveys consisting of closed-ended and open-ended questions can quickly disengage respondents, leading to straightlining, speeding, satisficing, and other poor survey-taking behaviors. In an attempt to combat this, insight teams have been compelled to continually reduce the number of questions asked in a survey and the number of options, especially brands rated, included within attribute tables. In essence, depth of research is being traded off for response quality.

Including a brand preference exercise within such surveys counteracts this trend. Not only does it provide valuable information for each brand within a category in a very time-efficient manner, but the nature of the exercise also improves engagement in much the same way as gamification. In fact, when brand preference is added to a survey, it is common to see self-reported survey length drop while survey satisfaction ratings rise.

As an example, we recently created for a client a first-of-its-kind, brand preference-based behavioral in-store shelf optimization testing platform. Respondents have often viewed traditional approaches to this type of research as tedious and not worthwhile. By contrast, the results for this new approach have been outstanding: on a ten-point scale, 98% of respondents rated the system a 5 or higher and 55% rated it a perfect 10.

[Figure: MASB-PART-III-FIG-003]

But perhaps more impressive than this quantitative assessment is the open-ended survey feedback respondents chose to share.  Comments like these were common:

“LOVE that it was short and to the point, no dragging it out.”

“…the ease of instructions. They were not confusing.”

“There was not a lot of ambiguous stuff. Well prepared.  User friendly.”

“It was very different than other surveys I’ve taken, and I appreciated that variety!”

“This survey was very different, fun, interesting, and relevant. I like the conciseness of it and that it didn’t ask the same questions over and over again. Nice survey and great topic.”

“I was actually a little disappointed when the end questions came up. I wanted to shop more.”

Simply put, when it comes to survey deployment, MSW•ARS brand preference is unlike any other metric. It can be incorporated into a wide variety of research and can even become a standard key performance indicator in your reporting, particularly in your tracking data.

Please contact your MSW•ARS representative to learn more about how our brand preference approach has been integrated across our entire suite of solutions.

Categories: MASB