
MASB’s Game Changing Brand Investment and Valuation Project – Part I

July 20th, 2015

How much is my brand worth in financial terms?  How much will my marketing grow its value?

Despite their seeming simplicity, these two questions have frustrated brand practitioners for decades.  It is well accepted that there is a link between brand-building activities and corporate profits; after all, the entire field of marketing rests on this proposition.  Yet it is equally well accepted that there is no standardized approach companies can rely on to quantify brand value in the dollars-and-cents terms applied to other assets.  This puts marketing at a severe disadvantage in boardroom discussions of resource allocation, as its expenditures are all too often seen as pure costs rather than investments in the business.  And this is despite a growing realization that intangibles account for up to eighty percent of overall corporate value, with brands at the top of the list.

But one industry group is actively working to change this.  The Marketing Accountability Standards Board (MASB) created the Brand Investment and Valuation (BIV) project to establish the quantitative linkages between marketing and financial metrics.  The solution it has proposed is as simple as the questions themselves: identify a “brand strength” metric that captures the impact of all branding activities, understand how this metric translates into financial returns (ultimately cash flow), and then use it to calculate a brand value and to project the return on future marketing investments.

MASB-FIG-01

Of course this raises the question: does such a “brand strength” metric exist?  And if so, is it practical enough to use?  After an exhaustive search of the research literature, MASB identified brand preference as the most likely candidate for the brand strength metric.  Brand preference (also known as brand choice) is defined in the Common Language in Marketing dictionary as:

One of the indicators of the strength of a brand in the hearts and minds of customers, brand preference represents which brands are preferred under assumptions of equality in price and availability.

The ability of brand preference to isolate brand strength from other market factors (e.g., price and distribution) separates it from other marketing measures.  Furthermore, previous studies demonstrated that the behavioral brand preference approach pioneered by MSW•ARS met MASB’s predetermined ten criteria of an ideal metric:

  1. Relevant:  It has been proven to capture the impact of all types of marketing and PR activities.  Over the last 45 years it has been used to measure the effectiveness of all forms of media (e.g. television, print, radio, out-of-home, digital), events (e.g. celebrity and event sponsorships), and brand news (e.g. product recalls, green initiatives).  It has also been shown to capture both conscious and unconscious customer motivations and so applies equally to rational, emotional, and mixed branding strategies.
  2. Predictive:  Its ability to accurately forecast financial outcomes has been demonstrated in a number of studies.  This includes studies comparing preference to sales results calculated from store audits, in-store scanners, pharmaceutical prescription fulfillments and new car registrations.  When applied to advertising, changes in brand preference have been proven to predict changes in the above sales sources from control market tests, split media tests, pre-to-post share analysis and market mix modeling.  In fact, Quirk’s Magazine noted over a decade ago that “this measurement has been validated to actual business results more than any other advertising measurement in the business”.
  3. Objective:  It is purely an empirical measure by nature.  No subjective interpretation is needed.
  4. Calibrated:  It has been applied to the broad spectrum of brands and categories and its correlation to sales has proven consistent across geographies.  Furthermore, it self-adjusts to the marketplace where it is collected so it has the same interpretation without any need for historic benchmarks.

MASB-FIG-02

  5. Reliable:  It has been shown to be as reliable as the laws of random sampling allow, both for brand preference gathered at a point in time and for changes over time caused by marketing activities.  The table below summarizes this consistency in measuring changes.  Changes in brand preference caused by 49 campaigns were each measured twice among independent groups of customers, and the observed variation between the pairs was compared to what would be expected from random sampling.  The ‘not significant’ conclusion confirms that the measure performs at that theoretical limit.

MASB-FIG-03

  6. Sensitive:  It is able to detect the impact of media from even a single brand-building exposure (e.g., one television ad shown once).
  7. Simple:  It is easily applied and understood.  It can be incorporated into any type of customer research, including tracking, pre-testing, post-testing, segmentation, strategy, and product concept work.
  8. Causal:  While it captures the effect of product experience, it is not driven by product experience alone.  In fact, it has proven predictive of trial for new products with which consumers have no experience.
  9. Transparent:  It doesn’t rely on ‘black box’ models or norms.
  10. Quality Assured:  Its reliability and predictive validity are subject to continuous review.
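To make the test-retest check described under “Reliable” concrete, here is a minimal sketch in Python of how observed variation between paired preference-change readings can be compared to what random sampling alone would produce. The sample sizes, proportions, and the pooled chi-square form are our own illustrative assumptions, not the actual MASB/MSW•ARS protocol.

```python
import math
import random

def reliability_chi2(pairs, n1, n2):
    """For each campaign, two independent preference readings (proportions)
    from samples of size n1 and n2.  Under pure sampling error each
    standardized difference is roughly N(0, 1), so the sum of squares
    follows a chi-square distribution with len(pairs) degrees of freedom."""
    stat = 0.0
    for p_a, p_b in pairs:
        p_pool = (p_a * n1 + p_b * n2) / (n1 + n2)            # pooled proportion
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        z = (p_a - p_b) / se                                   # standardized difference
        stat += z * z
    return stat                                                # compare to chi2(k)

# Simulated example: 49 campaigns, same true preference level both times,
# so any variation between the paired readings is sampling noise only.
random.seed(1)
n = 400
true_p = 0.30
pairs = [(sum(random.random() < true_p for _ in range(n)) / n,
          sum(random.random() < true_p for _ in range(n)) / n)
         for _ in range(49)]
stat = reliability_chi2(pairs, n, n)
# With 49 degrees of freedom the statistic should land near 49 when only
# sampling variation is present ("not significant"), far above it otherwise.
```

A statistic well inside the chi-square range supports the “as reliable as random sampling allows” conclusion; a large excess would indicate real measurement noise beyond sampling.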

To verify its suitability as the brand strength metric, MASB included an aggressive trial of brand preference in its BIV project.  A cornerstone of this endeavor was a longitudinal tracking study sponsored by six blue-chip corporations and conducted by MSW•ARS Research.  The two-year study covers one hundred twenty brands across twelve categories under a variety of market conditions.  In Part II of this article we will review several key findings from this project, which are already changing industry perceptions of measuring brand value and making brand-building investments.

The MSW•ARS Brand Preference measure can be incorporated into a wide variety of research and can even become a standard key performance indicator in your reporting, particularly in your tracking data.  In future blog posts we will discuss this and how you can easily apply it.

If you don’t want to wait then please contact your MSW•ARS representative to learn more about our brand preference approach.

Don’t Be Fooled, Ad Wearout Is Real!

June 23rd, 2015

One of the most persistent point-of-view requests we get from clients concerns wearout of their advertising. Why? Because ads are expensive to produce and media is costly. What we would all like to hear is that ads don’t wear out and that they will continue to drive brand sales at the same rate regardless of the spend placed behind them. But the reality is that ads do wear out. They reach a stage, after being seen so many times, at which their impact diminishes substantially. In fact, that is one of the accepted definitions of commercial wearout:

Commercial Wearout – Stage an advertisement reaches after being printed or aired so many times that its effect on the brand’s sales is zero or even negative.

Source: businessdictionary.com

Given this reality, the important question isn’t “Will my ads wear out?” but rather “When will my ads wear out?” Luckily, there are a variety of marketing research tools and techniques to identify wearout points across the spectrum of media channels: TV, digital, radio, and print. Pre-testing, post-testing, brand health tracking, and sales decomposition/market mix modeling can all be used. But regardless of the tool selected, three conditions must be met to properly measure wearout:

1. Execution-level granularity of both sales effectiveness and media spend. The research literature abounds with evidence that wearout occurs at the individual ad level. Two ads within the same campaign can have very different levels of wearout depending on their initial sales effectiveness and the media placed behind them. If the measurement approach does not take this into account, these two factors will be confounded, resulting in misinterpretation of the data.

Take this case study as an example. The advertiser launched a campaign of three sequentially aired television ads. The green bar in the chart indicates a base period without advertising, while the blue bars indicate when the ads were on air. The dotted line represents the moving average market share which gives an indication of sales effectiveness relative to the base period.

From this view it appears that the ads hit their wearout point midway through the campaign as indicated by the moving average flattening out at the third airing period. The implication is that the ads are well overdue for replacement.

WEAROUT-FIG-01

Here is the same case with a more granular view. Four-week periods replace the twelve-week ones. And instead of looking at the three ads in aggregate, each ad’s airing is indicated. This chart tells a very different wearout story. Rather than a flattening, it shows a very aggressive battle for market share, with ads B and C being particularly effective in driving gains for the brand. While ad B has been worn down substantially, ad C is still exhibiting some elasticity to sales.

WEAROUT-FIG-02

In the above example it is easy to see each ad’s sales effectiveness and wearout because the ads aired sequentially, the media plan was fairly flat with few hiatuses, and the advertising was so powerful that it dwarfed other factors in the marketing mix. This is unusual: the vast majority of campaigns include ads aired simultaneously, more varied media plans, and advertising that generates more modest share changes. Because of this, even the most sophisticated market mix models can have difficulty accurately assessing wearout. A useful rule of thumb is that if the model cannot provide a statistically significant sales-impact estimate at a given GRP level (e.g., a sales decomposition beta weight at 500 GRPs) for each ad unit, then the wearout estimates are likely to be distorted by this confounding. Another caveat is that even when the data is sufficient for the model to provide reasonable estimates, the analysis period is often too long for the results to be actionable.
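The granularity requirement can be illustrated with a toy decomposition. In this hypothetical Python sketch (the schedules, coefficients, and noise level are all invented), weekly share changes are regressed on per-ad GRPs. When ads air in separate windows the per-ad effects are recoverable; when they air simultaneously the design matrix collapses and only their combined effect can be estimated, which is exactly the confounding described above.

```python
import numpy as np

rng = np.random.default_rng(0)

weeks = 24
# Per-ad weekly GRPs: three ads aired sequentially, well separated in time.
grps_seq = np.zeros((weeks, 3))
grps_seq[0:8, 0] = 100    # ad A on air
grps_seq[8:16, 1] = 100   # ad B on air
grps_seq[16:24, 2] = 100  # ad C on air

true_beta = np.array([0.010, 0.025, 0.020])  # share points per GRP (invented)
share_change = grps_seq @ true_beta + rng.normal(0, 0.2, weeks)

# Least-squares sales decomposition at the individual-ad level.
beta_hat, *_ = np.linalg.lstsq(grps_seq, share_change, rcond=None)

# If instead all three ads had aired simultaneously, their GRP columns
# would be identical: the design matrix becomes rank-deficient and the
# per-ad effects are unidentifiable; only their sum can be recovered.
grps_sim = np.tile(grps_seq.sum(axis=1, keepdims=True) / 3, (1, 3))
print(np.linalg.matrix_rank(grps_seq), np.linalg.matrix_rank(grps_sim))  # 3 1
```

Real campaigns fall between these extremes, which is why partially overlapping schedules inflate the uncertainty of per-ad estimates even when the model technically runs.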

2. Metrics correspond to sales/share changes. An advantage of decomposition modeling is that the connection to sales is inherent in the method. By contrast, the other research approaches use attitudinal measures (e.g., message communication, purchase intent, ad liking/enjoyment) or behavioral measures (e.g., changes in brand preference, ad/brand recall) as stand-ins for sales. Typically, attitudinal measures are ineffective at measuring wearout because their over-time relationship to sales response is weak. For example, while an ad’s ability to communicate a certain key message can be an important contributor to sales effectiveness, its ability to do so at one hundred GRPs is roughly the same as at five hundred GRPs.

This pattern holds true even for some behavioral metrics. The table below shows the results for 32 television ads tested before airing and then again after airing. Two behavioral metrics were investigated. One is proven ad and brand recall: respondents were incidentally exposed to advertising within consistent television programming and later asked both to describe the ad and to name the brand, verifying their recollection of it. The other is the CCPersuasion metric: the observed shift in brand preference among the competitive set given an actual acquisition opportunity (in this case a prize drawing). This is much different from attitudinal persuasion (aka purchase intent), which is simply a verbal commitment to try the product at a later date. The amount of airing between the two tests varied from a minimum of 232 GRPs to a maximum of 2,806, with a median media weight of 1,018.

WEAROUT-FIG-03

The first thing to note is that the recall measure showed very little variation between the pre-test and post-test regardless of the media spend level. CCPersuasion, on the other hand, was very sensitive to airing, with all thirty-two ads showing a decline. On average, CCPersuasion lost 47% of its value between tests, with the degree of drop in direct relationship to the amount of airing. This drop in CCPersuasion has proven consistent across more than one hundred fifty cases covering one hundred eighteen brands competing in fifty-one categories.
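The reported drop can be turned into a rough planning curve. As a purely illustrative sketch, the Python snippet below assumes an exponential decay of retained CCPersuasion with cumulative GRPs (the exponential form is our assumption, not a finding from the study) and calibrates it to the reported medians: 47% of value lost at a median 1018 GRPs between tests.

```python
import math

# Reported medians from the 32-ad study: 47% of CCPersuasion lost
# after a median 1018 GRPs of airing between tests.
median_grps = 1018
retained_at_median = 1 - 0.47   # 53% of the original value retained

# Illustrative assumption: retained effectiveness decays exponentially
# with cumulative GRPs, retained = exp(-k * grps).  Calibrate k so the
# curve passes through the reported median point.
k = -math.log(retained_at_median) / median_grps

def retention(grps):
    """Fraction of initial CCPersuasion retained after `grps` of airing,
    under the assumed exponential decay curve."""
    return math.exp(-k * grps)

# Projected retention at the study's minimum and maximum airing levels.
print(round(retention(232), 2), round(retention(2806), 2))
```

A curve like this is only a sketch, but it shows how a calibrated decay constant lets a planner ask “how much effectiveness is left after X more GRPs?” rather than treating wearout as a binary event.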

3. Isolation from other in-market effects. One final area which impacts the ability to properly isolate wearout is variation in programming context between the measurement times. For example, if a media plan uses more expensive, higher engagement placements early in the rotation and less engaging placements later, then the drop in effectiveness seen will be a combination of wearout and the shift in placement quality. And since program engagement can vary significantly among shows with similar viewership ratings, even plans with consistent media placements will experience this variation. In-market tracking and post-testing systems are especially vulnerable to such deviations, oftentimes with ads exhibiting what appears to be a “spontaneous recovery” from wearout when in reality it is simply a shift to more engaging context.

WEAROUT-FIG-05

In short, it is easy to be fooled by these factors, even coming to the conclusion that the wearout phenomenon is either unpredictable or, on the opposite side of the coin, rare or non-existent. But by employing a wearout monitoring program that avoids these pitfalls the quantification of wearout becomes as straightforward as the concept itself. And the rewards for doing so are high. In one published case our client experienced a five-fold improvement in campaign ROI just from managing wearout!

If you would like more information on managing wearout to improve advertising return, please request our white paper Outlook® Media Planner & Forecasting Tool – Wearout Retrospective and Application.


Is Brand Preference Marketing’s Higgs Boson?

November 20th, 2014

Chances are you have heard of the Higgs Boson, an elusive elementary particle that physicists spent the last fifty years and billions of dollars searching for.  Reports of its potential discovery have captured headlines around the globe.  If verified, it will not only help cement our mathematical understanding of how the universe works but also set the trajectory for future technological advances.

What has this got to do with the marketing discipline?  For the last fifty years we have been chasing our own elusive particle: an accurate metric that quantifies the financial value a brand provides.  Without it, the mathematics is incomplete for financial forecasting, planning, justifying marketing investment, or improving marketing return.

But 2015 may be the year this changes, due to the work of the Marketing Accountability Standards Board (MASB).  This group of marketing and financial practitioners and academics has been pursuing aggressive, “game changing” projects not only to create general principles and methodological standards for brand valuation, but to prove them out in brand “trials” that serve as practical examples of their application.  Based on prior research, MASB chose the MSW•ARS brand preference measurement approach as the cornerstone of its two-year brand investment and valuation trials.  The first installment of this research was presented at the group’s summer summit in August, and the initial results have been making waves in industry news.

Mathematics of Brand Preference

Just as the equations of physics hinted at the existence of the Higgs Boson, the equations of marketing hinted at brand preference.  For years marketers have dissected sales data and recognized that maintaining market share and price point is critical to maintaining revenue streams.

masb-image-03

But this just pushes the question a level deeper to: What drives a brand’s unit market share?  Economic theory provides two of the key elements, price relative to competing products and distribution.  Simply put, on average the less costly in terms of time and money a product is to obtain, the higher the demand for it will be.  But people are not economic robots.  They will oftentimes choose a more costly option if they feel that it will provide them a decisive benefit, even if it is a purely emotional one.  Thus it is the breadth and strength of consumers’ preference which set the base level for a brand’s unit market share with distribution and relative price acting as modifiers to it.

masb-image-04

So how effective is brand preference in explaining a brand’s unit market share?  In the initial MASB trial analysis, six months of brand preference, unit share, price premiums, and distribution were analyzed across twelve participant categories containing one hundred nineteen brands.  The categories examined included a diverse mix of product types: prices from thirty cents to thirty thousand dollars, impulse buys to deliberate purchases, consumables to durables.  Across these categories brand preference accounted for seventy-one percent of the differences between brands, while effective distribution and price premium added another fourteen percent.

masb-image-02
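The shape of such a cross-brand analysis can be sketched in Python. The data below is synthetic and every coefficient is invented (this is not the MASB dataset); the point is simply how one measures the variance in unit share explained by preference alone, and the increment added by distribution and price premium.

```python
import numpy as np

rng = np.random.default_rng(42)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Synthetic cross-section of 119 brands (all numbers invented):
n = 119
preference = rng.uniform(0, 40, n)        # brand preference share, %
distribution = rng.uniform(50, 100, n)    # effective distribution, %
price_premium = rng.normal(0, 10, n)      # % price vs category average

# Assumed data-generating story: preference sets the base level of unit
# share, with distribution and relative price acting as modifiers.
unit_share = (0.9 * preference
              + 0.08 * distribution
              - 0.10 * price_premium
              + rng.normal(0, 3, n))

r2_pref = r_squared(preference.reshape(-1, 1), unit_share)
r2_full = r_squared(np.column_stack([preference, distribution, price_premium]),
                    unit_share)
# Preference alone explains most of the cross-brand variation;
# distribution and price premium add a smaller increment on top.
print(round(r2_pref, 2), round(r2_full, 2))
```

The MASB trial reports exactly this kind of split: a large share of variance from preference, with a smaller increment from distribution and price premium.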

With this milestone achieved, the next step is already underway: incorporating brand preference into financial and marketing forecasting and planning applications.  More details on these endeavors will be forthcoming in future installments.

Please contact your MSW•ARS representative to learn more about how brand preference is embedded throughout all of our research solutions.