The “5 CMO Objectives” According to the AMA / Deloitte and How MSW Addresses Each

March 14th, 2022

The 26th edition of the annual CMO Survey, conducted in February 2021 for the American Marketing Association by Deloitte, found that the importance of marketing had increased during the pandemic.  CMOs said they were focused on brand building and reported 5 specific “Objectives”.

MSW Research directly addresses each of these “5 CMO Objectives” with unique measurements that enhance your insights process, all founded upon the evidence-based MSW Predictive Brand Growth Marketing Model™.

Below are the “5 CMO Objectives” and how MSW addresses each with a Philosophical Framework, specific Applications/Products/Services, and the Evidence-Based Proof that supports it:

“CMO Objective” Number 1:  Building Brand Value That Connects with Customers

Two elements of the MSW Marketing Model address this CMO “Objective”:

1:  Brand Relationships: existing brand relationships drive Brand Preference.  MSW uses a segmentation model that places every individual into one of eight groups for each brand in the category.  We utilize a relationship decision tree to identify the strength of the brand relationship with customers.

2:  Brand Perceptions: all successful brands have a set of distinctive brand assets (sensory cues: color, logo, design, character, jingle, etc.) that aid memory encoding and act as signals to enhance availability.  Additionally, all brands have a differentiated positioning (a reason to be).

Framework and Research Objectives:

  • Category whitespace and market need priorities – Decoder™
  • Brand lift opportunities in competitive context – BrandScape™
  • Preference shift linked to short-term sales and long-term brand equity – The Brand Strength Monitor™
  • Ongoing campaign and individual message/medium brand lift in competitive context – Advertising Performance Monitor™
  • Pre- and post-campaign message and media effectiveness – Touchpoint™

Applications/Products/Services:

  • A&U – Decoder™
  • Brand Purpose – BrandScape™
  • Brand Equity Tracking – The Brand Strength Monitor™
  • Early-Stage Message Screening – Sifter™

Evidence-Based Proof That Supports This:

  • Of the points that support this objective, one stands out because of its validation as a predictive indicator of brand health and sales: our RDE Analytic Framework™, which measures Relevance, Differentiation and Emotion.  RDE™ has been proven to grow brands over 17 years of use, with 3,500,000+ individual brand evaluations, across 400+ categories, for 2,000+ brands, in 44 countries.

“CMO Objective” Number 2:  Increasing Awareness

Every piece of brand communication needs to build awareness. At MSW, we have proven methods to effectively measure awareness and determine each communication’s contribution to the memory structures that drive saliency and brand association.

Framework and Research Objectives:

  • Ongoing campaign and individual message/medium awareness and preference lift in competitive context – Advertising Performance Monitor
  • Pre- and post-campaign message and media effectiveness – Touchpoint

Applications/Products/Services:

  • Development / Copy Testing – TouchPoint™
  • Advertising Performance Tracking – APM™

Evidence-Based Proof That Supports This:

  • Saliency, as measured by Top-of-Mind Awareness, is a stronger predictor of sales than Aided Awareness (average R² = 0.70 vs. 0.44). Simply improving a brand’s TOM Awareness can often lead to an increase in market share; we see this in the correlation between TOM Awareness and sales.  TOM Awareness is usually the second most accurate predictor of sales after our CC Brand Preference measure, which is independently proven to correlate with sales at 0.94.

“CMO Objective” Number 3:  Acquiring New Customers

Evidence shows that the primary driver of brand growth is penetration; all other growth mechanisms are secondary. New customers can be acquired through promotional activity, but these gains tend to be short-lived. A more successful longer-term strategy is to invest in brand building, and evidence shows that message quality is the most impactful element in explaining changes in market share.

Framework and Research Objectives:

  • Customer Acquisition Forecast and sales from advertising

Applications/Products/Services:

  • Development / Copy Testing – TouchPoint™
  • Advertising Performance Tracking – APM™

Evidence-Based Proof That Supports This:

  • Research conducted by MSW on its database of advertising has found that 52% of changes in market share can be explained by ad quality.  Media explains 13% of market share changes, and a variety of other factors explain the remaining 35%.  The importance of ad quality is undeniable, and our validated, proven, and in two cases patented advertising development tools, such as CC Brand Preference™, CC Brand Persuasion™, RDE Analytic Framework™ and Outlook® Media Mix Optimization & Market Share Forecast Model™, deliver high-quality advertising.

“CMO Objective” Number 4:  Retaining Customers

More CMOs cited this as their primary activity than any other in the 2021 CMO Survey.  The pandemic was a shock to the system, but brand loyalty has been declining for years, along with trust in brands.  MSW’s brand tracking studies show that approximately half the consumers in any given category are not loyal to any one brand.

An entire industry has been built to measure Customer Satisfaction, and to help companies improve their Satisfaction scores, yet customer loyalty continues to decline.  This has led people to question the value of customer satisfaction and to recognize that the real goal is Loyalty.

For MSW Research, the key to addressing this objective is the ability to accurately measure human emotional response to brands and their messaging, combined with measurement of various brand relationship segments.

Framework and Research Objectives:

  • Loyalty shifts among various brand relationship segments – The Brand Strength Monitor
  • Preference shift linked to traffic, sales, etc. – The Brand Strength Monitor

Applications/Products/Services:

  • Brand Equity Tracking – The Brand Strength Monitor™
  • Brand Franchise Analysis™
  • Persuadables Segmentation Analysis℠

Evidence-Based Proof That Supports This:

  • Our data shows that Brand Loyalty is affected by how each brand interaction makes consumers feel.  The MSW brand relationship model captures attitudinal loyalty and allows us to understand the drivers of loyalty. A battery of emotional response measurement tools drills down to provide exact direction to activate these drivers.

“CMO Objective” Number 5:  Improving Marketing ROI

Most important of all, brand and marketing investment must create brand preference over competitive offerings, which propels sustainable, longer-term financial value and growth.

Framework and Research Objectives:

  • Continuous measurement – The Brand Strength Monitor
  • Point-in-time measurement – Touchpoint 360

Applications/Products/Services:

  • Brand Equity Tracking – The Brand Strength Monitor™
  • Outlook® Media Mix Optimization & Market Share Forecast Model™

Evidence-Based Proof That Supports This:

  • MSW Brand Preference explained more of the variation in sales than other classic market research metrics in a study of 120 brands in 12 categories over 18 months.  MSW Brand Preference is a measure of Long-Term Brand Equity, and it explains most, but not all, changes in brand sales; the remaining Market Gap is explained by other factors.
  • The link between MSW Brand Preference and Sales has been presented to the ARF and AMA, written about in The Economist, The International Finance Review, The Journal of Brand Management and CFO Magazine, and has been discussed with The International Accounting Standards Board and incorporated into the ISO definition of Brand Equity.

Conclusion

The 5 CMO “Objectives” can be summarized simply: all brands have potential, but not all are living up to that potential.  We help brands identify the reasons for the gap between their Performance and their Potential and provide guidance to help close it.

In addition to helping brands meet their current potential we uncover opportunities for brands to expand their future potential.


Unusual Statistical Phenomena, Part II: Stat Testing of Percentages

January 24th, 2022

Sometimes when looking at the results from survey data, we see something that makes us say ‘huh?’ or ‘that doesn’t look right’. When the odd results persist after verifying the data were processed correctly (always a good practice), there is typically still a logical answer that can be uncovered after doing some digging. Sometimes the answer lies with something that we will call ‘unusual statistical phenomena.’  This is part 2 of a series that will look at some of these interesting – or confounding – effects that do pop up now and then in real survey research data.

This time we will look at an unusual phenomenon that can occur when doing something typically considered fairly mundane – testing for statistical significance between percentages. An example will help to illustrate this phenomenon which periodically causes us to question stat testing results.

Let’s say we have fielded the same survey for two different brands. One part of the survey collects respondent opinions of the test brand using a battery of attribute statements with a 5-point agreement scale. The base size for each survey was 300.

Stat testing was conducted between the two brands’ Top Box percentages on each of the attribute statements. However, some of the results are questionable. Specifically, for the attribute “Is Unique and Different,” Brand B’s score was higher than Brand A’s by 4 percentage points, which was statistically significant at the 90% confidence level (denoted by the “A” in the chart below); while for the attribute “Is a Brand I Can Trust,” Brand B’s score was higher than Brand A’s by 6 percentage points, which was NOT statistically significant at the 90% confidence level. How could this be?

How can a difference of 4 points be statistically significant while a difference of 6 points is not, even with the same base sizes? To understand how this can happen, let’s first look at the basics of how a statistical test for comparing percentages works.

First, a t-value is computed according to this formula:

$$ t = \frac{p_1 - p_2}{SE_{\text{difference}}}, \qquad SE_{\text{difference}} = \sqrt{\frac{p_1(100 - p_1)}{n_1} + \frac{p_2(100 - p_2)}{n_2}} $$

where $p_1$ and $p_2$ are the two percentages (on a 0–100 scale) and $n_1$ and $n_2$ are their base sizes.

Then this t-value is compared to a critical value. If the t-value exceeds the critical value then we say that the difference between the percentages is statistically significant.  The critical value is based on the chosen confidence level and the base sizes of the samples from which the percentages were derived.

In our example, we chose the 90% confidence level for both statistical tests and the base sizes are the same, so the critical value for both tests is the same. We also know the differences between the percentages (the numerator of our equation): the difference of 4 produced a t-value that exceeded the critical value, while the difference of 6 did not. Therefore, the issue must lie with the denominator, the Standard Error of the Difference.

Let’s next examine what a Standard Error represents. Our surveys were fielded among a sample of the overall population. If we sample among women 18 to 49 in the United States, we will infer that our results are representative of the entire population of interest, which is all women 18 to 49 in the United States. However, it is unlikely that the measures we compute from the sample (such as the percentage that say Brand A “is a brand I can trust”) will be exactly the same as the percentage would be if we could ask everyone in the entire population of interest.  There is some uncertainty in the result because we are asking it of only a subset of the population. The Standard Error is a measure of the size of this uncertainty for a given metric.

In our equation, the denominator is the Standard Error of the Difference between the percentages. While not precisely correct, the Standard Error of the Difference can be thought of as the sum of the individual Standard Errors for the two percentages being subtracted (the actual value will be somewhat less due to taking squares and square roots). As the graph below illustrates, the Standard Error for a percentage is a function not only of the sample size, but also of the size of the percentage itself.
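
In exact form, for two independent samples the individual Standard Errors combine in quadrature:

$$ SE_{\text{difference}} = \sqrt{SE_1^2 + SE_2^2} $$

Because the squares are summed before the square root is taken, the result is always somewhat less than the simple sum $SE_1 + SE_2$.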

Specifically, for any given sample size the Standard Error is largest for values around 50% and decreases as values approach either 0% or 100%. For a base size of 100 (the dark blue line), the Standard Error is close to 5 for percentages near 50%, but falls to around 2 for very small or very large percentages.  You can think of it as being harder to estimate the incidence of a characteristic in a population when around half the population has that characteristic than when almost all (or almost none) of the population has it.
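
As a quick check of that curve using the formula above: the Standard Error of a single percentage is $SE = \sqrt{p(100 - p)/n}$, so at a base size of 100 it is $\sqrt{50 \times 50 / 100} = 5.0$ when $p = 50\%$, but only $\sqrt{4 \times 96 / 100} \approx 1.96$ when $p = 4\%$.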

In our example, the percentages for Is a Brand I Can Trust are close to 50%, so at a base size of 300 the individual Standard Errors would each be a little under 3. In contrast, the percentages for Is Unique and Different are around 10%, so at a base size of 300 the Standard Errors would each be around 1.7.  That’s a big difference!

It follows that the Standard Error of the Difference for Is a Brand I Can Trust would be much larger than for Is Unique and Different. In fact, the actual values are 4.08 for Is a Brand I Can Trust and 2.34 for Is Unique and Different. Again, a big difference. If we divide the differences in the percentages by these values for Standard Error of the Difference, we get t-values of 1.47 and 1.71, respectively. Given the critical value is approximately 1.65, we see that the t-value for the difference of 6 is below the critical value (hence not statistically significant); while the t-value for the difference of 4 is above the critical value (hence is statistically significant).
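
For readers who want to verify the arithmetic, here is a minimal Python sketch. The exact Top Box percentages are not reported above, so the values used (47% vs. 53% for trust, 7% vs. 11% for uniqueness) are hypothetical choices consistent with the Standard Errors and t-values described:

    import math

    def t_value(p1, p2, n1, n2):
        # p1, p2 are percentages on the 0-100 scale; n1, n2 are base sizes
        se1 = math.sqrt(p1 * (100 - p1) / n1)
        se2 = math.sqrt(p2 * (100 - p2) / n2)
        se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
        return abs(p1 - p2) / se_diff, se_diff

    # Hypothetical Top Box scores chosen to match the reported Standard Errors
    t_trust, se_trust = t_value(47, 53, 300, 300)   # "Is a Brand I Can Trust"
    t_unique, se_unique = t_value(7, 11, 300, 300)  # "Is Unique and Different"

    print(f"Trust:  SE of difference = {se_trust:.2f}, t = {t_trust:.2f}")
    print(f"Unique: SE of difference = {se_unique:.2f}, t = {t_unique:.2f}")
    # Prints roughly 4.08 / 1.47 and 2.33 / 1.72, in line with the values
    # above; only the 4-point difference clears the ~1.65 critical value.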

Hopefully this takes some of the mystery out of stat testing and helps in understanding why what can appear to be anomalous results may actually be correct.


Do you ever look at your data and say, “huh?” The Unusual Statistical Phenomenon of Simpson’s Paradox

November 2nd, 2021

Sometimes when looking at the results from survey data, we see something that makes us say “huh?” or “that doesn’t look right”.  When the odd results persist after verifying the data were processed correctly (always a good practice), there is typically still a logical answer that can be uncovered after doing some digging.  Sometimes the answer lies with something that we will call “unusual statistical phenomena.”  This is part 1 of a series that will look at some of these interesting – or confounding – effects that do pop up now and then in real survey research data.

This time we will look at Simpson’s Paradox.  And we aren’t referring to the fact that Bart Simpson never seems to age while the rest of us do.  It is actually a phenomenon first described by the statistician Edward H. Simpson in 1951.

It’s easiest to understand this phenomenon through an example.  So, let’s say that we have two ads that have been on air, ad A and ad B.  In our tracking survey among adults 18 to 65, we ask respondents whether they recognize having seen each ad on air.  Earlier in the survey, we ask Purchase Intent for the product featured in each of the two ads.  From these results, we compare Top Box Purchase Intent among respondents who recognized each of the two ads.  The results in the table below show somewhat higher Top Box Purchase Intent for Ad A:

However, the client is also interested in seeing the results among each of two age groups: age 18 to 39 and age 40 to 65.  When we table those results, we find something that just doesn’t make sense.  Purchase Intent is slightly higher for Ad B among both age groups – a reversal from the overall results.  How can that be?

After verifying with data processing that the data are correct, we have our team dig into the data to figure out what is going on.  Finally, an explanation is found.

Ad B was aired heavily among programming targeted to a younger audience, while Ad A was primarily aired in general interest programming – which skews to a slightly older audience.  Hence Ad B had much higher recognition among the younger age group – and as a result, a much higher proportion of young people in the set of respondents among whom purchase intent was calculated.

The table of base sizes shown below reveals this imbalance. When combined with the younger age group’s more skeptical nature (and lower results) when it comes to Purchase Intent – especially in our category – the apparent anomaly is explained.

This is an example of Simpson’s Paradox.  It is a phenomenon in which individual subgroups all show the same trend in results, but the trend reverses when the subgroups are combined.  This occurs when there is a confounding variable that causes an imbalance in base sizes such as we saw above.  In our example, the confounding variable was the differing recognition levels for the ads among the two age groups.
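
A minimal Python sketch with hypothetical counts (the actual tables from this example are not reproduced here) shows how the reversal arises mechanically:

    # (top_box_count, recognizer_base) for each ad within each age group;
    # all figures below are illustrative, not the survey's actual data
    data = {
        "Ad A": {"18-39": (16, 50), "40-65": (130, 250)},
        "Ad B": {"18-39": (90, 250), "40-65": (28, 50)},
    }

    for ad, groups in data.items():
        by_age = {age: f"{top / base:.0%}" for age, (top, base) in groups.items()}
        total_top = sum(top for top, _ in groups.values())
        total_base = sum(base for _, base in groups.values())
        print(ad, by_age, "overall:", f"{total_top / total_base:.0%}")

    # Ad A: 32% and 52% by age group, 49% overall
    # Ad B: 36% and 56% by age group, 39% overall
    # Ad B leads within each age group yet trails overall, because its
    # recognizer base is dominated by the lower-scoring younger group.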

Simpson’s Paradox shows us the importance of knowing and understanding our data and keeping an eye out for the kinds of confounding factors that could mislead us if we don’t account for them.
