Effective Ethnic Advertising Results From Understanding the Cultural Impact on Your Brand

April 21st, 2015

With purchasing power estimated to reach $1.5 trillion this year, the U.S. Hispanic segment has become a key target for many advertisers. Our studies show that Hispanics tend to be more responsive to advertising than their non-Hispanic counterparts in terms of recall (54% higher Related Recall)…

 fig-01

  …and persuasion (50% more persuasive results)…

 fig-02

  …creating a very attractive scenario for brands poised to grow.

However, even with an understanding of Hispanic diversity, brands find that advertising to the Hispanic population is challenging. It is tempting to assume that a brand's equity and positioning perform similarly across the different demographic segments. Avoiding that assumption becomes a key element for success, particularly if the company plans to adopt a Total Market strategy.

Know Where You – and Your Competitors – Stand in the Category

Advertising tactics should vary depending on the brand's position in the market, so understanding where your brand's preference falls within the category across the different target segments becomes a priority when formulating a brand's communication plan. The example below, using MSW•ARS Brand Preference data for the U.S. toilet tissue category among females, illustrates how preference among the top five brands changes when comparing the Non-acculturated, Semi-acculturated and General Market female segments. While Charmin is the consistent leader across all three groups, Scott's secondary position is eroded among Semi-acculturated Hispanic females by Angel Soft. Similarly, preference for the Quilted Northern brand falls back among Semi-acculturated Hispanic females, as this group claims preference for value-oriented store brands such as Costco's Kirkland and Walmart's White Cloud.

fig-03

Understand What You – and Your Competitors – Stand For in Hispanics' Minds

When developing a communication strategy, great caution should be exercised in understanding the type and strength of equity a brand – or a particular reason-to-believe (RTB) included in the selling proposition – holds in the countries from which Non-acculturated Hispanics originate. Hispanics may lack an understanding of what the brand represents, or understand it differently, based on the communication in their, or their parents', country of origin. Advertising may assume similar brand equity across the different cultural groups when education about the brand's characteristics is needed instead.

In Mexico, for example, there is limited awareness of the damage that ammonia-based colorants cause to hair. As a result, among Non-acculturated Hispanic women, advertising that highlights a "reduced damage" benefit tends to be less persuasive than it is among other segments, and less persuasive than advertising that communicates other functional benefits, such as tint duration.

Another example of this dynamic comes from an ad for the Tecate beer brand that MSW•ARS Research tested quantitatively among the Hispanic market using the TouchPoint solution. In the ad, a man in a bar who remains stoic as several attractive women pass by is rewarded with a Mexican-style fiesta, complete with stereotypical characters such as a luchador. While the Hispanic males who participated in the study found the creative funny and engaging, the behavioral, non-cognitive results showed the ad failed to generate any change in brand preference toward Tecate among men.

A review of the cognitive data indicated that men focused their attention on the fiesta, the attractive women and the luchador characters, all of which effectively tied the ad back to a Mexican beer. As a result, Mexican beers showed the strongest shift in preference (CCPersuasion) when compared to beers from other countries, as identified below:

 

fig-05

Unfortunately for Tecate, other Mexican beer brands, such as Corona and Modelo, enjoyed stronger brand preference among Hispanic men. So while linking the advertising to Mexican cultural elements was effective at switching beer purchasers over to "Mexican brands," it was not effective enough to drive consumers to one particular brand among those imported from Mexico. The Mexican beers with the highest preference, Corona and Modelo, were the ones that capitalized on the ad, while the advertised brand, Tecate, saw flat results.

 fig-06

A stronger understanding of the Hispanic male beer consideration set, including brand preference, would have revealed that advertising for Tecate needed not only to cue the Mexican element but also to incorporate strong Tecate branding in order to avoid potential misattribution.

Learn why

Developing effective advertising targeted to Hispanics, or in which Hispanics are an important segment, requires expertise and constant monitoring throughout the different stages of the creative process. Our Brand Building Portfolio offers a consistent analytic philosophy to drive clear, incremental improvement at each step with an end-to-end perspective.

Please contact your MSW●ARS representative to find out how our products and research can help you develop effective advertising for the Hispanic market.

3 Keys to Balancing Technology With Traditional Qualitative Research

February 28th, 2014

The use of technology and social media in qualitative research over the past few years has undoubtedly been a welcome addition to the researcher's toolbox. Indeed, we're a huge proponent and regularly incorporate such methods into our projects. However, it's crucial not to let the buzz over technology and social media overshadow the need for solid research design, or to diminish the value that can be gained through more traditional qualitative approaches. In spite of all the hype over the use of technology, the 2014 GreenBook Research Industry Trends Report indicates that the most widely used qualitative methods continue to be in-person focus groups, in-person IDIs and telephone IDIs, with no significant change in use noted for any type of qualitative method from the previous year. So we are not advocating one approach over the other, but rather making the case that there is a place for all of these tools, as long as they are being used for the right reasons.

1)     Rather than pitting technology against traditional methods, or viewing these varied approaches as an either/or proposition, consider all approaches as viable and complementary

  • An increasing amount of qualitative consists of a hybrid of methods vs. a singular approach
  • We encourage experimenting with new methods in order to better understand the value they offer
  • There's always room for new tools; it's a matter of knowing how and when to use them

2)     Make certain that the research objectives and requirements are driving the methodology… not the other way around

  • Remember, garbage in-garbage out
  • Fairly weigh the pros and cons of each approach, and determine which will best suit the research, as opposed to force-fitting a method for no other reason than its novelty

3)     Above all, don’t overlook the skill set required of the qualitative consultants conducting the research, regardless of the chosen method

  • The need for a solid foundation and understanding of qualitative design
  • The mindset that "anyone can conduct focus groups" is as false for technology-based methods as it is for traditional approaches
  • The importance of asking the right questions and knowing how to listen
  • The ability to extract valuable insights is where the true value lies
  • Working with researchers who understand people more than technology

Following are a few examples which illustrate how technology and traditional methods can co-exist, and when one approach may offer advantages over the other:

Mobile Ethnography:  Mobile technology allows participants to self-report "in the moment," communicating via any combination of text, audio and video from whatever environment the research calls for. While this method may offer the benefit of capturing a person's feelings and experiences in situ, there may be other behaviors or actions that the respondent is not capturing or reporting, or is possibly not even aware of, that a trained ethnographer would notice. Thus, immediacy and speed may be gained at the loss of small but very telling details, as what people don't say and don't do can be some of the most valuable information gained in an ethnographic project.

We might suggest that a combination of shopping trips with and without a researcher present may provide balance, as learning gained on the assisted trips may help explain behaviors noted on the self-reported ones.  Or, self-reported trips followed by either in-person or webcam interviews allowing for further probing and exploration might also be considered.

Focus Groups/IDIs:  Both face-to-face and online interviews (real time or bulletin board) have their pros and cons. Deciding which route to follow may depend on a number of factors, such as the ease or difficulty of recruiting qualified respondents, budget and time constraints, and geography, but the key determinant should be the objectives of the research.

A relatively simple, straightforward concept screen or evaluation can easily be handled by any number of online platforms offering markup tools, which allow participants to view, critique and comment on concepts without being influenced by others, and then allow for discussion. However, a project that is more exploratory in nature, where the sharing and building of ideas is important, or where having people with different views challenge each other in a more naturally flowing conversation is critical, would better lend itself to face-to-face groups.

While both of these scenarios could be handled through either method, for the latter example the benefit of being able to read non-verbal cues such as facial expressions or body language, and hear voice intonation would favor a more traditional group approach.

Social Media:  Social media is too big to be ignored as a source of real-time information for companies and researchers, but caution needs to be taken in terms of how it is used as a qualitative tool and how it influences decision making. One of the primary concerns is not knowing enough about the people providing commentary, or having the opinions of a few speak for many. Research on social media shows that the majority of people use it for consumption without actively contributing content or interacting with others, and while there may still be valid learning gained from those who do participate, researchers need to be aware of the bias that exists. And while there are numerous text aggregation and analytic tools that trawl popular social sites, is there one that can truly interpret language nuances and understand the context in which comments are made: incomplete and grammatically incorrect sentences, sarcasm, humor, anger, irony?

At this early stage, social media content may be most valuable in providing fodder for further qualitative exploration, through either tech-based or traditional methods: developing hypotheses, identifying the language used around brands or categories, bringing potential problems with products or services to light, and raising other questions that may prompt meaningful discussion on key issues. We advise caution, however, in viewing social media research as a stand-alone qualitative tool.

MSW●ARS recognizes the value technology offers, and is continually experimenting with new methods and tools, without losing sight of the value brought by tried and true qualitative approaches.  Please give us a call to discuss your qualitative needs, and allow us to recommend an approach that utilizes the best of both worlds.

 

Categories: Brand Planning, Qualitative

Brands lament mobile measurement, but options abound.

November 29th, 2011

I was reading an article on Mobile Marketer recently about a round table discussion at the Mobile Marketing Forum in Los Angeles.  During the discussion, brand representatives from Coca-Cola, Microsoft, ABC and AOL described their wishes and requirements for measuring a broad spectrum of brand mobile efforts, including apps, ad campaigns, even SMS.

At first glance, the main requirement cited seemed to be a centralized dashboard by which mobile efforts could be measured and given an ROI currency, much like what brands have on the web.

It's an ironic conversation if you think about it. Certainly, the digital web has established a myriad of different currencies that help quantify ROI for ad campaigns, branded sites and even exposure to interactive elements.

However, in my opinion, it may be comparing apples to oranges. While the web and mobile certainly share some overlap in how they interact with consumers, the disparities between the two are significant, and the vehicles they use are often mutually exclusive of each other.

A common assertion, one even mentioned at this particular roundtable, is that in order to convince people to spend on mobile, one must be able to measure the outcome, preferably in one place such as a dashboard or common reporting mechanism.

While there is no panacea that will allow brands and media outlets to measure every mobile effort they may undertake (branded apps, ad campaigns, SMS marketing), one thing is clear: solutions have been, and are, in place to facilitate the types of measurement being demanded.

Within the thread of discussion at the roundtable, it was apparent that those in attendance had the desire for a unified ‘all in one place’ dashboard approach to measuring mobile success. While I applaud the desire to have clear and concise information that spans across as many mediums as possible, it may not be entirely possible to “mix measures” between mobile mediums such as apps, mobile ad campaigns, or other branded efforts, especially when you consider that in many cases you’re measuring different types of movement along the consumer axes of perception, desire, and intent.

Someone at the roundtable was wise to point out that defining engagement depends on the goals of the campaign. For instance, an ad campaign on a mobile device might have the goal of driving site traffic, disseminating information about a new product or service, or driving adoption of another mobile vehicle, such as a mobile application.

In some cases, one mobile action drives another. Take the example where a consumer is exposed to an ad campaign, that campaign is for a branded app, and the branded app's purpose is to drive interaction with the brand and improve positive perception of it. The line becomes blurry when trying to measure the effectiveness of either: you might be able to get at the ad campaign's success at driving adoption of the app, and you might even be able to get at the app's success at improving consumer perception of the brand, but how do you chain the two together?

Here at MSW, we have put a lot of time, thought and effort into exactly how mobile can be measured most effectively, and across the widest array of mobile efforts. By carefully isolating which measurements of success constitute a positive return on investment within a brand's mobile effort, we can then begin the process of determining exactly what to measure.

With apps, obviously, engagement is king. If you build it and they don't come… fail. As anyone can tell you, that particular measurement of behavioral engagement with a mobile app is simple: unique downloaders and sessions. Fortunately for mobile, unlike the digital web, in most cases there is a direct one-to-one relationship between a unique consumer, who has only the one mobile device to which they've downloaded the app, and the app itself. On the digital web, perhaps you're having a one-to-one conversation with the unique consumer, or perhaps you're having a conversation with that consumer across several touch points, be they home, work and school computers, the mobile web, or even tablets.
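
To make those two core behavioral measures concrete, here is a minimal sketch in Kotlin. The event log, field names and event types are hypothetical illustrations, not a description of our actual platform; it simply tallies unique downloaders and sessions from raw app events, leaning on the one-device-to-one-consumer relationship noted above.

```kotlin
// Hypothetical event shape: one record per install or session start,
// reported by the app and keyed on the device identifier.
data class AppEvent(val deviceId: String, val type: String, val timestampMs: Long)

data class EngagementSummary(val uniqueDownloaders: Int, val sessions: Int)

fun summarizeEngagement(events: List<AppEvent>): EngagementSummary {
    // On mobile, one device is (in most cases) one consumer, so distinct
    // device ids on "install" events approximate unique downloaders.
    val uniqueDownloaders = events
        .filter { it.type == "install" }
        .map { it.deviceId }
        .toSet()
        .size

    // Every "session_start" event counts as one session.
    val sessions = events.count { it.type == "session_start" }

    return EngagementSummary(uniqueDownloaders, sessions)
}

fun main() {
    val events = listOf(
        AppEvent("device-A", "install", 1_000),
        AppEvent("device-A", "session_start", 2_000),
        AppEvent("device-A", "session_start", 9_000),
        AppEvent("device-B", "install", 3_000),
        AppEvent("device-B", "session_start", 4_000)
    )
    // Prints: EngagementSummary(uniqueDownloaders=2, sessions=3)
    println(summarizeEngagement(events))
}
```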

It's an ironic conversation, like I said before. On one hand you have this great demand for measurement that fits across a wide variety of different mobile efforts and that you can compare to the measurements you use on the digital web; on the other hand, the measurements you're comparing against are far less stable, less accurate, and overall less capable.

So again for apps, going beyond engagement: one of our specialties is going deeper than 'I downloaded an app' or 'I used an app.' Without discounting the importance of branded app adoption and usage, those measures really are just the tip of the iceberg. This is also where measurement itself starts to become more difficult, and perhaps tougher to get at with a consistent, dashboard-style set of measures.

All apps are inherently slightly different from each other, so while the goal of driving adoption might be consistent across all branded apps, what happens after that is highly specialized and specific to the end goals of the brand. It is for this reason, though not this reason alone, that our measurement platform was designed from the very beginning to blend behavioral measurement with attitudinal measurement within the construct of a mobile app. Behavioral data gets you those core critical measures: adoption and usage. It is also very effective, when used properly from the beginning, at measuring feature-level engagement, and this is very important.
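
As an illustration of what feature-level behavioral instrumentation can look like, here is a minimal sketch of an in-app tracker. The event shape, feature names and actions are hypothetical; this shows the general technique, not our platform's API.

```kotlin
// Hypothetical feature-level event: which consumer touched which feature, and how.
data class FeatureEvent(
    val deviceId: String,
    val feature: String,   // e.g. "store_locator", "coupon_wallet" (illustrative names)
    val action: String,    // e.g. "viewed", "used"
    val timestampMs: Long
)

// A tiny tracker that forwards each event to whatever sink the app is wired to
// (an in-memory list here; a collection endpoint in practice).
class FeatureTracker(private val sink: (FeatureEvent) -> Unit) {
    fun track(deviceId: String, feature: String, action: String) {
        sink(FeatureEvent(deviceId, feature, action, System.currentTimeMillis()))
    }
}

fun main() {
    val log = mutableListOf<FeatureEvent>()
    val tracker = FeatureTracker { log.add(it) }

    tracker.track("device-A", "store_locator", "viewed")
    tracker.track("device-A", "store_locator", "used")
    tracker.track("device-A", "coupon_wallet", "viewed")

    // Usage broken out per feature, rather than for the app as a whole.
    val usesPerFeature = log.filter { it.action == "used" }
        .groupingBy { it.feature }
        .eachCount()
    println(usesPerFeature) // {store_locator=1}
}
```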

It's very easy for an app that has a moderate to high degree of consumer adoption and engagement to carry the illusion that the app in its entirety is enjoying high levels of engagement when, more often than not, we have found that this is not the case.

In our mobile research practice, it's commonplace for us to instrument distinct mobile app features, that is, to place measurement capabilities around them, down to a very granular level.

Don't get me wrong, we've gotten our share of push-back when we've suggested embedding the capability to understand how long a consumer spends in an area of the mobile app before engaging with a feature. Perhaps that sounds too granular?

However, when you're later able to compare 'consumer idle time' on a feature, which is the time a consumer spends looking at a feature before deciding to actually use it, across multiple features within your app, the value of this very granular level of feature measurement becomes apparent.
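
Here is a minimal sketch of that idle-time comparison, assuming the same hypothetical event shape and 'viewed'/'used' actions as the earlier sketch: for each consumer and feature, take the gap between first view and first use, then average those gaps per feature.

```kotlin
// Same hypothetical event shape as the earlier sketch, repeated so this snippet stands alone.
data class FeatureEvent(
    val deviceId: String,
    val feature: String,
    val action: String,    // "viewed" or "used" (illustrative)
    val timestampMs: Long
)

fun averageIdleTimeMsByFeature(events: List<FeatureEvent>): Map<String, Double> =
    events
        .groupBy { it.deviceId to it.feature }                 // one consumer's history per feature
        .mapNotNull { (key, history) ->
            val firstViewed = history.filter { it.action == "viewed" }.minOfOrNull { it.timestampMs }
            val firstUsed = history.filter { it.action == "used" }.minOfOrNull { it.timestampMs }
            if (firstViewed != null && firstUsed != null && firstUsed >= firstViewed)
                key.second to (firstUsed - firstViewed)        // feature name to idle gap in ms
            else
                null                                           // feature never used: no idle time to report
        }
        .groupBy({ it.first }, { it.second.toDouble() })
        .mapValues { (_, gaps) -> gaps.average() }             // mean idle time per feature
```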

Perhaps your behavioral data has done a good job of suggesting that one feature within a mobile app is more popular with users than another. So, now what? Well, now you're starting to get into the area of attitudes, and we have become very good at combining insights gathered from behavioral data with attitudinal data collected via surveys to quickly get a true, holistic sense of the perceived value a mobile app has with its consumers and, hence, its subsequent impact on brand perception, intent, and the like.
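
Below is a minimal sketch of one way behavioral and attitudinal data can be joined, assuming hypothetical usage records and a 1-to-5 survey rating keyed on the same device id. The real analysis is considerably richer, but the mechanics of the join are this simple.

```kotlin
// Hypothetical behavioral aggregate: how many times a consumer used a given feature.
data class UsageRecord(val deviceId: String, val feature: String, val uses: Int)

// Hypothetical attitudinal record: a survey rating of brand perception, 1 (poor) to 5 (excellent).
data class SurveyResponse(val deviceId: String, val brandPerception: Int)

// Average stated brand perception among users vs. non-users of one feature.
fun perceptionByFeatureUse(
    usage: List<UsageRecord>,
    surveys: List<SurveyResponse>,
    feature: String
): Pair<Double, Double> {
    val usersOfFeature = usage
        .filter { it.feature == feature && it.uses > 0 }
        .map { it.deviceId }
        .toSet()

    // Behavioral and attitudinal data meet on the shared device id.
    val (users, nonUsers) = surveys.partition { it.deviceId in usersOfFeature }

    fun List<SurveyResponse>.meanPerception(): Double =
        if (isEmpty()) Double.NaN else map { it.brandPerception }.average()

    return users.meanPerception() to nonUsers.meanPerception()
}
```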

At the end of the day, engagement is a pretty common metric across most forms of mobile media. For ad impressions, you have the number of unique impressions served, click-throughs, and cumulative exposures, and these translate well over to mobile applications. Instead of impressions served, you have the notion of the unique user. Instead of click-throughs, you have feature-level engagement, in-app drive to site, in-app purchases, sessions, and so on.

It's a funny place that mobile is in today, especially when it comes to brands. It's not completely dissimilar from the wild west we saw on the digital web five to seven years ago. I remember my days at comScore, early, turbulent days when agencies tried to push brands into digital web spend and brands demanded ROI from their web efforts, but it was still just too early for them to both spend on the effort and pay for the measurement.

Mobile is a lot like that today. Make no mistake, measurement capabilities and technologies abound. We are certainly not the only ones capable of understanding who downloads a mobile app, who sees a mobile ad, or who engages with a mobile app feature. Not to toot our own horn, but I will toot and say that we're about as close as anyone to measuring the success of the mobile efforts brands happen to be making in a centralized place, that is to say: 'here are your behavioral measures,' 'here are your attitudinal measures,' 'here's how that relates to ROI.'

What's funny is that I'm not entirely convinced that providing these 'dashboards' at this early stage is such a great idea at all.

Consider this. If the digital web were akin to a football game and you were looking at the scoreboard, you would understand the measures being presented to you. You would know what down it was, who was in possession of the ball, and how much time was left in the game. But what if you also needed to know how the players felt emotionally at the time, how playing the game impacted their desire or altered their perceptions? What would your scoreboard look like then?

We are quick to offer any customer we work with the ability to look at all of the data we generate from their mobile effort, in its raw form, in aggregate form, however they like. That said, 90% of our engagements involve us translating the outcome for our clients. The data certainly isn't undiscoverable, not by any means. But a particular set of nuances emerges when you try to connect behavioral, location and attitudinal data, and it can get confusing fast.

Reading the 'scoreboard,' if you will, requires a little of what we refer to as "expert interpretation": the digital-mobile equivalent of Lewis and Clark, the ability to guide a client through a veritable cornucopia of possible insights, interpretations, and results.

Another challenge that we face here at MSW relates back to what I said about the early digital web. Brands know they need to engage in mobile; they see themselves being outpaced by other brands that adopt earlier, and they know they have to get involved. Meanwhile, agencies know this and try to get brands involved, but they end up in situations where they pitch only the cost of developing the deliverable itself, not of measuring the outcome.

It's an Achilles' heel, it's not new, and it harkens back to the old saying about insanity: taking the same steps over and over again and expecting a different result. I might draw fire for the comment, but I'll just come out and say it: if you're not willing to invest in measuring the outcome of your mobile efforts, whatever they may be, you may be best served waiting until you are willing to make that investment. Yes, mobile is more expensive than the digital web, but it is a far more intimate, direct line of communication with a brand's consumers than any other medium we've seen yet. It's worth understanding how well you've done it, and it's also worth considering that the measures of success in mobile may not align with those used in other mediums.

An app is not a webpage. A push message is not an SMS message, which is not a pop-up.

If you are successful in engaging the mobile consumer, you have been led through the front door. You can either step in, casually observing the area, making note of everything that happens, every reaction, every response, or you can make an equally grand entrance with a blindfold on. Ultimately, maybe both entrances are just as effective, but in the case of the latter, you will unfortunately never know.