Archive

Posts Tagged ‘Digital Practice’

It’s not about the clicks!

October 2nd, 2012 Comments off

Here’s a blog posting that I wrote back in early 2010.  Not much has changed in the two and a half years since writing it.  Advertisers are still struggling with digital measurement.

Originally posted April 13, 2010

Overall click-through rates [CTR] are well below 1% – so low, in fact, that comScore frequently has to report CTRs to the hundredth of a percent!  Furthermore, the people that do click are the wrong people.  Clicks are meaningless.  And yet advertisers are projected to spend $26 billion this year online [including display, search, video, and other categories of online advertising].

Since the internet became a serious advertising medium, marketers have clung to CTR as THE metric.  It was presented to them as the holy grail of performance metrics.  Compared with traditional advertising, click-throughs were a major breakthrough.  Advertisers could get reports on impressions [how many people saw their ads] and on what they viewed as “ads that worked” [clicks].  These numbers were easily measured and easily reported.  But since the very first banner was delivered, the CTR has only moved in one direction – down.  So there’s good news and bad news: the good news is that although click-through rates are still declining, they’re beginning to stabilize.  The bad news is that they’re stabilizing at about 0% – AT ABOUT NOBODY!  NOBODY CLICKS!  And even worse, among the very, very few that do click, conversions (those people that follow a click with a purchase, registration or some other measurable commitment to the brand) have declined at an even greater rate than clicks – because the wrong people are clicking in the first place.
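To make the arithmetic concrete, here is a minimal sketch of how CTR and the click-to-conversion rate are computed. The campaign numbers are entirely hypothetical, chosen only to show how small these rates really are – they are not comScore figures:

```python
def ctr(clicks, impressions):
    """Click-through rate, as a percentage of impressions served."""
    return 100.0 * clicks / impressions

def conversion_rate(conversions, clicks):
    """Share of clickers who go on to convert (purchase, register, etc.)."""
    return 100.0 * conversions / clicks

# Hypothetical campaign: a million impressions, 900 clicks, 9 conversions.
impressions, clicks, conversions = 1_000_000, 900, 9

print(f"CTR: {ctr(clicks, impressions):.2f}%")                           # 0.09%
print(f"Conversion rate: {conversion_rate(conversions, clicks):.1f}%")   # 1.0%
```

Even at these made-up numbers, the CTR only shows up at the hundredth-of-a-percent level – which is exactly why reporting it as a headline metric tells a brand almost nothing.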

Very early on in the history of internet advertising, those people that sell online advertising realized that they didn’t in fact have the holy grail; they saw the data, they knew that clicks were only moving in one direction and that didn’t look good for them. So they tried to steer their customers away from clicks and towards “the branding” ability of online advertising to try and get the major players to commit ad dollars to their medium.

Whether it was their efforts to talk about “the branding” ability or some other reason, many brand marketers have agreed and committed to a presence; clearly online is an important medium. They are spending significant monies there and, more importantly, adjusted their expectations about the return on investment metric.  They have come to realize that CTR is not the holy grail it was first presented to be.  The problem is they still get those CTR reports and haven’t embraced a new measure.

Well the root of this dilemma is that traditionally, the more expensive the medium the more advertisers invest in advertising evaluation. Television is an expensive medium and so advertisers are willing to test their ads before airing them; billboards are relatively inexpensive and so advertisers do not invest the money to evaluate them.  Their thinking is, why evaluate a billboard ad for more than it costs to create it?  Online advertising falls in this category: inexpensive to produce so why should I test it?  Advertisers used to be able to conduct a quick two cell test of their banners and whichever one garnered the most clicks was the one they would launch.  Well, when you change the conversation from “CTR” to “branding” that doesn’t work.  It also doesn’t help that people aren’t clicking so there’s nothing to measure.

So what’s an advertiser to do in late 2012?

Many advertisers already get it about click rates but are still using the wrong metrics.  Many have graduated from CTR to uniques and page views.  The problem with uniques and page views is that they’re media-buying effectiveness measures of reach and frequency, not advertising effectiveness measures.
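As a rough illustration of why uniques and page views are delivery measures rather than effectiveness measures, here is a small sketch with hypothetical numbers showing what they actually tell you – reach and frequency – and nothing more:

```python
# Hypothetical delivery numbers for a display campaign (illustrative only).
impressions = 5_000_000          # total ad impressions served
unique_visitors = 1_250_000      # unique people reached
target_population = 25_000_000   # size of the addressable audience

reach_pct = 100.0 * unique_visitors / target_population  # % of audience reached
frequency = impressions / unique_visitors                # avg. exposures per person

print(f"Reach: {reach_pct:.1f}%  Frequency: {frequency:.1f}")
# Neither number says whether the ads changed anyone's perception or intent.
```

Both figures describe how the media was delivered; neither says a word about whether the advertising worked.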

Advertising is about more than that first step.  If it wasn’t, then when TV was invented it would have immediately been followed by the “IGOTSARTTSR” (“Immediately Get Off The Sofa And Run To The Store Rate”).

The CTR (Click Through Rate) is as irrelevant a measure for big brands as the IGOTSARTTSR.  It’s not the way advertising works; just because someone sitting at home on their sofa watching TV doesn’t immediately jump off the sofa after seeing your commercial doesn’t mean it isn’t working.

Let us help:

  • Evaluate the effectiveness of your digital advertising with metrics that go beyond CTR.
  • Evaluate how well your digital advertising is working with your traditional efforts.

Brands lament mobile measurement, but options abound.

November 29th, 2011 Comments off

I was reading an article on Mobile Marketer recently about a round table discussion at the Mobile Marketing Forum in Los Angeles.  During the discussion, brand representatives from Coca-Cola, Microsoft, ABC and AOL described their wishes and requirements for measuring a broad spectrum of brand mobile efforts, including apps, ad campaigns, even SMS.

At first glance, the main requirement cited seemed to be a centralized dashboard by which mobile efforts could be measured and given an ROI currency much like what they have on the web.

It’s an ironic conversation if you think about it. Certainly, the digital web has established a myriad of different currencies that help get at ROI for ad campaigns, branded sites and even exposure to interactive elements.

However, in my opinion, it may be comparing apples to oranges. While the web and mobile certainly share some overlap in how they interact with consumers, the two differ substantially, and the vehicles they use are often mutually exclusive of each other.

A common assertion, one even mentioned at this particular roundtable, is that in order to convince people to spend on mobile, one must be able to measure the outcome – preferably in one place such as a dashboard or common reporting mechanism.

While there is no panacea that will allow brands and media outlets to measure their mobile efforts whatever they may be – branded app, ad campaign, SMS marketing – one thing is clear: solutions have been, and are, in place to facilitate the types of measurement being demanded.

Within the thread of discussion at the roundtable, it was apparent that those in attendance had the desire for a unified ‘all in one place’ dashboard approach to measuring mobile success. While I applaud the desire to have clear and concise information that spans across as many mediums as possible, it may not be entirely possible to “mix measures” between mobile mediums such as apps, mobile ad campaigns, or other branded efforts, especially when you consider that in many cases you’re measuring different types of movement along the consumer axes of perception, desire, and intent.

Someone at the roundtable was wise to point out that defining engagement depends on the goals of the campaign. For instance, an ad campaign on a mobile device might have the goal of driving site traffic, disseminating information about a new product or service, or perhaps driving adoption of another mobile vehicle, such as a mobile application.

In some cases, one mobile action drives another. Take the example given: a consumer is exposed to an ad campaign, that campaign is for a branded app, and the branded app’s purpose is to drive interaction with the brand and improve positive perception of it. The line becomes blurry when trying to measure the effectiveness of either: you might be able to get at the ad campaign’s success at driving adoption of the app, and you might even be able to get at the app’s success at improving consumer perception of the brand, but how do you chain the two together?

Here at MSW, we have put a lot of time, thought and effort into exactly how mobile can be measured most effectively, and across the widest array of mobile efforts.  By carefully isolating exactly what measurements of success constitute a positive return on investment within a brand’s mobile effort, we can then begin the process of determining just exactly what to measure.

With apps, obviously engagement is King. If you build it, and they don’t come… fail. As anyone can tell you, that particular measurement of behavioral engagement with a mobile app is simple to measure: unique downloaders and sessions. Fortunately for mobile, unlike the digital web, in most cases there is a direct one-to-one relationship between a unique consumer – who has the one mobile device that they’ve downloaded an app to – and the app itself.  On the digital web, by contrast, perhaps you’re having a one-to-one conversation with the unique consumer, or perhaps you’re having a conversation with that consumer across several touch points, be they home, work and school computers, the mobile web, or even tablets.
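A minimal sketch of how those two core behavioral measures might be tallied from an in-app event log; the event names, device IDs and timestamps here are hypothetical, not our actual instrumentation:

```python
from collections import defaultdict

# Hypothetical in-app event log: (device_id, event, timestamp) tuples.
events = [
    ("dev-1", "install", 1), ("dev-1", "session_start", 2),
    ("dev-1", "session_start", 9), ("dev-2", "install", 3),
    ("dev-2", "session_start", 4), ("dev-3", "install", 5),
]

# Unique downloaders: one device generally maps to one consumer.
unique_downloaders = {device for device, event, _ in events if event == "install"}

# Sessions per device, then a total across the panel.
sessions = defaultdict(int)
for device, event, _ in events:
    if event == "session_start":
        sessions[device] += 1

print(f"Unique downloaders: {len(unique_downloaders)}")   # 3
print(f"Total sessions: {sum(sessions.values())}")        # 3
```

The one-device-per-consumer assumption is exactly what makes these counts cleaner on mobile than their digital-web equivalents.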

It’s an ironic conversation, like I said before. On one hand you have this great demand for measurement that fits across a wide variety of different mobile efforts and that you can compare to the measurements you use on the digital web – but on the other hand, the measurements you’re comparing against are by far less stable, less accurate, and overall less capable.

So again for apps, going beyond engagement, one of our specialties is going deeper than ‘I downloaded an app’ and ‘I used an app.’  Without discounting the importance of branded app adoption and usage, it really is just the tip of the iceberg.  It’s also where measurement itself starts to become a little bit more difficult, and perhaps tougher to get at with a consistent dashboard set of measures.

All apps are inherently slightly different from each other, and so while the goal of driving adoption might be consistent across all branded apps, what happens after that is highly specialized and specific to the end goals of the brand. It is for this reason, though not this reason alone, that our particular measurement platform was designed from the very beginning to blend behavioral measurement with attitudinal measurement within the construct of a mobile app.  Behavioral gets you those core critical measures: adoption and usage. It’s also very effective, when used properly from the beginning, at measuring feature-level engagement – and this is very important.

It’s very easy for an app that has a moderate to high degree of consumer adoption and engagement to carry the illusion that the app in its entirety is enjoying high levels of engagement when, more often than not, we have found that this is not the case.

In our mobile research practice, it’s commonplace for us to instrument – that is to say, place measurement capabilities around – distinct mobile app features down to a very granular level.

Don’t get me wrong, we’ve gotten our share of push-back when we’ve suggested embedding the capability to understand how long a consumer might spend in an area of the mobile app before engaging with a feature. Perhaps this sounds too granular?

However, when you’re later able to compare ‘consumer idle time’ on a feature – the time a consumer spends looking at a feature before deciding to actually use it – across multiple features within your app, the value of this very granular level of feature engagement becomes more apparent.
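As a sketch of the idea – with made-up feature names and timings, not real client data – comparing average ‘consumer idle time’ across features might look like this:

```python
# Hypothetical "idle time" log: seconds a consumer looked at a feature's
# screen before tapping into it, one entry per exposure.
idle_times = {
    "store_locator": [2.1, 1.8, 2.5],
    "coupon_wallet": [9.4, 11.2, 8.9],
}

# Average idle time per feature; a long hesitation before first use can
# flag a feature consumers find confusing or unappealing at a glance.
avg_idle = {feature: sum(t) / len(t) for feature, t in idle_times.items()}

for feature, secs in sorted(avg_idle.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {secs:.1f}s before first engagement")
```

In isolation a single idle-time number means little; it is the comparison across features of the same app that makes the granularity pay off.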

Perhaps your behavioral data has done a good job of suggesting that one feature within a mobile app is more popular with users than another. So, now what?  Well, now you’re starting to get into the area of attitudes, and we have become very good at combining insights we gather from behavioral data with attitudinal data we collect via surveys to quickly get a sense – a true holistic sense – of the perceived value a mobile app has with its consumers and, hence, its subsequent impact on brand perception, intent, and the like.

At the end of the day, engagement is a pretty common metric across most forms of mobile media. For ad impressions, you have the number of unique impressions served, you have click-throughs, you have cumulative exposures, and these translate well over to mobile applications. Instead of impressions served, you have the notion of the unique user. Instead of click-throughs, you have feature-level engagement, in-app drive to site, in-app purchase, sessions, etc.

It’s a funny place that mobile is in today, especially when it comes to brands. It’s not completely dissimilar from the wild west we saw on the digital web 5 to 7 years ago. I remember my days at comScore – early, turbulent days where agencies tried to push brands toward digital web spend, and brands demanded ROI from their web efforts, but it was still just too early for them to both spend on the effort and also pay for the measurement.

Mobile is a lot like that today. Make no mistake, measurement capabilities and technologies abound. We are certainly not the only ones capable of understanding who downloads a mobile app, who sees a mobile ad, or who engages with a mobile app feature. Not to toot our own horn, but I will toot and say that we’re about as close as anyone to measuring the success of brands’ mobile efforts in a centralized place – that is to say, ‘here are your behavioral measures’, ‘here are your attitudinal measures’, ‘here’s how that relates to ROI’.

What’s funny is, I’m not entirely convinced that providing these ‘dashboards’ at this early stage is such a great idea at all.

Consider this. If the digital web were akin to a football game, and you were looking at the scoreboard, you would understand the measures being presented to you. You would know what down it was, who was in possession of the ball, and how much time there was left in the game.  But what if you also needed to know how the players involved felt emotionally at the time, how playing the game impacted their desire, how it altered their perceptions – what would your scoreboard look like then?

We are quick to offer any customer we work with the capability of looking at all of the data we generate from their mobile effort in its raw form, in aggregate form, however they like. That said, 90% of our engagements involve us translating the outcome for our clients. The data certainly isn’t undiscoverable, not by any means.  But it does have a particular set of nuances that emerge when you try to connect behavioral, location, and attitudinal data – it can get confusing fast.

Reading the ‘scoreboard’, if you will, requires a little bit of what we refer to as “expert interpretation” – the digital mobile equivalent of Lewis and Clark – the ability to guide a client through a veritable cornucopia of possible insights, interpretations, and results.

Another challenge that we face here at MSW relates back to what I said about the early digital web. Brands know they need to engage in mobile; they see themselves being outpaced by other brands who adopt earlier, and they know they have to get involved. Meanwhile, agencies know this too, and they try to get the brands involved, but they end up pitching only the cost of developing the deliverable itself, not of measuring the outcome.

It’s an Achilles’ heel, it’s not new, and it equates back to the old saying about insanity being doing the same thing over and over again while expecting a different result. I might draw fire for the comment, but I’ll just go out and say it: if you’re not willing to invest to measure the outcome of your mobile efforts, whatever they may be, you may be best served to wait until such time as you are willing to make that investment. Yes, mobile is more expensive than the digital web, but it’s a far more intimate, direct line of communication with a brand’s consumers than any other medium we’ve seen yet. It’s worth understanding how well you’ve done it. It’s also worth considering that the measures of success in mobile may not align with those used in other mediums.

An app is not a webpage. A push message is not an SMS message, which is not a pop-up.

If you are successful in engaging the mobile consumer, you have been led in the front door. You can either step in, casually observing the area, making note of everything that happens, every reaction, every response. Or you can make an equally grand entrance with a blindfold on. Maybe both entrances are just as effective, but in the case of the latter, you will unfortunately never know.

Smartphone video content consumption on the rise

October 11th, 2011 Comments off

We recently surveyed our growing panel of smartphone consumers about their video consumption habits, and it’s clear that smartphones are starting to keep pace with desktops and laptops as the chosen method for viewing video.

Polling over 400 consumers, we asked how many had watched some form of video on their smartphones in the last month. Responding by type of video watched, 97% of those surveyed had watched some kind of video content.

That isn’t too surprising.  91% of those surveyed find watching video on a smartphone enjoyable, with 94% calling the act ‘convenient’ and 87% finding the activity ‘easy’.
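For readers curious how such shares are tallied from raw responses, here is a toy sketch with a handful of hypothetical answers; our actual survey covered over 400 consumers, and these rows are illustrative only:

```python
# Hypothetical raw survey responses, one dict per respondent.
responses = [
    {"watched": True,  "enjoyable": True},
    {"watched": True,  "enjoyable": True},
    {"watched": True,  "enjoyable": False},
    {"watched": False, "enjoyable": False},
]

def pct(flag, rows):
    """Percentage of respondents answering True for a given question."""
    return 100.0 * sum(r[flag] for r in rows) / len(rows)

print(f"{pct('watched', responses):.0f}% watched video")         # 75%
print(f"{pct('enjoyable', responses):.0f}% found it enjoyable")  # 50%
```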

But, when asked about ‘most preferred method of viewing’ specific types of video content, things got very interesting.

For instance, despite enjoying viral success that originated on the ‘desktop/laptop internet’, 52% of consumers who watch video content on YouTube now prefer to use their smartphones to view that content.

It doesn’t stop there.  The mobile consumers we surveyed called the smartphone the preferred method of viewing, versus desktop and laptop, for every type of video content besides television shows and movies.

This sort of makes sense, don’t you think?  The reason consumers prefer to watch movies and TV shows on the desktop or laptop might lie in the fact that the content itself takes longer to consume, and the small screens of most smartphones are very likely a big factor as well.

Helping underscore this theory is the finding that desktops and laptops aren’t the only viewing modes taking a hit from the smartphone; television is losing the news audience to smartphone viewers too.

As our study reflects, the proportion of viewers who now prefer watching news on a smartphone rather than on television is even greater than the proportion who prefer a smartphone over a desktop or laptop.


Overall, the smartphone seems to be taking on viewers any time the content doesn’t rely heavily on large screens and/or high quality for proper enjoyment.

TV still reigns supreme in the areas of TV Shows, Movies, Sports and even commercials.

As for desktops and laptops?

My prediction is that where smartphones haven’t already surpassed them as the preferred video viewing platform, increasing tablet ownership and more new, lower-cost tablets on the horizon will probably signal a near-complete displacement of viewership.