Brands lament mobile measurement, but options abound.
I was reading an article on Mobile Marketer recently about a round table discussion at the Mobile Marketing Forum in Los Angeles. During the discussion, brand representatives from Coca-Cola, Microsoft, ABC and AOL described their wishes and requirements for measuring a broad spectrum of brand mobile efforts, including apps, ad campaigns, even SMS.
At first glance, the main requirement cited seemed to be a centralized dashboard by which mobile efforts could be measured and given an ROI currency much like what they have on the web.
It’s an ironic conversation if you think about it. Certainly, the digital web has established a myriad of different currencies that speak to ROI for ad campaigns, branded sites, and even exposure to interactive elements.
However, in my opinion, it might be comparing apples to oranges. While the web and mobile certainly share some overlap in how they interact with consumers, the disparities between the two, and the vehicles they use, are often mutually exclusive.
A common assertion, one even mentioned at this particular roundtable, is that in order to convince people to spend on mobile, one must be able to measure the outcome – preferably in one place such as a dashboard or common reporting mechanism.
While there is no panacea that will allow brands and media outlets to measure their mobile efforts whatever they may be (branded app, ad campaign, SMS marketing), one thing is clear: solutions have been, and are, in place to facilitate the types of measurement being demanded.
Within the thread of discussion at the roundtable, it was apparent that those in attendance desired a unified ‘all in one place’ dashboard approach to measuring mobile success. While I applaud the desire to have clear and concise information that spans as many mediums as possible, it may not be entirely possible to “mix measures” between mobile mediums such as apps, mobile ad campaigns, or other branded efforts, especially when you consider that in many cases you’re measuring different types of movement along the consumer axes of perception, desire, and intent.
Someone at the roundtable was wise to point out that defining engagement depends on the goals of the campaign. For instance, an ad campaign on a mobile device might have the goal of driving site traffic, disseminating information about a new product or service, or driving adoption of another mobile vehicle, such as a mobile application.
In some cases, one mobile action drives another. Take the example given: a consumer is exposed to an ad campaign, that campaign is for a branded app, and the branded app’s purpose is to drive interaction with the brand and to improve or increase positive perception of it. The line becomes blurry when trying to measure the effectiveness of either: you might be able to get at the ad campaign’s success at driving adoption of the app, you might even be able to get at the app’s success at improving consumer perception of the brand, but how do you chain the two together?
Here at MSW, we have put a lot of time, thought, and effort into exactly how mobile can be measured most effectively, and across the widest array of mobile efforts. By carefully isolating exactly which measurements of success constitute a positive return on investment within a brand’s mobile effort, we can then begin the process of determining just exactly what to measure.
With apps, obviously engagement is king. If you build it, and they don’t come… fail. As anyone can tell you, that particular measurement of behavioral engagement with a mobile app is simple to measure: unique downloaders and sessions. Fortunately, on a mobile device, unlike the digital web, in most cases there is a direct one-to-one relationship between a unique consumer, who has just the one mobile device they’ve downloaded an app to, and the app itself. On the digital web, perhaps you’re having a one-to-one conversation with a unique consumer, or perhaps you’re having a conversation with that consumer along several touch points, be they home, work, and school computers, the mobile web, or even tablets.
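As a sketch of how simple those two core measures are to compute, assume a hypothetical app event log where each record carries a device ID and a session ID (the field names here are illustrative, not from any particular analytics platform):

```python
# Compute the two core behavioral measures, unique downloaders/users and
# sessions, from a hypothetical event log. Field names are illustrative.
events = [
    {"device_id": "A", "session_id": "A-1"},
    {"device_id": "A", "session_id": "A-2"},
    {"device_id": "B", "session_id": "B-1"},
]

# On mobile, a device ID is (in most cases) a one-to-one proxy for a
# unique consumer, unlike the multi-device digital web.
unique_users = len({e["device_id"] for e in events})
sessions = len({e["session_id"] for e in events})

print(unique_users, sessions)  # prints: 2 3
```

The one-to-one device-to-consumer assumption is exactly what the web lacks: the same deduplication over web cookies would count one person several times.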
It’s an ironic conversation, like I said before. On one hand you have this great demand for measurement that fits across a wide variety of different mobile efforts and that you can compare to the measurements you use on the digital web, but on the other hand, the measurements you’re comparing against are by far less stable, less accurate, and overall less capable.
So again for apps, going beyond engagement, one of our specialties is going deeper than ‘I downloaded an app’ and ‘I used an app.’ Without discounting the importance of branded app adoption and usage, it really is just the tip of the iceberg. It’s also where measurement itself starts to become a little more difficult, and perhaps tougher to get at with a consistent dashboard set of measures.
All apps are inherently slightly different from one another, and so while the goal of driving adoption might be consistent across all branded apps, what happens after that is highly specialized and specific to the end goals of the brand. It is for this reason, among others, that our particular measurement platform was designed from the very beginning to blend behavioral measurement with attitudinal measurement within the construct of a mobile app. Behavioral gets you those core critical measures: adoption and usage. It’s also very effective, when used properly from the beginning, at measuring feature-level engagement, and this is very important.
It’s very easy for an app that has a moderate to high degree of consumer adoption and engagement to carry with it the illusion that the app in its entirety is enjoying high levels of engagement when, more often than not, we have found that this is not the case.
In our mobile research practice, it’s commonplace for us to instrument, that is to say, place measurement capabilities around, distinct mobile app features down to a very granular level.
Don’t get me wrong, we’ve gotten our share of push-back when we’ve suggested embedding the capability of understanding how long a consumer might spend in an area of the mobile app before engaging with a feature. Perhaps this sounds too granular?
However, when you’re later able to compare ‘consumer idle time’ on a feature, which is the time a consumer spends looking at a feature before deciding to actually use it, across multiple features within your app, the value of this very granular level of feature engagement becomes more apparent.
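As a sketch of what that ‘consumer idle time’ measure could look like in practice, assume instrumentation that records when a feature becomes visible and when the consumer first engages with it (the event names and data shape here are made up for illustration): idle time is simply the gap between the two, averaged per feature.

```python
from collections import defaultdict

# Hypothetical instrumentation events: (feature, event_type, timestamp_seconds).
# "view" fires when a feature becomes visible; "engage" when it is first used.
events = [
    ("store_locator", "view", 0.0), ("store_locator", "engage", 2.5),
    ("coupons", "view", 10.0), ("coupons", "engage", 22.0),
    ("store_locator", "view", 30.0), ("store_locator", "engage", 31.5),
]

# Pair each view with the next engage on the same feature.
pending = {}
idle = defaultdict(list)
for feature, kind, ts in events:
    if kind == "view":
        pending[feature] = ts
    elif kind == "engage" and feature in pending:
        idle[feature].append(ts - pending.pop(feature))

# Average idle time per feature: a long idle time may signal a feature
# consumers notice but hesitate to use.
for feature, times in idle.items():
    print(feature, sum(times) / len(times))
```

With the sample data, the store locator averages 2.0 seconds of hesitation while the coupons feature averages 12.0, exactly the kind of cross-feature comparison the paragraph above describes.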
Perhaps your behavioral data has done a good job of suggesting that one feature within a mobile app is more popular with users than another. So, now what? Well, now you’re starting to get into the area of attitudes, and we have become very good at combining insights we gather from behavioral data with attitudinal data we collect via surveys to quickly get a sense, a true holistic sense, of the perceived value a mobile app has with its consumers, and hence its subsequent impact on brand perception, intent, and the like.
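A minimal sketch of that behavioral-plus-attitudinal blend, assuming both datasets key off the same anonymous user ID (all field names, thresholds, and scores here are hypothetical):

```python
# Join per-user behavioral data (app sessions) against attitudinal survey
# data (a brand-perception score), keyed by a shared anonymous user ID.
behavioral = {"u1": 12, "u2": 3, "u3": 7}   # sessions per user
attitudinal = {"u1": 9, "u2": 4, "u3": 8}   # survey score, 1-10

joined = [(uid, behavioral[uid], attitudinal[uid])
          for uid in behavioral if uid in attitudinal]

# Even a crude split hints at whether heavy usage tracks with perception.
heavy = [score for uid, sess, score in joined if sess >= 7]
light = [score for uid, sess, score in joined if sess < 7]
print(sum(heavy) / len(heavy), sum(light) / len(light))  # prints: 8.5 4.0
```

In a real engagement the survey instrument, the score, and the usage threshold would all be designed around the brand’s goals; the point here is only the join on a common ID.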
At the end of the day, engagement is a pretty common metric across most forms of mobile media. For ad impressions, you have the number of unique impressions served, you have click-throughs, you have cumulative exposures, and these translate well over to mobile applications. Instead of impressions served, you have the notion of the unique user. Instead of click-throughs, you have feature-level engagement, in-app drive to site, in-app purchase, sessions, etc.
It’s a funny place that mobile is in today, especially when it comes to brands. It’s not completely dissimilar from the wild west we saw on the digital web 5 to 7 years ago. I remember my days at comScore, early, turbulent days when agencies tried to push brands toward digital web spend, brands demanded ROI from their web efforts, but it was still just too early for them to both spend on the effort and also pay for the measurement.
Mobile is a lot like that today. Make no mistake, measurement capabilities and technologies abound. We are certainly not the only ones capable of understanding who downloads a mobile app, who sees a mobile ad, or who engages with a mobile app feature. Not to toot our own horn, but I will toot and say that we’re about as close as anyone to measuring the success of brands’ mobile efforts in a centralized place, that is to say, ‘here are your behavioral measures’, ‘here are your attitudinal measures’, ‘here’s how that relates to ROI’.
What’s funny is, I’m not entirely convinced that providing these ‘dashboards’ at this early stage is such a great idea at all.
Consider this. If the digital web were akin to a football game and you were looking at the scoreboard, you would understand the measures being presented to you. You would know what down it was, who was in possession of the ball, and how much time was left in the game. But what if you also needed to know how the players involved felt emotionally at the time, how playing the game impacted their desire, altered their perceptions? What would your scoreboard look like then?
We are quick to offer any customer we work with the capability of looking at all of the data we generate from their mobile effort in its raw form, in aggregate form, however they like. That said, 90% of our engagements involve us translating the outcome for our clients. The data certainly isn’t undiscoverable, not by any means. But it does have a particular set of nuances that emerge when you try to connect behavioral, location, and attitudinal data; it can get confusing fast.
Reading the ‘scoreboard’, if you will, requires a little bit of what we refer to as “expert interpretation”, the digital mobile equivalent of Lewis and Clark: the ability to guide a client through a veritable cornucopia of possible insights, interpretations, and results.
Another challenge that we face here at MSW relates back to what I said about the early digital web. Brands know they need to engage in mobile, they see themselves being outpaced by other brands who adopt earlier, and they know they have to get involved. Meanwhile, agencies know this, and they try to get the brands involved, but they end up pitching only the cost of developing the deliverable itself, not of measuring the outcome.
It’s an Achilles’ heel, it’s not new, and it equates back to the old saying about insanity: taking the same steps over and over again and expecting a different result. I might draw fire for the comment, but I’ll just go out and say it: if you’re not willing to invest to measure the outcome of your mobile efforts, whatever they may be, you may be best served to wait until such time that you are willing to make that investment. Yes, mobile is more expensive than the digital web, but it’s a far more intimate, direct line of communication with a brand’s consumers than any other medium we’ve seen yet. It’s worth understanding how well you’ve done it. It’s also worth considering that the measures of success in mobile may not align with those used in other mediums.
An app is not a webpage. A push message is not an SMS message, which is not a pop-up.
If you are successful in engaging the mobile consumer, you have been led in the front door. You can either step in, casually observing the area, making note of everything that happens, every reaction, every response. Or you can make an equally grand entrance with a blindfold on. Ultimately, maybe both entrances are just as effective, but in the case of the latter, you will unfortunately never know.