6 Expert Tips For Testing Mobile User Engagement

Optimizing app user engagement is very much worth the time


Abandonment anxiety: just about every mobile marketer suffers from it. It is the deep, paralyzing fear that app users will abandon, disengage from or (worst of all) uninstall an app from their smartphones.

This anxiety, totally justifiable, has a huge impact on the way marketers approach user engagement testing. Mobile moments are notoriously short-lived, and mobile users are fickle and demanding. In this world, testing a green button against a red button (as in the early days of web-based A/B testing) makes little sense. So how should a mobile marketer go about testing and optimizing user engagement?

In-app engagement features are the key. Optimizing user engagement through the use of engagement features requires a new way of thinking about testing. Here are six basic rules to live by:

Get your hypotheses in place before you even think about testing

Any given app user behavior can usually be explained by several competing hypotheses. Take user registration, for example. If users are not registering, there could be a few different explanations. The trick is to list all of the hypotheses along with all of their possible solutions. When you do this, it becomes much easier to see what should be tested first, and why.

In many cases, writing down your hypotheses will also make you more aware of other elements and metrics to test for various user segments, depending on where those users are in the lifecycle.
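As a rough illustration, here is a minimal sketch of what a written-down hypothesis list might look like in code. The hypotheses, solutions and metric names are hypothetical, chosen to fit the registration example above:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One possible explanation for an observed behavior, with candidate fixes."""
    explanation: str
    candidate_solutions: list = field(default_factory=list)
    metrics_to_watch: list = field(default_factory=list)

# Hypotheses for "users are not completing registration" (illustrative only).
registration_hypotheses = [
    Hypothesis(
        explanation="Users don't understand why registration is needed",
        candidate_solutions=["tooltip", "tutorial video", "bottom banner"],
        metrics_to_watch=["registration_start_rate"],
    ),
    Hypothesis(
        explanation="The registration form asks for too much information",
        candidate_solutions=["shorter form", "social sign-in"],
        metrics_to_watch=["form_abandonment_rate"],
    ),
]

# Listing every hypothesis first makes it easier to decide what to test, and why.
for h in registration_hypotheses:
    print(h.explanation, "->", h.candidate_solutions)
```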

A/B testing no longer means comparing apples to different colored, shaped and sized apples

A/B testing used to mean that you would compare two different versions of the same element to see which version converted or performed better. Now, in mobile, there are a number of different ways in which you can get a user to perform a desired action. So, why not compare all of those different methods to one another? Instead of comparing a green apple to a red apple, why not compare apples to oranges to see which one is more engaging?

For example, let’s say you have come to the conclusion that your users need more information about user registration before they’ll sign up. What should you do? How should you present them with this additional information in a way that will get them to start the registration process? Will you use a tooltip? A small banner at the bottom of the app screen? A tutorial video?

What is an app marketer to do?

This is where apples-to-apples testing (a green tooltip vs. a pink one) doesn’t make sense, and you should begin testing scenarios instead: apples against oranges. Divide users into groups, and serve 25% with the tooltip, 25% with the video, 25% with the banner and 25% with nothing at all. Then, see what happens.
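As a minimal sketch of that four-way split, assuming each user has a stable ID: hashing the ID keeps every user in the same group for the life of the test. The variant names are simply the ones from this example:

```python
import hashlib

VARIANTS = ["tooltip", "video", "banner", "control"]  # 25% of users each

def assign_variant(user_id: str) -> str:
    """Deterministically map a user to one of four equal-sized groups."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# The same user always lands in the same group, across sessions and devices.
print(assign_variant("user-42"))
```

Hashing, rather than assigning randomly at serve time, matters here: a user who saw the video yesterday should not see the banner today, or the scenarios will bleed into one another.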

Remember your segments

Maybe you already know that the majority of your new users prefer video tutorials. And you might already have numbers showing that experienced users react more favorably to the tooltip. You may have other users who you know prefer the banner, because they came to your app from a specific channel with a banner ad. The point is simple: you should always test for your segments and serve them accordingly. Many times, it will be necessary for you to deploy several solutions to the same problem in order to best serve your various segments.
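Once you know which feature each segment responds to, serving them accordingly can be as simple as a lookup. A sketch, with hypothetical segment and feature names:

```python
# Illustrative mapping from user segment to the engagement feature that
# has tested best for that segment (all names are hypothetical).
FEATURE_BY_SEGMENT = {
    "new_user": "tutorial_video",
    "experienced_user": "tooltip",
    "banner_ad_acquired": "banner",
}

def feature_for(segment: str) -> str:
    """Serve each segment its preferred solution; fall back to the tooltip."""
    return FEATURE_BY_SEGMENT.get(segment, "tooltip")

print(feature_for("new_user"))  # -> tutorial_video
```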

Experiment with different times for serving engagement features

Let’s say you have devised an elaborate test to see how users respond to an interstitial with a coupon. But, after waiting in anticipation for the duration of the test, you discover that there were hardly any responses. If you served the test at the wrong time of day, week, month or year (e.g. suggesting that users buy a swimsuit in the middle of winter), then you ignored the mobile moment.

This lack of awareness is why the test failed: users ignored the interstitial, almost as if it had never been put there in the first place.

Conclusion? It may seem obvious, but you should run your test at a time when users are more likely to respond to it. This doesn’t only mean serving tests at the appropriate time on a clock or a calendar; it also means running tests at times that coincide with specific user activity, such as when users have just finished watching a video in your app and are (more) open to your other suggestions.

You need to experiment with timing from the point of view of the user; check whether the user is more likely to interact with an engagement feature before or after a certain process has begun. Refining timing is especially important for the more intrusive interactions, like surveys or full-screen messages.
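As a sketch of activity-based timing, here is one way to gate an intrusive feature on a recent in-app event. The two-minute window and the triggering event are illustrative assumptions, not recommendations:

```python
import time

# Illustrative rule: only offer the coupon interstitial shortly after the
# user finishes watching a video, when they are (more) open to suggestions.
RESPONSIVE_WINDOW_SECONDS = 120

def should_show_interstitial(last_video_completed_at: float | None) -> bool:
    """Serve the intrusive interstitial only right after a video completes."""
    if last_video_completed_at is None:
        return False  # no recent activity, so this is not the mobile moment
    return time.time() - last_video_completed_at <= RESPONSIVE_WINDOW_SECONDS

print(should_show_interstitial(time.time() - 30))  # True: video just ended
print(should_show_interstitial(None))              # False: wrong moment
```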

Be moderate: cap your tests

Capping tests is important. If a user isn’t responding to a certain test, whether it involves in-app messages, banners, surveys or anything else, stop running the test for that user. Otherwise, you increase the risk of the user uninstalling the app.

Remember, though, that you should set individual capping rules for each type of test. For example, a video tutorial suggestion or a survey request may be capped at one appearance, but a banner at the bottom of the screen could be capped at three. The more intrusive the engagement feature being tested, the stricter its capping rule should be.

Don’t forget to test your capping rules themselves and amend them as needed. You want to allow enough appearances of a particular element to get accurate test results, but you don’t want to risk annoying your users to the point where they uninstall the app.
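A minimal sketch of per-feature capping, with illustrative caps that scale with intrusiveness; in a real app the counts would live in your analytics or engagement platform rather than in memory:

```python
from collections import defaultdict

# Caps scale with intrusiveness (illustrative values, tune them per test).
IMPRESSION_CAPS = {"survey": 1, "tutorial_video": 1, "banner": 3}

impressions = defaultdict(int)  # (user_id, feature) -> times shown
responded = set()               # (user_id, feature) pairs that converted

def may_serve(user_id: str, feature: str) -> bool:
    """Stop serving a feature once its cap is hit or the user has responded."""
    key = (user_id, feature)
    if key in responded:
        return False  # already converted; no need to keep testing
    return impressions[key] < IMPRESSION_CAPS.get(feature, 1)

def record_impression(user_id: str, feature: str, did_respond: bool) -> None:
    impressions[(user_id, feature)] += 1
    if did_respond:
        responded.add((user_id, feature))

record_impression("user-42", "banner", did_respond=False)
print(may_serve("user-42", "banner"))  # True: banner capped at 3, shown once
print(may_serve("user-42", "survey"))  # True: survey not yet shown at all
```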

Set up your tests with one constant

You may be feeling even more overwhelmed now than you did before you started reading this article. How can you even begin to create a testing plan that makes sense when you have to account for the various user segments, the different engagement features and the timing of each test?

Stay calm, and test on!

The key is to make one element of your test a constant and to test the other elements around it as variables. Let’s go back to the example in point #2, where users needed more information before starting the in-app registration process. As we discussed, you want to test the video, the tooltip and the banner for each of your segments. In this example, the only thing you already know is the best time for running the test.

In this scenario, time should be the constant for the test, and the different engagement features and segments should be the variables. You may find that a test is inconclusive even though mobile engagement grows. This means that the engagement features you chose to test all helped to improve engagement, but no single one worked better than the others: your engagement features are, in fact, interchangeable.
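Laying out such a test plan is mostly bookkeeping. A sketch, reusing the hypothetical names from the earlier examples, with the trigger held constant and the feature and segment varied around it:

```python
from itertools import product

# Hold the timing constant (the one thing we already know) and vary the
# engagement feature and the segment around it. All names are illustrative.
CONSTANT_TRIGGER = "after_onboarding_video"
FEATURES = ["tooltip", "video", "banner", "control"]  # control sees nothing
SEGMENTS = ["new_user", "experienced_user", "banner_ad_acquired"]

test_cells = [
    {"trigger": CONSTANT_TRIGGER, "feature": f, "segment": s}
    for f, s in product(FEATURES, SEGMENTS)
]

for cell in test_cells:
    print(cell)  # twelve cells: one constant, two variables
```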

Even in that case, you will find that engagement improved compared to the ‘control group’ (the users who weren’t tested on), and that is what really counts. Optimizing app user engagement by testing engagement features, segments and timing requires time and effort, but it is very much worth it.


Polly Alluf is the VP of Marketing at Insert, the first Codeless Mobile Engagement Platform. She loves everything about marketing, from digital to field and from branding to automation. Polly’s passion is data-driven marketing, and she considers herself very lucky to be a marketer whose target audience is enterprise marketers.
