Friday, 3 July 2015

How is your app doing? Here are some key questions to think about

[Image: app performance metrics framework (via yourstory.com)]
Doing what we do at Konotor, we have friends in the app world — many of whom are product and marketing folks in the companies we engage with. We often discuss metrics they are tracking (or could be tracking) for their apps.
Here are a few key questions you may want to think about in relation to measuring your app’s performance.

1. Am I capturing all the key info?

This is the easier question, as there are many resources and guides online to help you get started. The picture at the beginning of this article gives you a framework for thinking about whether or not you are capturing the critical metrics — go bucket by bucket and check which metrics you capture for each.
Some of the retention and engagement metrics are unique to mobile apps. Which metric to focus on depends on the genre of the app. A game should measure D7, D30 retention actively, while a commerce app would rather look at conversion funnels and monthly cohorts. It is important to be aware of which metrics are key to your own business/app.
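If you are instrumenting retention yourself rather than reading it off an analytics dashboard, DN retention is straightforward to compute from an event log. Below is a minimal sketch in Python/pandas, assuming a hypothetical events table with one row per user per active day; the data and column names are made up purely for illustration.

```python
# A minimal sketch of D1/D7/D30 retention from a hypothetical event log
# (one row per user per active day). All names and dates are illustrative.
import pandas as pd

events = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2", "u2", "u3"],
    "event_date": pd.to_datetime([
        "2015-06-01", "2015-06-02", "2015-06-08",  # u1 returns on day 1 and day 7
        "2015-06-01", "2015-06-08",                # u2 returns on day 7 only
        "2015-06-01",                              # u3 never returns
    ]),
})

# Treat each user's first event as the install date.
installs = events.groupby("user_id")["event_date"].min().rename("install_date")
events = events.join(installs, on="user_id")
events["day_n"] = (events["event_date"] - events["install_date"]).dt.days

cohort_size = installs.size
for n in (1, 7, 30):
    retained = events.loc[events["day_n"] == n, "user_id"].nunique()
    print(f"D{n} retention: {retained / cohort_size:.0%}")
```

A real pipeline would also group users into install-date cohorts, so that D30 is only computed for users who have been around for at least 30 days.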
As apps evolve to focus on a single purpose, it also becomes important for commerce and utility apps to measure what percentage of users who start using the app actually reach its key action or experience, and how soon after install that happens. Getting users to their first ticket booking, first invoice generation, or first payment through the app before they uninstall it has become very important in helping them understand your value sooner. Don’t ignore this!
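Measuring this comes down to joining installs against each user’s first key action. The sketch below assumes hypothetical installs and key_actions tables (the names are illustrative, and “key action” is whatever matters for your app — first booking, first invoice, first payment) and reports what fraction of installs ever reach it and how long that typically takes.

```python
# A minimal sketch of "time to first key action", using hypothetical tables;
# every name and timestamp here is invented for illustration.
import pandas as pd

installs = pd.DataFrame({
    "user_id": ["u1", "u2", "u3", "u4"],
    "install_time": pd.to_datetime([
        "2015-06-01 10:00", "2015-06-01 11:00",
        "2015-06-02 09:00", "2015-06-02 18:00",
    ]),
})

key_actions = pd.DataFrame({
    "user_id": ["u1", "u3"],
    "action_time": pd.to_datetime(["2015-06-01 10:20", "2015-06-04 12:00"]),
})

# Keep only each user's first key action, then measure time from install.
first_action = key_actions.groupby("user_id")["action_time"].min().rename("first_action")
merged = installs.join(first_action, on="user_id")
merged["hours_to_action"] = (
    (merged["first_action"] - merged["install_time"]).dt.total_seconds() / 3600
)

reached = merged["first_action"].notna().mean()
print(f"Installs reaching the key action: {reached:.0%}")
print(f"Median time to first key action: {merged['hours_to_action'].median():.1f} hours")
```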

2. Am I analyzing the captured data right?

When it comes to slicing your data, a few ways of segmenting it have emerged as more useful than others.
  • For any app that does paid user acquisition or has varied sources of acquiring new users, it is most critical to look at ALL of your data by the source of app install. Your retention metrics, LTV, engagement, and virality (k-factor, percentage of users sharing, etc.) are all, in a way, linked to the quality of users your campaigns are bringing in. Slicing data this way will give you the true picture of how well a channel is doing for you (see the sketch after this list).
  • It is always important to measure how your new app version is doing compared to a previous one. We all understand the app has to improve on an ongoing basis — but how do you prioritize which new features/capabilities to work on or which bugs to fix if you don’t measure the impact of those changes on your key metrics?
  • Are users re-engaged by push notifications behaving differently from those re-engaged via social re-targeting? Is one campaign working better than another at helping users see your value and turn into more engaged users? It may be as simple as getting the first few words of your push notification right, so the user quickly realizes the value of your communication. Measure this by tracking the source of each app session.
  • Are users from a certain geography, age group, or gender more engaged, retained, and satisfied with your app than others? Slicing your metrics by demographic info can reveal which groups are finding it hard to derive value from your app, and can also help you figure out whether your marketing is indeed reaching the right audience.
  • How different are your most engaged users (top 5 percentile in number of sessions, or those using the app X times a day/week, etc.) in their behavior from the average or least engaged users? What percentage falls into each of these buckets? To move more users into that most engaged bucket, hypothesize about how some users end up seeing more value and getting much more out of your app, and test those hypotheses. If the most engaged bucket is growing, you are doing well. (You should also check whether the most engaged users correspond to a specific demographic or install source/campaign.)
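As a rough illustration of the first point in this list, here is a minimal pandas sketch that slices a couple of metrics by install source. The install_source, retained_d7 and sessions_30d columns are hypothetical stand-ins for whatever attribution and engagement data you actually capture.

```python
# A minimal sketch of slicing metrics by install source; the per-user table
# and all of its values are invented for illustration.
import pandas as pd

users = pd.DataFrame({
    "user_id":        ["u1", "u2", "u3", "u4", "u5", "u6"],
    "install_source": ["organic", "organic", "fb_ads", "fb_ads", "fb_ads", "referral"],
    "retained_d7":    [True, False, False, False, True, True],  # came back on day 7
    "sessions_30d":   [12, 3, 1, 2, 9, 15],                     # sessions in first 30 days
})

by_source = users.groupby("install_source").agg(
    installs=("user_id", "count"),
    d7_retention=("retained_d7", "mean"),
    avg_sessions=("sessions_30d", "mean"),
)
print(by_source)
```

The same groupby works for any of the other slices mentioned above — app version, re-engagement campaign, geography, age group or gender — as long as you capture that attribute per user or per session.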

3. How do I know if a number I see is good or bad?

There are benchmarks for some of these metrics, published or shared by SDK players such as Flurry, AppsFlyer, etc., that help us understand what the average numbers look like for certain categories of apps. This deck by @deepakabbot also contains some great resources, pointers, and benchmarks.
  • D1 of 40–50%, D7 of 20–30% and D30 of 7–15% is considered very good for a game, but might be bad for a “launcher app” (home screen replacement on Android) or a “social app” (like Twitter or Snapchat). A productivity tool or a social app might have a lower D1 or D7, but you would expect it not to taper off rapidly afterwards. A bi-weekly cohort for a utility app should be pretty steady after the first two-week period.
  • Uninstall rates of 50% within a week are normal, though again this depends on the user acquisition methods in place.
  • DAU/MAU works well as a metric for games and social apps. 20–25% is great for games, while the best social products may have numbers as high as 60–70%. I personally prefer looking at DAU growth at a steady pace of user acquisition, since DAU/MAU can be misleading when the number of users acquired varies from day to day (a minimal sketch of both follows below).
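For what it is worth, here is a minimal sketch of how DAU, a trailing-30-day MAU, and the DAU/MAU ratio could be computed from a raw sessions log. The table and its column names are made up for illustration, and a real pipeline would compute the trailing MAU for every day rather than just the latest one.

```python
# A minimal sketch of DAU, MAU and DAU/MAU from a hypothetical sessions log
# (one row per session); all values are invented for illustration.
import pandas as pd

sessions = pd.DataFrame({
    "user_id": ["u1", "u2", "u1", "u3", "u1", "u2", "u1"],
    "date": pd.to_datetime([
        "2015-06-01", "2015-06-01", "2015-06-02", "2015-06-02",
        "2015-06-03", "2015-06-15", "2015-06-30",
    ]),
})

# DAU: distinct users per day; MAU: distinct users in the trailing 30 days.
dau = sessions.groupby("date")["user_id"].nunique()
as_of = sessions["date"].max()
recent = sessions[sessions["date"] > as_of - pd.Timedelta(days=30)]
mau = recent["user_id"].nunique()

print(dau)
print(f"MAU (trailing 30 days): {mau}")
print(f"DAU/MAU on {as_of.date()}: {dau.loc[as_of] / mau:.0%}")
```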
One way to get around searching for benchmarks to hit is to focus on the trend these metrics show, rather than the absolute number. We always need to improve on these metrics.
Whatever number/KPI rings an alarm bell for your business, prioritize making that look better.
  • e.g.: If you see that only 4% of users who started using your app accomplished the key action (the experience or utility for which they downloaded your app) even once, clearly something is amiss — so dig into your onboarding funnels and slice them by user acquisition channel to see whether the drop-off is uniform across channels.
Find slices (by install source, demographic, etc.) of users with the best metrics for usage, engagement, retention, and so on, and use those as the benchmark to aim for across the board. (This doesn’t mean you can achieve those numbers with all users, but it shows you the potential. Of course, you can’t expect everyone playing your game to behave like the whales, but studying the whales can still give you insights. Maybe you need to look at the top 10 percentile rather than the top 5 percentile to get more realistic benchmarks to work towards.)
  • e.g.: If you see that the time spent in your app by your best users is 50x the average, figure out which elements of your app experience make it better for those top users.
  • e.g.: If you find that your virality, uninstalls, and DAU/MAU all differ by user acquisition channel, calculate the real effective cost of user acquisition and LTV for each of these channels and use the best one as your benchmark (caution: maybe exclude your “whales”, and use different benchmarks for organic and paid channels). A rough sketch of this comparison follows below.
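As a back-of-the-envelope illustration, the sketch below compares an effective cost of acquisition (spend divided by retained users) against a simple LTV proxy per channel. Every figure, column name and definition in it is invented; your own definitions of “retained user” and LTV will differ.

```python
# A minimal sketch comparing effective CAC and an LTV proxy per channel;
# all numbers are made up for illustration.
import pandas as pd

channels = pd.DataFrame({
    "channel":          ["fb_ads", "google_uac", "organic"],
    "spend":            [5000.0, 3000.0, 0.0],     # acquisition spend
    "installs":         [2500, 2000, 1500],
    "uninstall_rate":   [0.55, 0.45, 0.35],        # within the first week
    "revenue_per_user": [1.10, 1.40, 2.00],        # simple LTV proxy
})

# Effective cost counts only the users you actually keep.
channels["retained_users"] = channels["installs"] * (1 - channels["uninstall_rate"])
channels["effective_cac"] = channels["spend"] / channels["retained_users"]
channels["ltv_to_cac"] = (
    channels["revenue_per_user"] / channels["effective_cac"].replace(0, float("nan"))
)

print(channels[["channel", "effective_cac", "revenue_per_user", "ltv_to_cac"]].round(2))
```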
Finally, remember that every app and every business is unique. You will still need to determine what metrics are most important to your business, what are the right ways you want to slice your data, and of course, whether you are comfortable with where the numbers show your business is headed.
Keep in mind that not everything can be captured by metrics alone. Doing monthly user experience sessions and communicating with your app users are critical to improving your product. These exercises will help you form new hypotheses and identify new data to capture.
