Friday, 14 March 2014

5 A/B Testing Considerations for Mobile Apps (mobiledevhq.com)

Data is power. But for native mobile apps, the tools needed to harness that power are still maturing. A 'Mobile Data Paradox' is emerging: round-the-clock connected devices present an enormous data opportunity, yet the sophisticated tools needed to act on that data are still scarce.
For most, mobile is still an unruly beast when compared to the web. Companies from Facebook to Walmart have encouraged their internal teams to deliver higher ROI on their native mobile efforts by adopting proven methods from the web, like A/B testing.
But the technical and contextual differences between web and mobile are such that each really warrants a separate strategy when implementing things like A/B testing. Here are our top five considerations when A/B testing your mobile apps:

1 - Mobile is Not Always Connected

A/B testing on the web is technically straightforward because a website is hosted on a server that you control and can change almost instantly. Mobile apps, by contrast, are compiled software that users download and run locally on their own devices. There are few ways to push changes or tweak a native app's features or design unless the user updates to a new version or has an active connection.
This difference has a big implication for A/B testing mobile apps. Essentially, native A/B testing libraries for iOS or Android select a bucket for each new user locally and then call a remote server to pull the variation data (usually strings or JSON) for that bucket to display on the user's device.
The problem is: what happens if a user is not connected? There are a couple of different ways of dealing with this, but the best way is for the library to pull variation data ONLY on the app's first launch, and then cache that data locally on the user's device. This is how we've implemented mobile A/B testing at Splitforce, and it has the advantages of 1) preserving a consistent user experience, so that new users see the same variation throughout the duration of the experiment, and 2) maintaining experimental rigour, so that you can be sure that a specific variation influenced a particular event.
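To make that concrete, here is a minimal Kotlin sketch of the first-launch caching pattern. The endpoint URL, experiment name and JSON shape are placeholders rather than any particular SDK's API - it shows the pattern, not a drop-in implementation:

```kotlin
import android.content.Context
import org.json.JSONObject
import java.net.URL
import kotlin.random.Random

// Minimal sketch of first-launch variation caching.
// The endpoint URL and JSON shape are illustrative, not a real API.
object AbTestClient {
    private const val PREFS = "ab_test_cache"

    // Returns the cached variation if one exists; otherwise assigns a bucket
    // locally, fetches its variation data once, and caches it so the user
    // keeps seeing the same variation on every future launch.
    fun variationFor(context: Context, experiment: String): JSONObject? {
        val prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
        prefs.getString(experiment, null)?.let { return JSONObject(it) }

        // First launch: pick a bucket locally (a simple 50/50 split here).
        val bucket = if (Random.nextBoolean()) "A" else "B"

        return try {
            // In a real app this network call must run off the main thread.
            val json = URL("https://example.com/experiments/$experiment?bucket=$bucket").readText()
            prefs.edit().putString(experiment, json).apply()
            JSONObject(json)
        } catch (e: Exception) {
            null // Offline on first launch: fall back to the default experience.
        }
    }
}
```

Because the assignment and variation data are cached after that first fetch, later launches never need the network, and every event the user triggers can be attributed to the variation they were originally shown.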

2 - Test & Optimize for Different Mobile OS

The implications for A/B testing on Android vs. iOS really warrant a separate post of their own, but in the meantime here are some of the main differences to consider when developing your mobile testing strategy. Engagement and native UX is perhaps where Android and iOS differ the most. For example, Android's 'intents' system lets users share content from one app using any other installed app - something that doesn't exist on iOS.
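As a quick illustration on the Android side (the function name and message text are hypothetical), a plain ACTION_SEND intent hands the content to whichever installed app the user picks from the system chooser:

```kotlin
import android.app.Activity
import android.content.Intent

// Illustrative only: an implicit ACTION_SEND intent lets the user share text
// through any installed app that handles it - the platform behaviour
// described above.
fun shareScore(activity: Activity, message: String) {
    val sendIntent = Intent(Intent.ACTION_SEND).apply {
        type = "text/plain"
        putExtra(Intent.EXTRA_TEXT, message)
    }
    activity.startActivity(Intent.createChooser(sendIntent, "Share via"))
}
```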
But the differences go beyond UX conventions and native feature support: even the platforms' user demographics and willingness-to-pay can have an effect on what works and what doesn't. According to this comScore report, iOS users tend to be younger and wealthier. This may be why, despite Android's impressive growth in market share of devices sold and active users, iOS has been shown year after year to monetize users better.
[Chart - Source: Vision Mobile Developer Economics]
Think critically about the differences in general audiences across the two platforms, and how you can play to those differences through different UIs, UXs, price points and features. For example, you may A/B test different monetization strategies on iOS and Android in order to capture the most overall value from both platforms. iOS users are generally more likely to download paid apps and make in-app purchases, whereas Android users may be more easily monetized through advertising and lead generation. You can also test different price points – and may find that one platform’s users have a greater tolerance for higher price points.
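Building on the hypothetical AbTestClient sketch from the first section, an Android-side price-point test might be as simple as reading the price to display from the cached variation data - the experiment name, JSON key and default price below are all illustrative:

```kotlin
import android.content.Context

// Hypothetical follow-on to the caching sketch in section 1: read the tested
// price point for an in-app purchase from the cached variation JSON,
// falling back to a default when offline or not enrolled in the test.
fun proUpgradePriceUsd(context: Context): Double {
    val variation = AbTestClient.variationFor(context, "pro_upgrade_price")
    return variation?.optDouble("price_usd", 4.99) ?: 4.99
}
```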

3 - Mobile UX is Often Not Linear

We’ve all seen the conversion funnel diagrams touted by web marketing evangelists. The ‘funnel’ usually looks something like this:
[Conversion funnel diagram - Source: StrategicMalta.com]
This is a really good way to visualize the linear user experience that is characteristic of many websites. On web, new visitors hit a landing page, move to the next step of browsing, then the next step of signup, then the next step of purchase, and are then (for more sophisticated funnels) retained through repeat purchases or recurring monthly sales.
For mobile apps, the linear funnel paradigm could not be less relevant. In many mobile games, users move back and forth between rounds of play, engaging on social networks, passively consuming a narrative, or configuring their settings. But even in mobile commerce, social media and lifestyle apps, the experience is much more dynamic than in comparable web products.
As a result, at Splitforce we've seen this have an impact on which variables are worth testing. Whereas A/B testing on the web usually addresses visual elements like layout, copy or colors, on mobile you can boost conversions by testing things like the user workflow itself or the presence of specific features.
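A workflow test can be as simple as letting the variation decide whether new users are routed through a tutorial before reaching the main screen. This again reuses the hypothetical AbTestClient sketch from above, with illustrative experiment and key names:

```kotlin
import android.content.Context

// Sketch of a workflow test rather than a purely visual one: the variation
// controls whether new users see an onboarding tutorial first.
fun shouldShowTutorial(context: Context): Boolean {
    val variation = AbTestClient.variationFor(context, "onboarding_flow")
    return variation?.optBoolean("show_tutorial", true) ?: true
}
```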

4 - Test & Optimize for Different Device Types

A key challenge to Android development is the relatively large range of different devices that use the platform. From Samsung, to HTC and Google’s own phones, Android represents a mosaic of different price points, screen sizes and resolutions, and hardware that can make it an unpredictable platform at times.
iOS runs on significantly fewer device types, but testing can nevertheless be complicated by older hardware that may not run some apps' features correctly, or at all.
[Image - Source: JQueryMobile.com]
When A/B testing new features or designs on mobile, segment by device type to account for how changes affect users with different makes and models. You may end up driving more desirable user behavior by pushing new features only to certain devices while keeping the old feature set for others.
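One lightweight way to do this on Android is to attach the device's make, model and OS version to every tracked goal so results can later be broken down per device family. The trackGoal function below is a placeholder for whatever your testing or analytics SDK actually provides:

```kotlin
import android.os.Build

// Collect basic device properties to segment test results by.
fun deviceSegment(): Map<String, String> = mapOf(
    "manufacturer" to Build.MANUFACTURER,
    "model" to Build.MODEL,
    "os_version" to Build.VERSION.RELEASE
)

// Placeholder: a real integration would forward the goal name plus the
// segment properties to your A/B testing or analytics backend.
fun trackGoal(name: String, segment: Map<String, String> = deviceSegment()) {
    println("goal=$name segment=$segment")
}
```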

5 - Use a Mobile A/B Testing Platform

Mobile-first A/B testing platforms like Splitforce live and breathe mobile development, and have developed solutions based on a deep understanding of the challenges surrounding A/B testing native mobile apps. The inclusion of specific technical features like caching on first app launch, experimental design considerations like lazy assignment, and tracking options surrounding mobile-specific goal types make it inefficient for almost any app to reinvent the wheel by building such a tool in-house.
Moreover, mobile A/B testing tools can empower your design or product management team and conserve engineering resources by allowing non-technical team members to autonomously launch new tests and roll out improvements. In-browser editors and configurable results dashboards make analyzing test results and acting on test data as easy as point-and-click.
