A/B testing with Unless

Learn how to use Unless to run experiments, test on-site personalizations and split traffic using the control group.

First of all, you should know that website personalization is different from A/B testing.

Where A/B testing views your visitors as one homogeneous group, personalization is about understanding who those visitors are and how to segment them into audiences. In other words, where A/B testing can help you reach a "local maximum", personalization can help you reach your "global maximum". This has certain implications for testing.

Example: Let's say you create two personalizations. The first is targeted at visitors from the US and changes the CTA on the homepage. The second is targeted at mobile visitors and changes the headline on the homepage. Some (but not all) of your website visitors will see both personalizations - they're in the US and browse the site using a mobile device. The conversion rate goes up - but what caused the uplift? The changed CTA or the new headline?

As the example above illustrates, the nature of personalization (i.e. the allowed overlap between audiences) makes data interpretation more difficult than in an A/B test.
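To make the overlap concrete, here is a minimal sketch in TypeScript; the visitor shape and audience predicates are illustrative, not the Unless API.

```typescript
// Illustrative sketch (not the Unless API): two audience conditions
// evaluated against the same visitor can both match, so that visitor
// receives both personalizations at once.
interface Visitor {
  country: string;
  device: "mobile" | "desktop";
}

const isUsVisitor = (v: Visitor) => v.country === "US";        // sees the new CTA
const isMobileVisitor = (v: Visitor) => v.device === "mobile"; // sees the new headline

const visitor: Visitor = { country: "US", device: "mobile" };

// Both predicates match, so an uplift can't be attributed to a single change.
console.log(isUsVisitor(visitor) && isMobileVisitor(visitor)); // true
```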

You can think of personalization as a sequence of optimizations, and data scientists agree that the best way to reliably measure the ROI of your personalizations is by using a so-called holdback set. The holdback audience is like a constant control group: it is always shown the original version of your website, regardless of how many personalizations you're running.

You can make the holdback larger to get data faster, or shrink it if you're very confident or get a lot of traffic. We recommend setting it to 5-10% so you can always monitor how well your personalization program is doing.
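Under the hood, a holdback like this can be implemented with deterministic bucketing on a stable visitor ID. The sketch below illustrates that idea; the hash function and the 5% figure are assumptions, not Unless internals - the platform handles this for you.

```typescript
// Illustrative sketch: deterministic holdback bucketing on a stable
// visitor ID. The same visitor always lands in the same bucket, so the
// control group stays constant across all personalizations.
function hashId(id: string): number {
  let h = 0;
  for (const ch of id) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return h;
}

const HOLDBACK_PERCENT = 5; // within the recommended 5-10% range

function isInHoldback(visitorId: string): boolean {
  // Map the visitor to a stable bucket from 0-99.
  return hashId(visitorId) % 100 < HOLDBACK_PERCENT;
}

// Visitors in the holdback always see the original website,
// regardless of how many personalizations are live.
console.log(isInHoldback("visitor-123"));
```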

You can still experiment with content, CTAs, and design; just keep in mind that performance is measured globally. Via the Insights page, you can analyze how your goals, audiences, and personalizations are performing.

Micro A/B tests

With this in mind, at Unless we do offer the option to set up micro A/B tests. Let's say you want to change the text of a CTA button and see whether it gets clicked more often as a result - you can set up a micro A/B test!

You start by creating a new on-site personalization and once you are in the editor, you will see the option to Set up A/B test target on the right side, as marked in the image below.

[Image: the Set up A/B test target option in the editor]

After making the change(s) you wanted, you need to select an element on the page to track as the goal of your personalization. For this, you can use the target button: click it once, hover over the element you'd like to select, and click again. That's it! Save your draft and don't forget to publish.

To see the results of your micro A/B tests, go to the Personalizations tab of the Insights page and scroll down. Here you will see the percentage of visitors in the control group, the participants (visitors who have seen the personalization), and the total number of conversions. You can also click Details for more information.

[Image: micro A/B test results on the Insights page]
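If you want to sanity-check those numbers yourself, the sketch below compares the conversion rates of participants and the control group with a standard two-proportion z-test. The input figures are made up; substitute the ones from your Insights page.

```typescript
// Illustrative sketch: two-proportion z-test on made-up numbers.
// conv = conversions, n = visitors, for participants (A) and control (B).
function zScore(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / nA + 1 / nB));
  return (pA - pB) / se; // |z| > 1.96 is significant at the 95% level
}

// Example: 120 of 2,400 participants converted vs 40 of 1,200 controls.
console.log(zScore(120, 2400, 40, 1200).toFixed(2)); // ~2.29, significant
```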

Why personalization experiments are tricky

Generally, testing your personalizations is a good idea. However, running proper experiments requires a lot of traffic, and the more personalizations you test, the harder it gets to reach statistical significance. Also, the longer a test runs, the higher the risk of results being polluted by external factors. Lastly, personalizations influence each other, so with every additional personalization it gets harder to pinpoint what caused a dip or uplift in your goals.
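To get a feel for the traffic requirement, the sketch below estimates the number of visitors needed per variant using the standard two-proportion sample size formula. The baseline rate, target uplift, confidence, and power are assumptions you should adjust to your own situation.

```typescript
// Illustrative sketch: visitors needed per variant to detect a given
// absolute uplift, at 95% confidence (z = 1.96) and 80% power (z = 0.84).
function sampleSizePerVariant(baseline: number, upliftAbs: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const p1 = baseline;
  const p2 = baseline + upliftAbs;
  const pAvg = (p1 + p2) / 2;
  const term =
    zAlpha * Math.sqrt(2 * pAvg * (1 - pAvg)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((term * term) / (upliftAbs * upliftAbs));
}

// Detecting a lift from a 3% to a 4% conversion rate takes roughly
// 5,300 visitors per variant - and every extra test splits traffic further.
console.log(sampleSizePerVariant(0.03, 0.01)); // ~5295
```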

As a result, we recommend putting the emphasis on continuous monitoring and using the holdback set. Visitors assigned to the holdback (aka the control group) will see the original version of your website. This way, you can always check how your personalizations perform in comparison to the unpersonalized experience.