Mastering A/B Testing: What to do Before, During & After Tests to Get More Wins — Review

Poli Dey Bhavsar
May 2, 2021

As a CRO expert, wouldn’t you like to get more value out of your experimentation efforts? Well, A/B testing can be complex and you can encounter pitfalls!

A/B testing tools have evolved over time and we have come a long way in terms of convenience. However, the success of your tests still largely depends on you: on how you use your most powerful tool, your brain, to design optimal testing strategies and draw the right conclusions from your tests.

Image: A/B Testing: Find your winner between Control vs. Challenger.

To get more wins out of your optimization efforts, stay tuned: I'm going to share my learnings from the A/B Testing Mastery course by Ton Wesseling at the CXL Institute. I'll cover how to plan before the test, how to execute during the test, and how to make sense of the result after the test.

Origin of A/B testing

A/B testing, as the name suggests, is all about testing or comparing two versions of something, A (the control) and B (the challenger), to find out which performs better. Did you know that its origin dates back to the early twentieth century? Yes, in the 1920s, statistician and biologist Ronald Fisher established the principles of A/B testing, then known as a controlled experiment.

This online testing method, however, has risen to popularity in the last couple of decades, as companies began to test almost everything: from headlines to product descriptions to website design to CTA copy and buttons.

Let’s begin!

How to plan an A/B test?

While planning your test, the most important question is: do you have enough data to run A/B tests?

In the course, Ton Wesseling teaches how to use the ROAR model he created to answer this question.

ROAR is an acronym for:

  • R = Risk
  • O = Optimization
  • A = Automation
  • R = Re-think

Image: ROAR Model to figure out if you have enough data to conduct A/B tests

As per the ROAR model, if you have fewer than 1,000 conversions, you should not run an A/B test. You can use this free A/B test size calculator to find the minimum sample size, the duration you need to run your test, and more.

If you have fewer than 1,000 conversions and still run an A/B test, it will be really hard to find a winner with so little data. Even if you find a winner, chances are high that it's not a real winner, i.e., it's a false positive.

Why so? Because at 1,000 conversions, your challenger needs to beat the control by about 15 percent: a 15% uplift is needed for it to be recognised as a winner.

As per the model, when you have 10,000 conversions, you need a 5 percent uplift to find a winning A/B test outcome.
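
As an illustration of that relationship, here is a small Python sketch (my own, not the course's calculator) that asks: for an assumed 5% baseline conversion rate, a 90% significance level and 80% power, what relative uplift can a test of a given size reliably detect? The exact numbers depend on the baseline rate and the thresholds you pick, but they land in the same ballpark as the 15% / 5% rule of thumb above.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

BASELINE = 0.05            # assumed baseline conversion rate (not from the article)
ALPHA, POWER = 0.10, 0.80  # 90% significance level, 80% power

def minimum_detectable_uplift(conversions_per_variant):
    """Smallest relative uplift detectable with this many conversions per variant."""
    visitors = conversions_per_variant / BASELINE  # visitors needed to collect them
    analysis = NormalIndPower()
    for uplift in (x / 1000 for x in range(1, 1001)):  # scan 0.1% .. 100%
        effect = proportion_effectsize(BASELINE * (1 + uplift), BASELINE)
        achieved = analysis.power(effect_size=effect, nobs1=visitors,
                                  alpha=ALPHA, alternative="two-sided")
        if achieved >= POWER:
            return uplift
    return None

for conversions in (1_000, 10_000):
    print(f"{conversions} conversions -> ~{minimum_detectable_uplift(conversions):.1%} uplift needed")
```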

Decide your goal metric

Next, it is highly important to have a goal metric that everyone in your company agrees on. Therefore, you need an overall evaluation criterion (OEC). For example, for a financial institution, the number of monthly active users of its banking solutions could be the overall evaluation criterion.

Remember Power & Significance rule of thumb

  • Power: Test on pages with high power, i.e., >80%, to avoid ending up with false negatives. In other words, if there is a real effect to be detected, you won't detect it when power is too low (see the small simulation after this list).
  • Significance: Test against a high enough significance level (90%), otherwise you'll end up with false positives, i.e., you'll declare winners when in reality there is no effect.
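
To see what too little power means in practice, here is a small simulation (my own sketch, with made-up traffic numbers and an assumed 5% baseline conversion rate): even when the challenger really is 10% better, an underpowered test misses it most of the time.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)
BASELINE, TRUE_UPLIFT, ALPHA = 0.05, 0.10, 0.10  # assumed rates, 90% significance

def share_of_tests_finding_the_winner(visitors_per_variant, runs=2000):
    """Simulate many A/B tests with a real 10% uplift and count detected winners."""
    wins = 0
    for _ in range(runs):
        conv_a = rng.binomial(visitors_per_variant, BASELINE)
        conv_b = rng.binomial(visitors_per_variant, BASELINE * (1 + TRUE_UPLIFT))
        _, p_value = proportions_ztest([conv_b, conv_a], [visitors_per_variant] * 2)
        wins += int(p_value < ALPHA and conv_b > conv_a)
    return wins / runs

for visitors in (5_000, 50_000, 200_000):
    rate = share_of_tests_finding_the_winner(visitors)
    print(f"{visitors} visitors per variant -> winner found in {rate:.0%} of runs")
```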

Generate user behaviour insights with the 6V research model

There’s no point in starting your A/B tests if you’re not trying to solve any problem. Problems that add friction in your customers’ journeys. Understanding user behaviour is crucial as it would help you with inputs to set your hypothesis. The 6Vs in this model are Value, Versus, View, Validated, Verified and Voice.

Image: 6V Research Model to get insights on user behaviour.

Once you are done with all your research, you’re all set to write your hypothesis.

Write a proper hypothesis

Wondering why you need a hypothesis? Well, a hypothesis gets everyone in your company aligned on why you're running the growth or research experiment in the first place. A proper hypothesis does three things:

  • Describes a problem
  • Proposes a solution
  • Predicts an outcome

See below for a format for setting a concrete hypothesis.

Image: Here’s how to set a proper hypothesis for your A/B test.

Here’s how this format has been used to generate a hypothesis for addressing self-efficacy among users.

Image: Example of a concrete hypothesis.

Prioritize your A/B tests using PIPE framework

There are popular frameworks like PIE, which stands for Potential, Importance, Ease, and ICE, an acronym for Impact, Confidence, Effort. However, Ton Wesseling recommends not falling into the trap of using these two frameworks, as they lack Power. Instead, he recommends the PIPE framework shown below:

Image: PIPE model includes POWER that the PIE model lacks.

Here’s an example of how PIPE prioritization looks like:

Image: An example of PIPE prioritization.
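
For a rough idea of how such a prioritization could be scored, here is a minimal sketch. It assumes PIPE rates each idea on Potential, Importance, Power and Ease on a 1-10 scale and averages them; the ideas and scores are made up, and the course material is the authoritative source for the exact scoring rules.

```python
from statistics import mean

# Hypothetical test ideas scored on (Potential, Importance, Power, Ease), 1-10 each.
ideas = {
    "Rewrite checkout CTA copy":     (7, 8, 9, 9),
    "Redesign product detail page":  (9, 8, 6, 3),
    "Add trust badges near payment": (6, 7, 9, 8),
}

# Rank ideas by their average PIPE score, highest first.
for idea, scores in sorted(ideas.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{mean(scores):.1f}  {idea}")
```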

How to execute an A/B test?

Now that you are done with all the planning and research, let's move on to designing, developing and quality-assuring your A/B test.

Salient points to keep in mind:

  • Have one challenger only
  • You can make more than one change, provided the changes are aligned with your hypothesis and your budget permits them
  • Consider the minimum detectable effect (MDE)
  • Don't use the WYSIWYG editor of your testing tool
  • Inject client-side code into the control as well, unless you're testing server-side
  • QA across browsers and devices
  • Calculate the length of your A/B test using the free calculator mentioned earlier in this article
  • Monitor your experiment and stop it if anything is broken
  • Check for Sample Ratio Mismatch (SRM) and stop the test immediately if you find one (see the SRM check sketch after this list)
  • If your company is losing money through the experiment, stop the test
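
For the SRM point above, here is a minimal check (my own sketch, not from the course): a chi-square goodness-of-fit test on the observed traffic split against the split you configured. A very small p-value suggests a Sample Ratio Mismatch, and the result of such a test shouldn't be trusted.

```python
from scipy.stats import chisquare

# Hypothetical visitor counts for a test that was configured as a 50/50 split.
visitors = [10_230, 9_770]           # control, challenger
expected = [sum(visitors) / 2] * 2   # what a 50/50 split should produce

stat, p_value = chisquare(visitors, f_exp=expected)
if p_value < 0.01:
    print(f"Possible SRM (p = {p_value:.4f}): stop the test and investigate.")
else:
    print(f"No SRM detected (p = {p_value:.4f}).")
```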

How to make sense of the result?

There can be two possible outcomes of your A/B test: either no winner (inconclusive) or a winner. Before we move on to the result, here are a few things to keep in mind:

  • Analyze the result in your site's Google Analytics or another analytics tool, rather than in the testing tool
  • Avoid sampling
  • Analyze users, not sessions
  • Compare users who converted rather than total conversions (one user can convert more than once)

Use this free A/B test result calculator.
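
If you'd rather do the arithmetic yourself, here is a rough stand-in for such a calculator (my own sketch with made-up numbers): a two-proportion z-test on users who converted per variant, read at the 90% significance level used earlier.

```python
from statsmodels.stats.proportion import proportions_ztest

users     = [24_310, 24_290]  # hypothetical users per variant (control, challenger)
converted = [1_215, 1_340]    # hypothetical users who converted in each variant

_, p_value = proportions_ztest(converted, users)
uplift = (converted[1] / users[1]) / (converted[0] / users[0]) - 1

print(f"relative uplift: {uplift:+.1%}, p-value: {p_value:.3f}")
print("winner" if p_value < 0.10 and uplift > 0 else "no winner (inconclusive)")
```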

You get a winner

  • Implement it as soon as possible
  • Dive into segments to understand who caused the effect, and use this insight to write a new hypothesis for your next experiment.

No winner

  • You can still implement the result
  • No need to dig into segments to find winners

So, that’s all for now. This blog is just an introduction to the A/B Testing Mastery course at the CXL Institute. I would recommend enrolling for the course to get a thorough knowledge about A/B testing.

Please feel free to drop your comments below. See you in the next blog. Until then stay tuned and stay safe!
