

Optimization Shorts: A/A Testing

What is A/A testing, and how can it be used in your campaigns?

You’re all familiar with A/B testing, but how much do you know about A/A testing? Where an A/B test compares two alternative experiences, an A/A test compares an experience against itself, and it is a useful method for gaining confidence in both your tests and your testing tool.

A Validation Tool

A/A testing compares a default experience to itself in a single-element test, using two randomly generated audiences. An A/A test helps to validate a testing tool and your tests in the following ways:

  • Ensuring accurate traffic allocation: verifying that assignment is in fact random by checking that the visitor count per experience is similar, within statistical bounds (see the sketch after this list)
  • Identifying the frequency of false positive results
  • Identifying the variance 'bubble' that helps you better interpret the results of other tests
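
As a sketch of the first point, the check below uses a two-sided binomial test to ask whether the visitor counts assigned to the two identical experiences are consistent with a fair 50/50 split. The counts are hypothetical stand-ins for whatever your testing tool reports.

```python
# Sketch: checking whether an A/A test's 50/50 traffic split looks random,
# using hypothetical visitor counts taken from your testing tool's report.
from scipy.stats import binomtest

visitors_a1 = 10_240   # hypothetical count assigned to the first "A" experience
visitors_a2 = 10_480   # hypothetical count assigned to the duplicate "A" experience
total = visitors_a1 + visitors_a2

# Under a correct allocator, each visitor lands in either branch with probability 0.5.
result = binomtest(visitors_a1, n=total, p=0.5)

print(f"Observed split: {visitors_a1 / total:.3f} vs expected 0.500")
print(f"p-value: {result.pvalue:.4f}")
# A very small p-value (e.g. below 0.05) would suggest the allocation is not
# behaving like a fair 50/50 split and is worth investigating.
```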

It is important to note that there will likely be some difference between the two experiences simply due to the randomness of the two sample groups, not a faulty tool or test. As the sample size increases, however, this difference should decrease, as the sketch below illustrates.
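
To make that concrete, here is a small simulation with an assumed 4% underlying conversion rate for both branches, showing how the observed gap between two identical experiences tends to shrink as the sample size grows.

```python
# Sketch: the gap between two identical experiences shrinks as sample size grows.
# Both branches share the same assumed true conversion rate of 4%.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.04  # assumed underlying conversion rate for both "A" branches

for n in (1_000, 10_000, 100_000):
    rate_a1 = rng.binomial(n, true_rate) / n
    rate_a2 = rng.binomial(n, true_rate) / n
    print(f"n={n:>7}: A1={rate_a1:.4f}  A2={rate_a2:.4f}  "
          f"difference={abs(rate_a1 - rate_a2):.4f}")
```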

A/A tests should not run at the same time as other tests, to avoid action attribution issues. In other words, if you are running an A/B or multivariate test on the same page as the A/A test, with the same success metrics, you could unintentionally attribute a visitor's actions to the A/A test instead of the other campaign running at the same time.

Understanding False Positives

In addition, an A/A test can be used to understand the frequency of false positive results. Simply put, if you use a 95% confidence level, then only about 1 out of every 20 A/A tests should show a false positive result. You can check this by counting how many of your A/A tests conclude with a significant difference in conversion rates. For example, if you ran 20 A/A campaigns, you would expect roughly one to conclude with a significant uplift; if noticeably more did, the false positive rate is potentially larger than the anticipated 5%.
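
The sketch below simulates many A/A tests with the same assumed conversion rate in both branches and checks each one with a two-proportion z-test at the 95% confidence level; since the branches are identical, every 'significant' result is by definition a false positive, and the rate should land near 5%.

```python
# Sketch: estimating how often an A/A test falsely reaches significance at a
# 95% confidence level. Both branches share the same assumed conversion rate,
# so any "significant" result counts as a false positive.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_rate, n_per_branch, n_tests = 0.04, 20_000, 2_000
false_positives = 0

for _ in range(n_tests):
    conv_a1 = rng.binomial(n_per_branch, true_rate)
    conv_a2 = rng.binomial(n_per_branch, true_rate)
    p1, p2 = conv_a1 / n_per_branch, conv_a2 / n_per_branch
    pooled = (conv_a1 + conv_a2) / (2 * n_per_branch)
    se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_branch)
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))   # two-sided p-value
    if p_value < 0.05:
        false_positives += 1

print(f"False positive rate: {false_positives / n_tests:.1%} (expected ~5%)")
```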

The Variance ‘Bubble’

A/A tests also help determine the variance 'bubble' around a conversion rate: the band of difference small enough to be treated as noise rather than a real effect. Differences that fall inside this bubble should not be read as significant increases or decreases, and knowing its size helps you interpret the results of future tests.

In addition to technical validity, A/A testing provides the 'bubble' around an uplift that you are willing to accept. The difference between conversion rates in an A/A test can be taken as the bubble for resolving uplifts in future campaigns. For example, if the bubble is 0.1% in the A/A test and a later campaign reports a 3% uplift, you could accept the true uplift as falling in the 2.9%-3.1% range. Conversely, if you pushed a test live and saw an uplift of only around 0.1%, you would know it sits inside the noise bubble and has not reached significance.
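
As a rough illustration of that arithmetic, the snippet below treats the conversion-rate gap from an A/A test as the noise bubble and uses it to bracket a later campaign's reported uplift. All rates here are hypothetical, and uplifts are treated as absolute percentage-point differences.

```python
# Sketch: using the conversion-rate gap from an A/A test as a noise "bubble"
# for judging later uplifts. All numbers are hypothetical.
def aa_bubble(rate_a1: float, rate_a2: float) -> float:
    """Absolute difference between the two identical experiences."""
    return abs(rate_a1 - rate_a2)

bubble = aa_bubble(0.0412, 0.0402)   # roughly 0.1 percentage points of noise
campaign_uplift = 0.03               # a later campaign reports a 3% uplift

# Treat the true uplift as sitting somewhere inside the bubble.
low, high = campaign_uplift - bubble, campaign_uplift + bubble
print(f"Bubble: ±{bubble:.3%}; accepted uplift range: {low:.2%} to {high:.2%}")

if campaign_uplift <= bubble:
    print("Uplift is inside the A/A noise bubble and should not be trusted.")
```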

With A/A testing you will not be able to tell when, or whether, a future test's results will reach significance, as that depends on the difference between the new variant's and the default's conversion rates. Therefore, any error or confidence level seen in A/A testing shouldn't be used as a reference for future tests; in an A/A test there simply should not be a significant difference in conversion rates.

This is the first in our new series, Optimization Shorts, where we strive to answer basic to complex questions about customer experience optimization in a concise way. If you’d like to submit an idea for an Optimization Short, do not hesitate to reach out via email or submit your question in the comments area!
