
When Does Multivariate Testing (MVT) Become an Absolute Necessity?

Dimitris Tsomokos
Senior Product Manager

There’s a zoo of testing methodologies out there, and it can be hard to understand what each methodology does and its intended purpose. In this post I will focus on two basic methodologies, A/B (or split) testing and multivariate testing (MVT), and clarify what each one does. Read on to find out when A/B testing is not enough and when you absolutely need to use MVT to optimize your site.

What are A/B & Multivariate Testing?

To begin, we need some terminology. Every page is composed of several elements. For instance, let’s use a typical landing page for illustration (below). The page includes a top navigation menu, a free-shipping message right under that menu, and a large rotating banner above the fold. Here is an image highlighting these different elements on the page.

[Image: landing page with the top navigation menu, free-shipping message, and rotating hero banner highlighted as distinct elements]

With Maxymiser’s Visual Campaign Builder tool you can select the area that you want to consider as a distinct element on the page. Once you have identified this element, you then provide alternative variants for it (that is, alternatives to the default variant, which is already being served on that page). The entirety of the content served to a visitor is referred to as an experience.

[Image: element variants combining into an experience]

So there we have it: Any page or set of pages, as in a conversion funnel, is an experience. Every experience can be broken down into elements. And for each element we can have several variants that we want to compare against default.

  • A/B Testing compares different experiences against the default; it doesn’t report on the underlying elements.
  • MVT compares different elements as well as different experiences against the default; it reports on both.
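To make the distinction concrete, here is a minimal Python sketch (the element names and variants are invented for illustration; this is not Maxymiser’s API) showing how experiences are built from element variants, and why an MVT’s test space grows multiplicatively with every element you add:

```python
from itertools import product

# Hypothetical elements on the landing page, each mapped to its variants
# (the first entry is the default that's already being served).
elements = {
    "top_nav":      ["default_nav", "compact_nav"],
    "shipping_msg": ["default_msg", "bold_msg", "no_msg"],
    "hero_banner":  ["default_banner", "static_banner"],
}

# An A/B test compares a few hand-picked whole experiences:
ab_experiences = [
    ("default_nav", "default_msg", "default_banner"),  # control
    ("compact_nav", "bold_msg", "static_banner"),      # challenger
]

# An MVT explores every combination of element variants:
mvt_experiences = list(product(*elements.values()))
print(len(ab_experiences))   # 2 experiences
print(len(mvt_experiences))  # 2 * 3 * 2 = 12 experiences
```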

How to Choose Between A/B and Multivariate Testing

Now let’s see how you can go about deciding which one to use. Of course, in either case, I’m assuming that whichever methodology you adopt is applied in a way that produces statistically sound results!

[Have a look at this blog post to make sure you’re on the right track: 5 Steps to Setting Up a Successful Test - Statistics Best Practices]
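As a flavor of what “statistically sound” means in practice, here is a minimal sketch of a standard two-proportion z-test, one common way to check whether a variant’s conversion rate genuinely beats the default (the counts are invented; the linked post covers the full set of best practices):

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: does variant B's conversion rate differ from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approx.
    return z, p_value

# Invented counts: 120/2400 conversions on default vs 150/2400 on variant.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # declare a winner only if p < your alpha
```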

Question #1: Do you want to give the page a complete facelift, or are you looking to make incremental improvements?

Depending on the situation you may want to make radical changes to the page, for example as part of a broader revamp. In this case your goal is to find the best direction to follow. You are looking for big, bold changes and you want to understand the impact that the sum total of those changes will make on the bottom line. Here the granularity of elements isn’t your primary concern; instead you want to get as quickly as possible to the overall best-performing experience (as well as the worst-performing one, which is also important!).

  • Tool to use: A/B Test. The simplest choice in this case is an A/B test.

On the other hand, you may want to make incremental changes. There are many situations where you need to understand the performance of each element, in addition to the overall performance of each experience. Perhaps you want to find out whether the top navigation bar should be made more prominent. Should it be on the left-hand side? Should the offers below the fold be made bigger, and should there be fewer of them? Perhaps a different set of hero banners would work better than the current ones?

[Image: multivariate testing illustration]

One could argue that each of these questions can be answered with a separate A/B test: you can run a test on the top navigation menu’s style and position, a test on the offers below the fold, and yet another test on the hero banner. But you immediately run into the following problem: Will you run these tests simultaneously or one after the other? (Doing tests one after another is a risky technique known as Wave Testing.) And if you choose to run them simultaneously, so as to avoid spurious seasonality effects from testing things at different times, then you have to make sure that a visitor who participates in one test doesn’t also participate in another. If a visitor participates in two or more tests at the same time, attributing that visitor’s actions to the particular experience seen in each test becomes very problematic. And of course, more tests to run means more resources committed to the lifecycle of these tests!
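For illustration, here is one common way to keep simultaneous tests mutually exclusive: deterministically hash each visitor into exactly one test. This is a sketch under my own assumptions (the test names and three-way split are hypothetical), not a description of Maxymiser’s implementation:

```python
import hashlib

CONCURRENT_TESTS = ["nav_test", "offers_test", "banner_test"]

def assigned_test(visitor_id: str) -> str:
    """Deterministically map a visitor to exactly one concurrent test."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return CONCURRENT_TESTS[int(digest, 16) % len(CONCURRENT_TESTS)]

# The same visitor always lands in the same single test, so attribution
# stays clean; the price is that each test sees only ~1/3 of the traffic.
print(assigned_test("visitor-42"))
```

Note the trade-off this makes explicit: exclusivity splits your traffic across the tests, so each one takes longer to reach statistical significance, which is part of the resource cost mentioned above.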

But let’s assume for a moment that you have the resources to run these tests, and that you design and run them correctly; you could still be missing out on a key piece of information. Depending on whether this piece of information is important to you, a multivariate test could be the only way forward. Read on to see exactly what I mean by this.

Question #2: Do you want to know the impact of each incremental change on its own, or the total impact of all changes?

So you’re now beyond the point of making big, bold changes, and you need to make incremental changes to your site. For instance, you’re running a test on the hero banner and the top navigation menu (Test 1). In addition to this test, you may also want to find the best size and style for the social media buttons at the top of the page. So you run an A/B test on these buttons (Test 2). The key point here is that you don’t expect the changes you make to the social media buttons (Test 2) to influence the outcome of the banner test (Test 1). Therefore, in this case, you primarily want to know the impact of each change on its own, independently of any other changes.

In statistical language, we say that we ignore any correlations that may exist between different elements on the page. In other words, we assume that the result of Test 1 does not depend on the result of Test 2. Whether or not we change the style of the social media buttons, the winner of the other test is going to be the same. We assume that the different elements on the page are not correlated with each other.

Here’s the crux of the issue: With A/B testing you can’t possibly know if this holds true. You can’t be sure that the winner from one test “plays nicely” with the winner from the other test. The end result is that, remarkably, “Winner + Winner” doesn’t always make a Winner.

  • When the statistical correlations between different elements are very small, then “Winner + Winner = Winner”
  • When the statistical correlations become significant, then “Winner + Winner” does NOT give the overall Winner
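To see why, consider a toy example; the conversion rates below are invented purely for illustration, but they show the failure mode. Each change wins its own isolated A/B test, yet combining the two “winners” performs worse than the default, and only a test covering all four combinations (an MVT) can reveal that:

```python
# Toy conversion rates (invented numbers) for each (banner, nav) combination.
rates = {
    ("old_banner", "old_nav"): 0.050,  # control
    ("new_banner", "old_nav"): 0.060,  # new banner alone: wins its A/B test
    ("old_banner", "new_nav"): 0.058,  # new nav alone: wins its A/B test
    ("new_banner", "new_nav"): 0.048,  # both together: worse than control
}

# Two isolated A/B tests would declare new_banner and new_nav the winners,
# yet their combination underperforms the default. Only testing all four
# combinations exposes the interaction.
best = max(rates, key=rates.get)
print("actual best experience:", best, rates[best])
print("'winner + winner' rate:", rates[("new_banner", "new_nav")])
```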

I have two crucial points to make here. First, statistical correlations are common; don’t assume they are something you can ignore. In our R&D work, looking back at the data available to us, we have found that as many as 82% of multivariate tests reveal correlations between elements. Second, statistical correlations really matter: ignore them and you may be declaring a winner which actually isn’t the best-performing experience.

So if you’re interested in understanding the overall, combined impact of multiple incremental changes on conversions or revenue, you have to use MVT. I’ve summarized these cases below.

  • Big, bold changes; you want the overall best (and worst) experience → A/B test
  • Incremental changes; you only need each change’s impact on its own → separate A/B tests
  • Incremental changes; you need the combined impact of all the changes together → MVT

I’ll leave you with one last example on this point. Let’s assume you’re doing a Funnel Test. By its very nature, such a test spans multiple pages (the elements, in this case). Now, you clearly want to know the overall winner of the test, i.e., you want to answer the question: “What’s the best-performing funnel?” Nevertheless, you probably also need to analyze the impact of changes on every page in the funnel. The only way you can have your cake and eat it too is with an MVT.
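Here is a final minimal sketch (with invented results, not real test data) of what having your cake and eating it too looks like: the same MVT dataset answers both the overall question, which funnel wins, and the per-page question, which variant wins on each page:

```python
from collections import defaultdict

# Invented results from a two-page funnel MVT:
# (page1_variant, page2_variant) -> (visitors, conversions)
results = {
    ("p1_a", "p2_a"): (1000, 50),
    ("p1_a", "p2_b"): (1000, 62),
    ("p1_b", "p2_a"): (1000, 55),
    ("p1_b", "p2_b"): (1000, 54),
}

# Overall view: the best-performing funnel as a whole.
overall = max(results, key=lambda k: results[k][1] / results[k][0])
print("best funnel:", overall)

# Per-element view: marginal conversion rate of each variant on each page.
marginals = defaultdict(lambda: [0, 0])
for (v1, v2), (visitors, conversions) in results.items():
    for variant in (v1, v2):
        marginals[variant][0] += visitors
        marginals[variant][1] += conversions

for variant, (n, c) in sorted(marginals.items()):
    print(variant, round(c / n, 4))
```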

I hope it’s clear by now that choosing the right testing methodology to optimize your site is critical. Ready to learn more? On Wednesday, January 28th at 2 pm ET, Trudi Miller, PhD, Maxymiser’s Technical Trainer, is hosting a webinar: “To A/B or Multivariate Testing? That is the Question!” Register to find out even more about how knowing when to apply which testing methodology can greatly increase the value of your test results.
