When running a multivariate testing (MVT) campaign, it’s tempting to include as many variants as possible for each element you’re analyzing. After all, the goal of running an optimization test is experimenting with variant combinations and seeing what sticks. Marketing teams have to be careful, though! Traditional MVT is full factorial, as opposed to partial factorial (or fractional factorial). Full factorial tests analyze every possible combination of variants, so the number of versions equals the product of each element’s variant count. This means you can quickly find yourself testing more versions than you have the traffic to sustain.
The more versions you test, the more thinly you must cut overall website traffic so it can be split between these versions; the thinner these cuts, the longer the campaign has to run before it provides trustworthy results. This isn’t ideal! Companies can avoid this trap by understanding the roles traffic and conversion rate play in multivariate testing, as well as the relationship these two factors share.
When I say “traffic,” I mean “unique visitor traffic”—which, no matter how thin it’s sliced, has to be distributed evenly between all the versions you’re testing. The rub here? There is a point at which the traffic slices for each version are cut so thin that no version will give you statistically significant data. (Statistical significance is, in a nutshell, a measure of how likely it is that an observed change in conversion rate is due not to chance but to a specific change you made to a specific variant.)
Put bluntly: Not all pages or sites are conducive to MVT. Some just don’t have enough unique visitors to support campaigns of four experiences or more. But how can a company know if it has enough traffic? While there’s no hard definition of how much traffic is “enough,” I’d like to (roughly) quote Thomas Edison: One of the great essentials to (marketing) achievement is common sense!
Say, for example, a company strives to improve a page that receives 40 unique visitors every day. It wants to test three elements, each of which has two variants. In all, the company must therefore test eight experiences (2 x 2 x 2 = 8). And since traffic has to be split evenly between all eight, each version will receive only five visitors daily for as long as the test runs! This business probably won’t receive valid, actionable insights from such a campaign, because each version is getting too few visitors to reveal any patterns in customer behavior. One way around this problem would be to let the test run longer; eventually, though, the law of diminishing returns could kick in, rendering the campaign a supermassive time and money black hole.
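The arithmetic above can be sketched in a few lines of Python. (This is a hypothetical helper for illustration, not a function from any testing platform.)

```python
from math import prod

def mvt_traffic_split(variants_per_element, daily_visitors):
    """Return the number of full-factorial experiences and the daily
    unique visitors each one receives under an even traffic split."""
    experiences = prod(variants_per_element)  # e.g. 2 x 2 x 2 = 8
    return experiences, daily_visitors / experiences

# Three elements with two variants each, 40 unique visitors per day:
print(mvt_traffic_split([2, 2, 2], 40))  # (8, 5.0)
```

Swapping in 200 daily visitors with the same three two-variant elements gives 25 visitors per experience, matching the scenario below.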
But what if this example page had 200 unique visitors per day? While that’s not a huge uptick, each experience would now receive 25 unique visitors daily. Twenty-five still isn’t a lot, but it could be enough for the test to reach statistical significance—especially if the page’s primary action conversion rate is high enough.
Primary Action Conversion Rate
No matter what you’re testing—clickthroughs, purchases, form fills, or something else—improving conversion rate is the name of the game, and it keeps MVT grounded. You may test a wide range of versions in MVT, but each one is ultimately focused on boosting conversions for the same primary action. And unless visitors convert on this primary action frequently enough, companies can’t reach justifiable conclusions from their tests. In this way, traffic and conversions are directly proportional: the more people visit the page, the more of them you can expect to convert.
However (and here’s where things get tricky), when it comes to a test’s ability to reach statistical significance, unique traffic and conversion rate are inversely proportional. That’s because the higher a site’s primary conversion rate, the less unique traffic is required for its test to yield valid data. The opposite is also true: The more unique traffic a site has, the lower its primary conversion rate can be for a test to yield valid data. It’s a little confusing, but it does make sense! Basically, the bigger the sample size (traffic), the fewer actions people have to take for companies to see trends form. And the more actions people take, the fewer people have to visit for trends to emerge.
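That inverse relationship can be made concrete with a rough sample-size sketch. The snippet below assumes a standard two-proportion z-test at roughly 95% confidence and 80% power; the function name and defaults are my own, not from any MVT tool.

```python
from math import sqrt, ceil

def visitors_needed(baseline_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate unique visitors needed per version to detect a given
    relative lift in conversion rate (~95% confidence, ~80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)

# Detecting the same 20% relative lift at two different baselines:
low_rate_page  = visitors_needed(0.02, 0.20)  # 2% baseline conversion
high_rate_page = visitors_needed(0.20, 0.20)  # 20% baseline conversion
print(low_rate_page > high_rate_page)  # True
```

Under these assumptions, the 2% page needs on the order of twenty thousand visitors per version, while the 20% page needs fewer than two thousand: the higher the conversion rate, the less traffic each version requires.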
Let’s imagine there’s a page on your site that gets low traffic—only 20 unique visitors per day—but enjoys high conversion: about 10 people convert daily. While 20 visitors isn’t a lot, the conversion rate would be high enough for you to notice if one of your tested versions converted only five people, and you’d probably conclude (with good reason!) that whatever you changed from the default page to the tested page should not go live.
While one of MVT’s main draws is how many variants it lets you analyze at once, don’t fall into the traffic trap! Before embarking, make sure you know the unique traffic and primary action conversion rate for the pages you want to test. This way, you ensure your campaigns give you statistically significant results you can act on quickly and confidently.