As analysts, we help our clients set up and run countless successful tests every month. And on our education-focused CXO Blog, we share insights on how you can do the same! This week, however, we’re going to get into a specific test to illustrate how some of these best practices can be put to use. Read on to explore our newest Mini Case Study.
The Client’s Objective
Our client wanted to test different promotional offers presented as homepage hero images. The goal was to see which promotion led to the biggest increase in conversion and revenue. Following our strategy for successful test setup, the client defined a clear hypothesis with well-defined metrics that would lead to actionable insights.
Setting Up the Test
We chose to run an A/B test as opposed to a multivariate test because we were only testing one element of change between the variants. With our help, the client decided to include three variants (or different test experiences): a control plus two challenger promotional offers, Challenger #1 and Challenger #2.
This particular test was evaluated by the effect it had on purchase conversion, revenue, sales quantity, and banner engagement (i.e., clicks). The test ran for two weeks, our minimum recommended time for achieving statistical significance. In this case, we reached statistical significance in 3-4 days, but we still let the test run the full two weeks to ensure reliable data.
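For readers curious what "reaching statistical significance" involves under the hood, a common approach for conversion metrics is a two-proportion z-test comparing a challenger's conversion rate against the control's. The sketch below uses invented traffic and conversion counts (the post doesn't disclose the client's actual numbers), so treat it as an illustration of the method, not the analysis we ran:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a / n_a : conversions and visitors for the control
    conv_b / n_b : conversions and visitors for the challenger
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts -- NOT the client's data
z, p = two_proportion_z_test(conv_a=120, n_a=10_000, conv_b=165, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference is unlikely to be chance, but as the case above shows, we still recommend letting a test run its full window to guard against early, unstable reads.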
After running the test until it reached statistical significance, the client found that Challenger #2 led to a 146.35% uplift in purchase conversion and a 169.21% uplift in sales quantity. For the hero image click metric, it was Challenger #1 that led to the greatest uplift, with an increase of 95.25%.
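Uplift figures like these are simply the relative change of a variant's rate over the control's. The snippet below shows the calculation with illustrative rates (the post reports the resulting uplifts, not the underlying conversion rates, so these inputs are invented):

```python
def uplift(variant_rate, control_rate):
    """Relative uplift of a variant over the control, as a percentage."""
    return (variant_rate - control_rate) / control_rate * 100

# Illustrative rates only: a control converting at 1.00% vs. a
# challenger at 2.46% yields a 146% uplift, similar in scale to
# the Challenger #2 result above.
print(f"{uplift(0.0246, 0.0100):.2f}%")
```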
While numbers offer clarity and insight, it’s important to also explain these insights qualitatively. In this case, visitor behaviors were significantly impacted by the two promotional offer banners. The Challenger #2 experience resulted in the greatest increase in purchase conversion and sales quantity. Even with the low action count, each distinct uplift was at or near a 90% confidence level.
As you can see, this relatively simple test produced significant uplifts across several important conversion metrics. If the client had simply changed the homepage banner without first testing the impact of the experience, it would have risked harming its conversion metrics and losing the crucial insights the test offered.
For many organizations the homepage can be a particularly contentious area to test: After all, it's the first, crucial step in the customer journey. Testing, however, can reduce contention through cold, hard data. By setting up a test for these promotional offers, our client stakeholders were able to prove the efficacy of their decision with concrete metrics that other decision makers in their company could easily understand.
We run tests to help companies confidently make positive changes to the user experience or customer journey. Therefore, each test comes with a set of recommendations from our analysts: for either a hard-coded change to the site or a decision to continue filtering traffic through a specific experience in the test environment.
In this case, our recommendation was to compare the additional revenue generated from the offer to the cost of the free product associated with the offer. If the cost is less than the revenue generated by the promotional offer, then the client should move to implement one or both of the offer experiences for all visitors. We would also suggest testing other free products to discover which of these drives the greatest purchase incentive.
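That break-even check is simple arithmetic: the promotion pays for itself when the incremental revenue it drives exceeds the cost of the product given away. A minimal sketch, with hypothetical figures (the client's actual revenue and unit costs aren't published here):

```python
def promo_is_profitable(incremental_revenue, free_units_given, unit_cost):
    """Return True when the extra revenue a promotion drives exceeds
    the cost of the free product it gives away."""
    giveaway_cost = free_units_given * unit_cost
    return incremental_revenue > giveaway_cost

# Hypothetical figures: $25,000 in incremental revenue vs. 3,000 free
# units at $5.00 each ($15,000 of giveaway cost)
print(promo_is_profitable(incremental_revenue=25_000,
                          free_units_given=3_000,
                          unit_cost=5.00))
```

In practice you'd also want to fold in margin rather than gross revenue, but the same comparison drives the roll-out decision either way.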
For other more in-depth case studies from our clients in retail, finance, travel, and other verticals, click here!