
3 Steps to More Responsible Reporting

Harrison Small
Principal Consultant for the Oracle Maxymiser Professional Services team

You may not realize how often you're testing. In fact, most of us don't. When you try on four suits at a department store before making a purchase, you're testing. When you attend a wine tasting before bringing home a bottle to share with friends and family, you're testing. When it comes to everyday tests like these, we tend to go with what "feels right." And there's nothing wrong with that. However, when it comes to testing digital content and design, you need more than just a gut feeling.

The good news is that reporting exists to remove "feelings" from your testing efforts. The people you report to don't care about your feelings (at least not when it comes to their business objectives).

With that in mind, here are some tips to take into account when reporting on digital content and design tests:

1. Ask the Right Questions (Before You Run the Test)

The desire to measure everything is a common pitfall when choosing the metrics that will determine the effectiveness of a test. After all, more data couldn't hurt. Right? Wrong.

Before deciding how you will measure success, ask yourself:

  • What is the goal of this test?
  • What content is being altered?
  • What are the indicators that your testing objectives have been met?

Simply put, anything that does not answer that last question should not be measured.

If the goal is to increase the number of people who initiate a registration, for example, it can be confusing and wasteful to report on data related to engagement with the navigation menu. Failing to ask the right questions before you test will result in a campaign that lacks focus and, consequently, fails to provide insights you can act on.
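
As a concrete illustration, one way to enforce this discipline is to write the test plan down with a single goal and only the metrics that answer that last question. The sketch below is a hypothetical plan in Python, not a Maxymiser configuration; every name in it is illustrative.

```python
# A minimal, hypothetical test plan. The rule: every metric listed must
# answer "have the testing objectives been met?" -- nothing else is tracked.
test_plan = {
    "goal": "increase registration initiations",
    "content_under_test": "registration call-to-action",
    "success_metrics": [
        "registration_initiation_rate",   # directly measures the goal
        "registration_completion_rate",   # confirms initiations are genuine
    ],
    # Deliberately excluded: navigation-menu clicks, time on page, scroll
    # depth -- more data, but none of it answers the question above.
}
```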

2. Let the Test Run Its Course

Testing can cause businesses a great deal of discomfort. After all, a brand's digital presence has massive value. With that said, it is important not to panic.

Let's say a business is trying to measure the difference in registration initiations between the default and a variant. After four days of limited traffic, they notice that the default is grossly outperforming the variant. In this scenario, do your absolute best to refrain from ending the test and redirecting all traffic to the default. We know the importance of sample size, but any number of factors could be influencing the early results, and it is essential to let the test run its course to see whether things even out or reverse over time. There's always the possibility of a comeback. Remaining committed to a test allows for the collection of meaningful, statistically relevant data.
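
A quick way to keep that commitment is to estimate, before launch, how much traffic the test needs. The sketch below is a standard two-proportion sample-size calculation in Python (normal approximation); the baseline rate and target lift are made-up numbers for illustration, and none of this reflects Maxymiser's internal methodology.

```python
import math
from scipy.stats import norm

def required_sample_size(p_baseline, p_variant, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect the difference between two
    conversion rates with a two-sided test at the given alpha and power."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for a 95% confidence level
    z_beta = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p_baseline + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_baseline * (1 - p_baseline)
                                      + p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / (p_variant - p_baseline) ** 2)

# Illustrative: a 4% baseline registration rate, hoping to detect a lift to 5%
print(required_sample_size(0.04, 0.05))  # roughly 6,750 visitors per variant
```

Four days of limited traffic rarely comes close to a number like that, which is exactly why early "winners" should be treated with suspicion.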

3. Report on Statistical Significance

A variant that has more registrations than the default is not sufficient justification for a change. But a variant with more registrations and a 95% confidence level, combined with other factors like a low margin of error on the conversion rate, is enough to serve as the foundation for scrapping the default.

When it comes to reporting metrics, statistical significance is one of the keys to uncovering actionable insights because it represents the reliability of the results that have been collected. A test with positive results but low confidence indicates that if the test were run again, the results might be drastically different. A test with high confidence, on the other hand, indicates that the observed difference is unlikely to be due to chance and would likely hold up on a re-run. Consequently, when reporting, only tests that have reached a statistically significant confidence level should serve as the catalyst for final action. Tests that have yet to reach that level are candidates for further alterations and follow-up tests.
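
For those who want to sanity-check a result themselves, the sketch below shows the generic two-proportion z-test behind a confidence level like the 95% figure above; the registration counts are hypothetical, and this is a textbook calculation rather than Maxymiser's reporting engine.

```python
from scipy.stats import norm

def confidence_level(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided confidence that the variant's conversion rate truly
    differs from the default's (pooled two-proportion z-test)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    return 1 - 2 * norm.sf(abs(z))  # 1 minus the two-sided p-value

# Hypothetical results: 400/10,000 registrations on the default,
# 470/10,000 on the variant
print(f"{confidence_level(400, 10_000, 470, 10_000):.1%}")  # ~98.5%, above 95%
```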

The Takeaway

Accurate and meaningful reporting is what gives tests their value. Failing to line up metrics with the purpose of the test, stopping tests early, and relying on insignificant data can all undermine a perfectly reasonable testing campaign. Adhering to the tips outlined above will allow testing practitioners to get the most out of their reporting data.

Harrison Small is a Digital Optimization Analyst on Maxymiser's Client Services Team. He works specifically for financial services organizations.
