Why do people unsubscribe from email lists? Or reply STOP to your SMS messages? Or turn off your apps’ push notifications? Irrelevant content and communications that arrive too frequently are unsubscribers’ top two complaints. Both situations fall well within marketers’ control, and could be avoided, if they properly tested elements of these campaigns ahead of time.
Whether or not marketers are acutely aware of the reasoning behind testing, most understand that they need to do it. Thinking and doing, however, are two different stories. Very few of the marketers I engage with have a formal testing plan. Those who do test often take a haphazard approach at best: their tests are plagued by problems in both test setup and hypothesis design, and in the end, the results aren’t as meaningful as they could be.
Again, most marketers understand they need to test. Where they fall short is in understanding how to test, what to test and how to keep track of their results. A test is only as good as the thought and creative preparation that you put into it. Here’s a simple, four-step framework that will get marketers from half-hearted testing to tying their efforts to actual—and achievable—goals.
1. Think it.
First and foremost, what’s the objective of the test? Are you trying to drive offline transactions? Encourage customers to sign up for a new offer? If you’re going to test something without a goal in mind, you shouldn’t bother.
In a typical A/B test, you’ll have a test group and a control group. Here’s where your team will have to brainstorm and develop a hypothesis. What do you think the test group will do differently? Maybe you expect the test to increase open rates or boost conversions. The key is to tie the hypothesis to the goal. If your ultimate goal is to drive in-store traffic, what elements should you test to accomplish that goal?
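The split itself is straightforward. Here’s a minimal sketch, in Python, of randomly assigning a subscriber list to test and control groups. The function name, the `test_fraction` parameter, and the fixed seed are illustrative choices, not part of any particular marketing platform:

```python
import random

def assign_groups(subscribers, test_fraction=0.5, seed=42):
    """Randomly split a list of subscriber IDs into (test, control).

    A fixed seed makes the split reproducible for later auditing.
    `test_fraction` controls how much of the list goes to the test group.
    """
    rng = random.Random(seed)
    shuffled = list(subscribers)
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * test_fraction)
    return shuffled[:cutoff], shuffled[cutoff:]

# Split 1,000 hypothetical subscriber IDs 50/50
test, control = assign_groups(range(1000))
```

Random assignment is what makes the comparison fair: any systematic difference between the groups afterward can be attributed to the change you made, not to how the groups were chosen.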
Common areas for testing include:

- Messaging: subject lines, primary message or headline, calls to action
- Design: layout, imagery and photography, fonts and typography, icons and graphics
- Segmentation: different sources of data such as demographics, transactional, behavioral, social, and preference data
- Channel: email, mobile, social, display, and/or web
- Timing
2. Do it.
Once you have a hypothesis, it’s time to execute the test. This is where most marketers fail. If you are new to testing, start simple by changing only one element at a time in each test. If you switch both the subject line and the call to action, you won’t be able to tell which variable drove the higher click-through rate of an email communication.
Some marketers have visions of grandeur, dreaming up massively complicated tests that are difficult to execute and even more difficult to measure. Unless you have the support of a large analytics organization behind you, stick with simple A/B tests rather than more complex multivariate testing.
3. Review it.
Even after marketers test, many ignore the results. They check a box for testing subject lines and think their job is done. It isn’t enough to pull the numbers and determine which variant won; you need to keep asking questions about those results: Do you need to run the test again to confirm the results can be replicated? Were the results statistically significant? Was the test well designed, or was it flawed or skewed in some way that unintentionally influenced the outcome? Perhaps most importantly, based on these results, what should you test next?
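For the significance question, a two-proportion z-test is one common way to check whether the difference in conversion rates between control and test could plausibly be chance. A minimal sketch using only Python’s standard library; the function name and the example counts are illustrative:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert at a significantly
    different rate than variant A? Returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. control: 200 conversions out of 10,000 sends;
#      test:    250 conversions out of 10,000 sends
z, p = two_proportion_z(200, 10_000, 250, 10_000)
significant = p < 0.05
```

If the p-value is above your threshold (0.05 is conventional), the honest answer to “which test won?” is “we can’t tell yet”; run it again with a larger audience before declaring a winner.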
Be sure to document the results of all your testing by capturing both the metrics and the creative, and share the findings with all parties involved so they too can benefit from new discoveries. Keep these tests in a repository that is accessible to everyone in your organization so they can be easily reviewed in the future. Without this kind of historical documentation, test results become the myths and legends of marketing long past: “…I remember testing out [concept A] in 2007 and we got a 13% lift in revenue...” or “…we’ve tested that before, and as I remember, it lost against the control…”
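A repository can start as simply as a shared spreadsheet or CSV file. As a sketch, the field names below are illustrative of the kind of metadata worth capturing for each test, not a prescribed schema:

```python
import csv
import os
from datetime import date

# Illustrative columns -- adapt to whatever your team actually tracks.
FIELDS = ["date", "test_name", "hypothesis", "metric", "lift_pct", "winner"]

def log_test_result(path, **result):
    """Append one test's outcome to a shared CSV log, writing the
    header row only if the file doesn't exist yet."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(result)

# Hypothetical entry for a subject-line test
log_test_result(
    "test_log.csv",
    date=str(date.today()),
    test_name="subject-line-emoji",
    hypothesis="Emoji in subject line lifts open rate",
    metric="open_rate",
    lift_pct=4.2,
    winner="test",
)
```

Even a log this simple answers the two questions that institutional memory can’t: what exactly did we test, and what exactly happened?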
4. Apply it.
You’ve invested time and energy into designing these tests. The last thing you want is for the lessons learned to be lost over the long term. As you test, you learn what works and, sometimes more importantly, what doesn’t. Don’t be discouraged by a hypothesis that fails; the only real failure is not learning something from the test. The more you invest in testing, the better informed your decisions about delivering relevant communications to your customers and subscribers, which drives increased revenue and higher engagement and will ultimately transform your program over time.
Even if you test every other campaign, the lessons you learn will be invaluable as you pursue your greater marketing goals. Testing is never finished. There are always new approaches to experiment with and previous tests to build upon. The trick is to make these tests worth your while by having a testing framework and plan in place. That way you'll incorporate each incremental finding in your relationship marketing strategies.