This week’s guest blog post is contributed by Elena Bond, Principal Analyst, Client Analytics, Oracle Data Cloud.
You’ve been there. Heart palpitations, evolving into a minor panic attack. You think, “I worked so hard to set up this A/B test and gather all of the creative assets, and the client is really counting on these results.” How could it possibly be flat?
Well, the good news is that you’re not alone.
We help media agencies, creative agencies and advertisers derive insights via causal measurement of the offline sales impact of digital campaigns.
To help shed some light on this challenge, we’ve gathered some of the most common reasons why A/B tests in digital or walled-garden campaigns fail, along with recommendations on how to combat them.
- You’re judging an A/B test by its overall results. If you’re A/B testing something, you probably expect A to win out over B. But looking only at the aggregated results is misleading: the losing cell drags down the average of the whole test. Instead, evaluate how scaling the winner could impact the business moving forward.
- Your scale is insufficient. When you are considering launching a test, creative or otherwise, you need scale. And I don’t mean blasting all 123MM US households; I mean scale of the product being measured. To hit sample requirements in at least two cells instead of one, use broader-penetration brands. If you have a broad portfolio, use the larger brands for testing and validate the results on the smaller ones.
- The frequency is too low. Brands grow by increasing household penetration in the long run, but for this test, are you trying to home in on a particular outcome (e.g. creative)? Consider the ability to control frequency when selecting a publisher partner for your test. If you can’t control frequency directly, set frequency caps at levels appropriate for the audience and review how impressions are delivered throughout the campaign.
- The audience wasn’t right for the advertised product. Question: Why is a target of Moms, aged 25-54, wrong for a non-toxic dishwasher detergent? Answer: Not all US households have dishwashers. While targeting should maximize reach, make sure it is relevant reach. You'll be unable to read A or B if neither of them performs.
- Creative failed to break through. This almost goes without saying, but creatives that are dark and poorly branded, that don’t pass the “thumb test” for mobile scrolling, or that are built to drive online engagement (think online coupon) are not likely to drive an offline lift. The dollar lift is crucial to reading the test results. Where possible, consider pre-testing the creative. Remember, pre-testing means that creative assets need to be available well in advance of the campaign launch, but it is better to test than to execute without impact.
- You are testing too many things. Keep it simple! If you’re testing creatives, keep the frequency and audience profiles identical across cells. If you don’t, you won’t know whether it was the creative or the audience that failed. Not even diagnostics can help, because an undetectable interaction effect lurks beneath the surface.
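To make the first pitfall concrete, here is a minimal Python sketch with entirely made-up numbers: a clearly winning creative cell and a losing one can pool into a nearly flat aggregate read.

```python
# Hypothetical illustration: aggregate lift can look "flat" even when one
# cell clearly wins. All numbers below are invented for demonstration.

cells = {
    "A (new creative)": {"exposed_sales": 1.12, "control_sales": 1.00},
    "B (old creative)": {"exposed_sales": 0.92, "control_sales": 1.00},
}

def lift(exposed, control):
    """Percent sales lift of the exposed group vs. its matched control."""
    return (exposed - control) / control * 100

for name, c in cells.items():
    print(f"{name}: {lift(c['exposed_sales'], c['control_sales']):+.1f}% lift")

# Pooling both cells (assuming equal spend) averages +12% against -8%:
avg = (lift(1.12, 1.00) + lift(0.92, 1.00)) / 2
print(f"Pooled test: {avg:+.1f}% lift")  # looks nearly flat
```

Reading only the pooled number would bury a 12-point winner, which is why the recommendation above is to evaluate what scaling the winning cell would do for the business.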
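On scale: the sample a test needs grows sharply as brand penetration falls. The sketch below uses a standard two-proportion sample-size approximation as a back-of-the-envelope illustration (this is not an Oracle Data Cloud formula, and the base rates and lift are hypothetical).

```python
import math

def households_per_cell(base_rate, expected_lift,
                        z_alpha=1.96, z_beta=0.84):
    """Approximate households needed per test cell to detect a relative
    lift in purchase rate at ~95% confidence and ~80% power, using the
    classic two-proportion sample-size formula."""
    p1 = base_rate                       # control purchase rate
    p2 = base_rate * (1 + expected_lift) # exposed purchase rate
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# A 2%-penetration brand needs an order of magnitude more households than
# a 20%-penetration brand to detect the same 5% relative lift:
print(households_per_cell(0.02, 0.05))
print(households_per_cell(0.20, 0.05))
```

This is the arithmetic behind "use broader-penetration brands": at low base rates, hitting sample requirements in two cells instead of one can be out of reach.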
Have an upcoming test that you’d like to measure and want some advice on how to set it up for success? Contact your client analyst or The Data Hotline for more information. (What's The Data Hotline?)
About Elena Bond
Elena Bond is an analytics professional with 11+ years of experience dedicated to translating the sophistication and power of data science into actionable business insights. Her current role at Oracle Data Cloud focuses on helping consumer goods advertisers execute successful digital marketing campaigns by measuring and identifying optimization levers, as well as designing and measuring against digital learning agendas.
Stay up to date with all the latest in data-driven news by following @OracleDataCloud on Twitter and Facebook!