
Oracle Data Cloud Blog

CRM onboarding: Don’t dismiss accuracy & performance dimensions

This week’s guest blog post is contributed by Curt Blattner, Head of OnRamp Solutions, Oracle Data Cloud.

Curt Blattner has led OnRamp Solutions at Oracle Data Cloud for the past four years, helping data-driven marketers accurately connect with their consumers online using high-quality data sets. Before Oracle, Curt was VP, Strategic Partnerships at Connection Engine and was responsible for digital data strategy at Merkle, the largest independent U.S. CRM agency.

The CRM onboarding market has grown rapidly in sophistication. Just two short years ago, head-to-head tests of different onboarding partners were relatively infrequent. Now, these tests are a routine part of the selection process.

Despite the good intentions of these testers, evaluating an onboarding provider on reach alone (typically measured by the number of 30-day active cookies associated with your audience) isn’t enough.

The downside to only testing reach

When tests measure only reach, onboarding providers are incented to deliver as many cookies as possible, being ‘loose’ with their match logic to maximize the cookie count. The problem: you can end up with a cookie universe where 40-50 percent or more of your cookies don’t represent the individuals and households you provided for onboarding.

For us, accuracy and performance are paramount. Over 80 percent of the clients we represent also utilize Oracle for measurement of their digital marketing campaigns.

What we find won’t surprise anyone: cookies that target the wrong households perform poorly. In fact, they perform significantly worse than your core target audience, so results suffer and marketers end up wasting their display budget on poor-performing users.
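To make the stakes concrete, here is a back-of-the-envelope sketch in Python, with purely hypothetical budget numbers: if the 40-50 percent mismatch rate above holds and impressions are spread evenly across the onboarded cookie universe, a comparable share of the display budget lands on the wrong households.

```python
# Back-of-the-envelope sketch with purely hypothetical numbers: if a given share of
# onboarded cookies don't map back to the individuals/households you provided, and
# impressions are spread evenly across the cookie universe, roughly that share of
# the display budget lands on the wrong households.

def wasted_spend(total_budget: float, mismatched_cookie_share: float) -> float:
    """Budget spent against cookies that don't represent the onboarded audience."""
    return total_budget * mismatched_cookie_share

budget = 100_000.0  # hypothetical display budget
for share in (0.10, 0.40, 0.50):  # 40-50 percent is the mismatch range cited above
    print(f"{share:.0%} mismatched cookies -> ${wasted_spend(budget, share):,.0f} "
          f"of a ${budget:,.0f} budget aimed at the wrong households")
```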

How to overcome this challenge

We’ve been encouraging the market to include an accuracy or performance dimension in its testing for some time. Recently, a major multi-channel retailer created an exceptionally thorough and rigorous “live” test comparing onboarding providers not only on reach, but also on the dimensions of accuracy and performance.

They compared how many conversions each provider generated, the media spend required to generate those conversions, and the accuracy of the converting audience when matched back to the initial offline target audience at an individual name-and-address level.
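As a rough illustration of that comparison (not the retailer’s actual methodology), the sketch below computes the three dimensions per provider: conversion volume, cost per conversion, and the share of converters whose name and address matched back to the original offline audience. The provider names, spend figures, and field names are invented for the example.

```python
# Illustrative sketch only: given per-provider campaign results and a match of
# converters back to the original offline file, compute the three comparison
# dimensions described above. All numbers and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class ProviderResult:
    name: str
    media_spend: float        # total display spend for this provider's audience
    conversions: int          # conversions attributed to this provider
    matched_converters: int   # converters whose name/address matched the offline file

def compare_providers(results: list[ProviderResult]) -> None:
    """Print conversions, cost per conversion, and conversion-audience accuracy."""
    for r in results:
        cost_per_conversion = r.media_spend / r.conversions if r.conversions else float("inf")
        accuracy = r.matched_converters / r.conversions if r.conversions else 0.0
        print(f"{r.name}: {r.conversions} conversions, "
              f"${cost_per_conversion:,.2f} per conversion, "
              f"{accuracy:.1%} of converters matched the offline target audience")

# Hypothetical comparison of two unnamed providers
compare_providers([
    ProviderResult("Provider A", media_spend=50_000, conversions=1_000, matched_converters=820),
    ProviderResult("Provider B", media_spend=50_000, conversions=1_200, matched_converters=540),
])
```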

This was exactly the type of rigorous test we encourage: a single “live” test that evaluates these solutions across multiple dimensions. And while it is harder for vendors to build solutions that must optimize across multiple, often competing dimensions, the end result is what marketers need: the broadest possible reach without significantly compromising accuracy or performance.

There are multiple approaches to assessing the accuracy of your campaign, including collecting personally identifiable information (PII) directly from the users exposed to your ad (for example, through registration to receive a special offer). You can also compare the PII of online conversions against the original PII audience.
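As a minimal sketch of that second approach, the snippet below assumes the matching key is an email address, normalizes and hashes it (SHA-256 hashing is an assumption for illustration, not a statement about any provider’s method), and reports what share of online converters were actually in the audience that was originally onboarded.

```python
# Minimal sketch, assuming email is the PII matching key: hash the emails from
# online conversions and from the original offline audience, then report what
# share of converters were actually in the audience you onboarded.

import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase/trim the email and hash it so raw PII never has to be compared directly."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def conversion_accuracy(original_audience_emails: list[str], converter_emails: list[str]) -> float:
    """Share of converters whose hashed email appears in the original onboarded audience."""
    audience_hashes = {normalize_and_hash(e) for e in original_audience_emails}
    if not converter_emails:
        return 0.0
    matched = sum(1 for e in converter_emails if normalize_and_hash(e) in audience_hashes)
    return matched / len(converter_emails)

# Hypothetical example: 2 of 3 converters were in the original audience
print(conversion_accuracy(
    ["ann@example.com", "bob@example.com", "cho@example.com"],
    ["Ann@Example.com ", "dan@example.com", "bob@example.com"],
))  # -> 0.666...
```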

Here’s the bottom line on reach

It’s great that so many advertisers understand the value of comparing onboarding providers and running head-to-head tests. Just don’t forget that reach is only one dimension. We think that a test that also includes an accuracy/performance dimension is best practice.

If onboarders are incented to care about only that one dimension, it’s advertisers who will pay the price, with wasted media spend and poor ROI on their digital marketing campaigns.


 
