
Oracle Data Cloud Blog

Smart brand safety means more than a blacklist

This week’s post is contributed by Victor Gamez, Content Marketing Manager, Moat.

Our last blog post on brand safety explained why “brand-safe” must be defined by each individual brand.

Different audiences and values mean different ideas of where it’s appropriate for your brand to appear. Specific news events can be more problematic for an industry—car companies want to avoid news of auto accidents, for instance.

One of the most common tools of brand safety is a blacklist, which documents all the domains a brand has definitively ruled out as advertising channels.

But to stay brand-safe while maintaining reach—a key value of digital advertising—you need more than a blacklist.

Consider these hypothetical headlines:


Both could appear on the same domain and both describe violence—but one is about the most-watched TV series in recent memory and the other is a sobering news event.

For some advertisers, the former is on-brand, but not the latter.

Many publications cover a similarly wide range of topics, which means this issue crops up all over the web.

In other words, blacklisting a whole site may limit valuable reach, but going through every directory of every website is unsustainable.

This conundrum is magnified on large social media platforms that rely on user-generated content (UGC), where audiences are vast but the volume of content is far too great for comprehensive manual review.

In addition, easy ways to scan metadata on UGC video platforms—like the number of flags a channel has received—are rare.

Solutions for brands

Meeting this challenge requires technology that can stop an impression when it detects unsafe content (topics such as drugs, terrorism, or hate speech), using full-page analysis of page text and metadata and, on video platforms, visual identification.

When it comes to the open web, the industry is making headway. Tools such as Grapeshot scan the keywords on a page, using a probabilistic method to analyze webpage content in real time.
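To make the idea concrete, here is a rough Python sketch of that kind of keyword scoring. The categories, keyword lists, and threshold are illustrative assumptions, not Grapeshot's actual model.

import re
from collections import Counter

# Hypothetical keyword lists a brand might treat as sensitive.
SENSITIVE_CATEGORIES = {
    "violence": {"shooting", "attack", "assault", "casualties"},
    "drugs": {"overdose", "narcotics", "trafficking"},
}

def tokenize(text: str) -> Counter:
    """Lowercase the page text and count word occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def score_page(text: str) -> dict:
    """Return, for each category, the share of page words matching its keywords."""
    counts = tokenize(text)
    total = sum(counts.values()) or 1
    return {
        category: sum(counts[word] for word in keywords) / total
        for category, keywords in SENSITIVE_CATEGORIES.items()
    }

def is_brand_safe(text: str, threshold: float = 0.01) -> bool:
    """Block the impression if any sensitive category crosses the threshold."""
    return all(share < threshold for share in score_page(text).values())

page = "Police reported a shooting with several casualties downtown last night."
print(score_page(page))
print("Serve the ad?", is_brand_safe(page))

A real classifier weighs context and co-occurring terms rather than raw counts, but the decision flow is the same: score the page, then allow or block the impression before it is served.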

Take Vice as an example of that in action. Vice Media built a tool with Grapeshot to ensure brand safety for its advertisers, Digiday reported.

Vice’s properties boast a large readership and cover many topics, ranging from potentially brand-sensitive material, such as marijuana use and news about racial hate groups, to lighter fare like health recipes and film reviews.

The Vice and Grapeshot tool scans the text of an article to determine whether it covers subject matter brands might want to avoid. The most brand-safety-conscious advertisers can then run only on content sorted as uncontroversial.

Simultaneously, the tool catalogs other subject matter—for example, sports or entertainment—to sort readers into new segments and start original conversations with advertisers.
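A simplified sketch of that cataloging step might look like the following; the segment names and keyword lists are hypothetical and stand in for whatever taxonomy a publisher and its advertisers agree on.

import re

# Hypothetical segment taxonomy for sorting non-sensitive articles.
SEGMENT_KEYWORDS = {
    "sports": {"league", "playoff", "coach", "championship"},
    "entertainment": {"film", "premiere", "soundtrack", "director"},
}

def tag_segments(article_text: str) -> list:
    """Return every segment whose keywords appear in the article text."""
    words = set(re.findall(r"[a-z]+", article_text.lower()))
    return [name for name, keywords in SEGMENT_KEYWORDS.items() if words & keywords]

print(tag_segments("The director discussed the film and its soundtrack ahead of the premiere."))

Articles tagged this way can be packaged into audience segments advertisers buy against, while anything that also trips the sensitive-category scan stays off the plan.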

Off the open web and on closed video UGC platforms, however, there is still an opportunity to develop ways to make content more machine-readable.

Among the types of information marketers and their partners are seeking from UGC platforms to create managed brand-safety whitelists and blacklists are the full URL where a video can be found and the number of flag reports on that video.

Access to this metadata will make it easier to limit brand-safety risk on video UGC platforms and to better track where impressions are delivered. Similar metadata from text-based pages on social platforms—like group pages—can further limit risk.
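As a rough illustration of how a buyer might act on that metadata once it is exposed, here is a sketch that combines the full video URL and flag count with managed lists. The list entries, URL scheme, and flag threshold are all assumptions for the example.

from urllib.parse import urlparse

# Hypothetical managed lists keyed by host plus channel path.
BLOCKED_CHANNELS = {"example-ugc.com/channel/blocked-demo"}
ALLOWED_CHANNELS = {"example-ugc.com/channel/trusted-demo"}

def channel_key(video_url: str) -> str:
    """Reduce a full video URL to the host-plus-channel key used in the lists."""
    parsed = urlparse(video_url)
    return parsed.netloc + "/".join(parsed.path.split("/")[:3])

def should_serve(video_url: str, flag_count: int, max_flags: int = 3) -> bool:
    """Decide whether to deliver an impression against a UGC video."""
    key = channel_key(video_url)
    if key in BLOCKED_CHANNELS:
        return False                   # explicitly blacklisted channel
    if key in ALLOWED_CHANNELS:
        return True                    # explicitly whitelisted channel
    return flag_count <= max_flags     # unknown channel: fall back to the flag signal

print(should_serve("https://example-ugc.com/channel/blocked-demo/video123", flag_count=0))
print(should_serve("https://example-ugc.com/channel/new-uploader/video456", flag_count=7))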

Technology is also being developed to quickly scan content and recognize it as extremist. For instance, eGLYPH is a program recently announced by the Counter Extremism Project and developed by Dr. Hany Farid, a professor of computer science at Dartmouth College. It extracts a distinct fingerprint from an image, video, or audio file deemed extremist and automatically finds copies uploaded to a social media platform.

“eGLYPH allows companies to quickly and accurately remove content that violates their terms of service,” said Farid. “The (at times) difficult decision of what does and what does not violate terms of services needs to be made once and then that decision can be applied easily, accurately and consistently.”

It’s the type of technology that can, among other benefits, scale brand safety to new levels by enabling platforms to surface and delete problematic videos before advertisers ever have the chance to be associated with them.
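Conceptually, the fingerprint-matching workflow behind that approach looks something like the sketch below. eGLYPH's real fingerprints are robust hashes designed to survive re-encoding and small edits; the exact-match content hash here is only a stand-in for that idea, and the fingerprint database is assumed.

import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Compute a fingerprint for a media file (here, a plain content hash)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Fingerprints of media already reviewed and deemed extremist (assumed database).
known_extremist_fingerprints = set()

def register_extremist_media(path: Path) -> None:
    """Fingerprint reviewed media once so the decision applies to every future upload."""
    known_extremist_fingerprints.add(fingerprint(path))

def flag_upload(upload_path: Path) -> bool:
    """Return True if an uploaded file matches a known fingerprint and should be removed."""
    return fingerprint(upload_path) in known_extremist_fingerprints

The review decision is made once, at registration time; every later upload is then checked automatically against the stored fingerprints, which is the consistency Farid describes above.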

We’ll return next time with another post about protecting brands from fake news. In the meantime, register for the upcoming December 5 webinar from Moat, New Approaches to Brand Safety.

This is the third part in an ongoing series on brand safety. Read the previous posts here: Brand safety isn’t new – but it’s more complicated than ever and Why brand safety is subjective.

Stay up to date with all the latest in data-driven news by following @OracleDataCloud on Twitter and Facebook.

Stay in the loop with Moat by following on Twitter, LinkedIn, and Facebook.

About Victor Gamez

Victor is the content marketing manager at Moat, an analytics and advertising measurement firm in the Oracle Data Cloud.

Prior to Moat, Victor provided guidance to marketing executives through original research at Percolate.
