
Oracle Data Cloud Blog

Combatting fake news requires real experts

This week’s post is contributed by Victor Gamez, Content Marketing Manager, Oracle Data Cloud.

In a previous blog, I explained why a simple blacklist isn’t enough for smart, scalable brand safety.

A more strategic solution to this challenge is to use technology that can detect unsafe subject matter in text and in video, helping marketers strike a better balance between online reach and reputational risk.

But as effective as that technology might be, it has unfortunate limits.

In recent months, a new type of unsafe content has emerged that isn’t defined by its subject matter: fake news.

The graph below shows that Google searches for “fake news” were virtually nonexistent until the weeks leading up to the 2016 United States presidential election.

Before we go on, I believe it’s important to define “fake news,” which quickly became politicized and is frequently misused.

For our purposes, fake news doesn’t include satire, disagreeable opinions, or even unintentionally inaccurate news articles. Instead, it is content that is intentionally made up and passed off as real.

Not fake news: satire, disagreeable editorials, unintentional misreporting

Fake news: fabricated content that is purportedly true

Fake news is often characterized as politically motivated, but that isn’t always the case. Financial motivations exist as well, considering how easy it is to make money with fake news.

For an illustration, see this WIRED article about a fabricated story claiming a U.S. presidential candidate slapped a potential voter.

It never happened, but the Macedonian teenager who created the story made a quick couple hundred dollars within a month of publishing it, and it was likely just one of many he had out there.

And as another article from WIRED indicates, it’s simple to keep spreading fake news even after a site is shut down. “A fake news writer might publish a story, get caught, and get shut down—then copy the same story to 10 other sites and start the cycle all over again,” according to writer Davey Alba.

The fastest way to stop the cycle is to systematically cut ad funding to fake news articles.

Technology exists (machine learning techniques, for instance) that can quickly scan the surface web for pages similar to ones already classified as “fake news” content.
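To make that idea concrete, here is a minimal, purely illustrative sketch of that kind of similarity scan: it compares a new page’s text against a couple of hypothetical pages already labeled as fake news, using TF-IDF and cosine similarity. The labeled examples, threshold, and function name are all assumptions for illustration; this is not how Moat or OBS actually classifies pages.

```python
# Illustrative sketch only: flag a page whose wording closely matches pages
# already labeled as fake news. The examples and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_fake_pages = [
    "candidate slapped a voter at rally, witnesses say",    # hypothetical labeled example
    "shocking secret cure doctors don't want you to know",  # hypothetical labeled example
]

def looks_like_known_fake(new_page_text, threshold=0.4):
    """Return True if the new page closely resembles an already-classified fake news page."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(known_fake_pages + [new_page_text])
    # Compare the new page (last row) against every labeled example.
    scores = cosine_similarity(matrix[-1], matrix[:-1])
    return scores.max() >= threshold

print(looks_like_known_fake("witnesses say candidate slapped a voter at a rally"))
```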

But as we’ve covered before, fake news has less to do with the content itself than with the intentions of the content creator, meaning technology is limited in what it can do to identify it.

Instead, we must rely on human judgment and fact checking—skill sets most marketers and technology companies aren’t equipped with today.

The solution lies in trusting experts with those skills and training.

That is why Moat spearheaded the Open Brand Safety (OBS) framework, an initiative with representatives from academia and the modern digital media industry. The ultimate goal of OBS is to cut off funding to fake news and extremist content.

That begins with an unbiased professional lens for reviewing online content for extremism and fake news. OBS works with organizations like Storyful, a social news and insights company that specializes in verifying and contextualizing social content and conversations, and the City University of New York’s Graduate School of Journalism, which is actively working to identify fake news sites.

Together, OBS will identify domains, URLs, and content online as fake news or extremist in nature, and then share that list for collaborators to access, use, and contribute to.

It can form the foundation for managed blacklists and whitelists—meaning brands can use a trusted source to avoid fake news and terrorist content and present themselves as intended.

In the coming months, Moat will be launching a new metric: Potentially False Information.

It will report how often impressions appear on a domain OBS flagged as fake news. 
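As a rough illustration of what that kind of reporting involves, the sketch below tallies the share of measured impressions served on domains that appear on an OBS-style flagged list. The domain names and impression counts are invented, and the calculation shown is only an assumption about how such a rate could be computed, not Moat’s actual metric definition.

```python
# Hypothetical example: share of impressions served on flagged domains.
flagged_domains = {"totally-real-news.example", "breaking-truth.example"}

impressions = [
    {"domain": "totally-real-news.example", "count": 1200},
    {"domain": "respected-publisher.example", "count": 48800},
]

flagged = sum(i["count"] for i in impressions if i["domain"] in flagged_domains)
total = sum(i["count"] for i in impressions)

print(f"Share of impressions on flagged domains: {flagged / total:.1%}")  # 2.4%
```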

We believe that transparency into ongoing brand safety efforts will help the industry move closer to a comprehensive solution for making sure brands grab consumer attention in the right environment.

For more tips on brand safety and fake news, view Moat’s most recent webinar, New Approaches to Brand Safety.

Stay up to date with the latest in data-driven news by following @OracleDataCloud on Twitter and Facebook.

Keep in the loop by following Moat on Twitter, LinkedIn, and Facebook.

About Victor Gamez

Victor is the content marketing manager at Moat, an analytics and advertising measurement firm in the Oracle Data Cloud.

Prior to Moat, Victor provided guidance to marketing executives through original research at Percolate.
