Invalid traffic and ad fraud in advertising: Interview with an expert

May 26, 2020 | 6 minute read
Kori Wallace
Content Manager

A vital mechanism of Moat by Oracle Data Cloud is its invalid traffic (IVT) detection capabilities. Before you can capitalize on the advanced attention metrics and measure the impact of your advertising, the first step is ensuring your investments were seen by an actual human.

To dive a little deeper into the topic, we sat down with Sam Mansour, a member of the Moat team of experts who monitor and calibrate its constantly evolving detection technology.

In the following interview, Sam distills complicated topics, giving insight into Moat’s methodology, the difference between general IVT and fraud, and what suspicious activity the team is tracking right now.

Want to learn even more about IVT and the work Sam does with his team at Moat? Check out this on-demand webinar:

Bots and Bogus Impressions: Modern IVT and Safeguarding Spend


What are some examples of clear indicators of nonhuman traffic? For instance, what are some of the most basic detections that Moat technology flags as IVT?

Sam Mansour: There are multiple ways to detect nonhuman traffic, and we are constantly evolving our techniques to keep pace with changes in technology and, in some cases, active adversaries. The most basic detections involve simple filtration of known spiders and bots, as well as detecting traffic originating from a data center.
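The basic filtration Sam describes can be sketched as two simple checks: a match against a list of self-declaring crawler user agents, and an IP lookup against known data-center ranges. This is only an illustration of the idea; the signature list and IP range below are placeholders, not Moat's actual implementation (real services use resources like the IAB/ABC International Spiders & Bots List and commercial data-center IP databases).

```python
import ipaddress

# Illustrative substrings from self-declaring crawler user agents (GIVT).
KNOWN_BOT_SIGNATURES = ["googlebot", "bingbot", "ahrefsbot", "python-requests"]

# Illustrative data-center range; real lists cover cloud and hosting providers.
DATA_CENTER_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def is_givt(user_agent: str, ip: str) -> bool:
    """Flag an impression as General Invalid Traffic (GIVT) if the user
    agent matches a known bot signature or the IP falls in a data-center
    range. Toy example of the filtration concept, not production logic."""
    ua = user_agent.lower()
    if any(sig in ua for sig in KNOWN_BOT_SIGNATURES):
        return True
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATA_CENTER_RANGES)
```

A declared crawler is caught by the user-agent check even from a residential IP, while an ordinary browser running inside a data center is caught by the IP check.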

Moat has a long history of measuring human engagement, so we have a fairly good truth set for what human behavior looks like online. While sophisticated fraudsters code their bots to mimic human behavior, that is very hard to do accurately. All bots have tells that eventually give them away.
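One class of "tells" is timing. As a toy illustration (this is not Moat's detection logic, and the threshold is an assumption), naive bots often fire interaction events on a near-perfect clock, while human activity shows natural jitter:

```python
from statistics import pstdev

def looks_scripted(event_intervals_ms: list[float], min_jitter_ms: float = 5.0) -> bool:
    """Toy heuristic: flag a session whose inter-event timing is suspiciously
    uniform. Humans produce jittery intervals; a naive bot on a timer does not.
    min_jitter_ms is an illustrative threshold, not a calibrated value."""
    if len(event_intervals_ms) < 3:
        return False  # not enough signal to judge
    return pstdev(event_intervals_ms) < min_jitter_ms
```

A sophisticated bot can randomize its timing, which is why real detection combines many independent signals rather than relying on any single tell.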


Does the discovery of a botnet always equate to fraud?

Sam: Naturally, the lens we view the world through is online advertising, so most botnets we discover are culprits in some form of IVT. Each one is thoroughly researched, and most are found to be invalid. However, the term fraud has a loaded definition that implies malicious intent. In many cases, we cannot ascertain intent but can clearly see the ad impressions are invalid. When we do count an ad impression as invalid, we visibly label the reason, which gives our clients a clear understanding of the basis for the IVT determination.


How often does Moat detect smaller, less “harmful” botnets, perhaps not indicative of a larger fraud scheme? And what, if any, protocol is there for dealing with that traffic?

Sam: The true size of a botnet is hard to measure. We are often seeing only the portion that affected our clients. I wish we could say that we measure every digital ad served across the internet, but that is not true. So, with some of the smaller botnet impacts we’ve discovered, we must reasonably assume that the impact is much greater than what we’ve measured. I would also say that as it relates to protecting our clients and doing what’s right for the industry, IVT filtration is not about size. The botnet that is small by our measurement today could easily evolve into a much larger threat tomorrow.


How long does it typically take from initial suspicion to declaration of a definite botnet or fraud scheme? 

Sam: Every fraud scheme we’ve uncovered had its own timeline. Sometimes it’s been clear right away that what we were observing in our measurement is IVT and detection happens almost automatically. In other cases, it’s more of a mystery novel, where we ask ourselves hard questions that take deeper digging to fill out the scheme. We will often engage with our security counterparts at Oracle to add resolution and a different perspective on our observations.


Once your team determines you are tracking a bot whose purpose is to defraud part of the programmatic supply chain, what are the steps you take to mitigate?

Sam: Well, the first step is to clearly classify it as IVT in our measurement. However, there are cases where we may not do that immediately as part of evidence-gathering activities. The step after that is to communicate it to impacted clients. The nature of our Pre-bid product means it will automatically be updated with our latest discoveries and provide immediate avoidance for its users.


From your experience, how sophisticated are the fraudulent bots? If you had to guess, how long does it take for the perpetrators to build their framework for fraud? (I’m imagining this could be a wide range depending on the level of sophistication, but curious to know how much work these fraudsters are putting into building their cons). 

Sam: There is a wide range of sophistication in the bot operations we’ve seen. Some are extremely novice and are identified and filtered quite easily, while others are very advanced and require deep levels of expertise. We employ a clean room where we download malware-ridden software and de-obfuscate its code in order to identify some of these advanced bots.


I’ve read that some bots collect cookies from high value sites. Will fraudsters be less incentivized as the industry shifts away from 3rd party cookie tracking? Is cookie fraud any significant portion of the IVT Moat detects?

Sam: Moat does not set or read cookies as part of our Privacy & Security stance, and our bot detection does not rely on cookies in any way. We have observed bots visiting high-value sites before certain high-dollar events. Moat detects and blocks these bots for our clients regardless of the cookies they have stuffed. All mobile application traffic has been without cookies from day one, and traffic from mobile phones has eclipsed desktop. Ultimately, cookie stuffing is a symptom, not the disease. Our techniques get to the root problem by catching the bot regardless of what cookies it is packing.


Can you talk more about Moat’s high client standards?

Sam: Having a robust Business Partner Qualification (BPQ) process is a requirement for any Media Rating Council (MRC) accredited service. Our BPQ goes beyond initial onboarding and includes ongoing monitoring. We have a robust process, sometimes mediated by the MRC, for resolving issues when consistently high IVT is not being addressed. This has led us to drop clients in the past. We hold a strong belief that it takes everyone in the ecosystem doing the right thing to beat fraud, and when we encounter folks not doing their part, we reserve the right to refuse our services. Ultimately, the reputation we have in the industry is based on the level of trust and integrity that exists in our products. We take that responsibility seriously.


Is Moat tracking any suspicious activity currently that is pointing toward a large fraud scheme?

Sam: We are always researching suspicious activity and thinking about things like:

  • Who could be profiting from this?

  • Could this be a mistake in our detection or a messed-up integration?

  • Who could tell us more about it?

  • Do we have enough information to share with authorities?

  • What is the best way to filter this as IVT?

  • Can we catch it in more ways than one?

  • What is the impact to our clients? What broader impact do we think is likely?

So far this year, Moat has uncovered several new IVT fraud schemes and put detections in place to filter them. We typically do not focus on using these discoveries as marketing tools. When something rises to the level of DrainerBot in specificity, we prefer to engage the industry through the Trustworthy Accountability Group (TAG), which has a process for industry outreach and engaging law enforcement.


Are there any other notable themes about the IVT that Moat detects worth mentioning? Are there certain types of Sophisticated Invalid Traffic (SIVT) that are most common? What is rarer? Or is it all a random game of whack-a-mole?

Sam: Moat has numerous methods for detecting various forms of IVT. We use the MRC’s guidance to report them in nine metrics that roll up into either General Invalid Traffic (GIVT) or SIVT. Each of these nine metrics is constantly being improved with new enhancements. Overall, because of the ubiquitous nature of data centers and self-declaring spiders and bots, those GIVT categories tend to be more prevalent than SIVT detections.

We abide by the MRC guideline of filtering out GIVT prior to reporting SIVT, so if you look only at the top-level metrics you may miss the full impact of a given SIVT metric. SIVT is rarer, partly because it’s more difficult to detect and partly because active adversaries quickly adapt their behavior once they are detected. In that sense, a game of whack-a-mole is far easier to win, since all the holes the mole can appear in are known and it’s just a game of speed. The trick is seeing the unknown holes and having your hammer ready!
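The filtering order Sam describes can be sketched as follows. The impression schema (boolean `givt`/`sivt` flags) is an illustrative assumption, not Moat's data model; the point is that an impression failing both checks counts only toward GIVT, which is why top-level numbers can understate a given SIVT metric:

```python
def summarize_ivt(impressions: list[dict]) -> dict:
    """Apply the MRC-style ordering: remove GIVT first, then count SIVT
    only among the impressions that survived GIVT filtration.
    Each impression is a dict with boolean 'givt' and 'sivt' flags
    (an illustrative schema for this sketch)."""
    total = len(impressions)
    after_givt = [i for i in impressions if not i["givt"]]
    givt_count = total - len(after_givt)
    sivt_count = sum(1 for i in after_givt if i["sivt"])
    valid = len(after_givt) - sivt_count
    return {"total": total, "givt": givt_count, "sivt": sivt_count, "valid": valid}
```

For example, an impression from a data center that also exhibits sophisticated bot behavior is reported as GIVT, not SIVT, so the SIVT metric alone undercounts the scheme's footprint.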


For more information about IVT and fraud detection in advertising, check out these resources:


Kori Hill Wallace is a content specialist for Oracle Data Cloud. She loves appetizers, animals, athletics, and alliteration. (See what she did there?)
