In-game advertising is a popular form of digital media that has continued to grow with the rise of the programmatic industry. And, as history goes, where there is potential for profit, there are people trying to exploit those streams via illicit activity.
Through our team’s work detecting invalid traffic in the digital advertising space, we recently discovered a popular video game was being used as a cover to cheat advertisers of their ad spend.
What were the unique mechanisms this fraud scheme employed to cash in and how did our team make the discovery? We break it down and provide further insight into how advertisers and publishers can ensure they do not fall victim to the next in-game advertising fraud scheme.
We don’t plan to disclose the name of the game, as the game creators were in no way responsible for this fraudulent activity and likely did not know about it. In addition, no game players or consumers were affected by this scheme, unlike some other cases of ad fraud we’ve uncovered, such as DrainerBot.
The main victims here were advertisers who paid for an engaged gamer and instead received a bot. Therefore, we are sharing information about the threat vector so advertisers and their partners can take appropriate steps to mitigate similar threats in the future.
As with most ad fraud, criminals seek to exploit channels they know to be popular for two reasons: first, they know that advertisers will be there seeking to reach those engaged audiences; and second, they can better hide among the massive numbers of real users.
So how did the fraudsters use this game as a cover for their scheme? The short explanation is they built a bot whose job was to view ads meant for in-game users. This was done by coding an automated browser with user-agent rotation to appear somewhat human. They then milked the advertising URLs, pocketing money from advertisers who thought they were trading ad views for in-game rewards.
To follow the fraudulent tactics they used, it helps to understand a few common elements of the game (and other similar games) and how in-game advertising is incorporated.
1. The concept of winning virtual currency
Many games allow users to earn virtual money or credits in order to buy in-game items. Some ad tech providers—always looking to tap into attentive audiences—set up a straightforward model for delivering media that leverages this component. Simply, players can earn virtual rewards, or currency, for engaging with paid advertising.
2. The ability for participants to set up game servers
Gamers (and others) who set up their own servers to host these types of games can add plugins, invite others to join, and do numerous other tasks associated with the gameplay—including enabling an ad for rewards offers.
3. The game’s chat functionality
Since the game in question is multiplayer, the chat functionality is how communication occurs and also is the way to trigger game commands, including those for the engagement with the reward feature that prompts paid advertising.
A player looking to watch paid advertising to cash in on rewards would send those prompts via the chat, which relays a request to the server offering the ads. Our team at Moat by Oracle Data Cloud discovered a bot that was employed to consume these ads using real gamer IDs, without ever invoking the prerequisite chat commands.
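That gap between the chat command and the ad request is itself a detection signal. As a minimal sketch, assuming hypothetical logs of reward chat commands and ad requests keyed by player ID (the function name, window, and data shapes are illustrative, not Moat's actual pipeline), one could flag any ad request that was never preceded by the triggering command:

```python
from datetime import datetime, timedelta

# Assumed window: a legitimate ad request should follow the player's
# reward chat command within a short interval. The value is illustrative.
COMMAND_WINDOW = timedelta(seconds=30)

def find_suspect_requests(chat_commands, ad_requests):
    """chat_commands: list of (player_id, timestamp) reward commands.
    ad_requests: list of (player_id, timestamp) ad views.
    Returns the ad requests with no preceding command in the window."""
    suspects = []
    for player, ts in ad_requests:
        preceded = any(
            p == player and timedelta(0) <= ts - cmd_ts <= COMMAND_WINDOW
            for p, cmd_ts in chat_commands
        )
        if not preceded:
            suspects.append((player, ts))
    return suspects
```

A bot replaying real gamer IDs directly against the ad server would show up here: its requests have valid-looking IDs but no matching chat activity.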
This fraudulent activity had been flying just below the radar, generating between 1 and 3 million impressions per month.
Our team uses a range of sophisticated invalid traffic detection mechanisms for one very important purpose: to find and weed out internet traffic that does not originate from a human.
In many cases, invalid traffic is not malicious and is instead a by-product of legitimate activity on the web. For example, one of the most common "benevolent" forms of invalid traffic is the spiders that crawl web pages to categorize page content, which helps search engines like Google return better results.
But in the case of this game bot, the traffic claiming these specific rewards from the game leveraged several techniques, including User Agent spoofing. Fraudsters using this technique typically build a bot that rotates through several hundred User Agents to create the impression that traffic is coming from many different browsers and is legitimate. However, our team observed that many underlying features across all the User Agents were identical—a clear indicator of spoofing and fraudulent activity.
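The logic behind that observation can be sketched simply. Assuming a hypothetical log of requests where each carries a User-Agent string and a fingerprint of underlying browser features (screen size, fonts, and so on—the field names and threshold here are illustrative, not the actual detection system), a single fingerprint claiming to be dozens of distinct browsers is the tell:

```python
from collections import defaultdict

# Assumed cutoff: one real device rarely presents itself as more than a
# handful of browsers. Real detection weighs many more signals than this.
UA_THRESHOLD = 50

def flag_spoofed_fingerprints(requests):
    """requests: iterable of (user_agent, feature_fingerprint) tuples.
    Returns fingerprints claimed by an implausible number of User Agents."""
    uas_per_fingerprint = defaultdict(set)
    for ua, fingerprint in requests:
        uas_per_fingerprint[fingerprint].add(ua)
    return {fp for fp, uas in uas_per_fingerprint.items()
            if len(uas) > UA_THRESHOLD}
```

A rotation bot inverts the normal relationship: hundreds of User-Agent strings, one set of underlying features, which is exactly what grouping by fingerprint exposes.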
Compared to the larger schemes that net their creators millions of dollars, this game scheme wasn't a particularly lucrative ploy, from what we've seen in our measurement footprint. For us, though, it highlights two important points:
1. The creative lengths bad actors will go to in order to defraud programmatic channels.
2. The importance of identifying and blocking new ad fraud vectors as quickly as they emerge.
Accurate invalid traffic detection comes from understanding human behavior and the mechanics of the internet. Our team at Moat measures online ad impressions and reports on hundreds of metrics that verify engagement—things like interaction, time spent, mouse movements, and touch events. Finding real people should be the first check mark in measuring the effectiveness of advertising.
As gaming and eSports continue to see market growth and increased audience engagement, the industry should expect that all forms of in-game advertising are vulnerable to fraud. But with proper measurement and validation mechanisms in place, brands can protect their budgets and publishers can prevent their profits from enriching criminals, rather than engaging users.