Optimizing the RTB process using real-time analytics increases revenue for publishers and improves consumer experience.
But joining, aggregating, and analyzing the vast quantities of data created by modern advertising systems is a massive, and often very costly, undertaking.
We can help.
Real-time bidding (RTB) allows advertising buyers to bid on publishers’ display inventory on a per-impression basis, via instantaneous programmatic auction. If the bid is won, the buyer’s ad is immediately accepted for display on the publisher’s website. In North America alone, millions of digital ad impressions are available every second, and over 80% of this inventory is filled via RTB.
Large supply-side platforms (SSPs) consolidate ad inventory across many publishers at once. At any moment, a large SSP can receive tens of millions of bids for ads vying for placement across publishers’ sites.
In many cases, multiple streams of RTB data are generated across different data centers and ad matching servers. These must be consolidated by advertiser, publisher, campaign, ad, etc. and joined with real-time marketplace-level data. The results can inform algorithms to improve RTB decisioning.
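As a rough illustration of the kind of consolidation described above, here is a minimal Python sketch that aggregates bid events from several data-center streams by (publisher, campaign) and joins the result with marketplace-level data. The field names (`publisher`, `campaign`, `bid`) and the clearing-price join are illustrative assumptions, not Wallaroo's API or a real SSP schema.

```python
from collections import defaultdict

# Illustrative sketch (not the Wallaroo API): consolidate bid events
# from multiple data centers, keyed by (publisher, campaign), then
# join the per-key aggregates with marketplace-level clearing prices.

def consolidate(bid_streams, clearing_prices):
    """bid_streams: iterables of dicts with hypothetical fields
    'publisher', 'campaign', and 'bid'. clearing_prices: dict mapping
    publisher -> marketplace clearing price."""
    totals = defaultdict(lambda: {"bids": 0, "spend": 0.0})
    for stream in bid_streams:
        for event in stream:
            key = (event["publisher"], event["campaign"])
            totals[key]["bids"] += 1
            totals[key]["spend"] += event["bid"]
    # Join each per-key aggregate with the marketplace-level figure.
    return {
        key: {**agg, "clearing_price": clearing_prices.get(key[0])}
        for key, agg in totals.items()
    }
```

In a real deployment this aggregation runs continuously over unbounded streams rather than finished batches, which is exactly what makes the scale challenges below hard.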
Optimizing this process can help SSPs increase publisher revenue while reducing infrastructure load by optimizing bid solicitation.
There are three main challenges to overcome:
Volume of streaming data. At a single SSP, data volumes can approach “Twitter scale”: tens of millions of potential bids per second. Maintaining intermediate state at that rate can lead to datastore bottlenecks and/or very expensive infrastructure.
Infrastructure costs. Ad-tech companies are very sensitive to operational costs, which include the management, support, and cost of data analysis infrastructure. Conventional approaches to data analysis at this scale can run to millions of dollars a year.
Out-of-order data. Related events generated on different servers may arrive out of order. Buffering over a time horizon to correlate those events can add significant complexity, especially at scale.
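To make the out-of-order challenge concrete, here is a minimal sketch of the buffering it describes: events are held in a buffer and released in event-time order only once a fixed horizon has passed, so stragglers from other servers can still slot in. The `ts` field and the five-second horizon are illustrative assumptions.

```python
import heapq
from itertools import count

_tiebreak = count()  # keeps heap entries comparable when timestamps tie

def reorder(events, horizon=5):
    """Illustrative sketch: reorder a stream by event time.

    events: iterable of (arrival_time, event) pairs, where each event
    is a dict with a hypothetical event-time field 'ts' (seconds).
    Yields events in event-time order, releasing an event only once
    arrival time has passed ts + horizon, i.e. once no earlier event
    should still be in flight."""
    buffer = []  # min-heap ordered by event time
    for arrival, event in events:
        heapq.heappush(buffer, (event["ts"], next(_tiebreak), event))
        # Release everything old enough that nothing earlier can still arrive.
        while buffer and buffer[0][0] + horizon <= arrival:
            yield heapq.heappop(buffer)[2]
    while buffer:  # flush any remainder at end of stream
        yield heapq.heappop(buffer)[2]
```

Even this toy version shows the cost: every in-flight event is held in memory for the full horizon, which at tens of millions of events per second is a substantial amount of state.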
How Wallaroo Helps
Wallaroo Labs provides straightforward solutions to each of these challenges.
Our purpose-built framework easily scales horizontally to analyze millions of events a second while making very efficient use of the infrastructure, keeping your management overhead and infrastructure footprint extremely low. In fact, operational costs can be 90% lower than with other approaches.
We can engineer systems to maintain intermediate state in memory with resilience, avoiding expensive and slow database updates and reads.
Finally, we can support event-time windowing and state expiration to deliver accurate analysis with minimal infrastructure.
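As a sketch of what event-time windowing with state expiration looks like, the snippet below counts events per key in tumbling event-time windows and drops a window's state once a watermark guarantees no more late arrivals. The class name, window size, and allowed-lateness parameter are illustrative assumptions, not Wallaroo's API.

```python
from collections import defaultdict

class WindowedCounter:
    """Illustrative sketch: count events per (key, event-time window)
    and expire window state once the watermark passes, so memory stays
    bounded no matter how long the stream runs."""

    def __init__(self, window=60, allowed_lateness=10):
        self.window = window            # tumbling window size, seconds
        self.lateness = allowed_lateness  # grace period for stragglers
        self.state = defaultdict(int)   # (key, window_start) -> count

    def on_event(self, key, ts):
        """Assign the event to its tumbling window by event time."""
        start = ts - ts % self.window
        self.state[(key, start)] += 1

    def on_watermark(self, watermark):
        """Emit and expire every window that can no longer change."""
        closed = [k for k in self.state
                  if k[1] + self.window + self.lateness <= watermark]
        return {k: self.state.pop(k) for k in closed}
```

The key point is the expiration step: without it, per-key window state accumulates forever, which is exactly the intermediate-state bottleneck described earlier.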