Over the past year, I’ve seen a lot of exciting technology come together across Kinesso’s media measurement and optimization capabilities. One area I find particularly promising is scaling experiments to drive incremental ROI optimization, and I hope to get you excited about it too.
Experiments are improving
When I first started in media, it was difficult to garner excitement about experiments. They were often underwhelming on insights, slow, and expensive. Think back to those Spot TV DMA tests and how long it took to align on test parameters, wait for the test to complete, analyze the results, and debug confounding factors like localized promotions.
But uncovering marketing truth is the priority, and randomized controlled trials (RCTs) are the gold standard for measurement. Nothing else can statistically isolate a true cause-and-effect relationship the way a well-run RCT can. Selection bias is the biggest problem for observational methods: media exposures are often triggered by consumer behavior, and that creates a huge problem for last-touch attribution. Did the search ad cause the purchase, or did the consumer encounter the ad because they were already on the way to buying?
Today, the experiment options are much improved, and a few approaches are promising. The first builds treatment and control groups while constructing audiences for our media campaigns: we randomly hold out a segment and estimate ahead of time the sample sizes and conversion rates needed to yield a statistically significant result. Another approach is ghost ads and their variants. These are new, some still in beta, and run within DSPs and other media platforms. With ghost ads, users are randomly assigned to a control group at the time of the bid, ensuring treatment and control are similar. I think of these as placebo ads without the cost. A third option is not a true experiment: use non-viewable ads as a control group for the viewable ones. The challenge with this viewability method is establishing proper controls, but it has promise if validated against the first two methods. The advantage of all these approaches is that they are far more powerful and scalable than older experimental designs like matched-market tests.
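To make the holdout approach concrete, here is a minimal sketch of the up-front power calculation, assuming a simple two-proportion z-test; the conversion rates and thresholds below are illustrative, not actual defaults:

```python
from statistics import NormalDist

def sample_size_per_arm(p_control: float, p_treatment: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for detecting a lift from
    p_control to p_treatment with a two-sided two-proportion z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = z.inv_cdf(power)           # desired statistical power
    variance = (p_control * (1 - p_control)
                + p_treatment * (1 - p_treatment))
    effect = p_treatment - p_control
    return int((z_alpha + z_beta) ** 2 * variance / effect ** 2) + 1

# e.g. detecting a lift from a 2.0% to a 2.2% conversion rate
n = sample_size_per_arm(0.02, 0.022)
```

The takeaway is the sensitivity to effect size: halving the detectable lift roughly quadruples the required holdout, which is why these estimates must happen before the campaign launches, not after.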
Experiment results database
As experiments are conducted, it is important to store the results in a database and append all the relevant metadata. At Kinesso, we have a taxonomy platform that helps us manage categorical information (e.g., tactics, strategies, creative, audience), and digital data lakes for storing numerical proxy metrics like CPMs, frequency levels, viewability rates, and CTRs. The results database needs to contain information relevant for structuring future media campaigns. For example, we may want to analyze how incremental ROI varies by frequency level to structure frequency caps by audience.
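As a toy illustration of what such a results store and analysis might look like (the schema, column names, and numbers here are hypothetical, not Kinesso’s actual taxonomy):

```python
import sqlite3

# In-memory table standing in for the real results database / data lake.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE experiment_results (
        experiment_id   TEXT,
        tactic          TEXT,     -- categorical metadata from the taxonomy
        audience        TEXT,
        frequency_cap   INTEGER,  -- numerical proxy metrics
        cpm             REAL,
        incremental_roi REAL      -- the measured experimental outcome
    )
""")
rows = [
    ("exp-001", "ctv",     "in-market", 3, 22.5, 1.8),
    ("exp-002", "ctv",     "in-market", 8, 21.0, 1.1),
    ("exp-003", "display", "retarget",  3,  4.2, 0.9),
]
conn.executemany("INSERT INTO experiment_results VALUES (?,?,?,?,?,?)", rows)

# Example analysis: how does incremental ROI vary by frequency cap?
by_freq = conn.execute("""
    SELECT frequency_cap, AVG(incremental_roi)
    FROM experiment_results
    GROUP BY frequency_cap
""").fetchall()
```

The point is that the categorical metadata and the measured outcome live side by side, so queries like the one above fall out directly.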
Once there is a sufficient sample in the experiment results database, we can begin training a meta model for ROI: a machine learning model that predicts ROI from the variety of categorical and numerical metrics we have available. Crucially, those metrics are available in real time and cover all the relevant optimization levers, because the same data feeds the real-time campaign optimization engines.
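A minimal sketch of such a meta model, assuming a hand-rolled linear model over a one-hot-encoded tactic plus a frequency feature (a real system would use a proper ML library and far richer features; the data here is invented):

```python
# Toy training rows from the results database:
# (tactic, frequency_cap, measured incremental ROI)
data = [
    ("ctv", 3, 1.8), ("ctv", 8, 1.1),
    ("display", 3, 0.9), ("display", 8, 0.5),
]
tactics = sorted({t for t, _, _ in data})

def featurize(tactic: str, freq: float) -> list:
    # One-hot encode the categorical tactic, append scaled frequency + bias.
    return [1.0 if t == tactic else 0.0 for t in tactics] + [freq / 10.0, 1.0]

# Fit weights by plain stochastic gradient descent on squared error.
w = [0.0] * (len(tactics) + 2)
for _ in range(5000):
    for tactic, freq, roi in data:
        x = featurize(tactic, freq)
        err = sum(wi * xi for wi, xi in zip(w, x)) - roi
        w = [wi - 0.05 * err * xi for wi, xi in zip(w, x)]

def predict_roi(tactic: str, freq: float) -> float:
    x = featurize(tactic, freq)
    return sum(wi * xi for wi, xi in zip(w, x))
```

Even this toy model recovers the qualitative insights in the data: ROI falls as frequency rises, and one tactic outperforms the other at equal frequency — exactly the kind of lever-level signal the optimization engines need.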
Ultimately, we need this model to power optimization, which has several dimensions. One is in-flight optimization. We don’t receive experiment results in real time; instead we receive proxy signals that are predictive of ROI. With the model, we can predict ROI in real time and validate those predictions once the experimental results arrive.
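One way to close that validation loop, sketched here with made-up campaign names and numbers: score each in-flight prediction against the experimental readout once it lands, and track the error over time.

```python
def mean_abs_error(predicted: dict, measured: dict) -> float:
    """Average gap between in-flight ROI predictions and the
    experiment results that arrive later for the same campaigns."""
    common = predicted.keys() & measured.keys()
    return sum(abs(predicted[c] - measured[c]) for c in common) / len(common)

# Hypothetical campaigns: proxy-based predictions vs. final experiment reads.
predicted = {"camp-a": 1.6, "camp-b": 0.9, "camp-c": 1.2}
measured  = {"camp-a": 1.8, "camp-b": 0.8}   # camp-c still running
error = mean_abs_error(predicted, measured)  # ≈ 0.15
```

A rising error here is the early warning that the proxy signals have drifted away from true incremental ROI and the meta model needs retraining.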
Another dimension is planning future campaigns. The models are useful here because they surface the strategies and tactics likely to generate strong ROI. Yet another consideration is the multi-armed bandit tradeoff: we want to exploit strategies that have proven to work in the past, but also explore new strategies that have never been tested. Human creativity is still important!
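The explore/exploit tradeoff can be sketched with a simple epsilon-greedy bandit over strategies (the arm names and reward rates below are invented for illustration; production systems use more sophisticated allocation):

```python
import random

def epsilon_greedy(true_roi: dict, epsilon: float = 0.1,
                   rounds: int = 5000, seed: int = 0) -> dict:
    """Allocate trials across strategies: mostly exploit the best
    observed arm, but explore a random arm with probability epsilon."""
    rng = random.Random(seed)
    arms = list(true_roi)
    totals = {a: 0.0 for a in arms}
    counts = {a: 0 for a in arms}
    for _ in range(rounds):
        if rng.random() < epsilon or not all(counts.values()):
            arm = rng.choice(arms)                                # explore
        else:
            arm = max(arms, key=lambda a: totals[a] / counts[a])  # exploit
        # Noisy observed reward around the arm's true (unknown) ROI.
        totals[arm] += true_roi[arm] + rng.gauss(0, 0.1)
        counts[arm] += 1
    return counts

counts = epsilon_greedy({"proven": 1.5, "new_idea": 1.2, "untested": 0.8})
```

Most trials flow to the proven strategy, but every arm keeps getting sampled — which is exactly the property that leaves room for new, human-generated strategies to prove themselves.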
Finally, there is the budget coordination issue. Every client faces constraints on where media spend can realistically shift, so the optimization platform applies solvers that can handle complex budget constraints. We must also layer in business rules, for example custom guardrails for underpacing vs. overpacing campaigns — that is, spending less than planned in a given period and leaving money unspent, vs. spending ahead of plan.
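A toy sketch of constrained allocation: greedily assign spend increments to the channel with the best predicted marginal ROI, respecting per-channel floors and caps. All the numbers and channel names are hypothetical, and a real platform would use a proper solver with diminishing-returns curves rather than constant marginal ROI:

```python
def allocate(total_budget: float, channels: dict, step: float = 1000.0) -> dict:
    """channels: name -> (marginal_roi, min_spend, max_spend).
    Greedy allocation assuming constant marginal ROI per channel."""
    # Honor the floors first (business rules / contractual minimums).
    alloc = {name: lo for name, (_, lo, _) in channels.items()}
    remaining = total_budget - sum(alloc.values())
    assert remaining >= 0, "floors exceed the total budget"
    while remaining >= step:
        # Channels with headroom left, best marginal ROI first.
        open_ch = [n for n, (_, _, hi) in channels.items()
                   if alloc[n] + step <= hi]
        if not open_ch:
            break
        best = max(open_ch, key=lambda n: channels[n][0])
        alloc[best] += step
        remaining -= step
    return alloc

plan = allocate(10_000, {
    "search":  (2.0, 2_000, 5_000),   # (marginal ROI, floor, cap)
    "social":  (1.4, 1_000, 6_000),
    "display": (0.9,     0, 4_000),
})
```

Here the highest-ROI channel fills to its cap before spend spills into the next best, while floors guarantee the contractual minimums are always met.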
I hope this gets you excited about experiments and optimization. If you’re afraid of the ominous AI, not to worry! To the extent that AI automates the more tedious tasks of campaign configuration and management, it frees humans to focus on creative thinking and problem solving, something I think we all enjoy about working in media and marketing.