Title
Advertising Media and Target Audience Optimization via High-dimensional Bandits
Authors
Abstract
We present a data-driven algorithm that advertisers can use to automate their digital ad-campaigns at online publishers. The algorithm enables the advertiser to search across available target audiences and ad-media to find the best possible combination for its campaign via online experimentation. The problem of finding the best audience-ad combination is complicated by a number of distinctive challenges, including (a) a need for active exploration to resolve prior uncertainty and to speed the search for profitable combinations, (b) many combinations to choose from, giving rise to high-dimensional search formulations, and (c) very low success probabilities, typically just a fraction of one percent. Our algorithm (designated LRDL, an acronym for Logistic Regression with Debiased Lasso) addresses these challenges by combining four elements: a multiarmed bandit framework for active exploration; a Lasso penalty function to handle high dimensionality; an inbuilt debiasing kernel that handles the regularization bias induced by the Lasso; and a semi-parametric regression model for outcomes that promotes cross-learning across arms. The algorithm is implemented as a Thompson Sampler, and to the best of our knowledge, it is the first that can practically address all of the challenges above. Simulations with real and synthetic data show the method is effective and document its superior performance against several benchmarks from the recent high-dimensional bandit literature.
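To make the ingredients listed in the abstract concrete, the sketch below simulates a toy audience-by-medium campaign and runs a Thompson-sampling loop over a Lasso-penalized logistic outcome model. Everything here is an illustrative assumption, not the paper's LRDL implementation: the arm encoding, dimensions, the proximal-gradient Lasso fit, and the crude Gaussian posterior draw are all stand-ins, and the paper's debiasing kernel is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "arm" is an (audience, ad-medium) combination
# encoded as a pair of one-hot blocks; a sparse true coefficient vector
# and a large negative intercept mimic very low ad-success rates.
n_audiences, n_media = 8, 6
arms = np.array([
    np.concatenate([np.eye(n_audiences)[i], np.eye(n_media)[j]])
    for i in range(n_audiences) for j in range(n_media)
])
d = arms.shape[1]
beta_true = np.zeros(d)
beta_true[[1, n_audiences + 2]] = 1.0   # only a few informative features
intercept_true = -6.0                   # baseline success rate far below 1%

def success_prob(x, beta, b0):
    return 1.0 / (1.0 + np.exp(-(b0 + x @ beta)))

def fit_lasso_logistic(X, y, lam=0.05, lr=0.1, iters=300):
    """Lasso-penalized logistic fit via proximal gradient descent
    (a simple stand-in for the paper's estimator; intercept unpenalized)."""
    n, _ = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
        w -= lr * (Xb.T @ (p - y) / n)          # gradient step
        w[1:] = np.sign(w[1:]) * np.maximum(    # soft-thresholding
            np.abs(w[1:]) - lr * lam, 0.0)
    return w

# Thompson-sampling loop: draw a coefficient vector from an approximate
# Gaussian "posterior" centered at the (still biased) Lasso fit, then
# pull the arm the draw rates highest.
T = 200
X_hist, y_hist = [], []
pulls = np.zeros(len(arms), dtype=int)
for t in range(T):
    if len(X_hist) < 20:
        a = t % len(arms)                        # short forced warmup
    else:
        w = fit_lasso_logistic(np.array(X_hist), np.array(y_hist))
        w_draw = w + rng.normal(0.0, 0.3, size=w.shape)  # crude posterior draw
        scores = 1.0 / (1.0 + np.exp(-(w_draw[0] + arms @ w_draw[1:])))
        a = int(np.argmax(scores))
    x = arms[a]
    y = rng.binomial(1, success_prob(x, beta_true, intercept_true))
    X_hist.append(x)
    y_hist.append(y)
    pulls[a] += 1
```

The shared one-hot encoding is what lets outcomes on one arm inform estimates for arms sharing an audience or a medium, the cross-learning the abstract attributes to the semi-parametric outcome model.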