Paper Title
Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits
Paper Authors
Paper Abstract
We propose the Generalized Policy Elimination (GPE) algorithm, an oracle-efficient contextual bandit (CB) algorithm inspired by the Policy Elimination algorithm of \cite{dudik2011}. We prove the first regret-optimality guarantee for an oracle-efficient CB algorithm competing against a nonparametric class with infinite VC-dimension. Specifically, we show that GPE is regret-optimal (up to logarithmic factors) for policy classes with integrable entropy. For classes with larger entropy, we show that the core techniques used to analyze GPE can be used to design an $\varepsilon$-greedy algorithm with a regret bound matching that of the best algorithms to date. We illustrate the applicability of our algorithms and theorems with examples of large nonparametric policy classes for which the relevant optimization oracles can be implemented efficiently.
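The abstract mentions an $\varepsilon$-greedy contextual bandit algorithm. The sketch below is a generic $\varepsilon$-greedy bandit loop, not the paper's algorithm: it uses per-action running-mean reward estimates as a stand-in for the paper's policy-class optimization oracle, and all function and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_bandit(contexts, reward_fn, n_actions, epsilon=0.1):
    """Generic epsilon-greedy bandit loop (illustrative sketch).

    `reward_fn(x, a)` is a hypothetical environment callback; the
    per-action running means stand in for an optimization oracle
    over a policy class.
    """
    counts = np.zeros(n_actions)
    means = np.zeros(n_actions)
    total_reward = 0.0
    for x in contexts:
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))   # explore: uniform action
        else:
            a = int(np.argmax(means))          # exploit: best estimate so far
        r = reward_fn(x, a)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental mean update
        total_reward += r
    return total_reward
```

With exploration probability $\varepsilon$, the uniform draws guarantee every action keeps being sampled, which is what drives the regret analysis of $\varepsilon$-greedy schemes.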