Paper Title
Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability
Paper Authors
Paper Abstract
We consider the general (stochastic) contextual bandit problem under the realizability assumption, i.e., the expected reward, as a function of contexts and actions, belongs to a general function class $\mathcal{F}$. We design a fast and simple algorithm that achieves the statistically optimal regret with only ${O}(\log T)$ calls to an offline regression oracle across all $T$ rounds. The number of oracle calls can be further reduced to $O(\log\log T)$ if $T$ is known in advance. Our results provide the first universal and optimal reduction from contextual bandits to offline regression, solving an important open problem in the contextual bandit literature. A direct consequence of our results is that any advances in offline regression immediately translate to contextual bandits, statistically and computationally. This leads to faster algorithms and improved regret guarantees for broader classes of contextual bandit problems.
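As a rough illustration of the kind of reduction the abstract describes, the sketch below shows (a) an inverse-gap-weighting rule that turns an offline regression oracle's predicted rewards into an action-sampling distribution, and (b) a doubling epoch schedule under which the oracle is re-fit only $O(\log T)$ times over $T$ rounds. This is a hedged, simplified sketch, not the paper's algorithm: the function names, the parameter `gamma`, and the specific schedule are illustrative assumptions.

```python
import numpy as np

def inverse_gap_weighting(predicted_rewards, gamma):
    """Map oracle predictions for K actions to a sampling distribution.

    Actions whose predicted reward is close to the best get higher
    probability; gamma trades off exploration against exploitation.
    (Illustrative sketch, not the paper's exact rule.)
    """
    predicted_rewards = np.asarray(predicted_rewards, dtype=float)
    K = len(predicted_rewards)
    best = int(np.argmax(predicted_rewards))
    gaps = predicted_rewards[best] - predicted_rewards
    probs = 1.0 / (K + gamma * gaps)   # smaller gap -> larger probability
    probs[best] = 0.0
    probs[best] = 1.0 - probs.sum()    # remaining mass on the greedy action
    return probs

def epoch_starts(T):
    """Doubling epoch schedule: re-fit the regression oracle only at
    epoch boundaries, so the number of oracle calls over T rounds is
    O(log T) rather than T."""
    starts, t = [], 1
    while t <= T:
        starts.append(t)
        t *= 2
    return starts
```

For example, with predicted rewards `[1.0, 0.5, 0.0]` the greedy action receives the largest probability while every action keeps positive mass, and `epoch_starts(1024)` yields 11 epoch boundaries, so the oracle would be invoked 11 times across 1024 rounds.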