Paper Title

Universal Online Learning with Unbounded Losses: Memory Is All You Need

Authors

Moise Blanchard, Romain Cosson, Steve Hanneke

Abstract

We resolve an open problem of Hanneke on the subject of universally consistent online learning with non-i.i.d. processes and unbounded losses. The notion of an optimistically universal learning rule was defined by Hanneke in an effort to study learning theory under minimal assumptions. A given learning rule is said to be optimistically universal if it achieves a low long-run average loss whenever the data-generating process makes this goal achievable by some learning rule. Hanneke posed as an open problem whether, for every unbounded loss, the family of processes admitting universal learning is precisely the family of processes having a finite number of distinct values almost surely. In this paper, we completely resolve this problem, showing that this is indeed the case. As a consequence, this also offers a dramatically simpler formulation of an optimistically universal learning rule for any unbounded loss: namely, the simple memorization rule already suffices. Our proof relies on constructing random measurable partitions of the instance space and could be of independent interest for solving other open questions. We extend the results to the non-realizable setting, thereby providing an optimistically universal Bayes-consistent learning rule.
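To make the "simple memorization rule" concrete, here is a minimal sketch of such a learner for the online setting. This is an illustrative reading of the abstract, not the paper's exact construction: the class name, the default prediction, and the dictionary-based storage are assumptions, and instances are assumed hashable.

```python
class MemorizationRule:
    """Sketch of a memorization learner: on each round, predict the
    label previously observed for the current instance, falling back
    to a default guess for never-seen instances (assumption: instances
    are hashable; the default label is arbitrary)."""

    def __init__(self, default_label=0):
        self.memory = {}                   # seen instance -> its label
        self.default_label = default_label

    def predict(self, x):
        # Repeat the memorized label if x appeared in the past.
        return self.memory.get(x, self.default_label)

    def update(self, x, y):
        # After the true label is revealed, memorize the pair (x, y).
        self.memory[x] = y
```

The intuition for why memorization can suffice even under unbounded losses: if the instance process takes only finitely many distinct values almost surely, this rule errs on at most finitely many rounds, so its long-run average loss vanishes no matter how large any individual loss may be.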
