Paper Title


A Limitation of the PAC-Bayes Framework

Authors

Roi Livni, Shay Moran

Abstract


PAC-Bayes is a useful framework for deriving generalization bounds which was introduced by McAllester ('98). This framework has the flexibility of deriving distribution- and algorithm-dependent bounds, which are often tighter than VC-related uniform convergence bounds. In this manuscript we present a limitation of the PAC-Bayes framework. We demonstrate an easy learning task that is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in 1D; it is well-known that this task is learnable using just $O(\log(1/\delta)/\epsilon)$ examples. On the other hand, we show that this fact cannot be proved using a PAC-Bayes analysis: for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large.
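As background to the task the abstract refers to, here is a minimal sketch (not from the paper) of how 1D linear classifiers, i.e. thresholds, can be learned in the realizable setting: an ERM learner simply places its threshold between the largest negative example and the smallest positive one. The distribution, true threshold, and sample sizes below are illustrative assumptions.

```python
import random

def erm_threshold(samples):
    """Return a threshold consistent with realizable 1D labeled data.

    samples: list of (x, y) pairs with y in {-1, +1}, labeled by some
    true threshold t (y = +1 iff x >= t).  Picks the midpoint between
    the largest negative example and the smallest positive example.
    """
    neg = [x for x, y in samples if y == -1]
    pos = [x for x, y in samples if y == +1]
    if not pos:
        return float("inf")        # all negative: put threshold above everything
    if not neg:
        return min(pos)            # all positive: smallest consistent threshold
    return (max(neg) + min(pos)) / 2.0

# Toy experiment (hypothetical setup): true threshold 0.3, uniform marginal on [0, 1].
random.seed(0)
t_true = 0.3
label = lambda x: 1 if x >= t_true else -1

train = [(x, label(x)) for x in (random.random() for _ in range(200))]
t_hat = erm_threshold(train)

test_xs = [random.random() for _ in range(10000)]
err = sum(label(x) != (1 if x >= t_hat else -1) for x in test_xs) / len(test_xs)
```

With 200 training points the learned threshold lands in the small gap around the true one, so the test error is tiny; the paper's point is that this easy fact admits no nontrivial PAC-Bayes certificate.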
