Paper Title

How Does Independence Help Generalization? Sample Complexity of ERM on Product Distributions

Paper Authors

Lin, Tao

Paper Abstract

While many classical notions of learnability (e.g., PAC learnability) are distribution-free, utilizing the specific structure of an input distribution may improve learning performance. For example, a product distribution on a multi-dimensional input space has a much simpler structure than a correlated distribution. A recent paper [GHTZ21] shows that the sample complexity of a general learning problem on product distributions is polynomial in the input dimension, which is exponentially smaller than that on correlated distributions. However, the learning algorithm they use is not the standard Empirical Risk Minimization (ERM) algorithm. In this note, we characterize the sample complexity of ERM for a general learning problem on product distributions. We show that, even though product distributions are simpler than correlated distributions, ERM still needs an exponential number of samples to learn on product distributions, rather than a polynomial number. This leads to the conclusion that a product distribution by itself does not make a learning problem easier: an algorithm designed specifically for product distributions is needed.
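To make the objects in the abstract concrete, here is a minimal, illustrative sketch of standard ERM over a finite hypothesis class, with samples drawn from a product distribution (each coordinate sampled independently from its own marginal). This is not the construction from [GHTZ21] or from this note; the function names, the Bernoulli marginals, and the toy "predict coordinate j" hypothesis class are assumptions made purely for illustration.

```python
import numpy as np

def sample_product_distribution(marginals, n, rng):
    """Draw n i.i.d. samples from a product distribution: each of the d
    coordinates is sampled independently from its own marginal."""
    # marginals: list of d callables, each taking (n, rng) and returning n draws
    return np.column_stack([m(n, rng) for m in marginals])

def erm(hypotheses, loss, X, y):
    """Standard Empirical Risk Minimization over a finite hypothesis class:
    return the hypothesis minimizing the average loss on the sample."""
    risks = [np.mean([loss(h(x), yi) for x, yi in zip(X, y)]) for h in hypotheses]
    return hypotheses[int(np.argmin(risks))]

# Illustrative setup: d binary coordinates, each an independent Bernoulli(0.5) marginal.
rng = np.random.default_rng(0)
d = 5
marginals = [lambda n, rng: rng.integers(0, 2, size=n) for _ in range(d)]
X = sample_product_distribution(marginals, n=100, rng=rng)
y = X[:, 0]  # hypothetical target: the label equals the first coordinate
hypotheses = [lambda x, j=j: x[j] for j in range(d)]   # toy class: "predict coordinate j"
zero_one = lambda pred, true: float(pred != true)      # 0-1 loss
best = erm(hypotheses, zero_one, X, y)
```

The question studied in the note is not how to run this minimization, but how large the sample size n must be for the hypothesis returned by ERM to generalize; the result stated above is that for general learning problems this n remains exponential in the input dimension even when the inputs come from a product distribution.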
