Paper Title

Understanding Why Generalized Reweighting Does Not Improve Over ERM

Authors

Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar

Abstract

Empirical risk minimization (ERM) is known in practice to be non-robust to distributional shift, where the training and the test distributions are different. A suite of approaches, such as importance weighting and variants of distributionally robust optimization (DRO), has been proposed to solve this problem. But a line of recent work has empirically shown that these approaches do not significantly improve over ERM in real applications with distribution shift. The goal of this work is to obtain a comprehensive theoretical understanding of this intriguing phenomenon. We first posit the class of Generalized Reweighting (GRW) algorithms as a broad category of approaches that iteratively update model parameters based on iterative reweighting of the training samples. We show that when overparameterized models are trained under GRW, the resulting models are close to those obtained by ERM. We also show that adding small regularization which does not greatly affect the empirical training accuracy does not help. Together, our results show that a broad category of what we term GRW approaches are not able to achieve distributionally robust generalization. Our work thus has the following sobering takeaway: to make progress towards distributionally robust generalization, we either have to develop non-GRW approaches, or perhaps devise novel classification/regression loss functions that are adapted to the class of GRW approaches.
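
The abstract characterizes GRW as any method that iteratively reweights the training samples and then updates the model parameters on the reweighted loss. The sketch below is a minimal illustration of that idea, not the paper's exact algorithm: it assumes a linear model with squared loss and uses a hypothetical exponentiated-gradient weight update (in the spirit of Group DRO-style reweighting). With the weights held fixed and uniform (`eta=0.0`) it reduces to plain ERM gradient descent.

```python
import numpy as np

def grw_train(X, y, n_steps=1000, lr=0.1, eta=0.01):
    """Minimal GRW-style training loop (illustrative sketch, not the paper's method)."""
    n, d = X.shape
    theta = np.zeros(d)
    q = np.full(n, 1.0 / n)           # per-sample weights; uniform q corresponds to ERM
    for _ in range(n_steps):
        residual = X @ theta - y      # per-sample prediction error
        losses = 0.5 * residual ** 2  # per-sample squared loss
        # Reweighting step: upweight high-loss samples (exponentiated-gradient update,
        # one possible instance of the iterative reweighting that defines the GRW class).
        q *= np.exp(eta * losses)
        q /= q.sum()
        # Gradient step on the reweighted empirical risk sum_i q_i * loss_i.
        grad = X.T @ (q * residual)
        theta -= lr * grad
    return theta

# Usage example on synthetic data: compare a GRW run against the eta=0 (ERM) run.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=50)
theta_grw = grw_train(X, y, eta=0.01)
theta_erm = grw_train(X, y, eta=0.0)
print("parameter gap:", np.linalg.norm(theta_grw - theta_erm))
```

The paper's claim concerns overparameterized models, so this small underparameterized example is only meant to make the structure of the algorithm class concrete: the weight-update rule is the only thing distinguishing members of the class from ERM.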
