Paper Title
Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent
Paper Authors
Paper Abstract
Recently, a considerable amount of work has been devoted to the study of algorithmic stability and generalization for stochastic gradient descent (SGD). However, the existing stability analysis requires imposing restrictive assumptions on the boundedness of gradients and on the strong smoothness and convexity of loss functions. In this paper, we provide a fine-grained analysis of stability and generalization for SGD by substantially relaxing these assumptions. First, we establish stability and generalization bounds for SGD by removing the existing bounded-gradient assumption. The key idea is the introduction of a new stability measure called on-average model stability, for which we develop novel bounds controlled by the risks of the SGD iterates. This yields generalization bounds that depend on the behavior of the best model, and leads to the first known fast bounds in the low-noise setting obtained via a stability approach. Second, we relax the smoothness assumption by considering loss functions with Hölder continuous (sub)gradients, for which we show that optimal bounds can still be achieved by balancing computation and stability. To the best of our knowledge, this gives the first known stability and generalization bounds for SGD with non-differentiable loss functions. Finally, we study learning problems with (strongly) convex objectives but non-convex loss functions.
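The central object in the abstract is the on-average model stability of the SGD iterates: how far the learned parameters move when a single training example is replaced, averaged over which example is swapped. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's construction or code: it assumes a least-squares loss, a plain constant-step-size SGD update, and a shared random sample path for the two runs; the names sgd and on_average_model_stability and the single-draw estimate are illustrative choices.

```python
# Illustrative sketch (not the paper's code): estimate an on-average model
# stability quantity for SGD on least-squares regression by running the same
# SGD sample path on a dataset S and on each neighboring dataset S^(i)
# (S with example i replaced), then averaging the parameter gap over i.
import numpy as np

def sgd(X, y, steps, lr, rng_seed):
    """Plain SGD on the least-squares loss 0.5 * (x^T w - y)^2."""
    n, d = X.shape
    rng = np.random.default_rng(rng_seed)      # fixed seed => fixed sample path
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)                    # uniformly sampled example index
        grad = (X[i] @ w - y[i]) * X[i]        # gradient of the single-example loss
        w -= lr * grad
    return w

def on_average_model_stability(X, y, X_rep, y_rep, steps=1000, lr=0.01, seed=0):
    """Average over i of ||w(S) - w(S^(i))||, coupling the two runs by
    reusing the same index sequence; S^(i) replaces example i by (X_rep[i], y_rep[i])."""
    n = X.shape[0]
    w_S = sgd(X, y, steps, lr, seed)
    gaps = []
    for i in range(n):
        Xi, yi = X.copy(), y.copy()
        Xi[i], yi[i] = X_rep[i], y_rep[i]      # neighboring dataset S^(i)
        w_Si = sgd(Xi, yi, steps, lr, seed)    # same seed => same sampled indices
        gaps.append(np.linalg.norm(w_S - w_Si))
    return float(np.mean(gaps))

# Tiny usage example with synthetic data.
rng = np.random.default_rng(1)
n, d = 50, 5
X = rng.normal(size=(n, d)); w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)
X_rep = rng.normal(size=(n, d)); y_rep = X_rep @ w_star + 0.1 * rng.normal(size=n)
print(on_average_model_stability(X, y, X_rep, y_rep))
```

Fixing the seed so both runs draw the same example indices mirrors the usual coupling argument in algorithmic-stability analyses, where the two SGD trajectories differ only through the single replaced example.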