Title

Bounding the expected run-time of nonconvex optimization with early stopping

Authors

Thomas Flynn, Kwang Min Yu, Abid Malik, Nicolas D'Imperio, Shinjae Yoo

Abstract

This work examines the convergence of stochastic gradient-based optimization algorithms that use early stopping based on a validation function. The form of early stopping we consider is that optimization terminates when the norm of the gradient of a validation function falls below a threshold. We derive conditions that guarantee this stopping rule is well-defined, and provide bounds on the expected number of iterations and gradient evaluations needed to meet this criterion. The guarantee accounts for the distance between the training and validation sets, measured with the Wasserstein distance. We develop the approach in the general setting of a first-order optimization algorithm, with possibly biased update directions subject to a geometric drift condition. We then derive bounds on the expected running time for early stopping variants of several algorithms, including stochastic gradient descent (SGD), decentralized SGD (DSGD), and the stochastic variance reduced gradient (SVRG) algorithm. Finally, we consider the generalization properties of the iterate returned by early stopping.
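
As a rough, self-contained illustration of the stopping rule described in the abstract (this is not the paper's implementation; the names `grad_train`, `grad_val`, and all constants below are our own), here is a minimal sketch of SGD that terminates when the norm of the validation-function gradient falls below a threshold:

```python
import numpy as np

def sgd_with_early_stopping(grad_train, grad_val, x0, lr=0.01,
                            eps=1e-3, max_iters=100_000, rng=None):
    """Run SGD on the training objective, stopping once the norm of the
    validation-function gradient drops below the threshold `eps`.

    grad_train(x, rng) -> stochastic gradient of the training loss
    grad_val(x)        -> gradient of the validation function
    Returns the early-stopped iterate and the number of iterations used.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for t in range(max_iters):
        # Stopping rule from the abstract: terminate when ||grad V(x)|| < eps.
        if np.linalg.norm(grad_val(x)) < eps:
            return x, t
        x = x - lr * grad_train(x, rng)
    return x, max_iters

# Toy usage: training and validation losses are slightly different quadratics,
# loosely echoing the train/validation gap that the paper measures with the
# Wasserstein distance. Both quadratics are invented for this illustration.
A_train, A_val = np.array([1.0, 2.0]), np.array([1.1, 1.9])
g_train = lambda x, rng: A_train * x + 0.01 * rng.standard_normal(x.shape)
g_val = lambda x: A_val * x
x_stop, n_iters = sgd_with_early_stopping(g_train, g_val, x0=[5.0, -3.0], eps=1e-2)
```

The paper analyzes when this kind of rule is well-defined and bounds the expected number of iterations before it triggers; the sketch above only shows the mechanics of the termination check.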
