Paper Title
Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping
Paper Authors
Paper Abstract
Stochastic Gradient Descent (SGD) has become the method of choice for solving a broad range of machine learning problems. However, some of its learning properties are still not fully understood. We consider least squares learning in reproducing kernel Hilbert spaces (RKHSs) and extend the classical SGD analysis to a learning setting in Hilbert scales, including Sobolev spaces and diffusion spaces on compact Riemannian manifolds. We show that even for well-specified models, violation of a traditional benchmark smoothness assumption has a tremendous effect on the learning rate. In addition, we show that for misspecified models, preconditioning in an appropriate Hilbert scale helps to reduce the number of iterations, i.e., it allows for "earlier stopping".
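The abstract describes the setting only at a high level. Below is a minimal, purely illustrative sketch (not the authors' algorithm) of SGD for kernel least squares in which each stochastic gradient is multiplied by a spectral preconditioner; the kernel choice, the preconditioner (K/n)^{-a}, and all function and parameter names are assumptions made for illustration.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """RBF kernel matrix between the rows of X and Y (illustrative choice)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def preconditioned_kernel_sgd(X, y, n_iter=2000, step=0.5, a=0.5, sigma=1.0, seed=0):
    """Plain SGD for kernel least squares with a simple spectral preconditioner
    P = (K/n)^{-a} applied to each stochastic gradient (a hypothetical stand-in
    for preconditioning in a Hilbert scale). a = 0 recovers unpreconditioned
    kernel SGD; stopping after fewer iterations acts as early stopping."""
    rng = np.random.default_rng(seed)
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Spectral preconditioner; a small ridge keeps the inverse power stable.
    evals, evecs = np.linalg.eigh(K / n)
    P = (evecs * (evals + 1e-6) ** (-a)) @ evecs.T
    alpha = np.zeros(n)  # f(.) = sum_i alpha_i k(x_i, .)
    for t in range(n_iter):
        i = rng.integers(n)                       # sample one training point
        residual = K[i] @ alpha - y[i]            # f_t(x_i) - y_i
        grad = np.zeros(n); grad[i] = residual    # stochastic gradient in coefficient space
        alpha -= step / np.sqrt(t + 1) * (P @ grad)
    return alpha
```

Under these assumptions, predictions at new points Xt are given by `gaussian_kernel(Xt, X, sigma) @ alpha`; the exponent `a` and the number of iterations play the roles of the smoothness/preconditioning and stopping parameters discussed in the abstract.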