Paper Title
Adam$^+$: A Stochastic Method with Adaptive Variance Reduction
Paper Authors
Paper Abstract
Adam is a widely used stochastic optimization method for deep learning applications. While practitioners prefer Adam because it requires less parameter tuning, its use is problematic from a theoretical point of view since it may not converge. Variants of Adam have been proposed with provable convergence guarantees, but they tend not to be competitive with Adam in practical performance. In this paper, we propose a new method named Adam$^+$ (pronounced as Adam-plus). Adam$^+$ retains some of the key components of Adam but also has several noticeable differences: (i) it does not maintain a moving average of the second moment estimate but instead computes a moving average of the first moment estimate at extrapolated points; (ii) its adaptive step size is formed not by dividing by the square root of the second moment estimate but instead by dividing by the square root of the norm of the first moment estimate. As a result, Adam$^+$ requires little parameter tuning, like Adam, but it enjoys a provable convergence guarantee. Our analysis further shows that Adam$^+$ enjoys adaptive variance reduction, i.e., the variance of the stochastic gradient estimator decreases as the algorithm converges, and hence enjoys adaptive convergence. We also propose a more general variant of Adam$^+$ with different adaptive step sizes and establish its fast convergence rates. Our empirical studies on various deep learning tasks, including image classification, language modeling, and automatic speech recognition, demonstrate that Adam$^+$ significantly outperforms Adam and achieves performance comparable to best-tuned SGD and momentum SGD.
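The sketch below illustrates the two ingredients highlighted in the abstract: a moving average of first-moment estimates built from gradients taken at extrapolated points, and a step size scaled by the square root of the norm of that average (rather than an element-wise second-moment estimate). The specific extrapolation rule, constants, and function names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def adam_plus_sketch(grad_fn, x0, lr=1e-3, beta=0.9, eps=1e-8, steps=1000):
    """Minimal sketch of the Adam+ ideas described in the abstract.

    grad_fn: callable returning a stochastic gradient at a given point.
    The extrapolation coefficient and update details are assumptions made
    for illustration; consult the paper for the actual method.
    """
    x = np.array(x0, dtype=float)
    x_prev = x.copy()
    m = np.zeros_like(x)  # first-moment estimate; no second-moment average is kept

    for _ in range(steps):
        # (i) Evaluate the stochastic gradient at a (hypothetical) extrapolated
        # point and fold it into a moving average of the first moment only.
        x_extrap = x + (1.0 - beta) / beta * (x - x_prev)  # assumed extrapolation rule
        g = grad_fn(x_extrap)
        m = beta * m + (1.0 - beta) * g

        # (ii) Adaptive step size: scale by the square root of the *norm* of
        # the first-moment estimate instead of an element-wise second moment.
        step = lr / (np.sqrt(np.linalg.norm(m)) + eps)

        x_prev = x
        x = x - step * m

    return x
```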