Paper Title

Analysis of Stochastic Gradient Descent in Continuous Time

Paper Authors

Latz, Jonas

Paper Abstract

Stochastic gradient descent is an optimisation method that combines classical gradient descent with random subsampling within the target functional. In this work, we introduce the stochastic gradient process as a continuous-time representation of stochastic gradient descent. The stochastic gradient process is a dynamical system that is coupled with a continuous-time Markov process living on a finite state space. The dynamical system -- a gradient flow -- represents the gradient descent part, while the process on the finite state space represents the random subsampling. Processes of this type are, for instance, used to model clonal populations in fluctuating environments. After introducing it, we study theoretical properties of the stochastic gradient process: We show that it converges weakly to the gradient flow with respect to the full target function as the learning rate approaches zero. We give conditions under which the stochastic gradient process with constant learning rate is exponentially ergodic in the Wasserstein sense. Then we study the case where the learning rate goes to zero sufficiently slowly and the single target functions are strongly convex. In this case, the process converges weakly to the point mass concentrated in the global minimum of the full target function, indicating consistency of the method. We conclude after a discussion of discretisation strategies for the stochastic gradient process and numerical experiments.
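The abstract does not spell out the exact dynamics, but one natural reading of the construction is an ODE of the form dθ/dt = -∇f_{i(t)}(θ(t)), where f_1, ..., f_N are the sub-targets and (i(t)) is a continuous-time Markov process on {1, ..., N}. The following is a minimal simulation sketch under assumptions made here for illustration only: the index is resampled uniformly after exponential waiting times whose mean equals the learning rate, the sub-targets are simple quadratics, and the gradient flow between jumps is discretised with forward Euler. All names (sub_grad, stochastic_gradient_process, etc.) are hypothetical and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy setting: N quadratic sub-targets f_i(x) = 0.5 * (x - a_i)^2, so the
# full target f = (1/N) * sum_i f_i is minimised at mean(a). (Illustrative choice.)
a = rng.normal(size=10)
N = len(a)

def sub_grad(i, x):
    """Gradient of the i-th sub-target f_i at x."""
    return x - a[i]

def stochastic_gradient_process(x0, learning_rate, t_end, dt=1e-3):
    """Simulate the coupled system: between jumps of the index process, follow
    the gradient flow of the active sub-target (forward Euler); waiting times
    are Exponential with mean `learning_rate` (an assumption, not from the abstract)."""
    x, t = x0, 0.0
    i = rng.integers(N)                        # initial sub-target index
    next_jump = rng.exponential(learning_rate)
    path = [(t, x)]
    while t < t_end:
        if t >= next_jump:                     # index process jumps
            i = rng.integers(N)                # resample uniformly at random
            next_jump = t + rng.exponential(learning_rate)
        x = x - dt * sub_grad(i, x)            # gradient-flow step for f_i
        t += dt
        path.append((t, x))
    return np.array(path)

# As the learning rate shrinks, the path should track the gradient flow of the
# full target more closely and settle near mean(a), in line with the weak
# convergence statement in the abstract.
for lr in (1.0, 0.1, 0.01):
    final = stochastic_gradient_process(x0=5.0, learning_rate=lr, t_end=20.0)[-1, 1]
    print(f"learning rate {lr:5.2f}: final state {final:+.3f}  (full-target minimum {a.mean():+.3f})")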
