Title
On Distributed Non-convex Optimization: Projected Subgradient Method For Weakly Convex Problems in Networks
Authors
Abstract
The stochastic subgradient method is a widely used algorithm for solving large-scale optimization problems arising in machine learning. Often these problems are neither smooth nor convex. Recently, Davis et al. [1-2] characterized the convergence of the stochastic subgradient method for the weakly convex case, which encompasses many important applications (e.g., robust phase retrieval, blind deconvolution, biconvex compressive sensing, and dictionary learning). In practice, distributed implementations of the projected stochastic subgradient method (stoDPSM) are used to speed up risk minimization. In this paper, we propose a distributed implementation of the stochastic subgradient method with theoretical guarantees. Specifically, we establish the global convergence of stoDPSM using the Moreau envelope stationarity measure. Furthermore, under a so-called sharpness condition, we show that the deterministic variant, DPSM, with proper initialization and a geometrically diminishing step size, converges linearly to the sharp minima. We provide numerical experiments to support our theoretical analysis.
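For context, the Moreau envelope stationarity measure referenced above is standardly defined (as in Davis and Drusvyatskiy's analysis of weakly convex problems) through the envelope and proximal map of a $\rho$-weakly convex function $f$; the parameter choice $0 < \lambda < 1/\rho$ below is the usual one and is stated here only as a generic sketch, not as the paper's exact constants:

$$f_{\lambda}(x) = \min_{y} \Big\{ f(y) + \tfrac{1}{2\lambda} \|y - x\|^{2} \Big\}, \qquad \operatorname{prox}_{\lambda f}(x) = \operatorname*{arg\,min}_{y} \Big\{ f(y) + \tfrac{1}{2\lambda} \|y - x\|^{2} \Big\},$$

with near-stationarity quantified by $\|\nabla f_{\lambda}(x)\| = \tfrac{1}{\lambda} \|x - \operatorname{prox}_{\lambda f}(x)\|$; a small value certifies that $x$ is close to a point that is nearly stationary for $f$. Likewise, a geometrically diminishing step size is typically of the form $\alpha_k = \alpha_0 q^{k}$ for some ratio $q \in (0,1)$, with the specific schedule left to the paper.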