Paper Title


New Penalized Stochastic Gradient Methods for Linearly Constrained Strongly Convex Optimization

Paper Authors

Meng Li, Paul Grigas, Alper Atamtürk

Paper Abstract


For minimizing a strongly convex objective function subject to linear inequality constraints, we consider a penalty approach that allows one to utilize stochastic methods for problems with a large number of constraints and/or objective function terms. We provide upper bounds on the distance between the solutions to the original constrained problem and the penalty reformulations, guaranteeing the convergence of the proposed approach. We give a nested accelerated stochastic gradient method and propose a novel way for updating the smoothness parameter of the penalty function and the step size. The proposed algorithm requires at most $\tilde O(1/\sqrt{\epsilon})$ expected stochastic gradient iterations to produce a solution within an expected distance of $\epsilon$ to the optimal solution of the original problem, which is, to the best of our knowledge, the best complexity for this problem class. We also show how to query an approximate dual solution after stochastically solving the penalty reformulations, leading to results on the convergence of the duality gap. Moreover, the nested structure of the algorithm and the upper bounds on the distance to the optimal solutions allow one to safely eliminate, throughout the algorithm, constraints that are inactive at an optimal solution, which leads to improved complexity results. Finally, we present computational results that demonstrate the effectiveness and robustness of our algorithm.
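To make the penalty idea concrete, below is a minimal illustrative sketch (not the paper's nested accelerated algorithm): a strongly convex quadratic $f(x) = \frac{1}{2}\|x - c\|^2$ with linear constraints $Ax \le b$ is replaced by the penalized problem $f(x) + \frac{\lambda}{2}\sum_i \max(0, a_i^\top x - b_i)^2$, which is smooth and unconstrained and can be attacked with (stochastic) gradient methods. The data, the penalty weight `lam`, and the plain fixed-step gradient loop are all hypothetical choices for illustration; the paper instead updates the smoothness parameter and step size adaptively and uses acceleration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem data (hypothetical example): f(x) = 0.5*||x - c||^2, constraints A x <= b.
n, m = 5, 20
c = rng.normal(size=n)
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n) + 0.1  # shifted so the constraint set is nonempty

lam = 100.0  # penalty weight; a fixed guess here, not the paper's schedule

def grad_penalized(x):
    """Gradient of f(x) + (lam/2) * sum_i max(0, a_i^T x - b_i)^2."""
    viol = np.maximum(A @ x - b, 0.0)   # elementwise constraint violations
    return (x - c) + lam * (A.T @ viol)

# Smoothness constant of the penalized objective: 1 (from f) + lam * ||A||_2^2.
L = 1.0 + lam * np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(100_000):
    x -= (1.0 / L) * grad_penalized(x)   # plain gradient descent with 1/L step

# As lam grows, the residual violation of the penalized minimizer shrinks.
print("max constraint violation:", np.max(A @ x - b))
```

A stochastic variant would sample one constraint index $i$ per iteration and use the unbiased gradient estimate $(x - c) + \lambda m \max(0, a_i^\top x - b_i)\, a_i$, which is what makes the approach attractive when $m$ is very large.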
