Paper Title

An Incremental Gradient Method for Large-scale Distributed Nonlinearly Constrained Optimization

Paper Authors

Harshal D. Kaushik and Farzad Yousefian

Paper Abstract

Motivated by applications arising from sensor networks and machine learning, we consider the problem of minimizing a finite sum of nondifferentiable convex functions where each component function is associated with an agent and a hard-to-project constraint set. Among well-known avenues to address finite sum problems is the class of incremental gradient (IG) methods where a single component function is selected at each iteration in a cyclic or randomized manner. When the problem is constrained, the existing IG schemes (including projected IG, proximal IAG, and SAGA) require a projection step onto the feasible set at each iteration. Consequently, the performance of these schemes is afflicted with costly projections when the problem includes: (1) nonlinear constraints, or (2) a large number of linear constraints. Our focus in this paper lies in addressing both of these challenges. We develop an algorithm called averaged iteratively regularized incremental gradient (aIR-IG) that does not involve any hard-to-project computation. Under mild assumptions, we derive non-asymptotic rates of convergence for both suboptimality and infeasibility metrics. Numerically, we show that the proposed scheme outperforms the standard projected IG methods on distributed soft-margin support vector machine problems.
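For intuition, the following is a minimal, hedged sketch contrasting a standard projected IG step, which requires a projection onto the feasible set at every iteration, with a projection-free step in the spirit of iterative regularization, where infeasibility is instead penalized through a diminishing regularization sequence and the iterates are averaged. This is not the authors' aIR-IG algorithm, whose exact update rules are not given in this abstract; all function names and schedule choices (grad_f_i, grad_penalty, the decay rates of gamma_k and lam_k) are illustrative assumptions.

```python
# Hedged sketch: generic projected IG vs. an iteratively regularized,
# projection-free IG step. This is NOT the paper's aIR-IG scheme; all
# names (grad_f_i, project_X, grad_penalty) are illustrative assumptions.
import numpy as np

def projected_ig_step(x, i, grad_f_i, project_X, gamma):
    """Standard projected IG: one component (sub)gradient, then a projection.

    The projection onto X is the costly step when X is described by
    nonlinear constraints or by a large number of linear constraints.
    """
    return project_X(x - gamma * grad_f_i(x, i))

def regularized_ig_step(x, i, grad_f_i, grad_penalty, gamma, lam):
    """Projection-free step in the spirit of iterative regularization:
    follow the component (sub)gradient plus a term penalizing
    infeasibility; lam is driven to zero so the iterates approach the
    feasible set without any projection being computed.
    """
    return x - gamma * (grad_f_i(x, i) + lam * grad_penalty(x))

def run_averaged_ig(x0, n, steps, grad_f_i, grad_penalty):
    """Cyclic pass over the n components with diminishing gamma_k and lam_k,
    returning the running average of the iterates (the 'averaged' part)."""
    x, x_avg = x0.copy(), x0.copy()
    for k in range(steps):
        gamma_k = 1.0 / np.sqrt(k + 1)   # assumed step-size schedule
        lam_k = 1.0 / (k + 1) ** 0.25    # assumed regularization decay
        x = regularized_ig_step(x, k % n, grad_f_i, grad_penalty, gamma_k, lam_k)
        x_avg += (x - x_avg) / (k + 1)   # incremental averaging
    return x_avg
```

The averaging damps the oscillation induced by single-component gradients, and the joint decay rates of the step size and the regularization parameter are what govern the trade-off between the suboptimality and infeasibility metrics for which the paper derives non-asymptotic rates.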
