Paper Title

Network Optimization via Smooth Exact Penalty Functions Enabled by Distributed Gradient Computation

Paper Authors

Priyank Srivastava, Jorge Cortes

Paper Abstract

This paper proposes a distributed algorithm for a network of agents to solve an optimization problem with separable objective function and locally coupled constraints. Our strategy is based on reformulating the original constrained problem as the unconstrained optimization of a smooth (continuously differentiable) exact penalty function. Computing the gradient of this penalty function in a distributed way is challenging even under the separability assumptions on the original optimization problem. Our technical approach shows that the distributed computation problem for the gradient can be formulated as a system of linear algebraic equations defined by separable problem data. To solve it, we design an exponentially fast, input-to-state stable distributed algorithm that does not require the individual agent matrices to be invertible. We employ this strategy to compute the gradient of the penalty function at the current network state. Our distributed algorithmic solver for the original constrained optimization problem interconnects this estimation with the prescription of having the agents follow the resulting direction. Numerical simulations illustrate the convergence and robustness properties of the proposed algorithm.
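
For a concrete picture of the reformulation the abstract describes, here is a minimal sketch using a Fletcher-type smooth exact penalty for an equality-constrained instance. This is an illustrative assumption: the paper also treats inequality constraints, and its exact penalty construction may differ.

```latex
\[
\min_x\, f(x)\ \ \text{s.t.}\ \ h(x)=0
\quad\rightsquigarrow\quad
\min_x\, f_\varepsilon(x) = f(x) + h(x)^\top \lambda(x)
  + \tfrac{1}{\varepsilon}\,\lVert h(x)\rVert^2,
\]
\[
\lambda(x) = \arg\min_\lambda\, \lVert \nabla f(x) + \nabla h(x)\,\lambda \rVert^2
\quad\Longleftrightarrow\quad
\nabla h(x)^\top \nabla h(x)\,\lambda(x) = -\,\nabla h(x)^\top \nabla f(x).
\]
```

Because the multiplier estimate λ(x), and hence the penalty gradient, is defined through the normal equations above, every gradient evaluation amounts to solving a system of linear algebraic equations built from problem data; when that data is separable across agents, this is exactly the distributed linear-equation problem the abstract refers to.

The sketch below shows one standard consensus-based iteration for such a system with data split across agents. It is chosen for brevity and is not the exponentially fast, input-to-state stable algorithm designed in the paper; the function name, step sizes, and three-agent example are all illustrative assumptions.

```python
import numpy as np

def distributed_least_squares(A_blocks, b_blocks, neighbors,
                              alpha=0.01, beta=50.0, iters=5000):
    """Consensus-based distributed solver for the stacked system A x = b.

    Agent i holds only its local block (A_i, b_i) and takes gradient steps
    on its local residual plus a consensus penalty with its neighbors.
    No individual A_i needs to be square or invertible; only the stacked
    matrix needs full column rank for the solution to be unique.
    """
    n_agents = len(A_blocks)
    dim = A_blocks[0].shape[1]
    X = np.zeros((n_agents, dim))        # row i is agent i's estimate
    for _ in range(iters):
        X_new = X.copy()
        for i in range(n_agents):
            grad_local = A_blocks[i].T @ (A_blocks[i] @ X[i] - b_blocks[i])
            disagreement = sum(X[i] - X[j] for j in neighbors[i])
            X_new[i] = X[i] - alpha * (grad_local + beta * disagreement)
        X = X_new
    return X

# Three agents jointly solve a 2-variable system; each agent owns one row.
A_blocks = [np.array([[2.0, 1.0]]),
            np.array([[1.0, 3.0]]),
            np.array([[1.0, -1.0]])]
b_blocks = [np.array([5.0]), np.array([10.0]), np.array([-1.0])]
neighbors = {0: [1], 1: [0, 2], 2: [1]}  # path graph 0 - 1 - 2

X = distributed_least_squares(A_blocks, b_blocks, neighbors)
print(X)  # every row lands near the least-squares solution [1.3, 2.8]
print(np.linalg.lstsq(np.vstack(A_blocks),
                      np.concatenate(b_blocks), rcond=None)[0])
```

Each agent touches only its own (A_i, b_i) and its neighbors' estimates, and no A_i here is square, echoing the abstract's point that individual agent matrices need not be invertible. The consensus penalty leaves a residual disagreement of order 1/beta, so a larger beta (with a correspondingly smaller step size alpha) tightens the estimates around the least-squares solution.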
