Paper Title


CoDGraD: A Code-based Distributed Gradient Descent Scheme for Decentralized Convex Optimization

Authors

Elie Atallah, Nazanin Rahnavard, Qiyu Sun

Abstract


In this paper, we consider a large network containing many regions, where each region is equipped with a worker that has some data processing and communication capability. In such a network, some workers may become stragglers due to failures or heavy delays in computation or communication. To resolve this straggling problem, a coded scheme that introduces certain redundancy for every worker was recently proposed, and a gradient coding paradigm was developed to solve convex optimization problems when the network has a centralized fusion center. In this paper, we propose an iterative distributed algorithm, referred to as the Code-Based Distributed Gradient Descent algorithm (CoDGraD), to solve convex optimization problems over distributed networks. In each iteration of the proposed algorithm, an active worker shares the coded local gradient and the approximate solution of the convex optimization problem only with non-straggling workers in adjacent regions. We also provide consensus and convergence analysis for the CoDGraD algorithm and demonstrate its performance via numerical simulations.
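Since the abstract describes the update only at a high level, the following is a minimal Python sketch of a generic decentralized consensus-plus-gradient iteration with simulated stragglers, in the spirit of what the abstract describes. The ring topology, quadratic local objectives, uniform mixing weights, straggler probability, and step size are all illustrative assumptions, and the coded-redundancy (gradient coding) component of CoDGraD is not reproduced here; this is not the authors' algorithm.

```python
# Minimal sketch: decentralized gradient descent where each worker mixes its
# iterate with non-straggling neighbors, then takes a local gradient step.
# All problem data and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_workers, dim = 6, 3
# Ring topology: each worker communicates only with its two adjacent regions.
neighbors = {i: [(i - 1) % n_workers, (i + 1) % n_workers]
             for i in range(n_workers)}

# Hypothetical local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.normal(size=(8, dim)) for _ in range(n_workers)]
b = [rng.normal(size=8) for _ in range(n_workers)]

def local_gradient(i, x):
    """Gradient of the i-th local quadratic objective at x."""
    return A[i].T @ (A[i] @ x - b[i])

x = [np.zeros(dim) for _ in range(n_workers)]
step = 0.01

for t in range(500):
    # Simulate stragglers: each worker fails to respond with probability 0.2.
    active = [i for i in range(n_workers) if rng.random() > 0.2]
    new_x = []
    for i in range(n_workers):
        # Mix only with non-straggling neighbors (uniform weights here),
        # always including the worker's own iterate.
        peers = [j for j in neighbors[i] if j in active] + [i]
        avg = np.mean([x[j] for j in peers], axis=0)
        # Gradient step taken at the mixed iterate (adapt-then-combine style).
        new_x.append(avg - step * local_gradient(i, avg))
    x = new_x

# Consensus check: all workers should hold nearly the same iterate.
print("max pairwise distance:",
      max(np.linalg.norm(x[i] - x[j])
          for i in range(n_workers) for j in range(n_workers)))
```

Note that this sketch simply drops stragglers from the mixing step, whereas in the coded scheme the abstract refers to, the redundancy introduced at each worker is what allows the needed gradient information to be recovered from non-straggling workers alone.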
