Paper Title
Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing
Paper Authors
Paper Abstract
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles. Due to communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data at a fusion center. To train large-scale machine learning models, edge/fog computing is often leveraged as an alternative to centralized learning. We consider the problem of learning model parameters in a multi-agent system whose data are processed locally at distributed edge nodes. A class of mini-batch stochastic alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model. To address two main critical challenges in distributed networks, i.e., the communication bottleneck and straggler nodes (nodes with slow responses), an error-control-coding-based stochastic incremental ADMM is investigated. Given an appropriate mini-batch size, we show that the mini-batch stochastic ADMM based method converges at a rate of $O(\frac{1}{\sqrt{k}})$, where $k$ denotes the number of iterations. Numerical experiments show that the proposed algorithm is communication-efficient, fast to respond, and robust in the presence of straggler nodes compared with state-of-the-art algorithms.
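To make the abstract's setting concrete, below is a minimal sketch of a generic mini-batch stochastic consensus ADMM, not the paper's coded/incremental variant: each agent holds local copies `x[i]` of the model constrained to agree with a consensus variable `z`, and the exact x-update is replaced by a linearized step driven by a mini-batch gradient. The problem (local least squares), penalty `rho`, proximal step size `eta`, and mini-batch size `batch` are all assumed for illustration.

```python
import numpy as np

# Hypothetical setup: N agents, each with local data (A_i, b_i) and objective
# f_i(x) = (1/2m)||A_i x - b_i||^2, coupled by the consensus constraint x_i = z.
rng = np.random.default_rng(0)
N, m, d = 5, 200, 10          # agents, samples per agent, model dimension
batch = 20                    # mini-batch size (assumed hyperparameter)
rho, eta = 1.0, 0.1           # ADMM penalty and proximal step size (assumed)

x_true = rng.standard_normal(d)
A = [rng.standard_normal((m, d)) for _ in range(N)]
b = [A[i] @ x_true + 0.1 * rng.standard_normal(m) for i in range(N)]

x = [np.zeros(d) for _ in range(N)]   # local primal variables
u = [np.zeros(d) for _ in range(N)]   # scaled dual variables
z = np.zeros(d)                       # consensus variable

for k in range(500):
    for i in range(N):
        # Mini-batch stochastic gradient of f_i at the current local iterate.
        idx = rng.choice(m, size=batch, replace=False)
        g = A[i][idx].T @ (A[i][idx] @ x[i] - b[i][idx]) / batch
        # Linearized stochastic x-update: minimizer of
        #   g^T x + (rho/2)||x - z + u_i||^2 + (1/(2*eta))||x - x_i||^2,
        # which has the closed form below.
        x[i] = (rho * (z - u[i]) + x[i] / eta - g) / (rho + 1.0 / eta)
    # z-update: average of local variables plus scaled duals (consensus step).
    z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
    # Dual ascent on the consensus constraint x_i = z.
    for i in range(N):
        u[i] += x[i] - z

print("consensus error:", np.linalg.norm(z - x_true))
```

In this sketch each agent communicates only `x[i] + u[i]` per round, which is where the abstract's communication bottleneck arises; the coded, incremental variant studied in the paper additionally adds error-control coding so that slow (straggler) nodes need not be waited on.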