Paper Title
Coded Computing for Federated Learning at the Edge
Paper Authors
Paper Abstract
Federated Learning (FL) is an exciting new paradigm that enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server. Performance of FL in a multi-access edge computing (MEC) network suffers from slow convergence due to heterogeneity and stochastic fluctuations in compute power and communication link qualities across clients. A recent work, Coded Federated Learning (CFL), proposes to mitigate stragglers and speed up training for linear regression tasks by assigning redundant computations at the MEC server. Coding redundancy in CFL is computed by exploiting statistical properties of compute and communication delays. We develop CodedFedL that addresses the difficult task of extending CFL to distributed non-linear regression and classification problems with multioutput labels. The key innovation of our work is to exploit distributed kernel embedding using random Fourier features that transforms the training task into distributed linear regression. We provide an analytical solution for load allocation, and demonstrate significant performance gains for CodedFedL through experiments over benchmark datasets using practical network parameters.
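To illustrate the kernel-embedding step the abstract refers to, the following is a minimal sketch of a random Fourier feature (RFF) mapping in the style of Rahimi and Recht, assuming an RBF kernel. The function name, parameters, and default values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def random_fourier_features(X, num_features, gamma, seed=0):
    """Illustrative sketch (not the paper's exact method): map data
    X of shape (n_samples, d) into an approximate RBF-kernel feature
    space of dimension num_features using random Fourier features.
    gamma is the assumed RBF bandwidth in k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random frequencies drawn from the kernel's spectral density
    # (Gaussian with variance 2 * gamma for the RBF kernel).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
    # Random phase offsets uniform on [0, 2*pi).
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    # Cosine features whose inner products approximate the RBF kernel.
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)
```

After such an embedding, each client can fit an ordinary least-squares model on its transformed features, which is how a non-linear regression or classification task can be recast as distributed linear regression, the setting in which coded redundancy is applied.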