Paper Title

Revisiting Communication-Efficient Federated Learning with Balanced Global and Local Updates

Paper Authors

Zhigang Yan, Dong Li, Zhichao Zhang, Jiguang He

Paper Abstract

In federated learning (FL), a number of devices train their local models and upload the corresponding parameters or gradients to the base station (BS) to update the global model while protecting their data privacy. However, due to limited computation and communication resources, the number of local training rounds (a.k.a. local updates) and the number of aggregations (a.k.a. global updates) need to be carefully chosen. In this paper, we investigate and analyze the optimal trade-off between the number of local training rounds and the number of global aggregations to speed up convergence and improve prediction accuracy over existing works. Our goal is to minimize the global loss function under both delay and energy consumption constraints. To make the optimization problem tractable, we derive a new and tight upper bound on the loss function, which allows us to obtain closed-form expressions for the number of local training rounds and the number of global aggregations. Simulation results show that our proposed scheme achieves better prediction accuracy and converges much faster than the baseline schemes.
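
The abstract describes the standard FL loop (local training at each device followed by parameter aggregation at the BS) and studies how to split a delay/energy budget between local updates and global aggregations. Below is a minimal, self-contained Python sketch of that loop on a synthetic least-squares task, assuming an unweighted FedAvg-style aggregation; the device count, learning rate, and the (tau, K) schedules are illustrative assumptions, and the paper's closed-form choice of these quantities is not reproduced here.

```python
import numpy as np

# Illustrative sketch only: FedAvg-style training on a synthetic linear-regression
# task, parameterized by the number of local updates (tau) and the number of
# global aggregations (K) -- the two quantities whose trade-off the paper studies.
# The paper's closed-form expressions for tau and K are NOT reproduced here.

rng = np.random.default_rng(0)
NUM_DEVICES, DIM, SAMPLES = 5, 10, 100  # assumed, for illustration
W_TRUE = rng.normal(size=DIM)

# Each device holds its own local dataset (X_i, y_i).
local_data = []
for _ in range(NUM_DEVICES):
    X = rng.normal(size=(SAMPLES, DIM))
    y = X @ W_TRUE + 0.1 * rng.normal(size=SAMPLES)
    local_data.append((X, y))

def local_sgd(w, X, y, tau, lr=0.01):
    """Run tau local gradient steps on one device's mean-squared-error loss."""
    for _ in range(tau):
        grad = 2.0 / len(y) * X.T @ (X @ w - y)
        w = w - lr * grad
    return w

def federated_train(tau, K):
    """K global aggregations, each preceded by tau local updates per device."""
    w_global = np.zeros(DIM)
    for _ in range(K):
        # Devices train locally starting from the current global model.
        local_models = [local_sgd(w_global.copy(), X, y, tau) for X, y in local_data]
        # The BS aggregates the uploaded parameters (simple unweighted average).
        w_global = np.mean(local_models, axis=0)
    return w_global

# Compare two schedules with the same total number of local steps (tau * K = 100).
for tau, K in [(1, 100), (10, 10)]:
    w = federated_train(tau, K)
    loss = np.mean([np.mean((X @ w - y) ** 2) for X, y in local_data])
    print(f"tau={tau:3d}, K={K:3d}, global loss={loss:.4f}")
```

Running the sketch with different (tau, K) pairs under a fixed tau*K budget gives a rough feel for the trade-off that the paper optimizes analytically under explicit delay and energy constraints.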
