Paper Title
Fast-Convergent Federated Learning
Paper Authors
Paper Abstract
Federated learning has emerged recently as a promising solution for distributing machine learning tasks through modern networks of mobile devices. Recent studies have obtained lower bounds on the expected decrease in model loss that is achieved through each round of federated learning. However, convergence generally requires a large number of communication rounds, which induces delay in model training and is costly in terms of network resources. In this paper, we propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed. We first theoretically characterize a lower bound on the improvement that can be obtained in each round if devices are selected according to the expected improvement their local models will provide to the current global model. Then, we show that FOLB obtains this bound through uniform sampling by weighting device updates according to their gradient information. FOLB is able to handle both communication and computation heterogeneity across devices by adapting its aggregations according to estimates of each device's capability to contribute to the updates. We evaluate FOLB in comparison with existing federated learning algorithms and experimentally show its improvement in trained model accuracy, convergence speed, and/or model stability across various machine learning tasks and datasets.
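To make the gradient-weighted aggregation idea concrete, below is a minimal Python sketch of one such round. It is an illustrative simplification under stated assumptions, not the paper's exact algorithm: the function name folb_style_round, the device interface local_gradient, and the alignment-based weighting rule (weighting each sampled device's update by the inner product of its local gradient with an estimate of the global gradient) are all hypothetical stand-ins for the scheme the abstract describes.

```python
import numpy as np

def folb_style_round(global_model, devices, num_sampled, lr, rng):
    """One sketched aggregation round in the spirit of FOLB.

    Assumptions (not from the paper): models and gradients are flat
    numpy vectors, and each device exposes local_gradient(model).
    """
    # Uniformly sample a subset of devices, as the abstract describes.
    sampled = rng.choice(len(devices), size=num_sampled, replace=False)
    local_grads = [devices[i].local_gradient(global_model) for i in sampled]

    # Estimate the global gradient from the sampled local gradients.
    global_grad_est = np.mean(local_grads, axis=0)

    # Weight each device's update by how well its local gradient
    # aligns with the estimated global gradient.
    weights = np.array([np.dot(g, global_grad_est) for g in local_grads])
    weights = np.maximum(weights, 0.0)      # discard conflicting updates
    if weights.sum() == 0.0:
        weights = np.ones_like(weights)     # fall back to uniform weights
    weights /= weights.sum()

    # Aggregate the weighted local gradients and take a global step.
    update = sum(w * g for w, g in zip(weights, local_grads))
    return global_model - lr * update
```

In this sketch, devices whose local gradients point in roughly the same direction as the estimated global gradient contribute more to the aggregate, which is one plausible way to prioritize the devices expected to improve the global model most in a given round; handling of communication and computation heterogeneity (e.g., discounting stale or partial updates) is omitted.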