Paper Title

Speeding up Heterogeneous Federated Learning with Sequentially Trained Superclients

Paper Authors

Riccardo Zaccone, Andrea Rizzardi, Debora Caldarola, Marco Ciccone, Barbara Caputo

Paper Abstract

Federated Learning (FL) allows training machine learning models in privacy-constrained scenarios by enabling the cooperation of edge devices without requiring local data sharing. This approach raises several challenges due to the different statistical distributions of the local datasets and the clients' computational heterogeneity. In particular, the presence of highly non-i.i.d. data severely impairs both the performance of the trained neural network and its convergence rate, increasing the number of communication rounds required to reach a performance comparable to that of the centralized scenario. As a solution, we propose FedSeq, a novel framework leveraging the sequential training of subgroups of heterogeneous clients, i.e., superclients, to emulate the centralized paradigm in a privacy-compliant way. Given a fixed budget of communication rounds, we show that FedSeq outperforms or matches several state-of-the-art federated algorithms in terms of final performance and speed of convergence. Finally, our method can be easily integrated with other approaches available in the literature. Empirical results show that combining existing algorithms with FedSeq further improves their final performance and convergence speed. We test our method on CIFAR-10 and CIFAR-100 and prove its effectiveness in both i.i.d. and non-i.i.d. scenarios.
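To make the idea in the abstract concrete, here is a minimal sketch of a single FedSeq-style communication round, assuming the high-level description above: clients are grouped into superclients, the model is passed sequentially from client to client within each superclient (mimicking centralized training on their combined data), and the server then averages the superclient models FedAvg-style. All names (`local_train`, the toy least-squares "model", the fixed grouping) are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, data, targets, lr=0.1, epochs=1):
    """Placeholder local update: a few gradient steps on a least-squares loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - targets) / len(targets)
        w -= lr * grad
    return w

# Toy federation: 8 clients, each holding a small private dataset (features, targets).
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(8)]

# Group clients into superclients. In FedSeq the grouping aims to make each
# superclient's overall data distribution as close to homogeneous as possible;
# here we simply split the client list in half for illustration.
superclients = [clients[:4], clients[4:]]

global_weights = np.zeros(5)

for round_idx in range(10):
    superclient_models = []
    for group in superclients:
        # Sequential training within the superclient: each client continues
        # training the model received from the previous one.
        w = global_weights.copy()
        for data, targets in group:
            w = local_train(w, data, targets)
        superclient_models.append(w)
    # Server aggregation: average the models produced by the superclients.
    global_weights = np.mean(superclient_models, axis=0)

print("final weights:", global_weights)
```

In this sketch only model parameters travel between clients and the server, so raw data never leaves a device, which is the privacy-compliant behavior the abstract refers to.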
