Paper Title


Heterogeneous Federated Learning

Authors

Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, Chenchen Liu, Zhi Tian, Xiang Chen

Abstract


Federated learning learns from scattered data by fusing collaborative models from local nodes. However, due to chaotic information distribution, the model fusion may suffer from structural misalignment with regard to unmatched parameters. In this work, we propose a novel federated learning framework to resolve this issue by establishing a firm structure-information alignment across collaborative models. Specifically, we design a feature-oriented regulation method (Ψ-Net) to ensure explicit feature information allocation in different neural network structures. By applying this regulation method to collaborative models, matchable structures with similar feature information can be initialized at the very early training stage. During the federated learning process, under either IID or non-IID scenarios, dedicated collaboration schemes further guarantee ordered information distribution with definite structure matching, and thus comprehensive model alignment. Eventually, this framework effectively extends the applicability of federated learning to extensive heterogeneous settings, while providing excellent convergence speed, accuracy, and computation/communication efficiency.
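The fusion step the abstract refers to is, in the standard setting, a coordinate-wise average of client parameters (FedAvg-style). The minimal NumPy sketch below (all names hypothetical, not from the paper) illustrates the structural-misalignment problem the abstract describes: coordinate-wise averaging implicitly assumes that parameter positions match across clients, so averaging two models whose hidden units are permuted relative to each other mixes unrelated features.

```python
import numpy as np

def fedavg(client_weights):
    """Coordinate-wise average of per-client parameter lists (FedAvg-style fusion)."""
    return [np.mean(np.stack(layer), axis=0) for layer in zip(*client_weights)]

# Two tiny one-layer "models": 3 hidden units, 2 inputs each.
rng = np.random.default_rng(0)
w_a = rng.normal(size=(3, 2))

# Client B has learned the same features, but its hidden units are permuted:
# functionally equivalent, structurally misaligned.
w_b = w_a[[2, 0, 1]]

aligned = fedavg([[w_a], [w_a]])[0]     # aligned clients
misaligned = fedavg([[w_a], [w_b]])[0]  # permuted clients: unrelated rows get mixed

print(np.allclose(aligned, w_a))        # averaging aligned models preserves features
print(np.allclose(misaligned, w_a))     # permutation corrupts the fused parameters
```

The paper's Ψ-Net regulation addresses this by fixing the feature-to-structure allocation up front, so that position i in one client's model carries the same feature information as position i in another's, making the coordinate-wise fusion meaningful.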
