Paper Title
Federated Mutual Learning
Paper Authors
Paper Abstract
Federated learning (FL) enables collaborative training of deep learning models on decentralized data. However, three types of heterogeneity in the FL setting pose distinctive challenges to the canonical federated learning algorithm (FedAvg). First, due to the Non-IIDness of the data, the globally shared model may perform worse than local models trained solely on their private data. Second, the objectives of the central server and the clients may differ: the server seeks a generalized model, whereas each client pursues a personalized one, and clients may run different tasks. Third, clients may need to design customized models for various scenes and tasks. In this work, we present a novel federated learning paradigm, named Federated Mutual Learning (FML), that deals with all three heterogeneities. FML allows clients to train a generalized model collaboratively and a personalized model independently, while also designing their own private customized models. The Non-IIDness of the data is thus no longer a bug but a feature that lets each client be served better personally. Experiments show that FML achieves better performance than the alternatives in the typical FL setting, and that clients can benefit from FML even with different models and tasks.
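Since the abstract describes each client jointly training a shared, generalized model and a private, personalized model, a short sketch of the mutual-learning step may help make the idea concrete. The code below is a minimal, hypothetical illustration in PyTorch, not the authors' implementation: the function name fml_local_update, the names meme_model (the shared model aggregated by the server) and local_model (the client's private model), and the loss weights alpha/beta are assumptions introduced here for illustration.

```python
# Hypothetical sketch of one FML client update (assumes PyTorch and a
# classification task); names and weighting scheme are illustrative.
import torch
import torch.nn.functional as F

def fml_local_update(meme_model, local_model, loader, alpha=0.5, beta=0.5,
                     lr=0.01, device="cpu"):
    """One local epoch of mutual learning between the shared and private models."""
    meme_opt = torch.optim.SGD(meme_model.parameters(), lr=lr)
    local_opt = torch.optim.SGD(local_model.parameters(), lr=lr)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        meme_logits = meme_model(x)
        local_logits = local_model(x)
        # Each model learns from the labels (cross-entropy) and from the
        # other model's soft predictions (KL divergence), i.e. mutual learning.
        # detach() keeps each KL term from back-propagating into the peer model.
        kl_meme = F.kl_div(F.log_softmax(meme_logits, dim=1),
                           F.softmax(local_logits.detach(), dim=1),
                           reduction="batchmean")
        kl_local = F.kl_div(F.log_softmax(local_logits, dim=1),
                            F.softmax(meme_logits.detach(), dim=1),
                            reduction="batchmean")
        loss_meme = alpha * F.cross_entropy(meme_logits, y) + (1 - alpha) * kl_meme
        loss_local = beta * F.cross_entropy(local_logits, y) + (1 - beta) * kl_local
        meme_opt.zero_grad()
        local_opt.zero_grad()
        (loss_meme + loss_local).backward()
        meme_opt.step()
        local_opt.step()
    # Only the shared model's weights are returned for server-side averaging;
    # the personalized model never leaves the client.
    return meme_model.state_dict()
```

In this sketch, the cross-entropy term anchors each model to the client's data while the KL term transfers knowledge between the two, which is what would let the shared model absorb generalized knowledge while the private model stays personalized; the two models may also have entirely different architectures, since only predictions are exchanged.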