Paper Title
FedDTG:Federated Data-Free Knowledge Distillation via Three-Player Generative Adversarial Networks
Paper Authors
Paper Abstract
While existing federated learning approaches primarily focus on aggregating local models to construct a global model, in realistic settings some clients may be reluctant to share their private models because those models encode privacy-sensitive information. Knowledge distillation, which can transfer model knowledge without access to model parameters, is well suited to this federated scenario. However, most distillation methods in federated learning (federated distillation) require a proxy dataset, which is difficult to obtain in the real world. In this paper, we therefore introduce a distributed three-player Generative Adversarial Network (GAN) to implement data-free mutual distillation and propose an effective method called FedDTG. We confirm that the fake samples generated by the GAN make federated distillation more efficient and robust. Moreover, the distillation process between clients delivers strong individual client performance while simultaneously acquiring global knowledge and protecting data privacy. Extensive experiments on benchmark vision datasets demonstrate that our method outperforms other federated distillation algorithms in terms of generalization.
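To make the core idea concrete, the sketch below illustrates one way the data-free mutual distillation described in the abstract could be realized: a conditional generator produces fake samples, and each client distills from the averaged softened predictions of its peers on that shared fake batch, so only logits on generated data are exchanged. This is a minimal illustrative sketch assuming PyTorch; the class and function names (`Generator`, `mutual_distillation_round`) and all hyperparameters are hypothetical, not the authors' released implementation.

```python
# Illustrative sketch only: names, architectures, and hyperparameters are
# assumptions, not the FedDTG authors' actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps noise plus a class label to a fake sample (here a flat image)."""
    def __init__(self, noise_dim=100, num_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        # Condition the noise on the label embedding, then generate.
        return self.net(z * self.embed(y))

def mutual_distillation_round(generator, clients, num_classes=10,
                              batch_size=64, noise_dim=100, temp=3.0):
    """One communication round of data-free mutual distillation.

    Each client treats the average of the other clients' softened outputs
    on a shared fake batch as its teacher signal, so no raw data and no
    model parameters are exchanged, only predictions on generated samples.
    """
    z = torch.randn(batch_size, noise_dim)
    y = torch.randint(0, num_classes, (batch_size,))
    with torch.no_grad():
        fake = generator(z, y)

    # Every client's temperature-softened predictions on the fake batch.
    all_probs = [F.softmax(c(fake) / temp, dim=1).detach() for c in clients]

    losses = []
    for i, client in enumerate(clients):
        # Teacher = mean of the peers' soft labels, excluding client i itself.
        teacher = torch.stack(
            [p for j, p in enumerate(all_probs) if j != i]).mean(dim=0)
        student_log_probs = F.log_softmax(client(fake) / temp, dim=1)
        # KL divergence to the peers' averaged soft labels.
        losses.append(F.kl_div(student_log_probs, teacher,
                               reduction="batchmean"))
    return losses

# Usage: three simple linear classifiers standing in for client models.
clients = [nn.Linear(784, 10) for _ in range(3)]
gen = Generator()
print([loss.item() for loss in mutual_distillation_round(gen, clients)])
```

Under this reading, the generator plays the role of the proxy dataset that conventional federated distillation requires, which is why the method can operate data-free; the adversarial training of the three players (generator, discriminator, and classifiers) is omitted here for brevity.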