Paper Title

Federated Unlearning with Knowledge Distillation

Paper Authors

Chen Wu, Sencun Zhu, Prasenjit Mitra

Paper Abstract

Federated Learning (FL) is designed to protect the data privacy of each client during the training process by transmitting only models instead of the original data. However, the trained model may memorize certain information about the training data. With the recent legislation on right to be forgotten, it is crucially essential for the FL model to possess the ability to forget what it has learned from each client. We propose a novel federated unlearning method to eliminate a client's contribution by subtracting the accumulated historical updates from the model and leveraging the knowledge distillation method to restore the model's performance without using any data from the clients. This method does not have any restrictions on the type of neural networks and does not rely on clients' participation, so it is practical and efficient in the FL system. We further introduce backdoor attacks in the training process to help evaluate the unlearning effect. Experiments on three canonical datasets demonstrate the effectiveness and efficiency of our method.
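The abstract describes a two-step procedure: first subtract the target client's accumulated historical updates from the global model, then restore the resulting model's utility via knowledge distillation from the old global model, without touching any client data. The sketch below illustrates that idea in PyTorch under stated assumptions; it is not the authors' implementation, and names such as `subtract_client_updates`, `distill_restore`, and the structure of `client_update_history` are hypothetical.

```python
# A minimal sketch (not the paper's code) of the unlearning idea in the abstract:
# (1) undo one client's accumulated parameter updates, (2) distill from the old
# global model (teacher) into the unlearned model (student) on unlabeled data.

import copy
import torch
import torch.nn.functional as F


def subtract_client_updates(global_model, client_update_history):
    """Remove a target client's accumulated updates from the global model.

    client_update_history: assumed to be a list of dicts, one per federated
    round, mapping parameter names to the deltas that client contributed.
    """
    unlearned = copy.deepcopy(global_model)
    state = unlearned.state_dict()
    for update in client_update_history:
        for name, delta in update.items():
            state[name] -= delta  # undo this round's contribution
    unlearned.load_state_dict(state)
    return unlearned


def distill_restore(teacher, student, unlabeled_loader, epochs=1, T=2.0, lr=1e-3):
    """Recover the skewed model's performance by distilling the old global
    model's soft predictions on unlabeled inputs (no client data needed)."""
    teacher.eval()
    optimizer = torch.optim.SGD(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x in unlabeled_loader:  # assumed to yield input batches only
            with torch.no_grad():
                soft_targets = F.softmax(teacher(x) / T, dim=1)
            loss = F.kl_div(
                F.log_softmax(student(x) / T, dim=1),
                soft_targets,
                reduction="batchmean",
            ) * (T * T)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```

In this reading, the teacher is the global model before unlearning and the student is the model obtained after the subtraction step; distillation on server-side unlabeled data is what lets the method avoid any further client participation.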
