Paper Title
Federated Model Distillation with Noise-Free Differential Privacy
Paper Authors
Paper Abstract
Conventional federated learning directly averages model weights, which is only possible for collaboration between models with homogeneous architectures. Sharing predictions instead of weights removes this obstacle and eliminates the risk of white-box inference attacks in conventional federated learning. However, the predictions of local models are sensitive and can leak training data privacy to the public. To address this issue, a naive approach is to add differentially private random noise to the predictions, which, however, imposes a substantial trade-off between the privacy budget and model performance. In this paper, we propose a novel framework called FEDMD-NFDP, which applies a Noise-Free Differential Privacy (NFDP) mechanism to a federated model distillation framework. Extensive experimental results on various datasets validate that FEDMD-NFDP not only delivers comparable utility and communication efficiency but also provides a noise-free differential privacy guarantee. We also demonstrate the feasibility of FEDMD-NFDP by considering both IID and non-IID settings, heterogeneous model architectures, and unlabelled public datasets from a different distribution.
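To make the workflow in the abstract concrete, below is a minimal sketch of one plausible FEDMD-NFDP-style loop in plain NumPy. It assumes the common reading of NFDP: the differential privacy guarantee comes from each client training only on a random subsample of its private data (privacy amplification by sampling), with no noise added anywhere; collaboration then happens by sharing predictions on a shared unlabelled public set rather than weights. The toy logistic-regression clients, the synthetic data, `SAMPLE_RATIO`, and all helper names are illustrative assumptions of this sketch, not the paper's reference implementation.

```python
# Sketch of a federated model distillation round with sampling-based
# ("noise-free") DP. Everything here is a toy stand-in for illustration.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, NUM_CLASSES, DIM = 3, 4, 10


def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


class LogisticClient:
    """Stand-in for an arbitrary (possibly heterogeneous) local model."""

    def __init__(self):
        self.W = np.zeros((DIM, NUM_CLASSES))

    def fit(self, X, y, lr=0.5, epochs=200):
        Y = np.eye(NUM_CLASSES)[y]                    # one-hot hard labels
        for _ in range(epochs):
            grad = X.T @ (softmax(X @ self.W) - Y) / len(X)
            self.W -= lr * grad

    def fit_soft(self, X, probs, lr=0.5, epochs=200):
        # Distillation step: fit the aggregated soft labels instead.
        for _ in range(epochs):
            grad = X.T @ (softmax(X @ self.W) - probs) / len(X)
            self.W -= lr * grad

    def predict_proba(self, X):
        return softmax(X @ self.W)


def make_data(n):
    # Synthetic data; labels follow a simple deterministic rule.
    X = rng.normal(size=(n, DIM))
    y = X[:, :NUM_CLASSES].argmax(axis=1)
    return X, y


private = [make_data(500) for _ in range(NUM_CLIENTS)]
X_pub, _ = make_data(200)        # public set: its labels are never used

# NFDP step (assumed semantics): training on k of n private examples,
# sampled without replacement, yields an (eps, delta)-DP guarantee that
# depends only on k/n. The ratio below is an arbitrary illustrative choice.
SAMPLE_RATIO = 0.25

clients = [LogisticClient() for _ in range(NUM_CLIENTS)]
for client, (X, y) in zip(clients, private):
    k = int(SAMPLE_RATIO * len(X))
    idx = rng.choice(len(X), size=k, replace=False)
    client.fit(X[idx], y[idx])

# Communication rounds: clients share predictions (not weights) on the
# public set; the server averages them into a consensus; clients distill.
for _ in range(5):
    consensus = np.mean([c.predict_proba(X_pub) for c in clients], axis=0)
    for client in clients:
        client.fit_soft(X_pub, consensus)
```

Note the design point the abstract emphasizes: the only quantities that ever leave a client are its predictions on the public set, and the privacy accounting depends solely on the subsampling ratio, so no calibrated noise degrades the shared predictions.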