Paper Title

Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation

Paper Authors

Xuan Gong, Abhishek Sharma, Srikrishna Karanam, Ziyan Wu, Terrence Chen, David Doermann, Arun Innanje

Paper Abstract

Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model while the training data remains decentralized. Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution. However, they suffer from communication bottlenecks. More importantly, they risk privacy leakage. In this work, we develop a privacy-preserving and communication-efficient method in an FL framework with one-shot offline knowledge distillation using unlabeled, cross-domain public data. We propose a quantized and noisy ensemble of local predictions from completely trained local models for stronger privacy guarantees without sacrificing accuracy. Based on extensive experiments on image classification and text classification tasks, we show that our privacy-preserving method outperforms baseline FL algorithms with superior performance in both accuracy and communication efficiency.
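
The ensemble step described in the abstract can be illustrated with a rough sketch: each fully trained local model predicts on unlabeled public data, the predictions are quantized and perturbed with noise before being shared, and their average becomes the soft label used for a single offline distillation of the central model. This is a minimal sketch in PyTorch-style Python under stated assumptions; the helper name `quantized_noisy_ensemble`, the quantization `levels`, and `noise_scale` are illustrative choices, not the paper's exact mechanism.

```python
import torch
import torch.nn.functional as F

def quantized_noisy_ensemble(local_models, public_loader, levels=16, noise_scale=0.05):
    """Aggregate quantized, noise-perturbed predictions from fully trained
    local models over unlabeled cross-domain public data (one-shot, offline).

    Assumptions: `local_models` are classifiers in eval mode and
    `public_loader` yields batches of unlabeled input tensors.
    """
    soft_labels = []
    with torch.no_grad():
        for x in public_loader:  # batch of unlabeled public inputs
            votes = []
            for model in local_models:
                probs = F.softmax(model(x), dim=-1)
                # Quantize each probability vector to a fixed number of levels.
                q = torch.round(probs * (levels - 1)) / (levels - 1)
                # Perturb with noise before sharing, for a stronger privacy guarantee.
                q = q + noise_scale * torch.randn_like(q)
                votes.append(q)
            # Average the perturbed local predictions into an ensemble soft label.
            soft_labels.append(torch.stack(votes).mean(dim=0).clamp(min=0))
    return soft_labels
```

The returned soft labels would then be transmitted once and used offline to distill the central model, which is what makes the scheme one-shot and communication-efficient; the exact quantization and noise calibration follow the paper, not this sketch.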
