Paper Title
Dordis: Efficient Federated Learning with Dropout-Resilient Differential Privacy
Paper Authors
Paper Abstract
Federated learning (FL) is increasingly deployed among multiple clients to train a shared model over decentralized data. To address privacy concerns, FL systems need to safeguard the clients' data from disclosure during training and control data leakage through trained models when exposed to untrusted domains. Distributed differential privacy (DP) offers an appealing solution in this regard as it achieves a balanced tradeoff between privacy and utility without a trusted server. However, existing distributed DP mechanisms are impractical in the presence of client dropout, resulting in poor privacy guarantees or degraded training accuracy. In addition, these mechanisms suffer from severe efficiency issues. We present Dordis, a distributed differentially private FL framework that is highly efficient and resilient to client dropout. Specifically, we develop a novel `add-then-remove' scheme that precisely enforces the required noise level in each training round, even if some sampled clients drop out. This ensures that the privacy budget is utilized prudently, despite unpredictable client dynamics. To boost performance, Dordis operates as a distributed parallel architecture by encapsulating the communication and computation operations into stages. It automatically divides the global model aggregation into several chunk-aggregation tasks and pipelines them for optimal speedup. Large-scale deployment evaluations demonstrate that Dordis efficiently handles client dropout in various realistic FL scenarios, achieving the optimal privacy-utility tradeoff and accelerating training by up to 2.4$\times$ compared to existing solutions.
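To make the abstract's two mechanisms concrete, below is a minimal Python sketch of the variance accounting behind an `add-then-remove' noise scheme. Everything here is an illustrative assumption rather than Dordis's actual protocol: it assumes Gaussian noise, a known worst-case dropout bound, and seed-reconstructible noise so the excess can be subtracted; a real deployment would have to remove excess noise without ever exposing any individual client's noise (e.g., under secure aggregation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical round parameters (not from the paper).
n_sampled = 100        # clients sampled this round
max_dropouts = 30      # worst-case dropouts the round must tolerate
min_survivors = n_sampled - max_dropouts
sigma_target = 1.0     # aggregate noise std required for the DP guarantee
dim = 8                # toy model dimension

# "Add": each client calibrates its local noise as if only the worst-case
# `min_survivors` clients will contribute, so the round is never under-noised.
per_client_sigma = sigma_target / np.sqrt(min_survivors)

seeds = rng.integers(0, 2**31, size=n_sampled)
updates = rng.normal(0.0, 0.1, size=(n_sampled, dim))  # stand-in model updates
noisy = np.stack([
    u + np.random.default_rng(int(s)).normal(0.0, per_client_sigma, dim)
    for u, s in zip(updates, seeds)
])

# Suppose 85 clients actually survive (at least `min_survivors` must).
survivors = rng.choice(n_sampled, size=85, replace=False)
agg = noisy[survivors].sum(axis=0)

# "Remove": the aggregate now carries 85 * per_client_sigma**2 of noise
# variance, more than sigma_target**2. Regenerating the excess components
# from their seeds and subtracting them leaves exactly `min_survivors`
# components, i.e. variance min_survivors * per_client_sigma**2, which
# equals sigma_target**2. (Done in the clear here only for illustration;
# in practice this step must not reveal any single client's noise.)
excess = len(survivors) - min_survivors
for i in survivors[:excess]:
    agg -= np.random.default_rng(int(seeds[i])).normal(0.0, per_client_sigma, dim)
```

The pipelined chunk aggregation can be sketched in the same spirit. The thread pool below stands in for Dordis's stage scheduler, and `encode`, `upload`, and `aggregate_chunk` are hypothetical placeholders for the per-stage communication and computation work; only the splitting-and-overlapping pattern reflects the abstract.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def encode(chunk):
    """Stand-in for client-side per-chunk work (e.g., masking)."""
    return chunk + 0.0

def upload(chunk):
    """Stand-in for per-chunk network transfer."""
    return chunk

def aggregate_chunk(chunks):
    """Stand-in for server-side summation of one chunk across clients."""
    return np.sum(chunks, axis=0)

def pipelined_aggregation(client_models, num_chunks=4):
    # Split every client's model into aligned chunks.
    chunked = [np.array_split(m, num_chunks) for m in client_models]
    with ThreadPoolExecutor(max_workers=num_chunks) as pool:
        # One task per chunk: while one chunk is in its upload stage,
        # another chunk's encode/aggregate stages proceed in parallel,
        # which is the pipelining effect described in the abstract.
        futures = [
            pool.submit(
                lambda c=c: aggregate_chunk(
                    [upload(encode(per_client[c])) for per_client in chunked]
                )
            )
            for c in range(num_chunks)
        ]
        return np.concatenate([f.result() for f in futures])

# Usage: aggregate four toy client models of dimension 1000.
models = [np.full(1000, float(i)) for i in range(4)]
global_update = pipelined_aggregation(models)
```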