Paper Title
Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning
Paper Authors
Paper Abstract
Large-scale machine learning systems often involve data distributed across a collection of users. Federated learning algorithms leverage this structure by communicating model updates to a central server, rather than entire datasets. In this paper, we study stochastic optimization algorithms for a personalized federated learning setting involving local and global models subject to user-level (joint) differential privacy. While learning a private global model induces a cost of privacy, local learning is perfectly private. We provide generalization guarantees showing that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy. We illustrate our theoretical results with experiments on synthetic and real-world datasets.
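The abstract describes coordinating fully private local learning with a global model trained under user-level differential privacy. The sketch below is a generic, illustrative DP federated-averaging round, not the paper's actual algorithm: each user computes a local update, updates are clipped to bound per-user sensitivity, and Gaussian noise is added to the average. All function names, the squared-loss local step, and the noise calibration are assumptions made for illustration.

```python
import numpy as np

def dp_fedavg_round(global_model, user_data, lr=0.1, clip=1.0,
                    noise_mult=0.5, seed=None):
    """One illustrative round of federated averaging with user-level DP.

    user_data: list of (X, y) pairs, one per user.
    clip: max L2 norm of a per-user update (bounds sensitivity).
    noise_mult: Gaussian noise multiplier (hypothetical calibration).
    """
    rng = np.random.default_rng(seed)
    updates = []
    for X, y in user_data:
        local = global_model.copy()
        # One local gradient step on squared loss (illustrative choice).
        grad = 2 * X.T @ (X @ local - y) / len(y)
        local -= lr * grad
        delta = local - global_model
        # Clip the per-user update so one user's contribution is bounded.
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, clip / max(norm, 1e-12))
        updates.append(delta)
    avg = np.mean(updates, axis=0)
    # Gaussian noise scaled to the clip norm gives user-level DP
    # for this round (privacy accounting omitted in this sketch).
    noise = rng.normal(0.0, noise_mult * clip / len(user_data),
                       size=avg.shape)
    return global_model + avg + noise
```

A personalized variant, as the abstract suggests, would additionally keep each user's `local` model (which never leaves the device and is therefore perfectly private) and interpolate it with the noisy global model at inference time.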