Title
Privacy Amplification by Decentralization
Authors
Abstract
Analyzing data owned by several parties while achieving a good trade-off between utility and privacy is a key challenge in federated learning and analytics. In this work, we introduce a novel relaxation of local differential privacy (LDP) that naturally arises in fully decentralized algorithms, i.e., when participants exchange information by communicating along the edges of a network graph without a central coordinator. This relaxation, which we call network DP, captures the fact that users have only a local view of the system. To show the relevance of network DP, we study a decentralized model of computation where a token performs a walk on the network graph and is updated sequentially by the party who receives it. For tasks such as real summation, histogram computation, and optimization with gradient descent, we propose simple algorithms on ring and complete topologies. We prove that the privacy-utility trade-offs of our algorithms under network DP significantly improve upon what is achievable under LDP, and often match the utility of the trusted curator model. Our results show for the first time that formal privacy gains can be obtained from full decentralization. We also provide experiments to illustrate the improved utility of our approach for decentralized training with stochastic gradient descent.
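To give intuition for the token-walk computation model described above, here is a minimal sketch of noisy real summation on a ring topology: a token visits each party in turn, and each party adds its private value plus fresh Gaussian noise before passing the token on. The function name, the Gaussian noise mechanism, and the `sigma` parameter are illustrative assumptions for exposition, not the paper's exact protocol or calibration.

```python
import random


def token_walk_sum(values, sigma=1.0, seed=0):
    """Illustrative noisy summation via a token walking a ring.

    Each party adds its private value plus local Gaussian noise to the
    token. Only the ring neighbours ever observe intermediate sums,
    which is the local-view intuition behind network DP. The noise
    model here is an assumption for illustration, not the authors'
    calibrated mechanism.
    """
    rng = random.Random(seed)
    token = 0.0
    for v in values:  # ring topology: parties visited in a fixed order
        token += v + rng.gauss(0.0, sigma)
    return token
```

With `sigma=0.0` the token simply accumulates the exact sum; increasing `sigma` trades accuracy for privacy, and the paper's analysis concerns how much less noise suffices under network DP than under plain LDP.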