Paper Title
Hierarchical Federated Learning with Privacy
Paper Authors
Paper Abstract
Federated learning (FL), where data remains at the federated clients, and where only gradient updates are shared with a central aggregator, was assumed to be private. Recent work demonstrates that adversaries with gradient-level access can mount successful inference and reconstruction attacks. In such settings, differentially private (DP) learning is known to provide resilience. However, approaches used in the status quo (\ie central and local DP) introduce disparate utility vs. privacy trade-offs. In this work, we take the first step towards mitigating such trade-offs through {\em hierarchical FL (HFL)}. We demonstrate that by the introduction of a new intermediary level where calibrated DP noise can be added, better privacy vs. utility trade-offs can be obtained; we term this {\em hierarchical DP (HDP)}. Our experiments with 3 different datasets (commonly used as benchmarks for FL) suggest that HDP produces models as accurate as those obtained using central DP, where noise is added at a central aggregator. Such an approach also provides comparable benefit against inference adversaries as in the local DP case, where noise is added at the federated clients.
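To make the hierarchical setup concrete, the following is a minimal sketch (in Python/NumPy) of where hierarchical DP injects noise: clients send clipped updates to an intermediary aggregator, which averages them and adds calibrated Gaussian noise before forwarding to the central aggregator. The function names, clipping bound, and noise scale below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Clip a client's gradient update to bound its L2 sensitivity."""
    norm = max(np.linalg.norm(update), 1e-12)  # guard against zero updates
    return update * min(1.0, clip_norm / norm)

def intermediary_aggregate(client_updates, clip_norm=1.0, sigma=0.5, rng=None):
    """Average clipped client updates and add calibrated Gaussian noise.

    In HDP, noise is added at this intermediary level, rather than at each
    client (local DP) or only at the central aggregator (central DP).
    """
    rng = rng or np.random.default_rng()
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean_update = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the mean's per-client sensitivity
    # (roughly clip_norm / n); sigma would be calibrated to a target
    # (epsilon, delta) via the standard Gaussian-mechanism analysis.
    noise = rng.normal(0.0, sigma * clip_norm / len(client_updates),
                       size=mean_update.shape)
    return mean_update + noise

def central_aggregate(intermediary_updates):
    """The central aggregator averages the already-noised updates."""
    return np.mean(intermediary_updates, axis=0)

# Usage: two intermediaries, each serving a handful of clients.
rng = np.random.default_rng(0)
clients_a = [rng.normal(size=10) for _ in range(5)]
clients_b = [rng.normal(size=10) for _ in range(5)]
global_update = central_aggregate([
    intermediary_aggregate(clients_a, rng=rng),
    intermediary_aggregate(clients_b, rng=rng),
])
print(global_update)
```

The sketch only illustrates at which level of the hierarchy the noise enters; the utility advantage claimed in the abstract comes from each intermediary noising an average over many clients, so the per-update noise can be smaller than in the local DP case while still denying the central aggregator access to raw updates.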