Paper Title

Combining Stochastic Defenses to Resist Gradient Inversion: An Ablation Study

Paper Authors

Daniel Scheliga, Patrick Mäder, Marco Seeland

Paper Abstract

Gradient Inversion (GI) attacks are a ubiquitous threat in Federated Learning (FL) as they exploit gradient leakage to reconstruct supposedly private training data. Common defense mechanisms such as Differential Privacy (DP) or stochastic Privacy Modules (PMs) introduce randomness during gradient computation to prevent such attacks. However, we posit that if an attacker effectively mimics a client's stochastic gradient computation, the attacker can circumvent the defense and reconstruct clients' private training data. This paper introduces several targeted GI attacks that leverage this principle to bypass common defense mechanisms. As a result, we demonstrate that no individual defense provides sufficient privacy protection. To address this issue, we propose to combine multiple defenses. We conduct an extensive ablation study to evaluate the influence of various combinations of defenses on privacy protection and model utility. We observe that only the combination of DP and a stochastic PM was sufficient to decrease the Attack Success Rate (ASR) from 100% to 0%, thus preserving privacy. Moreover, we find that this combination of defenses consistently achieves the best trade-off between privacy and model utility.
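For context, the randomness such defenses inject is typically realized as gradient perturbation on the client side. The sketch below is a generic, illustrative DP-SGD-style noising step, not the exact defenses or hyperparameters evaluated in the paper; the function name and constants are hypothetical.

```python
import numpy as np

def dp_noisy_mean_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Illustrative DP-style gradient perturbation: clip each per-example
    gradient to `clip_norm`, average, then add Gaussian noise scaled by
    `noise_multiplier` (the standard DP-SGD recipe)."""
    rng = rng if rng is not None else np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise_std = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, noise_std, size=mean_grad.shape)

# Toy usage: three per-example gradients of a 4-parameter model.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(4) for _ in range(3)]
print(dp_noisy_mean_gradient(grads, clip_norm=1.0, noise_multiplier=1.2, rng=rng))
```

As the abstract posits, an attacker who can mimic this stochastic computation may still invert the shared gradient, which is what motivates combining DP with an additional stochastic PM rather than relying on either defense alone.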
