Paper Title

Concealing Sensitive Samples against Gradient Leakage in Federated Learning

Paper Authors

Jing Wu, Munawar Hayat, Mingyi Zhou, Mehrtash Harandi

Paper Abstract

Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by eliminating the need for clients to share raw, private data with the server. Despite this success, recent studies expose the vulnerability of FL to model inversion attacks, where adversaries reconstruct users' private data by eavesdropping on the shared gradient information. We hypothesize that a key factor in the success of such attacks is the low entanglement among per-sample gradients within the batch during stochastic optimization. This creates a vulnerability that an adversary can exploit to reconstruct the sensitive data. Building upon this insight, we present a simple yet effective defense strategy that obfuscates the gradients of the sensitive data with concealed samples. To achieve this, we propose synthesizing concealed samples to mimic the sensitive data at the gradient level while ensuring their visual dissimilarity from the actual sensitive data. Compared to prior art, our empirical evaluations suggest that the proposed technique provides the strongest protection while simultaneously maintaining the FL performance.
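
The idea sketched in the abstract can be illustrated with a small gradient-matching loop. Below is a minimal, hypothetical PyTorch sketch, not the paper's actual implementation: it assumes a cross-entropy task loss, matches the concealed sample's parameter gradient to the sensitive sample's gradient with an L2 term, and adds a pixel-space dissimilarity penalty weighted by a hypothetical coefficient `alpha`. The function name `synthesize_concealed_sample`, the optimizer choice, and all hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def synthesize_concealed_sample(model, x_sensitive, y, steps=200, lr=0.1, alpha=1.0):
    """Optimize a concealed sample whose parameter gradient mimics that of the
    sensitive sample while remaining visually dissimilar in pixel space.
    Names and hyperparameters are illustrative assumptions, not the paper's."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Reference gradient produced by the sensitive sample (held fixed below).
    loss_s = F.cross_entropy(model(x_sensitive), y)
    grad_s = [g.detach() for g in torch.autograd.grad(loss_s, params)]

    # Start from noise so the synthesized sample does not resemble the original.
    x_c = torch.randn_like(x_sensitive).requires_grad_(True)
    opt = torch.optim.Adam([x_c], lr=lr)

    for _ in range(steps):
        loss_c = F.cross_entropy(model(x_c), y)
        # create_graph=True keeps the gradient differentiable w.r.t. x_c.
        grad_c = torch.autograd.grad(loss_c, params, create_graph=True)

        # Gradient-matching term: the concealed gradient should mimic the sensitive one.
        match = sum(((gc - gs) ** 2).sum() for gc, gs in zip(grad_c, grad_s))
        # Dissimilarity term: push the concealed image away from the sensitive image.
        dissim = -F.mse_loss(x_c, x_sensitive)

        opt.zero_grad()
        (match + alpha * dissim).backward()
        opt.step()

    return x_c.detach()
```

In the actual method the gradient-matching objective and the visual-dissimilarity constraint may be formulated differently; this sketch only conveys the high-level idea of mimicking gradients while diverging in pixel space.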
