Paper Title


Analysing the Influence of Attack Configurations on the Reconstruction of Medical Images in Federated Learning

Authors

Mads Emil Dahlgaard, Morten Wehlast Jørgensen, Niels Asp Fuglsang, Hiba Nassar

Abstract


The idea of federated learning is to train deep neural network models collaboratively across multiple participants without exposing their private training data to each other. This is highly attractive in the medical domain due to the privacy of patient records. However, a recently proposed method called Deep Leakage from Gradients enables attackers to reconstruct data from shared gradients. This study shows how easy it is to reconstruct images for different data initialization schemes and distance measures. We show how data and model architecture influence the optimal choice of initialization scheme and distance measure configurations when working with single images. We demonstrate that the choice of initialization scheme and distance measure can significantly improve convergence speed and reconstruction quality. Furthermore, we find that the optimal attack configuration depends largely on the nature of the target image distribution and the complexity of the model architecture.
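The gradient-matching attack the abstract refers to can be sketched on a toy model. The snippet below is a minimal illustration, not the paper's setup: it assumes a single linear layer with squared-error loss and a known label, whereas Deep Leakage from Gradients jointly optimizes a dummy label and attacks full networks (typically with L-BFGS). The names `layer_grad` and `distance`, the dimensions, and all hyperparameters are illustrative choices; the Gaussian dummy input and the L2 matching objective stand in for the "initialization scheme" and "distance measure" the study varies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy federated-learning setup: one client holds a private example x_true
# and shares the gradient of a single linear layer with squared-error loss.
# This is a minimal stand-in for the deep networks attacked in the paper.
d_in, d_out = 4, 3
W = rng.normal(size=(d_out, d_in))   # model weights (known to the server)
y = rng.normal(size=d_out)           # label (assumed known here, for simplicity)
x_true = rng.normal(size=d_in)       # the private training input

def layer_grad(x):
    """Gradient of 0.5 * ||W @ x - y||^2 with respect to W."""
    return np.outer(W @ x - y, x)

G = layer_grad(x_true)               # the gradient the client shares

def distance(x):
    """L2 gradient-matching distance (one possible distance measure)."""
    return np.sum((layer_grad(x) - G) ** 2)

# DLG-style attack: initialize a dummy input from a Gaussian (one possible
# initialization scheme) and descend on the matching distance. The gradient
# of the distance is derived analytically for this toy layer:
#   D(x) = ||E||_F^2 with E = (W x - y) x^T - G,
#   dD/dx = 2 * (W^T E x + E^T r),  r = W x - y.
x = rng.normal(size=d_in)
dist0 = distance(x)
lr = 2e-4
for _ in range(20000):
    r = W @ x - y
    E = np.outer(r, x) - G
    x -= lr * 2.0 * (W.T @ E @ x + E.T @ r)

print("initial matching distance:", dist0)
print("final matching distance:  ", distance(x))
print("true input:    ", np.round(x_true, 3))
print("reconstruction:", np.round(x, 3))
```

Even in this tiny linear case, driving the matching distance toward zero pulls the dummy input toward the private example, which is why the choice of starting point and distance measure matters for how fast and how well the optimization converges.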
