Title
Certified Robustness in Federated Learning
Authors
Abstract
Federated learning has recently gained significant attention due to its effectiveness in privately training machine learning models on distributed data. However, as in the single-node supervised learning setup, models trained with federated learning are vulnerable to imperceptible input transformations known as adversarial attacks, calling their deployment in security-related applications into question. In this work, we study the interplay between federated training, personalization, and certified robustness. In particular, we deploy randomized smoothing, a widely used and scalable certification method, to certify deep networks trained in a federated setup against input perturbations and transformations. We find that simple federated averaging is effective in building not only more accurate, but also more certifiably robust models, compared to training solely on local data. We further analyze the effect of personalization, a popular technique in federated training that increases the model's bias towards local data, on robustness. We show several advantages of personalization over both alternatives~(i.e., training only on local data and plain federated training) in building more robust models with faster training. Finally, we explore the robustness of mixtures of global and local~(i.e., personalized) models, and find that the robustness of local models degrades as they diverge from the global model.
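The certification method the abstract refers to, randomized smoothing, predicts the class most likely under Gaussian input noise and derives an L2 radius within which that prediction provably cannot change. The sketch below is a minimal Monte-Carlo version of that idea (following the standard Cohen et al. recipe, not code from this paper); the function name `certify`, the toy base classifier, and all parameter defaults are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta, norm

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, rng=None):
    """Certify a smoothed classifier around input x via Monte-Carlo sampling.

    Returns (predicted_class, certified_l2_radius); a radius of 0.0 means
    the smoothed classifier abstains (top class not confidently > 1/2).
    Names and defaults here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(rng)
    # Query the base classifier on n Gaussian-perturbed copies of x.
    noisy = x[None, :] + sigma * rng.standard_normal((n, x.shape[0]))
    preds = np.array([base_classifier(xi) for xi in noisy])
    # Take the empirically most frequent class as the smoothed prediction.
    top = int(np.bincount(preds).argmax())
    count = int((preds == top).sum())
    # One-sided (1 - alpha) Clopper-Pearson lower bound on p_A,
    # the probability that the base classifier outputs `top` under noise.
    p_lower = beta.ppf(alpha, count, n - count + 1)
    if p_lower > 0.5:
        # Certified L2 radius: sigma * Phi^{-1}(p_lower).
        return top, sigma * norm.ppf(p_lower)
    return top, 0.0

# Toy usage: a linear base classifier, certified at a point far from its
# decision boundary, yields a strictly positive certified radius.
cls, radius = certify(lambda v: int(v.sum() > 0), np.array([1.0, 1.0]), rng=0)
```

In the paper's setting, `base_classifier` would be a deep network obtained from local, federated, or personalized training, and the comparison is how the certified radii differ across those three training regimes.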