Paper Title

Adversarial Robustness through Bias Variance Decomposition: A New Perspective for Federated Learning

Paper Authors

Yao Zhou, Jun Wu, Haixun Wang, Jingrui He

Paper Abstract


Federated learning learns a neural network model by aggregating the knowledge from a group of distributed clients under the privacy-preserving constraint. In this work, we show that this paradigm might inherit the adversarial vulnerability of the centralized neural network, i.e., its performance deteriorates on adversarial examples when the model is deployed. This is even more alarming given that the federated learning paradigm is designed to approximate the updating behavior of a centralized neural network. To solve this problem, we propose an adversarially robust federated learning framework, named Fed_BVA, with improved server and client update mechanisms. This is motivated by our observation that the generalization error in federated learning can be naturally decomposed into the bias and variance triggered by multiple clients' predictions. Thus, we propose to generate adversarial examples by maximizing the bias and variance during the server update, and to learn adversarially robust model updates with those examples during the client update. As a result, an adversarially robust neural network can be aggregated from these improved local clients' model updates. Experiments are conducted on multiple benchmark datasets using several prevalent neural network models, and the empirical results show that our framework is robust against white-box and black-box adversarial corruptions under both IID and non-IID settings.
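To make the server-side mechanism in the abstract concrete, below is a minimal PyTorch sketch of the described idea: the clients' predictions on a batch are decomposed into a bias term (the error of the aggregated prediction) and a variance term (the disagreement among clients), and the input is perturbed to ascend this combined objective. This is only an illustration of the idea, not the paper's actual implementation; the function names (`bias_variance_loss`, `generate_adversarial_examples`), the specific decomposition formulas, the single-step FGSM-style attack, and the `epsilon` value are all assumptions.

```python
# Minimal sketch of the Fed_BVA server-side step described in the abstract.
# All names and the one-step attack are illustrative assumptions.
import torch
import torch.nn.functional as F

def bias_variance_loss(client_logits, y):
    """Decompose the clients' predictions on a batch into bias and variance.

    client_logits: list of (batch, classes) logit tensors, one per client.
    y: (batch,) integer class labels.
    """
    probs = torch.stack([F.softmax(z, dim=-1) for z in client_logits])  # (K, B, C)
    mean_prob = probs.mean(dim=0)                                       # (B, C)
    # Bias: error of the aggregated (averaged) prediction.
    bias = F.nll_loss(torch.log(mean_prob + 1e-12), y)
    # Variance: average squared deviation of each client from the mean prediction.
    variance = ((probs - mean_prob.unsqueeze(0)) ** 2).sum(dim=-1).mean()
    return bias + variance

def generate_adversarial_examples(x, y, client_models, epsilon=0.03):
    """Server-side step: perturb x to maximize bias + variance.

    A single FGSM-style ascent step within an L-infinity ball is used here
    purely as a simple stand-in for the paper's attack procedure.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = bias_variance_loss([m(x_adv) for m in client_models], y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Example: three hypothetical "clients" as small linear models on fake data.
clients = [torch.nn.Linear(10, 3) for _ in range(3)]
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
x_adv = generate_adversarial_examples(x, y, clients, epsilon=0.1)
```

In the full framework as the abstract describes it, the server would send such perturbed examples back to the clients along with the aggregated model, and each client would train on them locally before returning its adversarially robust model update.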
