Paper Title

Case-Aware Adversarial Training

Authors

Mingyuan Fan, Yang Liu, Cen Chen

Abstract

Neural networks (NNs) have become one of the most popular models in various signal processing applications. However, NNs are extremely vulnerable to adversarial examples (AEs). Adversarial training (AT) is believed to be the most effective defense against AEs, but its intensive computation limits its application in practice. In this paper, to resolve this problem, we design a generic and efficient AT improvement scheme, namely case-aware adversarial training (CAT). Specifically, the intuition stems from the fact that a small fraction of informative samples contributes most of a model's performance. Accordingly, if only the most informative AEs are used in AT, the computational complexity of AT can be lowered significantly while the defense effect is maintained. To achieve this, CAT makes two breakthroughs. First, a method to estimate the information degree of adversarial examples is proposed for AE filtering. Second, to further enrich the information that the NN can obtain from AEs, CAT employs a sampling strategy based on weight estimation and class-level balancing to increase the diversity of AT at each iteration. Extensive experiments show that CAT is up to 3x faster than vanilla AT while achieving a competitive defense effect.
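The filtering-plus-balancing idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual estimator: using the per-example loss as the informativeness score, the `keep_ratio` parameter, and the function name are all assumptions made for the sketch.

```python
import numpy as np

def select_informative_aes(losses, labels, keep_ratio=0.5):
    """Sketch of case-aware AE filtering with class-level balancing.

    losses : 1-D array of per-AE losses (assumed proxy for informativeness)
    labels : 1-D array of the corresponding class labels
    Keeps roughly keep_ratio of the batch, budgeted evenly per class so
    the retained AEs stay class-balanced; returns sorted kept indices.
    """
    classes = np.unique(labels)
    # Even per-class budget; at least one AE is kept per class.
    per_class = max(1, int(keep_ratio * len(losses) / len(classes)))
    kept = []
    for c in classes:
        idx = np.where(labels == c)[0]
        # Sort this class's AEs by descending loss, keep the top per_class.
        order = idx[np.argsort(-losses[idx])]
        kept.extend(order[:per_class].tolist())
    return sorted(kept)

# Example: a batch of 6 AEs, two classes; with keep_ratio=0.5 the
# highest-loss AE of each class is retained.
losses = np.array([0.1, 2.0, 0.5, 1.5, 0.2, 3.0])
labels = np.array([0, 0, 0, 1, 1, 1])
print(select_informative_aes(losses, labels))  # → [1, 5]
```

In a full AT loop, only the selected subset would then be used for the (expensive) adversarial update, which is where the claimed speedup over vanilla AT would come from.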
