Paper Title

Yet Another Intermediate-Level Attack

Paper Authors

Qizhang Li, Yiwen Guo, Hao Chen

Paper Abstract

The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks. In this paper, we propose a novel method to enhance the black-box transferability of baseline adversarial examples. By establishing a linear mapping from the intermediate-level discrepancies (between a set of adversarial inputs and their benign counterparts) to the evoked adversarial loss, we aim to take full advantage of the optimization procedure of multi-step baseline attacks. We conducted extensive experiments to verify the effectiveness of our method on CIFAR-100 and ImageNet. Experimental results demonstrate that it outperforms previous state-of-the-art methods considerably. Our code is at https://github.com/qizhangli/ila-plus-plus.
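The core fitting step described in the abstract can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the authors' implementation: it supposes that running a multi-step baseline attack yields, at each step t, a flattened intermediate-level discrepancy H[t] (adversarial features minus benign features at some chosen layer of the source model) and the adversarial loss ell[t] that discrepancy evoked. The linear map w is then fit by least squares, and the enhanced attack would perturb the input to maximize the predicted loss w @ h(x'). All names (H, ell, w, predicted_loss) are hypothetical.

```python
import numpy as np

# Hypothetical setup: T steps of a multi-step baseline attack (e.g. PGD on
# the source model), D-dimensional flattened intermediate-layer features.
# Random data stands in for the feature discrepancies and losses that the
# baseline attack's optimization trajectory would actually produce.
rng = np.random.default_rng(0)
T, D = 30, 64

H = rng.normal(size=(T, D))   # H[t]: intermediate-level discrepancy at step t
ell = rng.normal(size=T)      # ell[t]: adversarial loss evoked at step t

# Fit the linear mapping w so that ell ~= H @ w, via ordinary least squares.
w, *_ = np.linalg.lstsq(H, ell, rcond=None)

def predicted_loss(h):
    """Predicted adversarial loss for a new intermediate discrepancy h.
    The enhanced attack maximizes this, pushing the discrepancy along w."""
    return float(w @ h)
```

In practice the discrepancies and losses come from every iterate of the baseline attack, so the fit reuses the whole optimization trajectory rather than only its final point, which is the sense in which the method "takes full advantage" of the multi-step procedure.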
