Paper Title

Black-box Adversarial Example Generation with Normalizing Flows

Authors

Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Abstract

Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, unnoticeable changes to the input data can affect the classifier's decision. In this regard, the study of powerful adversarial attacks can help shed light on the sources of this malicious behavior. In this paper, we propose a novel black-box adversarial attack using normalizing flows. We show how an adversary can be found by searching over the base distribution of a pre-trained flow-based model. Because the perturbations are generated in the shape of the data, the resulting adversaries closely resemble the original inputs. We then demonstrate the competitive performance of the proposed approach against well-known black-box adversarial attack methods.
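To make the idea concrete, below is a minimal sketch of a latent-space black-box attack of the kind the abstract describes; it is not the authors' implementation. It assumes a pre-trained invertible flow exposed as a pair of forward/inverse callables and a label-only black-box classifier; the names (`flow_forward`, `flow_inverse`, `classify`) and parameters (`eps`, `sigma`, `n_queries`) are hypothetical, and plain random search stands in for whatever search procedure the paper actually uses. The toy identity flow and linear classifier exist only so the snippet runs end to end.

```python
# Illustrative sketch only; all names here are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins so the example runs: a real attack would use a
# pre-trained normalizing flow and a remote, label-only classifier.
flow_forward = lambda z: z            # base distribution -> data space
flow_inverse = lambda x: x            # data space -> base distribution
w = rng.normal(size=8)
classify = lambda x: int(x @ w > 0)   # black box: returns labels only


def latent_random_attack(x, eps=0.3, sigma=0.1, n_queries=500):
    """Search the flow's base distribution around z = f^{-1}(x) for a
    latent perturbation whose decoded sample changes the label."""
    y = classify(x)                # one query to get the clean label
    z = flow_inverse(x)            # embed the input in the base space
    for _ in range(n_queries):
        z_adv = z + sigma * rng.normal(size=z.shape)  # perturb latent code
        x_adv = flow_forward(z_adv)                   # decode through flow
        x_adv = np.clip(x_adv, x - eps, x + eps)      # enforce the eps-ball
        if classify(x_adv) != y:                      # query the black box
            return x_adv
    return None                    # query budget exhausted


x = 0.05 * rng.normal(size=8)      # a point near the decision boundary
print(latent_random_attack(x) is not None)
```

The design point the abstract emphasizes is that the search happens in the base distribution rather than in pixel space, so decoded perturbations inherit the structure of the data; swapping the identity stubs for a real pre-trained flow would leave the query loop above unchanged.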
