Paper Title

Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack

Authors

Yupeng Cheng, Qing Guo, Felix Juefei-Xu, Wei Feng, Shang-Wei Lin, Weisi Lin, Yang Liu

Abstract

Image denoising can remove natural noise that widely exists in images captured by multimedia devices due to low-quality imaging sensors, unstable image transmission processes, or low-light conditions. Recent works also find that image denoising benefits high-level vision tasks, e.g., image classification. In this work, we challenge this common belief and explore a totally new problem, i.e., whether image denoising can be given the capability of fooling state-of-the-art deep neural networks (DNNs) while enhancing image quality. To this end, we make the very first attempt to study this problem from the perspective of adversarial attack and propose the adversarial denoise attack. More specifically, our main contributions are three-fold: First, we identify a new task that stealthily embeds attacks inside the image denoising module, which is widely deployed in multimedia devices as an image post-processing operation, to simultaneously enhance visual image quality and fool DNNs. Second, we formulate this new task as a kernel prediction problem for image filtering and propose adversarial-denoising kernel prediction, which produces adversarial-noiseless kernels for effective denoising and adversarial attacking simultaneously. Third, we implement an adaptive perceptual region localization to identify semantics-related vulnerable regions within which the attack can be more effective while not harming the denoising too much. We name the proposed method Pasadena (Perceptually Aware and Stealthy Adversarial DENoise Attack) and validate it on the NeurIPS'17 adversarial competition dataset, CVPR2021-AIC-VI: Unrestricted Adversarial Attacks on ImageNet, etc. Comprehensive evaluation and analysis demonstrate that our method not only realizes denoising but also achieves a significantly higher success rate and better transferability than state-of-the-art attacks.
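To make the kernel-prediction formulation in the abstract concrete: a network predicts one K×K filtering kernel per pixel, and the output image is the per-pixel weighted sum of the input neighborhood under that kernel. The PyTorch snippet below is a minimal sketch of this general kernel-prediction filtering idea, not the authors' Pasadena implementation; the tiny predictor network, the 5×5 kernel size, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelPredictionFilter(nn.Module):
    """Minimal kernel-prediction filtering sketch (illustrative only, not the
    authors' Pasadena architecture): a small CNN predicts a KxK filtering
    kernel per pixel, which is then applied to the noisy input image."""

    def __init__(self, kernel_size: int = 5):
        super().__init__()
        self.k = kernel_size
        # Hypothetical lightweight predictor; the paper's network is larger.
        self.predictor = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, kernel_size * kernel_size, 3, padding=1),
        )

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        b, c, h, w = noisy.shape
        # Predict one KxK kernel per spatial location, softmax-normalized so
        # the filter behaves like a content-adaptive weighted average.
        kernels = F.softmax(self.predictor(noisy), dim=1)       # (B, K*K, H, W)
        # Gather the KxK neighborhood of every pixel.
        patches = F.unfold(noisy, self.k, padding=self.k // 2)  # (B, C*K*K, H*W)
        patches = patches.view(b, c, self.k * self.k, h, w)
        # Weighted sum of each neighborhood with its predicted kernel.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)      # (B, C, H, W)


if __name__ == "__main__":
    model = KernelPredictionFilter(kernel_size=5)
    noisy = torch.rand(1, 3, 64, 64)
    denoised = model(noisy)
    print(denoised.shape)  # torch.Size([1, 3, 64, 64])
```

In the adversarial-denoise setting described in the abstract, such a predictor would additionally be optimized with an attack objective (e.g., a classification loss on the filtered image) alongside the denoising objective, so that the predicted kernels both clean the image and mislead the target DNN; the exact losses and training procedure are those of the paper, not this sketch.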
