Paper Title
Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking
Paper Authors
Paper Abstract
The success of DNNs has driven the extensive applications of person re-identification (ReID) into a new era. However, whether ReID inherits the vulnerability of DNNs remains unexplored. Examining the robustness of ReID systems is important because their insecurity may cause severe losses, e.g., criminals may use adversarial perturbations to cheat CCTV systems. In this work, we examine the insecurity of current best-performing ReID models by proposing a learning-to-mis-rank formulation to perturb the ranking of the system output. As cross-dataset transferability is crucial in the ReID domain, we also perform a black-box attack by developing a novel multi-stage network architecture that pyramids the features of different levels to extract general and transferable features for the adversarial perturbations. Our method can control the number of malicious pixels by using differentiable multi-shot sampling. To guarantee the inconspicuousness of the attack, we also propose a new perception loss to achieve better visual quality. Extensive experiments on four of the largest ReID benchmarks (i.e., Market1501 [45], CUHK03 [18], DukeMTMC [33], and MSMT17 [40]) not only show the effectiveness of our method, but also provide directions for future improvements in the robustness of ReID systems. For example, the accuracy of one of the best-performing ReID systems drops sharply from 91.8% to 1.4% after being attacked by our method. Some attack results are shown in Fig. 1. The code is available at https://github.com/whj363636/Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking.
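The "learning-to-mis-rank" idea can be illustrated with a small sketch. This is not the paper's exact objective (see the linked repository for that); it is a minimal, hedged example of the general principle: an inverted triplet hinge loss whose minimization pulls different-identity (negative) gallery features toward the perturbed probe while pushing same-identity (positive) features away, thereby flipping the system's ranking. The function name and tensor shapes are illustrative assumptions.

```python
import numpy as np

def mis_ranking_loss(anchor, positives, negatives, margin=0.5):
    """Illustrative inverted-triplet mis-ranking objective (assumption,
    not the paper's exact formulation).

    anchor:    (d,)   feature of the perturbed probe image
    positives: (p, d) gallery features sharing the probe's identity
    negatives: (n, d) gallery features of other identities
    """
    d_pos = np.linalg.norm(positives - anchor, axis=1)  # attack wants these LARGE
    d_neg = np.linalg.norm(negatives - anchor, axis=1)  # attack wants these SMALL
    # A standard triplet loss is max(0, d_pos - d_neg + m); here the roles are
    # swapped, so minimizing the hinge pushes the closest positive away and
    # pulls the farthest negative close, i.e., it mis-ranks the output.
    return float(np.maximum(0.0, d_neg.max() - d_pos.min() + margin))

# Example: the probe still ranks its true match first, so the loss is positive
# and gradient descent on the perturbation would keep attacking.
anchor = np.array([0.0, 0.0])
positives = np.array([[0.1, 0.0], [3.0, 0.0]])
negatives = np.array([[2.0, 0.0]])
print(mis_ranking_loss(anchor, positives, negatives))  # ≈ 2.4

# Example: ranking already flipped (negative closer than any positive by
# more than the margin), so the hinge is inactive.
print(mis_ranking_loss(anchor, np.array([[5.0, 0.0]]),
                       np.array([[0.1, 0.0]])))  # 0.0
```

In a full attack, this loss would be combined with the visual-quality (perception) term mentioned above and minimized with respect to the perturbation rather than the network weights.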