Paper Title
Monocular Depth Estimators: Vulnerabilities and Attacks
Paper Authors
Paper Abstract
Recent advancements in neural networks have led to reliable monocular depth estimation. Monocular depth estimation techniques have the upper hand over traditional depth estimation techniques because they need only a single image during inference. Depth estimation is one of the essential tasks in robotics, and monocular depth estimation has a wide variety of safety-critical applications, such as self-driving cars and surgical devices. Thus, the robustness of such techniques is crucial. Recent works have shown that deep neural networks are highly vulnerable to adversarial samples for tasks like classification, detection, and segmentation. These adversarial samples can completely ruin the output of the system, making their credibility in real-time deployment questionable. In this paper, we investigate the robustness of state-of-the-art monocular depth estimation networks against adversarial attacks. Our experiments show that tiny perturbations on an image that are invisible to the naked eye (perturbation attack), and corruption of less than about 1% of an image (patch attack), can affect depth estimation drastically. We introduce a novel deep feature annihilation loss that corrupts the hidden feature-space representation, forcing the decoder of the network to output poor depth maps. White-box and black-box tests complement the effectiveness of the proposed attacks. We also perform adversarial example transferability tests, mainly cross-data transferability.
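The abstract describes the deep feature annihilation loss only at a high level. Below is a minimal PyTorch sketch of one plausible reading: an L∞-bounded perturbation is optimized to push the encoder's hidden feature maps away from their clean values, so the decoder receives a corrupted representation. The `encoder` interface (assumed to return a list of intermediate feature maps), the MSE-based loss form, and all hyperparameters are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def feature_annihilation_loss(adv_feats, clean_feats):
    # Drive each hidden feature map away from its clean counterpart.
    # The sign is negative so that *minimizing* this loss *maximizes* the gap.
    return -sum(F.mse_loss(fa, fc) for fa, fc in zip(adv_feats, clean_feats))

def pgd_perturbation_attack(encoder, image, eps=8 / 255, alpha=2 / 255, steps=40):
    """Hypothetical L_inf-bounded perturbation attack on a monocular depth
    network, corrupting the encoder features the decoder relies on."""
    with torch.no_grad():
        clean_feats = encoder(image)  # clean hidden feature-space representation

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv_feats = encoder(image + delta)
        loss = feature_annihilation_loss(adv_feats, clean_feats)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # step that increases feature distance
            delta.clamp_(-eps, eps)             # keep perturbation visually imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

A patch attack would follow the same loop but optimize only the pixels inside a small mask (under 1% of the image area) without an ε bound, since the patch itself is allowed to be visible.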