Paper Title

Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning

Paper Authors

Youngjoon Yu, Hong Joo Lee, Byeong Cheon Kim, Jung Uk Kim, Yong Man Ro

Paper Abstract

The success of multimodal data fusion in deep learning appears to be attributed to the use of complementary information between multiple input data. Compared to their predictive performance, relatively less attention has been devoted to the robustness of multimodal fusion models. In this paper, we investigated whether the current multimodal fusion model utilizes the complementary intelligence to defend against adversarial attacks. We applied gradient-based white-box attacks such as FGSM and PGD on MFNet, which is a major multispectral (RGB, Thermal) fusion deep learning model for semantic segmentation. We verified that the multimodal fusion model optimized for better prediction is still vulnerable to adversarial attack, even if only one of the sensors is attacked. Thus, it is hard to say that existing multimodal data fusion models are fully utilizing complementary relationships between multiple modalities in terms of adversarial robustness. We believe that our observations open a new horizon for adversarial attack research on multimodal data fusion.
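
The core of the experiment is perturbing only one input stream of a fusion network while the other stays clean. Below is a minimal PyTorch sketch of such a single-modality FGSM attack; the `(rgb, thermal)` forward interface, the `fgsm_single_modality` helper name, and the [0, 1] input range are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def fgsm_single_modality(model, rgb, thermal, labels, epsilon, attack_rgb=True):
    """One-step FGSM that perturbs only one modality of a two-stream
    fusion model, leaving the other input unchanged.

    Assumes `model` takes (rgb, thermal) and returns per-pixel logits,
    as a segmentation network like MFNet would. Illustrative sketch only.
    """
    # Make leaf tensors; track gradients only for the attacked modality.
    rgb = rgb.clone().detach().requires_grad_(attack_rgb)
    thermal = thermal.clone().detach().requires_grad_(not attack_rgb)

    logits = model(rgb, thermal)
    loss = F.cross_entropy(logits, labels)
    loss.backward()

    if attack_rgb:
        # Step in the direction of the loss gradient's sign (FGSM).
        adv = rgb + epsilon * rgb.grad.sign()
        # Assumes inputs are normalized to [0, 1].
        return adv.clamp(0, 1).detach(), thermal.detach()
    else:
        adv = thermal + epsilon * thermal.grad.sign()
        return rgb.detach(), adv.clamp(0, 1).detach()
```

PGD, the other attack applied in the paper, can be viewed as iterating this step with a smaller step size and projecting the accumulated perturbation back onto the ε-ball around the clean input after each iteration.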
