Paper Title


Visual Attack and Defense on Text

Authors

Shengjun Liu, Ningkang Jiang, Yuanbin Wu

Abstract


Modifying characters of a piece of text into visually similar ones is a trick that often appears in spam to fool inspection systems, and we regard it as a kind of adversarial attack on neural models. We propose a way of generating such visual text attacks and show that the attacked text remains readable to humans but greatly misleads a neural classifier. We apply a vision-based model and adversarial training to defend against the attack without losing the ability to understand normal text. Our results also show that visual attacks are extremely sophisticated and diverse, and more work needs to be done to solve this problem.
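The character substitution described above can be illustrated with a minimal sketch. The homoglyph table below is a small hand-picked example for demonstration, not the paper's actual substitution method:

```python
# Illustrative sketch of a visual (homoglyph) text attack of the kind the
# abstract describes: Latin letters are swapped for visually similar
# Cyrillic code points. This mapping is an assumption for demonstration,
# not the paper's substitution table.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "c": "\u0441",  # Cyrillic small es
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u043e",  # Cyrillic small o
    "p": "\u0440",  # Cyrillic small er
}

def visual_attack(text: str) -> str:
    """Replace each character with a visually similar look-alike, if any."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

attacked = visual_attack("open account")
print(attacked)                    # renders the same to a human reader
print(attacked == "open account")  # False: the code points differ
```

A classifier that operates on code points sees entirely different tokens, while a human (or a vision-based model of rendered glyphs) still reads the original words, which is the asymmetry the attack exploits.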
