Title

Robotic and Generative Adversarial Attacks in Offline Writer-independent Signature Verification

Authors

Bird, Jordan J.

Abstract

This study explores how robots and generative approaches can be used to mount successful false-acceptance adversarial attacks on signature verification systems. Initially, a convolutional neural network topology and data augmentation strategy are explored and tuned, producing an 87.12% accurate model for the verification of 2,640 human signatures. Two robots are then tasked with forging 50 signatures, where 25 are used for the verification attack, and the remaining 25 are used to tune the model to defend against them. Adversarial attacks on the system show that an information security risk exists; the Line-us robotic arm can fool the system 24% of the time and the iDraw 2.0 robot 32% of the time. A conditional GAN finds similar success, with around 30% of forged signatures misclassified as genuine. Following fine-tune transfer learning on robotic and generative data, adversarial attacks by both robots and the GAN are reduced below the model threshold. It is observed that tuning the model reduces the risk of attack by the robots to 8% and 12%, and that conditional generative adversarial attacks can be reduced to 4% when 25 images are presented and 5% when 1,000 images are presented.
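The defence reported in the abstract amounts to fine-tune transfer learning: an already-trained CNN verifier is further trained on a held-out set of robotic/GAN forgeries so that such forgeries are rejected. A minimal sketch of that step is below; the layer sizes, image resolution, and hyperparameters are illustrative assumptions, not the paper's actual topology.

```python
import torch
import torch.nn as nn

# Small stand-in CNN for a writer-independent signature verifier
# (genuine vs. forged). The real topology in the paper was tuned
# separately; this architecture is a placeholder.
class SignatureVerifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 2),  # logits: forged / genuine
        )

    def forward(self, x):
        return self.head(self.features(x))

model = SignatureVerifier()  # imagine this is the pre-trained verifier

# Fine-tune transfer learning: freeze the convolutional feature
# extractor and update only the classifier head on the new forgeries.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for the held-out robotic/GAN forgeries
# (64x64 grayscale signatures; label 0 = forged, 1 = genuine).
images = torch.randn(4, 1, 64, 64)
labels = torch.tensor([0, 0, 1, 1])

for _ in range(3):  # a few fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Freezing the feature extractor is one common transfer-learning choice; the paper could equally have updated all weights at a low learning rate, which the same loop supports by skipping the freezing step.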
