Paper Title

Developing and Defeating Adversarial Examples

Paper Authors

Ian McDiarmid-Sterling, Allan Moser

Paper Abstract

Breakthroughs in machine learning have resulted in state-of-the-art deep neural networks (DNNs) performing classification tasks in safety-critical applications. Recent research has demonstrated that DNNs can be attacked through adversarial examples, which are small perturbations to input data that cause the DNN to misclassify objects. The proliferation of DNNs raises important safety concerns about designing systems that are robust to adversarial examples. In this work we develop adversarial examples to attack the Yolo V3 object detector [1] and then study strategies to detect and neutralize these examples. Python code for this project is available at https://github.com/ianmcdiarmidsterling/adversarial
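As a rough illustration of the "small perturbation" idea described in the abstract, the sketch below applies the well-known fast gradient sign method (FGSM) to a generic PyTorch image classifier. This is not the attack developed in the paper against YOLO V3; the model, the epsilon value, and the input/label tensor shapes are all assumptions made purely for illustration.

```python
# Minimal FGSM sketch (illustrative only; NOT the paper's YOLO V3 attack).
# Assumes `model` is a PyTorch classifier returning logits, `x` is a batch of
# images in [0, 1], and `label` holds the true class indices.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.01):
    """Return x plus a small adversarial perturbation of magnitude epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to the
    # valid pixel range so the perturbed image remains a legal input.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Because the perturbation is bounded element-wise by epsilon, the adversarial image is visually near-identical to the original while still changing the classifier's prediction in many cases.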
