Paper Title

Attacks and Faults Injection in Self-Driving Agents on the Carla Simulator -- Experience Report

Authors

Niccolò Piazzesi, Massimo Hong, Andrea Ceccarelli

Abstract

Machine Learning applications are acknowledged at the foundation of autonomous driving, because they are the enabling technology for most driving tasks. However, the inclusion of trained agents in automotive systems exposes the vehicle to novel attacks and faults that can result in safety threats to the driving tasks. In this paper we report our experimental campaign on the injection of adversarial attacks and software faults in a self-driving agent running in a driving simulator. We show that adversarial attacks and faults injected in the trained agent can lead to erroneous decisions and severely jeopardize safety. The paper shows a feasible and easily reproducible approach based on an open-source simulator and tools, and the results clearly motivate the need for both protective measures and extensive testing campaigns.
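To give a flavor of what "software faults injected in the trained agent" can mean in practice, the following is a minimal, hypothetical sketch (not the paper's actual tooling or agent): a single bit flip in the IEEE-754 representation of one weight of a toy trained layer, which visibly corrupts the layer's output. The `flip_bit` helper and the toy weights are illustrative assumptions.

```python
import struct

import numpy as np

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB) in the IEEE-754 float32 encoding of `value`."""
    packed = struct.unpack("<I", struct.pack("<f", np.float32(value)))[0]
    return float(struct.unpack("<f", struct.pack("<I", packed ^ (1 << bit)))[0])

# Toy "trained" fully-connected layer: 2 inputs, 2 outputs.
weights = np.array([[0.5, -1.0],
                    [2.0,  0.25]], dtype=np.float32)
x = np.array([1.0, 1.0], dtype=np.float32)

clean_out = x @ weights  # fault-free output: [2.5, -0.75]

# Inject the fault: flip an exponent bit of one weight (0.5 -> 0.03125).
faulty = weights.copy()
faulty[0, 0] = flip_bit(faulty[0, 0], 25)
faulty_out = x @ faulty  # corrupted output: [2.03125, -0.75]

print(clean_out, faulty_out)
```

In a real campaign the same idea is applied to the parameters (or code) of the driving agent while it runs in the simulator, and the effect is observed on its driving decisions rather than on a toy vector.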
