Paper Title

On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles

Authors

Qingzhao Zhang, Shengtuo Hu, Jiachen Sun, Qi Alfred Chen, Z. Morley Mao

Abstract

Trajectory prediction is a critical component for autonomous vehicles (AVs) to perform safe planning and navigation. However, few studies have analyzed the adversarial robustness of trajectory prediction or investigated whether the worst-case prediction can still lead to safe planning. To bridge this gap, we study the adversarial robustness of trajectory prediction models by proposing a new adversarial attack that perturbs normal vehicle trajectories to maximize the prediction error. Our experiments on three models and three datasets show that the adversarial prediction increases the prediction error by more than 150%. Our case studies show that if an adversary drives a vehicle close to the target AV following the adversarial trajectory, the AV may make an inaccurate prediction and even make unsafe driving decisions. We also explore possible mitigation techniques via data augmentation and trajectory smoothing. The implementation is open source at https://github.com/zqzqz/AdvTrajectoryPrediction.
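At a high level, the attack described in the abstract is an optimization problem: find a bounded perturbation of the observed vehicle history that maximizes the downstream prediction error. The sketch below is illustrative only, not the authors' implementation (see their repository for that): it uses a toy constant-velocity predictor, a finite-difference gradient-ascent attack with a per-coordinate bound `eps`, and a moving-average filter as a crude stand-in for the trajectory-smoothing mitigation. All function names and constants here are assumptions for the example.

```python
import numpy as np

def predict_cv(history, horizon):
    """Toy constant-velocity predictor: extrapolate from the last two points."""
    v = history[-1] - history[-2]
    return history[-1] + np.outer(np.arange(1, horizon + 1), v)

def prediction_error(history, future):
    """L2 distance between predicted and ground-truth future positions."""
    return float(np.linalg.norm(predict_cv(history, len(future)) - future))

def attack(history, future, eps=0.5, steps=20, lr=0.1):
    """Gradient ascent on a bounded perturbation of the observed history,
    maximizing prediction error (gradients via central finite differences)."""
    delta, h = np.zeros_like(history), 1e-4
    for _ in range(steps):
        grad = np.zeros_like(delta)
        it = np.nditer(delta, flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            dp, dm = delta.copy(), delta.copy()
            dp[i] += h
            dm[i] -= h
            grad[i] = (prediction_error(history + dp, future)
                       - prediction_error(history + dm, future)) / (2 * h)
        delta = np.clip(delta + lr * grad, -eps, eps)  # keep perturbation bounded
    return delta

def smooth(traj, k=3):
    """Moving-average smoothing over the last k points (a crude mitigation)."""
    return np.stack([traj[max(0, i - k + 1):i + 1].mean(axis=0)
                     for i in range(len(traj))])

# Straight observed history; the ground-truth future curves away,
# so even the clean prediction has some error for the attack to amplify.
history = np.array([[0., 0.], [1., 0.], [2., 0.], [3., 0.]])
future = np.array([[4., .5], [5., 1.], [6., 1.5], [7., 2.], [8., 2.5]])

clean_err = prediction_error(history, future)
delta = attack(history, future)
adv_err = prediction_error(history + delta, future)
smooth_err = prediction_error(smooth(history + delta), future)
print(clean_err, adv_err, smooth_err)
```

In this toy setup the perturbation concentrates on the last two observed points, since the constant-velocity predictor only reads those; the adversarial error ends up several times the clean error, and smoothing the perturbed history pulls the error back down at the cost of some lag on the clean trajectory.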
