Paper Title
A Certifiable Security Patch for Object Tracking in Self-Driving Systems via Historical Deviation Modeling
Paper Authors
Paper Abstract
Self-driving cars (SDCs) commonly implement a perception pipeline to detect surrounding obstacles and track their moving trajectories, which lays the groundwork for the subsequent driving decision-making process. Although the security of obstacle detection in SDCs has been intensively studied, only very recently have attackers started to exploit vulnerabilities in the tracking module. Compared with attacking the object detectors alone, this new attack strategy influences driving decisions more effectively with a smaller attack budget. However, little is known about whether the revealed vulnerability remains effective in end-to-end self-driving systems and, if so, how to mitigate the threat. In this paper, we present the first systematic study of the security of object tracking in SDCs. Through a comprehensive case study on the full perception pipeline of a popular open-source self-driving system, Baidu's Apollo, we show that the mainstream multi-object tracker (MOT) based on the Kalman filter (KF) is unsafe even with the multi-sensor fusion mechanism enabled. Our root-cause analysis reveals that the vulnerability is innate to the design of KF-based MOT: the tracker is expected to error-handle the prediction results from the object detectors, yet the adopted KF algorithm is prone to trust the observation more when its deviation from the prediction is larger. To address this design flaw, we propose a simple yet effective security patch for KF-based MOT. Its core is an adaptive strategy that balances the focus of the KF between observations and predictions according to the anomaly index of the observation-prediction deviation, and it has certified effectiveness against a generalized hijacking attack model. Extensive evaluation on 4 existing KF-based MOT implementations (2D and 3D, academic and Apollo's) validates the defense effectiveness and the trivial performance overhead of our approach.
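The adaptive idea the abstract describes can be illustrated with a minimal sketch (this is not the paper's actual patch; the class name, the constant-state model, and the anomaly formulation based on the mean historical deviation are all assumptions for illustration): a 1D Kalman filter that inflates its measurement noise when the observation-prediction deviation is anomalously large relative to recent history, so the filter leans on its prediction instead of blindly trusting a possibly hijacked observation.

```python
import numpy as np

class AdaptiveKF1D:
    """Illustrative 1D Kalman filter with a deviation-based trust adjustment.

    Hypothetical sketch of the adaptive strategy described in the abstract:
    the measurement noise R is scaled by an anomaly index computed from the
    history of observation-prediction deviations.
    """

    def __init__(self, x0, p0=1.0, q=0.01, r=0.1, window=10):
        self.x, self.p = x0, p0      # state estimate and its variance
        self.q, self.r = q, r        # process / measurement noise variances
        self.deviations = []         # history of |z - x_pred|
        self.window = window         # how much history the anomaly index uses

    def step(self, z):
        # Predict (constant-state motion model, for simplicity).
        x_pred = self.x
        p_pred = self.p + self.q

        # Anomaly index: current deviation relative to the recent
        # historical deviation scale (hypothetical formulation).
        dev = abs(z - x_pred)
        hist = self.deviations[-self.window:]
        scale = (np.mean(hist) if hist else dev) + 1e-6
        anomaly = dev / scale

        # Inflate R in proportion to the anomaly index, so a large,
        # unexpected jump in the observation is trusted *less* -- the
        # opposite of the vanilla KF behavior the paper identifies as
        # the root cause.
        r_eff = self.r * max(1.0, anomaly)

        # Standard KF update with the adapted measurement noise.
        k = p_pred / (p_pred + r_eff)
        self.x = x_pred + k * (z - x_pred)
        self.p = (1.0 - k) * p_pred
        self.deviations.append(dev)
        return self.x
```

Fed a stream of ordinary measurements followed by a hijacked observation that jumps far from the track, this filter barely moves toward the outlier, whereas a vanilla KF (with a large innovation and thus a large gain toward the observation) would follow it.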