Paper Title

Towards Explainable Motion Prediction using Heterogeneous Graph Representations

Paper Authors

Sandra Carrasco Limeros, Sylwia Majchrowska, Joakim Johnander, Christoffer Petersson, David Fernández Llorca

Paper Abstract

Motion prediction systems aim to capture the future behavior of traffic scenarios, enabling autonomous vehicles to perform safe and efficient planning. The evolution of these scenarios is highly uncertain and depends on the interactions of agents with static and dynamic objects in the scene. GNN-based approaches have recently gained attention, as they are well suited to naturally model these interactions. However, one of the main challenges that remains unexplored is how to address the complexity and opacity of these models in order to deal with the transparency requirements for autonomous driving systems, which include aspects such as interpretability and explainability. In this work, we aim to improve the explainability of motion prediction systems by using different approaches. First, we propose a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. This learned attention provides information about the most important agents and interactions in the scene. Second, we explore this same idea with the explanations provided by GNNExplainer. Third, we apply counterfactual reasoning to provide explanations of selected individual scenarios by exploring the sensitivity of the trained model to changes made to the input data, i.e., masking some elements of the scene, modifying trajectories, and adding or removing dynamic agents. The explainability analysis provided in this paper is a first step towards more transparent and reliable motion prediction systems, which is important from the perspective of users, developers, and regulatory agencies. The code to reproduce this work is publicly available at https://github.com/sancarlim/Explainable-MP/tree/v1.1.
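To make the heterograph representation more concrete, the following is a minimal sketch of how a traffic scene with agent and lane nodes could be encoded as a heterogeneous graph. It assumes DGL as the graph library; the node types, edge types, and feature shapes here are illustrative only and are not necessarily the exact schema used in the released code.

```python
# Minimal sketch of a heterogeneous traffic-scene graph using DGL.
# Node/edge types and feature shapes are illustrative assumptions,
# not the exact schema of the XHGP implementation.
import dgl
import torch

# Two vehicles, one pedestrian, and three lane segments.
scene = dgl.heterograph({
    # vehicle-vehicle interaction edges (both directions)
    ('vehicle', 'interacts', 'vehicle'): (torch.tensor([0, 1]), torch.tensor([1, 0])),
    # agents connected to nearby lane segments
    ('vehicle', 'on', 'lane'): (torch.tensor([0, 1]), torch.tensor([0, 2])),
    ('pedestrian', 'near', 'lane'): (torch.tensor([0]), torch.tensor([1])),
    # lane connectivity used for lane-graph traversals
    ('lane', 'successor', 'lane'): (torch.tensor([0, 1]), torch.tensor([1, 2])),
})

# Node features: past trajectories for agents, centerline geometry for lanes.
scene.nodes['vehicle'].data['hist'] = torch.randn(2, 5, 2)     # 5 past (x, y) points
scene.nodes['pedestrian'].data['hist'] = torch.randn(1, 5, 2)
scene.nodes['lane'].data['center'] = torch.randn(3, 10, 2)     # 10 centerline points

print(scene)
```

A graph like this is what object-level attention (over individual neighbor nodes) and type-level attention (over node/edge types such as vehicle, pedestrian, and lane) would operate on.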
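The counterfactual analysis described above can be illustrated with a small sketch: remove one surrounding agent from the scene, rerun the predictor, and measure how much the prediction for the target agent shifts. The toy_predictor below is a stand-in placeholder for the trained XHGP model; only the masking-and-comparison logic reflects the idea described in the abstract.

```python
# Counterfactual-reasoning sketch (not the authors' implementation):
# mask one surrounding agent and measure the change in the target
# agent's predicted trajectory.
import numpy as np

def toy_predictor(history: np.ndarray) -> np.ndarray:
    """Placeholder model: constant-velocity extrapolation of the target agent
    (index 0), weakly influenced by the last positions of the other agents."""
    target, others = history[0], history[1:]
    velocity = target[-1] - target[-2]
    future = target[-1] + velocity * np.arange(1, 13)[:, None]   # 12 future steps
    if len(others) > 0:
        future = future + 0.05 * (others[:, -1].mean(axis=0) - target[-1])
    return future                                                # shape (12, 2)

def counterfactual_sensitivity(history: np.ndarray, agent_idx: int) -> float:
    """Average displacement between the factual prediction and the prediction
    obtained after removing agent `agent_idx` from the scene."""
    factual = toy_predictor(history)
    counterfactual = toy_predictor(np.delete(history, agent_idx, axis=0))
    return float(np.linalg.norm(factual - counterfactual, axis=-1).mean())

# Scene with a target agent (index 0) and two surrounding agents,
# each with 5 past (x, y) positions.
rng = np.random.default_rng(0)
scene = rng.normal(size=(3, 5, 2)).cumsum(axis=1)
for idx in (1, 2):
    print(f"sensitivity to removing agent {idx}: "
          f"{counterfactual_sensitivity(scene, idx):.3f}")
```

A large displacement indicates that the removed agent was influential for the target agent's predicted motion, which is the kind of per-scenario explanation the counterfactual analysis aims to provide; the same masking idea extends to modifying trajectories or adding agents.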
