Paper Title
A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning
Paper Authors
Paper Abstract
Inscrutable AI systems are difficult to trust, especially if they operate in safety-critical settings like autonomous driving. Therefore, there is a need to build transparent and queryable systems to increase trust levels. We propose a transparent, human-centric explanation generation method for autonomous vehicle motion planning and prediction based on an existing white-box system called IGP2. Our method integrates Bayesian networks with context-free generative rules and can give causal natural language explanations for the high-level driving behaviour of autonomous vehicles. Preliminary testing on simulated scenarios shows that our method captures the causes behind the actions of autonomous vehicles and generates intelligible explanations with varying complexity.
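To make the abstract's core idea concrete, the following is a minimal, self-contained sketch of how a probabilistic model over causes can be combined with simple generation rules to produce a causal natural language explanation. It is not the paper's implementation and does not use IGP2; the scenario, variable names (`change_lane_left`, `vehicle_ahead_slow`, `exit_approaching`), probabilities, and phrase templates are all hypothetical and chosen only for illustration.

```python
# Illustrative sketch (assumptions, not the authors' system): a toy two-node
# Bayesian-network-style factorisation P(action | cause) * P(cause), plus
# hand-written context-free-style rules that turn the most probable cause
# into a natural language explanation of a driving action.

ACTIONS = ["change_lane_left"]
CAUSES = ["vehicle_ahead_slow", "exit_approaching"]

# Toy conditional probabilities P(action | cause) and priors P(cause).
P_ACTION_GIVEN_CAUSE = {
    ("change_lane_left", "vehicle_ahead_slow"): 0.8,
    ("change_lane_left", "exit_approaching"): 0.2,
}
P_CAUSE = {"vehicle_ahead_slow": 0.6, "exit_approaching": 0.4}


def most_likely_cause(action: str) -> str:
    """Return the cause maximising P(cause | action) via Bayes' rule
    (the normalising constant is the same for all causes, so it is omitted)."""
    posterior = {
        cause: P_ACTION_GIVEN_CAUSE[(action, cause)] * P_CAUSE[cause]
        for cause in CAUSES
    }
    return max(posterior, key=posterior.get)


# Generation rules in the spirit of a context-free grammar:
#   EXPLANATION -> "I" ACTION_PHRASE "because" CAUSE_PHRASE
ACTION_PHRASES = {"change_lane_left": "changed to the left lane"}
CAUSE_PHRASES = {
    "vehicle_ahead_slow": "the vehicle ahead was driving slowly",
    "exit_approaching": "my exit was approaching",
}


def explain(action: str) -> str:
    """Generate a causal explanation for the given high-level action."""
    cause = most_likely_cause(action)
    return f"I {ACTION_PHRASES[action]} because {CAUSE_PHRASES[cause]}."


if __name__ == "__main__":
    print(explain("change_lane_left"))
    # -> "I changed to the left lane because the vehicle ahead was driving slowly."
```

In the sketch, the probabilistic step selects the explanation's content (which cause to cite) while the template rules determine its surface form; varying how many causes are verbalised is one simple way to obtain explanations of different complexity, as the abstract describes.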