Paper Title
Automatically Learning Fallback Strategies with Model-Free Reinforcement Learning in Safety-Critical Driving Scenarios
Paper Authors
Paper Abstract
When learning to behave in a stochastic environment where safety is critical, such as driving a vehicle in traffic, it is natural for human drivers to plan fallback strategies as a backup to use if there is ever an unexpected change in the environment. Knowing to expect the unexpected, and planning for such outcomes, increases our capability to be robust to unseen scenarios and may help prevent catastrophic failures. Knowing when and how to use fallback strategies is of particular interest for the control of Autonomous Vehicles (AVs), in the interest of safety. Because of the imperfect information available to an AV about its environment, it is important to have alternative strategies at the ready which might not have been deduced from the original training data distribution. In this paper we present a principled approach for a model-free Reinforcement Learning (RL) agent to capture multiple modes of behaviour in an environment. We introduce an extra pseudo-reward term to the reward model to encourage exploration of areas of the state space different from those privileged by the optimal policy. We base this reward term on a distance metric between the trajectories of agents, in order to force policies to focus on different areas of the state space than the initial exploring agent. Throughout the paper, we refer to this particular training paradigm as learning fallback strategies. We apply this method to an autonomous driving scenario and show that we are able to learn useful policies that would otherwise have been missed during training and would be unavailable when executing the control algorithm.
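As a minimal illustration of the idea described in the abstract (not the authors' implementation), the sketch below augments an environment reward with a pseudo-reward that grows with the distance between the new agent's trajectory and the trajectories of a previously trained, initially exploring agent. The function names (trajectory_distance, pseudo_reward, augmented_reward), the time-aligned Euclidean distance metric, and the weight parameter are all illustrative assumptions; the paper's actual distance metric and reward shaping may differ.

```python
import numpy as np

def trajectory_distance(traj_a, traj_b):
    """Illustrative distance between two trajectories, each an array of
    visited states with shape (T, state_dim). Here: mean Euclidean
    distance between time-aligned states, truncated to the shorter
    trajectory. This is an assumed metric, not the paper's."""
    traj_a, traj_b = np.asarray(traj_a), np.asarray(traj_b)
    t = min(len(traj_a), len(traj_b))
    return float(np.mean(np.linalg.norm(traj_a[:t] - traj_b[:t], axis=1)))

def pseudo_reward(current_traj, reference_trajs, weight=0.1):
    """Bonus that grows with the distance from the trajectories produced
    by the initial exploring agent, encouraging the new policy to visit
    different areas of the state space."""
    if not reference_trajs:
        return 0.0
    d = min(trajectory_distance(current_traj, ref) for ref in reference_trajs)
    return weight * d

def augmented_reward(env_reward, current_traj, reference_trajs, weight=0.1):
    """Total reward = task reward + away-from-reference exploration bonus."""
    return env_reward + pseudo_reward(current_traj, reference_trajs, weight)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical trajectories: 3 rollouts of the initial agent, 1 of the fallback agent.
    reference = [rng.normal(size=(50, 4)) for _ in range(3)]
    new_traj = rng.normal(loc=2.0, size=(50, 4))
    print(augmented_reward(env_reward=1.0, current_traj=new_traj,
                           reference_trajs=reference))
```

Taking the minimum distance over the reference trajectories is one simple design choice: the bonus is only paid for moving away from all previously covered behaviour modes, rather than from their average.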