Paper Title

Context-Hierarchy Inverse Reinforcement Learning

Paper Authors

Wei Gao, David Hsu, Wee Sun Lee

Paper Abstract

An inverse reinforcement learning (IRL) agent learns to act intelligently by observing expert demonstrations and learning the expert's underlying reward function. Although learning reward functions from demonstrations has achieved great success in various tasks, several other challenges are mostly ignored. First, existing IRL methods try to learn the reward function from scratch, without relying on any prior knowledge. Second, traditional IRL methods assume that the reward function is homogeneous across all demonstrations. Some existing IRL methods extend to heterogeneous demonstrations, but they still assume a single hidden variable that affects behavior and learn this hidden variable together with the reward from the demonstrations. To address these issues, we present Context-Hierarchy IRL (CHIRL), a new IRL algorithm that exploits context to scale up IRL and learn reward functions of complex behaviors. CHIRL models the context hierarchically as a directed acyclic graph; it represents the reward function as a corresponding modular deep neural network that associates each network module with a node of the context hierarchy. The context hierarchy and the modular reward representation enable data sharing across multiple contexts, as well as state abstraction, significantly improving learning performance. When the context hierarchy represents a subtask decomposition, CHIRL has a natural connection with hierarchical task planning: it can incorporate prior knowledge of the causal dependencies among subtasks and solve a large, complex task by decomposing it into subtasks and conquering each one in turn. Experiments on benchmark tasks, including a large-scale autonomous driving task in the CARLA simulator, show promising results in scaling up IRL for tasks with complex reward functions.
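To make the modular reward representation concrete, below is a minimal PyTorch sketch. It assumes a toy two-level context hierarchy given as a child-to-parent map and composes one small network module per context node along the root-to-leaf path; the hierarchy, module names, layer sizes, and path-composition scheme are all illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a modular reward network over a context hierarchy.
# The hierarchy, module sizes, and path-composition scheme here are
# illustrative assumptions, not CHIRL's exact architecture.
import torch
import torch.nn as nn

class ModularReward(nn.Module):
    def __init__(self, state_dim: int, parent: dict, hidden: int = 32):
        super().__init__()
        self.parent = parent  # maps each context node to its parent (None = root)
        # One small module per context node; a root node reads the raw
        # state, every other node reads its parent's hidden features.
        self.node_nets = nn.ModuleDict({
            node: nn.Sequential(
                nn.Linear(state_dim if p is None else hidden, hidden),
                nn.ReLU(),
            )
            for node, p in parent.items()
        })
        self.head = nn.Linear(hidden, 1)  # scalar reward

    def forward(self, state: torch.Tensor, context: str) -> torch.Tensor:
        # Walk up from the leaf context to the root, then apply modules
        # root-first; shared ancestor modules are what enable data
        # sharing across sibling contexts.
        path, node = [], context
        while node is not None:
            path.append(node)
            node = self.parent[node]
        h = state
        for node in reversed(path):
            h = self.node_nets[node](h)
        return self.head(h)

# Hypothetical driving-style hierarchy: two subtasks share the root module.
parent = {"drive": None, "follow_lane": "drive", "turn_left": "drive"}
reward_net = ModularReward(state_dim=8, parent=parent)
r = reward_net(torch.randn(4, 8), "turn_left")  # rewards for a batch of 4 states
```

In this reading, demonstrations from the "follow_lane" and "turn_left" contexts both update the shared "drive" module, which is one way the data sharing across contexts described in the abstract could arise.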
