Paper title
Hierarchical reinforcement learning for efficient exploration and transfer
Paper authors
Paper abstract
Sparse-reward domains are challenging for reinforcement learning algorithms since significant exploration is needed before encountering reward for the first time. Hierarchical reinforcement learning can facilitate exploration by reducing the number of decisions necessary before obtaining a reward. In this paper, we present a novel hierarchical reinforcement learning framework based on the compression of an invariant state space that is common to a range of tasks. The algorithm introduces subtasks which consist of moving between the state partitions induced by the compression. Results indicate that the algorithm can successfully solve complex sparse-reward domains, and transfer knowledge to solve new, previously unseen tasks more quickly.
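The core idea in the abstract — compress the state space, then treat moving between the resulting partitions as subtasks for a high-level learner — can be illustrated with a toy sketch. This is not the paper's algorithm, only a hedged illustration under strong simplifying assumptions: a one-dimensional corridor with a single sparse reward, a hand-coded compression `phi` (fixed blocks of states), a scripted low-level subtask that walks to an adjacent partition, and tabular Q-learning over partitions at the high level. All names (`phi`, `run_subtask`, `train`) and the domain itself are hypothetical.

```python
import random

N = 24             # corridor states 0..N-1 (hypothetical toy domain)
K = 4              # partition block size; the compression maps K states to one partition
N_PARTS = N // K   # number of partitions induced by the compression
GOAL_PART = N_PARTS - 1  # sparse reward: only entering the last partition pays off

def phi(s):
    """Compression: map a primitive state to its partition index."""
    return s // K

def run_subtask(s, direction):
    """Low-level subtask: step left (-1) or right (+1) until the partition changes.

    Returns the resulting state and whether the subtask succeeded
    (it fails if it runs into the edge of the corridor)."""
    start = phi(s)
    while phi(s) == start:
        nxt = s + direction
        if nxt < 0 or nxt >= N:
            return s, False
        s = nxt
    return s, True

def train(episodes=300, alpha=0.5, gamma=0.95, eps=0.3):
    """High-level tabular Q-learning whose actions are the two subtasks."""
    Q = [[0.0, 0.0] for _ in range(N_PARTS)]  # Q[partition][0=left, 1=right]
    for _ in range(episodes):
        s = 0
        for _ in range(N_PARTS * 4):
            p = phi(s)
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[p][act])
            s2, _ok = run_subtask(s, -1 if a == 0 else 1)
            r = 1.0 if phi(s2) == GOAL_PART else 0.0
            p2 = phi(s2)
            Q[p][a] += alpha * (r + gamma * max(Q[p2]) - Q[p][a])
            s = s2
            if r > 0.0:
                break
    return Q
```

Because the high-level learner decides only once per partition crossing rather than once per primitive step, reward is encountered after far fewer decisions, which is the exploration benefit the abstract describes; the compression `phi` is also task-independent here, so the same partition structure could be reused for a new goal location, mirroring the transfer claim.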