Paper Title

EASpace: Enhanced Action Space for Policy Transfer

Authors

Zheng Zhang, Qingrui Zhang, Bo Zhu, Xiaohan Wang, Tianjiang Hu

Abstract

Formulating expert policies as macro actions promises to alleviate the long-horizon issue via structured exploration and efficient credit assignment. However, traditional option-based multi-policy transfer methods suffer from inefficient exploration of macro action lengths and insufficient exploitation of useful long-duration macro actions. In this paper, a novel algorithm named EASpace (Enhanced Action Space) is proposed, which formulates macro actions in an alternative form to accelerate the learning process using multiple available sub-optimal expert policies. Specifically, EASpace formulates each expert policy as multiple macro actions with different execution times. All the macro actions are then integrated directly into the primitive action space. An intrinsic reward, proportional to the execution time of a macro action, is introduced to encourage the exploitation of useful macro actions. A learning rule similar to intra-option Q-learning is employed to improve data efficiency. Theoretical analysis is presented to show the convergence of the proposed learning rule. The efficiency of EASpace is illustrated by a grid-based game and a multi-agent pursuit problem. The proposed algorithm is also implemented in physical systems to validate its effectiveness.
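To make the mechanism concrete, below is a minimal tabular sketch of the ideas stated in the abstract: each expert policy contributes one macro action per candidate execution time, all macros are appended to the primitive action space, a macro's return is augmented with an intrinsic reward proportional to how long it actually ran, and Q-values are updated with an SMDP-style bootstrapped target. The class and method names, the `env.step(state, action) -> (next_state, reward, done)` interface, and the constant `eta` are illustrative assumptions rather than the paper's implementation, and the paper's intra-option-style experience sharing across macros is omitted for brevity.

```python
import numpy as np

class EASpaceAgentSketch:
    """Illustrative tabular agent; names and constants are assumptions."""

    def __init__(self, n_states, n_primitive, expert_policies, durations,
                 alpha=0.1, gamma=0.99, eta=0.01, epsilon=0.1):
        self.experts = expert_policies  # callables: state -> primitive action
        # One macro action per (expert policy, execution time) pair,
        # appended directly after the primitive actions (enhanced space).
        self.macros = [(e, d)
                       for e in range(len(expert_policies))
                       for d in durations]
        self.Q = np.zeros((n_states, n_primitive + len(self.macros)))
        self.n_primitive = n_primitive
        self.alpha, self.gamma = alpha, gamma
        self.eta, self.epsilon = eta, epsilon

    def act(self, state):
        # epsilon-greedy over the enhanced (primitive + macro) action space
        if np.random.rand() < self.epsilon:
            return int(np.random.randint(self.Q.shape[1]))
        return int(np.argmax(self.Q[state]))

    def step(self, env, state, action):
        """Execute one enhanced action and update Q; returns (next_state, done)."""
        if action < self.n_primitive:  # ordinary one-step Q-learning
            s2, r, done = env.step(state, action)
            target = r + (0.0 if done else self.gamma * self.Q[s2].max())
            self.Q[state, action] += self.alpha * (target - self.Q[state, action])
            return s2, done
        # Macro action: run the chosen expert for up to `duration` steps.
        expert_id, duration = self.macros[action - self.n_primitive]
        expert = self.experts[expert_id]
        s, ret, disc, t, done = state, 0.0, 1.0, 0, False
        while t < duration and not done:
            s2, r, done = env.step(s, expert(s))
            ret += disc * r
            disc *= self.gamma  # disc = gamma^t after t steps
            s, t = s2, t + 1
        # Intrinsic bonus proportional to the executed time, encouraging
        # exploitation of useful long-duration macros (assumed form).
        ret += self.eta * t
        target = ret + (0.0 if done else disc * self.Q[s].max())
        self.Q[state, action] += self.alpha * (target - self.Q[state, action])
        return s, done
```

A training loop would simply alternate `act` and `step` until an episode terminates; because macros live in the same flat action space as primitives, no separate option-termination machinery is needed in this sketch.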
