Title
Developing cooperative policies for multi-stage reinforcement learning tasks
Authors
Abstract
Many hierarchical reinforcement learning algorithms utilise a series of independent skills as a basis for solving tasks at a higher level of reasoning. These algorithms do not consider the value of using cooperative rather than independent skills. This paper proposes the Cooperative Consecutive Policies (CCP) method, which enables consecutive agents to cooperatively solve long-horizon, multi-stage tasks. The method modifies each agent's policy to maximise both the current agent's critic and the next agent's critic. Cooperatively maximising the critics allows each agent to take actions that benefit its own task as well as subsequent tasks. Evaluated in a multi-room maze domain and a peg-in-hole manipulation domain, the cooperative policies outperformed a set of naive policies, a single agent trained across the entire domain, and another sequential HRL algorithm.
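The core idea of the abstract can be illustrated with a minimal sketch: an agent scores candidate actions against both its own critic and the next agent's critic. This is an illustrative toy only; the weighted-sum combination, the `alpha` weight, and the discrete candidate set are all assumptions of this sketch, not details given by the paper.

```python
def cooperative_action(q_current, q_next, candidate_actions, alpha=0.5):
    """Select the candidate action that maximises a blend of the current
    agent's critic and the next agent's critic.

    q_current, q_next -- callables mapping an action to a scalar value.
    alpha -- hypothetical weight on the current critic; the paper's
    actual combination rule may differ from this weighted sum.
    """
    return max(
        candidate_actions,
        key=lambda a: alpha * q_current(a) + (1.0 - alpha) * q_next(a),
    )

# Toy example: the two critics prefer different actions, so the chosen
# action depends on how strongly the next agent's critic is weighted.
acts = [0.0, 0.5, 1.0]
q_cur = lambda a: -abs(a - 0.0)  # current agent prefers a = 0
q_nxt = lambda a: -abs(a - 1.0)  # next agent prefers a = 1
print(cooperative_action(q_cur, q_nxt, acts, alpha=0.7))
```

With `alpha=0.7` the current agent's preference dominates and `0.0` is chosen; lowering `alpha` below 0.5 shifts the choice toward the next agent's preferred action, which is the cooperative trade-off the abstract describes.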