Paper Title

Guided Exploration with Proximal Policy Optimization using a Single Demonstration

Paper Authors

Gabriele Libardi, Gianni De Fabritiis

Paper Abstract

Solving sparse-reward tasks through exploration is one of the major challenges in deep reinforcement learning, especially in three-dimensional, partially observable environments. Critically, the algorithm proposed in this article uses a single human demonstration to solve hard-exploration problems. We train an agent on a combination of demonstrations and its own experience to solve problems with variable initial conditions. We adapt this idea and integrate it with proximal policy optimization (PPO). The agent is able to increase its performance and to tackle harder problems by replaying its own past trajectories, prioritizing them based on the obtained reward and the maximum value of the trajectory. We compare different variations of this algorithm to behavioral cloning on a set of hard-exploration tasks in the Animal-AI Olympics environment. To the best of our knowledge, learning a task of comparable difficulty in a three-dimensional environment using only a single human demonstration has never been considered before.
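
The abstract outlines a self-imitation mechanism: the agent's past trajectories are stored and replayed, prioritized by the obtained reward and the maximum value of the trajectory. The sketch below illustrates one way such a prioritized trajectory buffer could look; the class names, the priority formula, and the weighting are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of the trajectory-replay idea described in the abstract,
# not the authors' exact implementation: completed episodes are stored and
# re-sampled with a priority derived from the episode return and the maximum
# critic value seen along the trajectory. The priority formula and weighting
# are assumptions.
import heapq
import random
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass(order=True)
class Trajectory:
    priority: float
    # Each transition is an (observation, action, reward) tuple; excluded
    # from comparisons so the heap orders trajectories by priority alone.
    transitions: List[Tuple] = field(compare=False, default_factory=list)


class TrajectoryReplayBuffer:
    """Keeps the top-`capacity` trajectories, ranked by priority."""

    def __init__(self, capacity: int = 100, value_weight: float = 0.5):
        self.capacity = capacity
        self.value_weight = value_weight  # illustrative weighting (assumption)
        self._heap: List[Trajectory] = []  # min-heap: worst trajectory at root

    def add(self, transitions: List[Tuple], episode_return: float, max_value: float):
        # Assumed priority: the obtained reward plus a weighted contribution
        # of the maximum value estimate along the trajectory.
        priority = episode_return + self.value_weight * max_value
        traj = Trajectory(priority, transitions)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, traj)
        elif priority > self._heap[0].priority:
            heapq.heapreplace(self._heap, traj)  # evict the lowest-priority one

    def sample(self) -> Trajectory:
        # Replay better trajectories more often: sample proportionally to
        # priority (clamped so weights stay positive).
        weights = [max(t.priority, 0.0) + 1e-6 for t in self._heap]
        return random.choices(self._heap, weights=weights, k=1)[0]
```

In a training loop, batches drawn from such a buffer (seeded with the single human demonstration) would be mixed with fresh PPO rollouts, so the agent keeps imitating its best past behavior while continuing to explore.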
