Paper Title

MAN: Multi-Action Networks Learning

Paper Authors

Wang, Keqin, Bartsch, Alison, Farimani, Amir Barati

Paper Abstract

Learning control policies with large discrete action spaces is a challenging problem in the field of reinforcement learning due to the resulting inefficiencies in exploration. With high-dimensional action spaces, there are a large number of potential actions in each individual dimension over which policies must be learned. In this work, we introduce a Deep Reinforcement Learning (DRL) algorithm called Multi-Action Networks (MAN) Learning that addresses the challenge of high-dimensional, large discrete action spaces. We propose factorizing the N-dimensional action space into N one-dimensional components, known as sub-actions, and creating a Value Neural Network for each sub-action. MAN then uses temporal-difference learning to train the networks synchronously, which is simpler than directly training a single network with a large action output. To evaluate the proposed method, we test MAN on three scenarios: an n-dimensional maze task, a block stacking task, and then extend MAN to handle 12 games from the Atari Arcade Learning Environment with 18-action spaces. Our results indicate that MAN learns faster than both Deep Q-Learning and Double Deep Q-Learning, implying that our method is a better-performing synchronous temporal-difference algorithm than those currently available for large discrete action spaces.
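
To make the factorization concrete, below is a minimal Python/PyTorch sketch of the sub-action idea described in the abstract: one small value network per action dimension, so an N-dimensional discrete action space is covered by N independent heads rather than one output enumerating every joint action (which would grow multiplicatively with N). The network architecture, hidden size, and the state/action dimensions used here are illustrative assumptions, not the authors' implementation; the paper trains these networks synchronously with temporal-difference targets, which is omitted here.

import torch
import torch.nn as nn

class SubActionQNet(nn.Module):
    """Value estimates for a single one-dimensional sub-action (illustrative sketch)."""
    def __init__(self, state_dim: int, n_sub_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_sub_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Hypothetical sizes: a 16-dimensional state and a 3-dimensional action
# space with 18 discrete choices per dimension.
state_dim = 16
sub_action_sizes = [18, 18, 18]
q_nets = [SubActionQNet(state_dim, n) for n in sub_action_sizes]

state = torch.randn(1, state_dim)
# Greedy joint action: take the argmax of each sub-action network
# independently and concatenate the results.
joint_action = [int(q(state).argmax(dim=-1)) for q in q_nets]
print(joint_action)  # e.g. [4, 11, 0]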
