Paper Title

Learning Action Translator for Meta Reinforcement Learning on Sparse-Reward Tasks

Authors

Yijie Guo, Qiucheng Wu, Honglak Lee

Abstract

Meta reinforcement learning (meta-RL) aims to learn a policy solving a set of training tasks simultaneously and quickly adapting to new tasks. It requires massive amounts of data drawn from training tasks to infer the common structure shared among tasks. Without heavy reward engineering, the sparse rewards in long-horizon tasks exacerbate the problem of sample efficiency in meta-RL. Another challenge in meta-RL is the discrepancy of difficulty level among tasks, which might cause one easy task dominating learning of the shared policy and thus preclude policy adaptation to new tasks. This work introduces a novel objective function to learn an action translator among training tasks. We theoretically verify that the value of the transferred policy with the action translator can be close to the value of the source policy and our objective function (approximately) upper bounds the value difference. We propose to combine the action translator with context-based meta-RL algorithms for better data collection and more efficient exploration during meta-training. Our approach empirically improves the sample efficiency and performance of meta-RL algorithms on sparse-reward tasks.
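
The abstract describes learning an action translator across training tasks, with an objective that (approximately) upper bounds the value difference between the source policy and the transferred policy. Below is a minimal, hypothetical sketch of what such a translator and a transition-matching training loss could look like in PyTorch; the class name, network architecture, and the exact loss form are illustrative assumptions rather than the authors' objective from the paper.

```python
# Hypothetical sketch: a translator g(s, a_src) -> a_tgt is trained so that the
# translated action, executed under a (learned) model of the target task's
# dynamics, reproduces the transition observed in the source task.
# All names and the loss form are illustrative assumptions.
import torch
import torch.nn as nn

class ActionTranslator(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # assumes actions in [-1, 1]
        )

    def forward(self, state, src_action):
        # Translate a source-task action into a target-task action for this state.
        return self.net(torch.cat([state, src_action], dim=-1))

def translation_loss(translator, target_dynamics, states, src_actions, src_next_states):
    """Illustrative transition-matching objective: the translated action, rolled
    through a learned model of the target task's dynamics, should lead to the
    same next state that the source action produced in the source task."""
    tgt_actions = translator(states, src_actions)
    pred_next = target_dynamics(states, tgt_actions)  # assumed learned dynamics model
    return ((pred_next - src_next_states) ** 2).mean()
```

Under these assumptions, the loss could be minimized with a standard optimizer over batches of source-task transitions (s, a_src, s'); at test time, actions chosen by the shared source policy would be passed through the translator before being executed in the target task.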
