Paper Title

Deep Reinforcement and InfoMax Learning

Paper Authors

Bogdan Mazoure, Remi Tachet des Combes, Thang Doan, Philip Bachman, R Devon Hjelm

Paper Abstract

We begin with the hypothesis that a model-free agent whose representations are predictive of properties of future states (beyond expected rewards) will be more capable of solving and adapting to new RL problems. To test that hypothesis, we introduce an objective based on Deep InfoMax (DIM) which trains the agent to predict the future by maximizing the mutual information between its internal representation of successive timesteps. We test our approach in several synthetic settings, where it successfully learns representations that are predictive of the future. Finally, we augment C51, a strong RL baseline, with our temporal DIM objective and demonstrate improved performance on a continual learning task and on the recently introduced Procgen environment.
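The abstract describes training the agent to maximize mutual information between its internal representations of successive timesteps. A common way to lower-bound such mutual information is an InfoNCE-style contrastive loss, where each (state at t, state at t+1) pair from the same trajectory is a positive and other pairings in the batch serve as negatives. The sketch below is a minimal illustration of that general idea, not the paper's exact objective or architecture; the function name and NumPy implementation are assumptions for demonstration.

```python
import numpy as np

def temporal_infonce_loss(z_t, z_tp1, temperature=1.0):
    """InfoNCE-style contrastive loss between successive-timestep encodings.

    z_t:   (batch, dim) array of state representations at time t
    z_tp1: (batch, dim) array of representations of the matching states at t+1

    Row i of z_t and row i of z_tp1 form a positive pair; every other
    pairing in the batch acts as a negative. Minimizing this loss maximizes
    a lower bound on the mutual information I(z_t; z_tp1).
    (Illustrative sketch only -- not the paper's exact DIM objective.)
    """
    # Pairwise similarity scores between all t / t+1 representations.
    logits = (z_t @ z_tp1.T) / temperature          # shape: (batch, batch)
    # Subtract the per-row max for numerical stability of the softmax.
    logits = logits - logits.max(axis=1, keepdims=True)
    # Log-softmax over each row: log P(positive | row of candidates).
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy on the diagonal (the true t -> t+1 pairs).
    return -np.mean(np.diag(log_probs))
```

As a sanity check, the loss should be smaller when each t+1 representation is strongly aligned with its own timestep-t partner than when the pairing is scrambled, since the contrastive objective rewards exactly that alignment.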
