Paper Title

Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning

Paper Authors

Kimin Lee, Younggyo Seo, Seunghyun Lee, Honglak Lee, Jinwoo Shin

Abstract

Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics. However, learning a global model that can generalize across different dynamics is a challenging task. To tackle this problem, we decompose the task of learning a global dynamics model into two stages: (a) learning a context latent vector that captures the local dynamics, then (b) predicting the next state conditioned on it. In order to encode dynamics-specific information into the context latent vector, we introduce a novel loss function that encourages the context latent vector to be useful for predicting both forward and backward dynamics. The proposed method achieves superior generalization ability across various simulated robotics and control tasks, compared to existing RL schemes.
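
The two-stage decomposition described above, and the loss that trains the context latent vector on both forward and backward dynamics, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the MLP architectures, layer sizes, deterministic delta-style prediction, and the MSE objective are simplifying assumptions made here for brevity.

```python
# Minimal sketch (not the authors' code) of a context-conditioned dynamics model
# with a forward-backward prediction loss. Network sizes and the MSE objective
# are illustrative assumptions.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Encodes a short history of (state, action) pairs into a context latent vector."""
    def __init__(self, state_dim, action_dim, context_dim, history_len):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history_len * (state_dim + action_dim), 128),
            nn.ReLU(),
            nn.Linear(128, context_dim),
        )

    def forward(self, history):
        # history: (batch, history_len, state_dim + action_dim)
        return self.net(history.flatten(start_dim=1))

class DynamicsHead(nn.Module):
    """Predicts a state delta conditioned on (state, action, context)."""
    def __init__(self, state_dim, action_dim, context_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + context_dim, 256),
            nn.ReLU(),
            nn.Linear(256, state_dim),
        )

    def forward(self, state, action, context):
        return self.net(torch.cat([state, action, context], dim=-1))

def context_loss(encoder, forward_head, backward_head, history, s_t, a_t, s_next):
    """Encourage the context latent to be useful for both forward and backward prediction."""
    c = encoder(history)
    # Forward: predict s_{t+1} from (s_t, a_t, c).
    forward_pred = s_t + forward_head(s_t, a_t, c)
    # Backward: predict s_t from (s_{t+1}, a_t, c).
    backward_pred = s_next + backward_head(s_next, a_t, c)
    return (nn.functional.mse_loss(forward_pred, s_next)
            + nn.functional.mse_loss(backward_pred, s_t))
```

In this sketch, only the combined forward-backward loss trains the context encoder, which is what pushes the latent vector to capture dynamics-specific information rather than information about any single transition.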
