Paper Title

Model-Free Prediction of Partially Observed Spatiotemporally Chaotic Systems

Optimizing the Long-Term Behaviour of Deep Reinforcement Learning for Pushing and Grasping

Paper Authors

Chau, Rodrigo

Paper Abstract

Reservoir computing is a powerful tool for predicting turbulence: its simple architecture is computationally efficient enough to handle large systems. However, its implementation typically requires measurements of the full state vector and knowledge of the system nonlinearities. We use nonlinear projector functions to expand the system measurements into a high-dimensional space, which is then fed into the reservoir to obtain predictions. We demonstrate the application of such reservoir computing networks on a spatiotemporally chaotic system that emulates several features of turbulence. We show that using radial basis functions as the nonlinear projector robustly captures complex system nonlinearities, even with only partial observations and without knowledge of the governing equations. Finally, we show that our network can still produce reasonably accurate predictions when the measurements are sparse, incomplete and noisy, and even when the governing equations become inaccurate, paving the way for model-free prediction of practical turbulent systems.
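The pipeline described in this abstract is: expand partial measurements through a radial-basis-function projector, drive a standard echo-state reservoir with the resulting features, and fit a linear readout by ridge regression. Below is a minimal NumPy sketch of that idea; the class and function names (`RBFReservoir`, `rbf_project`) and all hyperparameter choices (random RBF centers, spectral radius, ridge penalty) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_project(u, centers, sigma):
    """Radial-basis-function projection of a partial measurement u
    into a higher-dimensional feature space (one feature per center)."""
    d2 = np.sum((centers - u) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class RBFReservoir:
    def __init__(self, n_obs, n_feat, n_res, sigma=1.0,
                 spectral_radius=0.9, ridge=1e-6):
        self.centers = rng.normal(size=(n_feat, n_obs))  # assumed: random RBF centers
        self.sigma = sigma
        A = rng.normal(size=(n_res, n_res))
        A *= spectral_radius / np.max(np.abs(np.linalg.eigvals(A)))
        self.A = A                                       # recurrent reservoir weights
        self.W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_feat))
        self.ridge = ridge

    def _step(self, r, u):
        return np.tanh(self.A @ r + self.W_in @ rbf_project(u, self.centers, self.sigma))

    def run(self, U):
        """Drive the reservoir with a measurement sequence U of shape (T, n_obs)."""
        r = np.zeros(self.A.shape[0])
        states = []
        for u in U:
            r = self._step(r, u)
            states.append(r.copy())
        return np.array(states)

    def fit(self, U):
        """Ridge-regress the readout to map each reservoir state to the next measurement."""
        R = self.run(U[:-1])
        Y = U[1:]
        G = R.T @ R + self.ridge * np.eye(R.shape[1])
        self.W_out = np.linalg.solve(G, R.T @ Y).T

    def forecast(self, U_warmup, steps):
        """Closed-loop forecast: feed the network's own predictions back as inputs."""
        r = np.zeros(self.A.shape[0])
        for u in U_warmup:
            r = self._step(r, u)
        preds = []
        u = self.W_out @ r
        for _ in range(steps):
            preds.append(u)
            r = self._step(r, u)
            u = self.W_out @ r
        return np.array(preds)
```

Given a series `U` of shape `(T, n_obs)` of partial measurements from a spatiotemporally chaotic simulation, `fit(U)` learns the readout and `forecast(U[-100:], 500)` produces a closed-loop prediction without ever seeing the governing equations.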

We investigate the "Visual Pushing for Grasping" (VPG) system by Zeng et al. and the "Hourglass" system by Ewerton et al., an evolution of the former. The focus of our work is the investigation of the capabilities of both systems to learn long-term rewards and policies. Zeng et al.'s original task requires only a limited amount of foresight, and Ewerton et al. attain their best performance with an agent that considers only the most immediate action. We are interested in the ability of their models and training algorithms to accurately predict long-term Q-values. To evaluate this ability, we design a new bin sorting task and reward function. Our task requires agents to accurately estimate future rewards and therefore to use high discount factors in their Q-value calculation. We investigate the behaviour of an adaptation of the VPG training algorithm on our task and show that this adaptation cannot accurately predict the required long-term action sequences. In addition to the limitations identified by Ewerton et al., it suffers from the known Deep Q-Learning problem of overestimated Q-values. In an effort to solve our task, we turn to the Hourglass models and combine them with the Double Q-Learning approach. We show that this combination enables the models to accurately predict long-term action sequences when trained with large discount factors. Our results show that the Double Q-Learning technique is essential for training with very high discount factors, as the models' Q-value predictions diverge otherwise. We also experiment with different approaches to discount factor scheduling, loss calculation and exploration procedures, and find that these factors do not visibly influence the model's performance on our task.
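The fix the abstract credits, combining the Hourglass models with Double Q-Learning, comes down to how the bootstrap target is computed: the online network selects the greedy next action while a periodically synchronized target network evaluates it, which damps the overestimation that makes vanilla Deep Q-Learning diverge at high discount factors. A minimal PyTorch sketch of that target computation follows; the function name and the toy network shapes are assumptions for illustration, not the authors' code.

```python
import torch

def double_q_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double Q-Learning bootstrap target.

    The online network *selects* the greedy next action; the target network
    *evaluates* it. Decoupling selection from evaluation damps the Q-value
    overestimation of vanilla Deep Q-Learning, whose errors compound fastest
    when gamma is large and rewards lie many steps in the future."""
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)   # selection
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)  # evaluation
        return rewards + gamma * (1.0 - dones) * next_q

# Toy wiring (hypothetical shapes): 4-dimensional states, 3 discrete actions.
online_net = torch.nn.Linear(4, 3)
target_net = torch.nn.Linear(4, 3)
target_net.load_state_dict(online_net.state_dict())  # periodic hard sync

rewards = torch.zeros(8)
next_states = torch.randn(8, 4)
dones = torch.zeros(8)  # 1.0 where the episode ended
targets = double_q_targets(online_net, target_net, rewards, next_states, dones, gamma=0.999)
```

With a single maximizing network, noise in the Q-estimates is systematically picked up by the max operator and multiplied by gamma at every step; at gamma near 1 this positive bias feeds back on itself, which is consistent with the divergence the abstract reports for training without Double Q-Learning.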
