Title
From Pixels to Legs: Hierarchical Learning of Quadruped Locomotion
Authors
Abstract
Legged robots navigating crowded scenes and complex terrains in the real world are required to execute dynamic leg movements while processing visual input for obstacle avoidance and path planning. We show that a quadruped robot can acquire both of these skills by means of hierarchical reinforcement learning (HRL). By virtue of their hierarchical structure, our policies learn to implicitly break down this joint problem by concurrently learning High Level (HL) and Low Level (LL) neural network policies. These two levels are connected by a low-dimensional hidden layer, which we call the latent command. HL receives a first-person camera view, whereas LL receives the latent command from HL together with the robot's on-board sensor readings, and controls the actuators. We train policies to walk in two different environments: a curved cliff and a maze. We show that hierarchical policies can concurrently learn to locomote and navigate in these environments, and that they are more efficient than non-hierarchical neural network policies. This architecture also allows for knowledge reuse across tasks: an LL network trained on one task can be transferred to a new task in a new environment. Finally, HL, which processes camera images, can be evaluated at much lower and varying frequencies compared to LL, thus reducing computation time and bandwidth requirements.
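The architecture described above can be sketched in a few lines: an HL policy maps a camera image to a low-dimensional latent command, and an LL policy maps that latent command plus proprioceptive readings to actuator targets, with HL evaluated only every few control steps. This is a minimal illustrative sketch, not the paper's implementation; all dimensions, the linear policy parameterization, and the update period `HL_PERIOD` are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical dimensions; the abstract does not specify exact sizes.
IMG_DIM = 32 * 32      # flattened first-person camera view
LATENT_DIM = 8         # low-dimensional latent command linking HL and LL
PROPRIO_DIM = 24       # on-board sensor readings (joint angles, IMU, ...)
ACT_DIM = 12           # actuator targets for a quadruped (3 joints per leg)

rng = np.random.default_rng(0)

class HighLevelPolicy:
    """Maps a camera image to a latent command (sketch: one linear layer)."""
    def __init__(self):
        self.w = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.01

    def __call__(self, image):
        return np.tanh(self.w @ image.ravel())

class LowLevelPolicy:
    """Maps latent command + proprioception to actuator targets."""
    def __init__(self):
        self.w = rng.standard_normal((ACT_DIM, LATENT_DIM + PROPRIO_DIM)) * 0.01

    def __call__(self, latent, proprio):
        return np.tanh(self.w @ np.concatenate([latent, proprio]))

# LL runs at every control step; HL refreshes the latent command only
# every HL_PERIOD steps, so vision is processed at a much lower frequency.
HL_PERIOD = 10
hl, ll = HighLevelPolicy(), LowLevelPolicy()
latent = np.zeros(LATENT_DIM)
actions = []
for step in range(30):
    if step % HL_PERIOD == 0:
        image = rng.random((32, 32))            # placeholder camera frame
        latent = hl(image)                      # infrequent HL evaluation
    proprio = rng.standard_normal(PROPRIO_DIM)  # placeholder sensor reading
    actions.append(ll(latent, proprio))         # LL acts at full rate
```

Because the latent command changes only every `HL_PERIOD` steps, the expensive image-processing network runs an order of magnitude less often than the motor-control loop, which is the computation and bandwidth saving the abstract refers to.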