Paper Title

Gesture2Path: Imitation Learning for Gesture-aware Navigation

Paper Authors

Catie Cuan, Edward Lee, Emre Fisher, Anthony Francis, Leila Takayama, Tingnan Zhang, Alexander Toshev, Sören Pirk

Abstract

As robots increasingly enter human-centered environments, they must not only be able to navigate safely around humans, but also adhere to complex social norms. Humans often rely on non-verbal communication through gestures and facial expressions when navigating around other people, especially in densely occupied spaces. Consequently, robots also need to be able to interpret gestures as part of solving social navigation tasks. To this end, we present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model predictive control. Gestures are interpreted based on a neural network that operates on streams of images, while we use a state-of-the-art model predictive control algorithm to solve point-to-point navigation tasks. We deploy our method on real robots and showcase the effectiveness of our approach for four gesture-navigation scenarios: left/right, follow me, and make a circle. Our experiments indicate that our method is able to successfully interpret complex human gestures and to use them as a signal to generate socially compliant trajectories for navigation tasks. We validated our method based on in-situ ratings of participants interacting with the robots.
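The abstract describes a two-part pipeline: a neural network that interprets gestures from a stream of camera images, and a model predictive control (MPC) planner that solves the point-to-point navigation task. The paper's code is not reproduced here, so the following is only a minimal Python sketch of that structure, not the authors' implementation: all names (`GestureNet`, `mpc_plan`, `GESTURES`) are hypothetical, the toy classifier stands in for the imitation-learned gesture network, and the hand-written gesture cost stands in for the paper's state-of-the-art MPC algorithm.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical label set; the paper demonstrates left/right, follow me,
# and make a circle, plus an implicit "no gesture" case.
GESTURES = ["none", "left", "right", "follow_me", "make_circle"]

class GestureNet(nn.Module):
    """Toy stand-in for the image-stream gesture network (not the authors' model)."""
    def __init__(self, num_frames=8, num_classes=len(GESTURES)):
        super().__init__()
        self.encoder = nn.Sequential(          # shared per-frame CNN encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 * num_frames, num_classes)  # fuse frames over time

    def forward(self, frames):                 # frames: (B, T, 3, H, W)
        b = frames.shape[0]
        feats = self.encoder(frames.flatten(0, 1))           # (B*T, 32)
        return self.head(feats.view(b, -1))                  # (B, num_classes)

def mpc_plan(start, goal, gesture, horizon=20, samples=256, seed=0):
    """Random-shooting MPC sketch: sample velocity sequences, roll them out,
    and keep the trajectory that reaches the goal at the lowest cost, where
    the recognized gesture adds a social term to that cost."""
    rng = np.random.default_rng(seed)
    best_traj, best_cost = None, np.inf
    for _ in range(samples):
        steps = rng.normal(scale=0.3, size=(horizon, 2))  # candidate velocities
        traj = start + np.cumsum(steps, axis=0)           # integrate to positions
        cost = np.linalg.norm(traj[-1] - goal)            # point-to-point term
        # Toy gesture costs: "left"/"right" bias which side of the person
        # (assumed at the origin) the trajectory passes on.
        if gesture == "left":
            cost += 0.5 * np.maximum(-traj[:, 1], 0.0).sum()
        elif gesture == "right":
            cost += 0.5 * np.maximum(traj[:, 1], 0.0).sum()
        if cost < best_cost:
            best_traj, best_cost = traj, cost
    return best_traj

# Usage: classify a (dummy) image stream, then plan under the gesture.
net = GestureNet()
with torch.no_grad():
    logits = net(torch.zeros(1, 8, 3, 64, 64))   # placeholder camera frames
gesture = GESTURES[int(logits.argmax())]
path = mpc_plan(np.zeros(2), np.array([5.0, 0.0]), gesture)
print(gesture, path.shape)
```

The random-shooting planner is chosen here only for brevity; the paper states that a state-of-the-art MPC algorithm is used, and that the gesture interpretation is learned by imitation rather than the hand-coded costs shown above.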
