Paper Title

MoGaze: A Dataset of Full-Body Motions that Includes Workspace Geometry and Eye-Gaze

Paper Authors

Philipp Kratzer, Simon Bihlmaier, Niteesh Balachandra Midlagajni, Rohit Prakash, Marc Toussaint, Jim Mainprice

Paper Abstract

As robots become more present in open human environments, it will become crucial for robotic systems to understand and predict human motion. Such capabilities depend heavily on the quality and availability of motion capture data. However, existing datasets of full-body motion rarely include 1) long sequences of manipulation tasks, 2) the 3D model of the workspace geometry, and 3) eye-gaze, which are all important when a robot needs to predict the movements of humans in close proximity. Hence, in this paper, we present a novel dataset of full-body motion for everyday manipulation tasks, which includes the above. The motion data was captured using a traditional motion capture system based on reflective markers. We additionally captured eye-gaze using a wearable pupil-tracking device. As we show in experiments, the dataset can be used for the design and evaluation of full-body motion prediction algorithms. Furthermore, our experiments show eye-gaze as a powerful predictor of human intent. The dataset includes 180 min of motion capture data with 1627 pick and place actions being performed. It is available at https://humans-to-robots-motion.github.io/mogaze and is planned to be extended to collaborative tasks with two humans in the near future.
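The abstract reports that eye-gaze is a powerful predictor of human intent during pick-and-place actions. A minimal sketch of the underlying idea is to treat the gaze as a 3D ray and guess that the intended object is the one closest to that ray. This is a hypothetical illustration, not the paper's actual prediction algorithm; the function name and data layout are assumptions for the example.

```python
import numpy as np

def gaze_target(origin, direction, objects):
    """Return the index of the object closest to the gaze ray.

    origin: 3-vector eye position; direction: 3-vector gaze direction;
    objects: (N, 3) array of candidate object positions.
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    v = np.asarray(objects, dtype=float) - np.asarray(origin, dtype=float)
    t = np.clip(v @ d, 0.0, None)   # projection onto the ray, in front of the eyes only
    closest = np.outer(t, d)        # nearest point on the ray to each object
    dist = np.linalg.norm(v - closest, axis=1)
    return int(np.argmin(dist))

# Eyes at the origin looking along +x: the object sitting near the ray wins.
objects = [[1.0, 0.05, 0.0], [1.0, 1.0, 0.0], [-1.0, 0.0, 0.0]]
print(gaze_target([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], objects))  # → 0
```

In a dataset like MoGaze, the gaze ray would come from the wearable pupil tracker and the candidate positions from the 3D workspace model; a real predictor would also integrate gaze over time rather than use a single frame.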
