Paper Title

CZU-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and 10 wearable inertial sensors

Paper Authors

Xin Chao, Zhenjie Hou, Yujian Mo

Paper Abstract

Human action recognition has been widely used in many fields of life, and many human action datasets have been published alongside this work. However, most multimodal databases have shortcomings in the layout and number of their sensors, and therefore cannot fully represent action features. To address these problems, this paper presents a freely available dataset named CZU-MHAD (Changzhou University: a comprehensive multi-modal human action dataset). It consists of 22 actions with temporally synchronized data in three modalities: depth videos and skeleton positions from a Kinect v2 camera, and inertial signals from 10 wearable sensors. Compared with a single-modality sensor, multimodal sensors can collect data of different modalities, so using them can describe actions more accurately. Moreover, CZU-MHAD obtains the 3-axis acceleration and 3-axis angular velocity of 10 main motion joints by binding inertial sensors to those joints, and all of these data were captured at the same time. Experimental results show that this dataset can be used to study the structural relationships between different parts of the human body when performing actions, as well as fusion approaches that involve multimodal sensor data.
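As a minimal sketch of how one synchronized CZU-MHAD sample could be organized in code: the modality list (depth, skeleton, 10 inertial sensors with 3-axis acceleration plus 3-axis angular velocity, 22 action classes) comes from the abstract, while the class name, array shapes, helper function, and the use of random arrays in place of real files are all illustrative assumptions, not the dataset's documented format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CZUMHADSample:
    """One temporally synchronized recording (shapes are assumptions)."""
    depth: np.ndarray      # (T, H, W) depth frames from the Kinect v2
    skeleton: np.ndarray   # (T, 25, 3) positions of the 25 Kinect v2 skeleton joints
    inertial: np.ndarray   # (10, T, 6) 10 sensors x time x (3-axis acc + 3-axis gyro)
    action_id: int         # one of the 22 action classes

def split_inertial(sample: CZUMHADSample):
    """Separate the 6 inertial channels into acceleration and angular velocity."""
    acc = sample.inertial[..., :3]   # 3-axis acceleration per sensor
    gyro = sample.inertial[..., 3:]  # 3-axis angular velocity per sensor
    return acc, gyro

# Hypothetical usage with random data standing in for a real recording;
# 512x424 is the Kinect v2 depth resolution.
rng = np.random.default_rng(0)
sample = CZUMHADSample(
    depth=rng.random((60, 424, 512)),
    skeleton=rng.random((60, 25, 3)),
    inertial=rng.random((10, 60, 6)),
    action_id=0,
)
acc, gyro = split_inertial(sample)
print(acc.shape, gyro.shape)  # (10, 60, 3) (10, 60, 3)
```

Keeping all 10 inertial streams in a single array alongside the skeleton makes it straightforward to study per-joint structure or feed multiple modalities into a fusion model, which is the kind of use the abstract describes.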
