Paper Title

IMUTube: Automatic Extraction of Virtual on-body Accelerometry from Video for Human Activity Recognition

Paper Authors

Kwon, Hyeokhyen, Tong, Catherine, Haresamudram, Harish, Gao, Yan, Abowd, Gregory D., Lane, Nicholas D., Ploetz, Thomas

Paper Abstract

The lack of large-scale, labeled data sets impedes progress in developing robust and generalized predictive models for on-body sensor-based human activity recognition (HAR). Labeled data in human activity recognition is scarce and hard to come by, as sensor data collection is expensive, and the annotation is time-consuming and error-prone. To address this problem, we introduce IMUTube, an automated processing pipeline that integrates existing computer vision and signal processing techniques to convert videos of human activity into virtual streams of IMU data. These virtual IMU streams represent accelerometry at a wide variety of locations on the human body. We show how the virtually generated IMU data improves the performance of a variety of models on known HAR datasets. Our initial results are very promising, but the greater promise of this work lies in a collective approach by the computer vision, signal processing, and activity recognition communities to extend this work in ways that we outline. This should lead to on-body, sensor-based HAR becoming yet another success story in large-dataset breakthroughs in recognition.
