Paper Title

Selective Spatio-Temporal Aggregation Based Pose Refinement System: Towards Understanding Human Activities in Real-World Videos

Paper Authors

Di Yang, Rui Dai, Yaohui Wang, Rupayan Mallick, Luca Minciullo, Gianpiero Francesca, Francois Bremond

Paper Abstract

Taking advantage of human pose data to understand human activities has attracted much attention recently. However, state-of-the-art pose estimators struggle to obtain high-quality 2D or 3D pose data in real-world, unannotated videos due to occlusion, truncation, and low resolution. Hence, in this work, we propose 1) a Selective Spatio-Temporal Aggregation mechanism, named SST-A, that refines and smooths the keypoint locations extracted by multiple expert pose estimators, and 2) an effective weakly-supervised self-training framework that leverages the aggregated poses as pseudo ground-truth, instead of handcrafted annotations, for real-world pose estimation. Extensive experiments evaluate not only the upstream pose refinement but also the downstream action recognition performance on four datasets: Toyota Smarthome, NTU-RGB+D, Charades, and Kinetics-50. We demonstrate that the skeleton data refined by our Pose-Refinement system (SSTA-PRS) effectively boosts various existing action recognition models, achieving competitive or state-of-the-art performance.
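To make the aggregation idea concrete, below is a minimal Python/NumPy sketch of how keypoints from several expert pose estimators could be selectively fused and then temporally smoothed. The function name, array layout, confidence-weighted selection rule, and moving-average window are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def selective_spatio_temporal_aggregation(poses, scores, temporal_window=3):
    """Fuse multi-expert 2D keypoints, then smooth them over time.

    poses:  (E, T, K, 2) keypoints from E expert estimators over
            T frames and K joints (assumed layout).
    scores: (E, T, K) per-joint confidences from each estimator.
    Returns a (T, K, 2) array of refined keypoints.
    """
    E, T, K, _ = poses.shape

    # Spatial selection: confidence-weighted average of the expert
    # proposals for each frame and joint (one plausible selective rule;
    # the paper's exact criterion may differ).
    weights = scores / np.clip(scores.sum(axis=0, keepdims=True), 1e-8, None)
    fused = (poses * weights[..., None]).sum(axis=0)  # (T, K, 2)

    # Temporal aggregation: moving average over a short window to
    # suppress per-frame jitter caused by occlusion or truncation.
    refined = np.empty_like(fused)
    half = temporal_window // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        refined[t] = fused[lo:hi].mean(axis=0)
    return refined

# Example with hypothetical data: 3 experts, 10 frames, 17 COCO joints.
rng = np.random.default_rng(0)
poses = rng.uniform(0, 100, size=(3, 10, 17, 2))
scores = rng.uniform(0.0, 1.0, size=(3, 10, 17))
print(selective_spatio_temporal_aggregation(poses, scores).shape)  # (10, 17, 2)
```

Per the abstract, refined poses of this kind would then serve as pseudo ground-truth for the weakly-supervised self-training of a real-world pose estimator.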
