Paper Title
Combining detection and tracking for human pose estimation in videos
Paper Authors
Paper Abstract
We propose a novel top-down approach that tackles the problem of multi-person human pose estimation and tracking in videos. In contrast to existing top-down approaches, our method is not limited by the performance of its person detector and can predict the poses of person instances that it fails to localize. It achieves this capability by propagating known person locations forward and backward in time and searching for poses in those regions. Our approach consists of three components: (i) a Clip Tracking Network that performs body joint detection and tracking simultaneously on small video clips; (ii) a Video Tracking Pipeline that merges the fixed-length tracklets produced by the Clip Tracking Network into arbitrary-length tracks; and (iii) a Spatial-Temporal Merging procedure that refines the joint locations based on spatial and temporal smoothing terms. Thanks to the precision of our Clip Tracking Network and our merging procedure, our approach produces very accurate joint predictions and can fix common mistakes on hard scenarios like heavily entangled people. Our approach achieves state-of-the-art results on both joint detection and tracking, on both the PoseTrack 2017 and 2018 datasets, and against all top-down and bottom-up approaches.
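The spatial-temporal merging idea described above — combining candidate joint locations from overlapping clips using smoothness terms — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual formulation: the function name, greedy strategy, and weighting are assumptions, chosen only to show how a temporal smoothness term can select among candidate joint locations per frame.

```python
# Hypothetical sketch: when overlapping clips yield multiple candidate
# locations for the same joint in a frame, greedily pick the candidate
# closest to the previous frame's choice. This stands in for the paper's
# spatial and temporal smoothing terms, which are more elaborate.

def merge_joint_track(candidates, temporal_weight=1.0):
    """candidates: list over frames; each entry is a non-empty list of
    (x, y) candidate locations for one body joint.
    Returns one (x, y) per frame, chosen to minimize a simple
    temporal-smoothness cost against the previous selection."""
    track = []
    for frame_cands in candidates:
        if not track:
            # First frame: no temporal context yet, take the first candidate.
            track.append(frame_cands[0])
            continue
        px, py = track[-1]
        best = min(
            frame_cands,
            key=lambda c: temporal_weight * ((c[0] - px) ** 2 + (c[1] - py) ** 2),
        )
        track.append(best)
    return track
```

A usage example: given noisy duplicate detections per frame, the smoother trajectory is preferred over spatially distant outliers, e.g. `merge_joint_track([[(0, 0)], [(10, 10), (1, 1)], [(2, 2), (50, 50)]])` keeps the near-linear path through `(1, 1)` and `(2, 2)`.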