Paper Title


Peeking into occluded joints: A novel framework for crowd pose estimation

Paper Authors

Lingteng Qiu, Xuanye Zhang, Yanran Li, Guanbin Li, Xiaojun Wu, Zixiang Xiong, Xiaoguang Han, Shuguang Cui

Paper Abstract


Although occlusion widely exists in nature and remains a fundamental challenge for pose estimation, existing heatmap-based approaches suffer serious degradation under occlusion. Their intrinsic problem is that they directly localize the joints based on visual information; however, invisible joints lack such information. In contrast to localization, our framework estimates the invisible joints from an inference perspective by proposing an Image-Guided Progressive GCN module, which provides a comprehensive understanding of both image context and pose structure. Moreover, existing benchmarks contain limited occlusions for evaluation. Therefore, we thoroughly pursue this problem and propose a novel OPEC-Net framework together with a new Occluded Pose (OCPose) dataset with 9k annotated images. Extensive quantitative and qualitative evaluations on benchmarks demonstrate that OPEC-Net achieves significant improvements over recent leading works. Notably, our OCPose is the most complex occlusion dataset with respect to average IoU between adjacent instances. Source code and OCPose will be publicly available.
