Paper Title

Motion Guided 3D Pose Estimation from Videos

Paper Authors

Jingbo Wang, Sijie Yan, Yuanjun Xiong, Dahua Lin

Paper Abstract

We propose a new loss function, called motion loss, for the problem of monocular 3D human pose estimation from 2D pose. In computing motion loss, a simple yet effective representation for keypoint motion, called pairwise motion encoding, is introduced. We design a new graph convolutional network architecture, U-shaped GCN (UGCN). It captures both short-term and long-term motion information to fully leverage the additional supervision from the motion loss. We train UGCN with the motion loss on two large-scale benchmarks: Human3.6M and MPI-INF-3DHP. Our model surpasses other state-of-the-art models by a large margin. It also demonstrates strong capacity in producing smooth 3D sequences and recovering keypoint motion.
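To make the idea concrete, here is a minimal sketch of what a motion loss built on a pairwise motion encoding could look like. This is an illustration under assumptions, not the paper's exact formulation: we assume keypoint motion is the temporal difference of each joint's 3D position, the pairwise encoding is the difference between the motions of every pair of keypoints, and the loss is the mean absolute error between the predicted and ground-truth encodings.

```python
import numpy as np

def motion_loss(pred, gt, dt=1):
    """Hypothetical motion loss sketch (not the paper's exact definition).

    pred, gt: arrays of shape (T, J, 3) -- 3D pose sequences over T frames
    with J keypoints. dt is the temporal interval used to measure motion.
    """
    def pairwise_motion_encoding(seq):
        # Per-keypoint motion: displacement of each joint over dt frames.
        motion = seq[dt:] - seq[:-dt]                          # (T-dt, J, 3)
        # Pairwise encoding: motion[i] - motion[j] for all keypoint pairs.
        return motion[:, :, None, :] - motion[:, None, :, :]   # (T-dt, J, J, 3)

    # L1 distance between predicted and ground-truth motion encodings.
    return np.abs(pairwise_motion_encoding(pred)
                  - pairwise_motion_encoding(gt)).mean()
```

One consequence of the pairwise form in this sketch is that any global translation shared by all keypoints in a frame cancels out, so the loss supervises the relative movement of joints rather than trajectory drift of the whole body.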
