Paper Title


Beyond Weak Perspective for Monocular 3D Human Pose Estimation

Authors

Imry Kissos, Lior Fritz, Matan Goldman, Omer Meir, Eduard Oks, Mark Kliger

Abstract


We consider the task of predicting 3D joint locations and orientations from a monocular video with the skinned multi-person linear (SMPL) model. We first infer 2D joint locations with an off-the-shelf pose estimation algorithm. We use the SPIN algorithm and estimate initial predictions of body pose, shape and camera parameters from a deep regression neural network. We then adhere to the SMPLify algorithm, which receives those initial parameters and optimizes them so that the 3D joints inferred from the SMPL model fit the 2D joint locations. This algorithm involves a projection step of 3D joints onto the 2D image plane. The conventional approach is to follow weak perspective assumptions which use an ad hoc focal length. Through experimentation on the 3D Poses in the Wild (3DPW) dataset, we show that using full perspective projection, with the correct camera center and an approximated focal length, provides favorable results. Our algorithm resulted in a winning entry for the 3DPW Challenge, reaching first place in joint orientation accuracy.
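The abstract contrasts the conventional weak perspective projection with the full perspective projection used in this work. As a minimal sketch of the difference, the snippet below implements both projection steps with NumPy; the image size, the focal-length heuristic (derived from the image diagonal when intrinsics are unknown), and the example joint coordinates are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def full_perspective_project(joints_3d, focal, center):
    """Project 3D joints (N, 3) in camera coordinates onto the image plane
    with a pinhole model: u = f*x/z + cx, v = f*y/z + cy."""
    x, y, z = joints_3d[:, 0], joints_3d[:, 1], joints_3d[:, 2]
    u = focal * x / z + center[0]
    v = focal * y / z + center[1]
    return np.stack([u, v], axis=1)

def weak_perspective_project(joints_3d, scale, trans):
    """Weak perspective: drop depth per joint, apply one global
    scale and 2D translation (scale, trans are camera parameters)."""
    return scale * (joints_3d[:, :2] + trans)

# Illustrative setup (assumed, not from the paper): approximate the
# focal length from the image diagonal and take the image center as
# the principal point when calibration is unavailable.
W, H = 1920, 1080
focal = np.sqrt(W ** 2 + H ** 2)
center = np.array([W / 2.0, H / 2.0])

joints = np.array([[0.1, -0.2, 3.0],
                   [0.0, 0.5, 3.2]])   # toy 3D joints in camera coords
uv = full_perspective_project(joints, focal, center)
```

Note that under the full perspective model, joints at different depths are foreshortened differently, which is exactly the effect the weak perspective approximation discards; this matters most when the subject is close to the camera or far from the image center.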
