Paper Title

Neural Pose Transfer by Spatially Adaptive Instance Normalization

Paper Authors

Jiashun Wang, Chao Wen, Yanwei Fu, Haitao Lin, Tianyun Zou, Xiangyang Xue, Yinda Zhang

Abstract

Pose transfer has been studied for decades, in which the pose of a source mesh is applied to a target mesh. Particularly in this paper, we are interested in transferring the pose of a source human mesh to deform the target human mesh, while the source and target meshes may have different identity information. Traditional studies assume that the paired source and target meshes exist with point-wise correspondences of user-annotated landmarks/mesh points, which requires heavy labelling effort. On the other hand, the generalization ability of deep models is limited when the source and target meshes have different identities. To break this limitation, we propose the first neural pose transfer model that solves pose transfer via the latest technique for image style transfer, leveraging a newly proposed component -- spatially adaptive instance normalization. Our model does not require any correspondences between the source and target meshes. Extensive experiments show that the proposed model can effectively transfer deformation from source to target meshes, and has good generalization ability to deal with unseen identities or poses of meshes. Code is available at https://github.com/jiashunwang/Neural-Pose-Transfer.
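The core idea of spatially adaptive instance normalization is to normalize the target mesh's per-vertex features with instance normalization, then re-scale and re-shift them with a spatially varying (per-vertex) affine modulation conditioned on the source pose. A minimal NumPy sketch of that operation is below; note that all names are hypothetical and that, in the actual model, `gamma` and `beta` are predicted by learned layers from the source pose features rather than passed in directly.

```python
import numpy as np

def spatially_adaptive_instance_norm(target_feat, gamma, beta, eps=1e-5):
    """Sketch of spatially adaptive instance normalization on mesh features.

    target_feat : (C, N) array of per-vertex features of the target mesh
    gamma, beta : (C, N) per-vertex modulation, conditioned on the source pose
                  (in the real model these come from learned layers)
    """
    # Instance norm: normalize each channel across all N vertices.
    mean = target_feat.mean(axis=1, keepdims=True)
    std = target_feat.std(axis=1, keepdims=True)
    normalized = (target_feat - mean) / (std + eps)
    # Spatially varying affine modulation, unlike plain instance norm
    # whose gamma/beta are constant per channel.
    return gamma * normalized + beta

# Toy usage: 8 feature channels over 1024 vertices.
rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 1024))
gamma = rng.normal(size=(8, 1024))
beta = rng.normal(size=(8, 1024))
out = spatially_adaptive_instance_norm(feat, gamma, beta)
```

The key difference from ordinary instance normalization is that `gamma` and `beta` vary per vertex, which lets the source pose inject spatially local deformation cues into the target's feature map without any point correspondences.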
