Title
PoNA: Pose-guided Non-local Attention for Human Pose Transfer
Authors
Abstract
Human pose transfer, which aims at transferring the appearance of a given person to a target pose, is very challenging and important in many applications. Previous work either ignores the guidance of pose features or uses only a local attention mechanism, leading to implausible and blurry results. We propose a new human pose transfer method using a generative adversarial network (GAN) with simplified cascaded blocks. In each block, we introduce a pose-guided non-local attention (PoNA) mechanism with a long-range dependency scheme to select the more important regions of the image features to transfer. We also design a pre-posed image-guided pose feature update and a post-posed pose-guided image feature update to better utilize the pose and image features. Our network is simple, stable, and easy to train. Quantitative and qualitative results on the Market-1501 and DeepFashion datasets show the efficacy and efficiency of our model. Compared with state-of-the-art methods, our model generates sharper and more realistic images with rich details, while having fewer parameters and running faster. Furthermore, our generated images can help to alleviate data insufficiency for person re-identification.
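To make the core idea concrete, the following is a minimal numpy sketch of a pose-guided non-local attention step in the spirit of the abstract: queries are computed from pose features and keys/values from image features, and a softmax over all spatial positions gives the long-range dependency map used to re-weight image features. All details here (random 1x1 projections, scaling, the residual connection, the function name) are illustrative assumptions, not the paper's exact block design.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pose_guided_nonlocal_attention(img_feat, pose_feat, seed=0):
    """Illustrative pose-guided non-local attention (hypothetical sketch).

    img_feat, pose_feat: arrays of shape (C, H, W).
    Returns an updated image feature map of the same shape.
    """
    C, H, W = img_feat.shape
    N = H * W
    rng = np.random.default_rng(seed)
    # Stand-ins for learned 1x1 convolutions (random projections here).
    Wq = rng.standard_normal((C, C)) / np.sqrt(C)
    Wk = rng.standard_normal((C, C)) / np.sqrt(C)
    Wv = rng.standard_normal((C, C)) / np.sqrt(C)
    q = Wq @ pose_feat.reshape(C, N)  # queries come from pose features
    k = Wk @ img_feat.reshape(C, N)   # keys come from image features
    v = Wv @ img_feat.reshape(C, N)   # values come from image features
    # (N, N) attention map: every position attends to every other position,
    # which is the long-range dependency scheme of non-local attention.
    attn = softmax(q.T @ k / np.sqrt(C), axis=-1)
    out = (v @ attn.T).reshape(C, H, W)
    # Residual connection, as in standard non-local blocks.
    return img_feat + out
```

Because the attention map is dense over all H*W positions, each output location can draw appearance information from anywhere in the image, guided by where the target pose features point.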