Paper Title
VAE/WGAN-Based Image Representation Learning For Pose-Preserving Seamless Identity Replacement In Facial Images
Paper Authors
Paper Abstract
We present a novel variational generative adversarial network (VGAN) based on the Wasserstein loss to learn a latent representation from a face image that is invariant to identity but preserves head-pose information. This facilitates synthesis of a realistic face image with the same head pose as a given input image, but with a different identity. One application of this network is in privacy-sensitive scenarios: after identity replacement in an image, utility, such as head pose, can still be recovered. Extensive experimental validation on synthetic and real human-face image datasets, performed under three threat scenarios, confirms the ability of the proposed network to preserve the head pose of the input image, mask the input identity, and synthesize a good-quality, realistic face image of a desired identity. We also show that our network can be used to perform pose-preserving identity morphing and identity-preserving pose morphing. The proposed method improves over a recent state-of-the-art method in terms of quantitative metrics as well as synthesized image quality.