Paper Title

Domain Embedded Multi-model Generative Adversarial Networks for Image-based Face Inpainting

Authors

Zhang, Xian; Wang, Xin; Kong, Bin; Shi, Canghong; Yin, Youbing; Song, Qi; Lyu, Siwei; Lv, Jiancheng; Li, Xiaojie

Abstract

Prior knowledge of face shape and structure plays an important role in face inpainting. However, traditional face inpainting methods mainly focus on the resolution of the generated content in the missing region without explicitly accounting for the particularities of the human face, and they generally produce discordant facial parts. To solve this problem, we present a domain-embedded multi-model generative adversarial model for inpainting face images with large cropped regions. We first represent only the face region with a latent variable as domain knowledge and combine it with the non-face textures to generate high-quality face images with plausible contents. Two adversarial discriminators are then used to judge whether the generated distribution is close to the real distribution. The model can not only synthesize novel image structures but also explicitly exploit the embedded face domain knowledge to produce better predictions that are consistent in structure and appearance. Experiments on both the CelebA and CelebA-HQ face datasets demonstrate that our proposed approach achieves state-of-the-art performance and generates higher-quality inpainting results than existing methods.
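The abstract describes a generator that embeds a face-region latent code as domain knowledge, fuses it with non-face texture features, and is trained against two adversarial discriminators. Below is a minimal PyTorch sketch of that kind of architecture, not the authors' implementation: all module names, channel sizes, network depths, and the assumption that one discriminator scores the whole image while the other scores the face region are illustrative only.

```python
# Minimal sketch (not the paper's released code): a generator that fuses an
# embedded face-region latent code with non-face texture features, paired with
# a patch discriminator that can be instantiated twice (global image / local
# face region). All names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class DomainEmbeddedGenerator(nn.Module):
    def __init__(self, latent_channels=256):
        super().__init__()
        # Encodes the masked input (RGB + mask channel): the non-face textures.
        self.texture_encoder = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Embeds the face-region prior into a latent feature map
        # (the "domain knowledge" described in the abstract).
        self.face_embedding = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_channels, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decodes the fused features back to a completed RGB image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, masked_image_with_mask, face_prior):
        # Both inputs are assumed to share the same spatial resolution,
        # so the two feature maps align for channel-wise concatenation.
        tex = self.texture_encoder(masked_image_with_mask)
        dom = self.face_embedding(face_prior)
        return self.decoder(torch.cat([tex, dom], dim=1))


class PatchDiscriminator(nn.Module):
    """Instantiated twice: once for the full image, once for the face crop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # real/fake score map
        )

    def forward(self, image):
        return self.net(image)


if __name__ == "__main__":
    g = DomainEmbeddedGenerator()
    masked = torch.randn(1, 4, 128, 128)   # masked RGB image + mask channel
    prior = torch.randn(1, 3, 128, 128)    # face-region domain prior
    out = g(masked, prior)                 # -> (1, 3, 128, 128)
    print(out.shape)
```

In training, the completed image would presumably be scored by the global discriminator while the local discriminator scores only the recovered face region, alongside a reconstruction loss on the ground-truth image; the exact losses and network depths are not specified in the abstract.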
