Paper Title
MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network
Paper Authors
Paper Abstract
We present Mask-guided Generative Adversarial Network (MagGAN) for high-resolution face attribute editing, in which semantic facial masks from a pre-trained face parser are used to guide the fine-grained image editing process. With the introduction of a mask-guided reconstruction loss, MagGAN learns to only edit the facial parts that are relevant to the desired attribute changes, while preserving the attribute-irrelevant regions (e.g., hat, scarf for modification `To Bald'). Further, a novel mask-guided conditioning strategy is introduced to incorporate the influence region of each attribute change into the generator. In addition, a multi-level patch-wise discriminator structure is proposed to scale our model for high-resolution ($1024 \times 1024$) face editing. Experiments on the CelebA benchmark show that the proposed method significantly outperforms prior state-of-the-art approaches in terms of both image quality and editing performance.
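The mask-guided reconstruction loss described above can be pictured as a spatially weighted reconstruction term: pixels outside the attribute-relevant region (as given by the pre-trained face parser) are penalized more heavily for changing, so the generator learns to leave attribute-irrelevant parts such as hats or scarves untouched. The following is a minimal PyTorch sketch of this idea; the function name, tensor shapes, and weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def mask_guided_reconstruction_loss(x_real, x_recon, relevance_mask,
                                    irrelevant_weight=10.0):
    """Spatially weighted L1 reconstruction loss (illustrative sketch).

    x_real, x_recon: (B, 3, H, W) input and reconstructed images.
    relevance_mask:  (B, 1, H, W) soft mask in [0, 1], where 1 marks regions
                     relevant to the requested attribute change (e.g., hair
                     for `To Bald'), derived from a face parser.
    irrelevant_weight: hypothetical scalar controlling how strongly changes
                     outside the relevant region are penalized.
    """
    per_pixel = torch.abs(x_real - x_recon)                    # (B, 3, H, W)
    # Heavier penalty where the mask says the pixel should stay unchanged.
    weight = 1.0 + irrelevant_weight * (1.0 - relevance_mask)  # (B, 1, H, W)
    return (weight * per_pixel).mean()
```

In this sketch, setting `irrelevant_weight` to zero recovers a plain L1 reconstruction loss, while larger values push the generator to confine its edits to the parser-identified attribute region.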