Paper Title
StyleGAN2 Distillation for Feed-forward Image Manipulation
Paper Authors
Paper Abstract
StyleGAN2 is a state-of-the-art network for generating realistic images. Moreover, it was explicitly trained to have disentangled directions in latent space, which allows efficient image manipulation by varying latent factors. Editing an existing image requires embedding it into the latent space of StyleGAN2. Latent code optimization via backpropagation is commonly used for high-quality embedding of real-world images, although it is prohibitively slow for many applications. We propose a way to distill a particular image manipulation of StyleGAN2 into an image-to-image network trained in a paired fashion. The resulting pipeline is an alternative to existing GANs trained on unpaired data. We present results for transformations of human faces: gender swap, aging/rejuvenation, style transfer, and image morphing. We show that the generation quality of our method is comparable to StyleGAN2 backpropagation and to current state-of-the-art methods on these tasks.
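The distillation idea in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the paper's implementation: a fixed linear map stands in for StyleGAN2, paired data are synthesized by shifting latent codes along a chosen direction, and the "student" is an affine map fit by least squares (the paper instead trains a full image-to-image network on such pairs). All names, shapes, and the learning rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for StyleGAN2: a fixed linear "generator" mapping an 8-dim
# latent code to a 64-dim "image" (shapes are illustrative only).
G = rng.standard_normal((64, 8))
direction = rng.standard_normal(8)  # a disentangled latent direction, e.g. "aging"

def generate(w):
    return G @ w

# Step 1: synthesize paired training data by shifting latent codes along the
# chosen direction, so each source image comes with its manipulated version.
W = rng.standard_normal((1000, 8))
X = W @ G.T                # source "images"
Y = (W + direction) @ G.T  # manipulated "images"

# Step 2: distill the manipulation into a feed-forward student. Here the
# student is an affine map fit by least squares on the synthetic pairs.
X_aug = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
S, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)

# Inference: the student applies the edit in a single forward pass, with no
# per-image latent optimization via backpropagation.
w_new = rng.standard_normal(8)
x_new = generate(w_new)
y_pred = np.append(x_new, 1.0) @ S
y_true = generate(w_new + direction)
print(np.max(np.abs(y_pred - y_true)))  # residual on this toy model
```

In this toy setting the edit is exactly affine in image space, so the student recovers it almost perfectly; the point of the real pipeline is that the same recipe (synthesize pairs with the generator, train a fast student on them) still works when both the generator and the student are deep networks.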