Paper Title

Self-Distilled StyleGAN: Towards Generation from Internet Photos

Authors

Ron Mokady, Michal Yarom, Omer Tov, Oran Lang, Daniel Cohen-Or, Tali Dekel, Michal Irani, Inbar Mosseri

Abstract

StyleGAN is known to produce high-fidelity images, while also offering unprecedented semantic editing. However, these fascinating abilities have been demonstrated only on a limited set of datasets, which are usually structurally aligned and well curated. In this paper, we show how StyleGAN can be adapted to work on raw uncurated images collected from the Internet. Such image collections impose two main challenges to StyleGAN: they contain many outlier images, and are characterized by a multi-modal distribution. Training StyleGAN on such raw image collections results in degraded image synthesis quality. To meet these challenges, we propose a StyleGAN-based self-distillation approach, which consists of two main components: (i) A generative-based self-filtering of the dataset to eliminate outlier images, in order to generate an adequate training set, and (ii) Perceptual clustering of the generated images to detect the inherent data modalities, which are then employed to improve StyleGAN's "truncation trick" in the image synthesis process. The presented technique enables the generation of high-quality images, while minimizing the loss in diversity of the data. Through qualitative and quantitative evaluation, we demonstrate the power of our approach to new challenging and diverse domains collected from the Internet. New datasets and pre-trained models are available at https://self-distilled-stylegan.github.io/ .
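The second component above adapts the truncation trick to a multi-modal latent distribution: instead of interpolating every latent code toward a single global mean (which collapses diversity when the data has several modes), each code can be pulled toward the centroid of its nearest perceptual cluster. The sketch below illustrates this idea in plain Python; the function name, the 2-D toy latents, and the hard-coded centroids are illustrative assumptions, not the paper's implementation (in practice the centroids would come from clustering generated samples in the W latent space).

```python
import math

def cluster_truncation(w, centroids, psi=0.7):
    """Truncate latent code `w` toward its nearest cluster centroid.

    A sketch of multi-modal truncation: standard truncation uses one
    global mean as the anchor; here each sample uses the centroid of
    the mode it belongs to, preserving inter-mode diversity.
    """
    # Pick the centroid closest to w (nearest perceptual cluster).
    c = min(centroids, key=lambda cand: math.dist(cand, w))
    # Interpolate toward the anchor: smaller psi pulls w closer to the
    # centroid, trading diversity within the mode for sample quality.
    return [ci + psi * (wi - ci) for ci, wi in zip(c, w)]

# Toy usage: two modes; a latent near the second one stays in its mode.
centroids = [[0.0, 0.0], [10.0, 10.0]]
print(cluster_truncation([9.0, 11.0], centroids, psi=0.5))  # → [9.5, 10.5]
```

With psi=1.0 the code is unchanged, and with psi=0.0 it collapses onto its cluster centroid, mirroring the behavior of the original truncation trick but anchored per mode rather than globally.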
