Paper Title
When, Why, and Which Pretrained GANs Are Useful?
Paper Authors
Paper Abstract
The literature has proposed several methods to finetune pretrained GANs on new datasets, which typically results in higher performance compared to training from scratch, especially in the limited-data regime. However, despite the apparent empirical benefits of GAN pretraining, its inner mechanisms have not been analyzed in depth, and the understanding of its role is not entirely clear. Moreover, essential practical details, e.g., selecting a proper pretrained GAN checkpoint, currently lack rigorous grounding and are typically determined by trial and error. This work aims to dissect the process of GAN finetuning. First, we show that initializing the GAN training process with a pretrained checkpoint primarily affects the model's coverage rather than the fidelity of individual samples. Second, we explicitly describe how pretrained generators and discriminators contribute to the finetuning process and explain the previous evidence on the importance of pretraining both of them. Finally, as an immediate practical benefit of our analysis, we describe a simple recipe to choose the GAN checkpoint that is most suitable for finetuning to a particular target task. Importantly, for most target tasks, an ImageNet-pretrained GAN, despite having poor visual quality, appears to be an excellent starting point for finetuning, resembling the typical pretraining scenario of discriminative computer vision models.
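The abstract describes the checkpoint-selection recipe only at a high level. The following is a minimal, hypothetical Python sketch of the general idea, assuming the recipe amounts to ranking candidate pretrained checkpoints by a distance (such as FID) between each generator's samples and the target data; the function name `select_checkpoint`, the checkpoint labels, and the toy distance are illustrative placeholders, not the paper's actual implementation.

```python
# Hypothetical sketch: rank pretrained GAN checkpoints by how close their
# generated samples are to the target dataset, then finetune from the closest
# one. In practice `distance` would be a real metric such as FID computed on
# image feature statistics; here a toy scalar distance keeps the sketch runnable.
from typing import Callable, Dict, Sequence


def select_checkpoint(
    samples_per_checkpoint: Dict[str, Sequence[float]],  # samples drawn from each pretrained generator
    target_samples: Sequence[float],                      # samples from the target dataset
    distance: Callable[[Sequence[float], Sequence[float]], float],
) -> str:
    """Return the checkpoint whose generated samples lie closest to the target data."""
    scores = {
        name: distance(samples, target_samples)
        for name, samples in samples_per_checkpoint.items()
    }
    return min(scores, key=scores.get)


if __name__ == "__main__":
    # Toy stand-in for a real distance; substitute FID between image sets in practice.
    mean_gap = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b))
    candidates = {
        "imagenet_gan": [0.4, 0.6, 0.5],   # hypothetical checkpoint labels
        "faces_gan": [0.1, 0.2, 0.15],
    }
    target = [0.45, 0.55]
    print(select_checkpoint(candidates, target, mean_gap))  # -> "imagenet_gan"
```

Under this reading, one would then initialize both the generator and the discriminator from the winning checkpoint before finetuning, consistent with the abstract's point that pretraining both networks matters.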