Paper Title
F2GAN: Fusing-and-Filling GAN for Few-shot Image Generation
Paper Authors
Paper Abstract
In order to generate images for a given category, existing deep generative models generally rely on abundant training images. However, extensive data acquisition is expensive, and the ability to learn quickly from limited data is required in real-world applications; these existing methods are not well-suited for fast adaptation to a new category. Few-shot image generation, which aims to generate images for a new category from only a few examples, has attracted growing research interest. In this paper, we propose a Fusing-and-Filling Generative Adversarial Network (F2GAN) to generate realistic and diverse images for a new category with only a few images. In our F2GAN, a fusion generator fuses the high-level features of conditional images with random interpolation coefficients and then fills in the attended low-level details with a non-local attention module to produce a new image. Moreover, our discriminator ensures the diversity of generated images via a mode seeking loss and an interpolation regression loss. Extensive experiments on five datasets demonstrate the effectiveness of the proposed method for few-shot image generation.
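To make the abstract's main ingredients concrete, below is a minimal PyTorch sketch of the two ideas it names: fusing conditional high-level features with random interpolation coefficients, and the two diversity-related losses on the discriminator side. The function names, the Dirichlet sampling of coefficients, and the exact loss forms are illustrative assumptions for this sketch and are not taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F


def sample_interpolation_coeffs(k: int) -> torch.Tensor:
    """Draw random coefficients on the probability simplex
    (illustrative choice: a uniform Dirichlet; the paper only states
    that the coefficients are random)."""
    return torch.distributions.Dirichlet(torch.ones(k)).sample()


def fuse_high_level_features(cond_feats: torch.Tensor,
                             coeffs: torch.Tensor) -> torch.Tensor:
    """Fuse K conditional feature maps (K, C, H, W) into one (C, H, W)
    map by a coefficient-weighted sum."""
    return (coeffs.view(-1, 1, 1, 1) * cond_feats).sum(dim=0)


def mode_seeking_loss(img_a, img_b, z_a, z_b, eps=1e-5):
    """Penalize mode collapse: images generated from different latent
    codes should differ in proportion to the latent distance
    (reciprocal-ratio form of mode seeking regularization)."""
    img_dist = F.l1_loss(img_a, img_b, reduction="mean")
    latent_dist = F.l1_loss(z_a, z_b, reduction="mean") + eps
    return 1.0 / (img_dist / latent_dist + eps)


def interpolation_regression_loss(pred_coeffs, true_coeffs):
    """Hypothetical regression target: the discriminator predicts the
    interpolation coefficients that were used during fusion."""
    return F.mse_loss(pred_coeffs, true_coeffs)


if __name__ == "__main__":
    k, c, h, w = 3, 256, 8, 8             # 3 conditional images, 256-channel 8x8 features
    cond_feats = torch.randn(k, c, h, w)  # stand-in for encoder outputs
    coeffs = sample_interpolation_coeffs(k)
    fused = fuse_high_level_features(cond_feats, coeffs)
    print(fused.shape, coeffs.sum())      # torch.Size([256, 8, 8]) tensor(1.)
```

Under these assumptions, the coefficients sum to one, so the fused feature stays within the convex combination of the conditional features, which matches the "fusing" intuition in the abstract; asking the discriminator to regress the coefficients then discourages the generator from ignoring them, which is the role the abstract attributes to the interpolation regression loss.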