Paper Title

OpenGAN: Open Set Generative Adversarial Networks

Authors

Luke Ditria, Benjamin J. Meyer, Tom Drummond

Abstract

Many existing conditional Generative Adversarial Networks (cGANs) are limited to conditioning on pre-defined and fixed class-level semantic labels or attributes. We propose an open set GAN architecture (OpenGAN) that is conditioned per-input sample with a feature embedding drawn from a metric space. Using a state-of-the-art metric learning model that encodes both class-level and fine-grained semantic information, we are able to generate samples that are semantically similar to a given source image. The semantic information extracted by the metric learning model transfers to out-of-distribution novel classes, allowing the generative model to produce samples that are outside of the training distribution. We show that our proposed method is able to generate 256$\times$256 resolution images from novel classes that are of similar visual quality to those from the training classes. In lieu of a source image, we demonstrate that random sampling of the metric space also results in high-quality samples. We show that interpolation in the feature space and latent space results in semantically and visually plausible transformations in the image space. Finally, the usefulness of the generated samples to the downstream task of data augmentation is demonstrated. We show that classifier performance can be significantly improved by augmenting the training data with OpenGAN samples on classes that are outside of the GAN training distribution.
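The core idea in the abstract is that the generator is conditioned per input sample on a feature embedding drawn from a metric space, rather than on a fixed class label. Below is a minimal PyTorch sketch of that conditioning scheme. The encoder, layer sizes, output resolution, and the simple concatenation strategy are all illustrative assumptions for clarity, not the paper's actual architecture (which produces 256$\times$256 images).

```python
# Minimal sketch of per-sample conditioning: a generator takes a noise vector
# together with a feature embedding from a (pretrained, typically frozen)
# metric-learning encoder. All names and dimensions here are assumptions.
import torch
import torch.nn as nn

class ConditionedGenerator(nn.Module):
    """Generator conditioned on a metric-space embedding per input sample."""
    def __init__(self, z_dim=128, embed_dim=256, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # Project the concatenated (noise, embedding) vector to a 4x4 feature map.
            nn.Linear(z_dim + embed_dim, 512 * 4 * 4),
            nn.Unflatten(1, (512, 4, 4)),
            nn.ReLU(inplace=True),
            # Upsample 4x4 -> 8x8 -> 16x16 -> 32x32; a full model would continue to 256x256.
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z, embedding):
        # Per-sample conditioning: each noise vector is paired with the
        # feature embedding of its source image.
        return self.net(torch.cat([z, embedding], dim=1))

# Usage: embed source images with a metric-learning encoder, then generate.
# `metric_encoder` is a stand-in for any embedding network.
metric_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))
G = ConditionedGenerator()
source_images = torch.randn(8, 3, 32, 32)
with torch.no_grad():
    f = metric_encoder(source_images)   # feature embeddings from the metric space
z = torch.randn(8, 128)                 # per-sample noise
fake = G(z, f)                          # samples semantically similar to the sources

# The abstract also notes that interpolating in the feature space yields
# plausible image-space transformations; a linear interpolation sketch:
alpha = torch.linspace(0, 1, steps=8).unsqueeze(1)
f_interp = (1 - alpha) * f[0] + alpha * f[1]    # blend two source embeddings
z_fixed = z[0].expand(8, -1)                    # hold the noise vector fixed
morph = G(z_fixed, f_interp)
```

Because conditioning comes from an embedding rather than a class index, the same generator can be driven by embeddings of novel, out-of-distribution classes at test time, or by random samples drawn directly from the metric space in lieu of a source image.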
