Paper Title


Adaptive Weighted Discriminator for Training Generative Adversarial Networks

Authors

Vasily Zadorozhnyy, Qiang Cheng, Qiang Ye

Abstract


Generative adversarial networks (GANs) have become one of the most important neural network models for classical unsupervised machine learning. A variety of discriminator loss functions have been developed to train GAN discriminators, and they all share a common structure: a sum of real and fake losses that depend only on the actual and generated data, respectively. One challenge associated with an equally weighted sum of two losses is that the training may benefit one loss while harming the other, which we show causes instability and mode collapse. In this paper, we introduce a new family of discriminator loss functions that adopts a weighted sum of real and fake parts, which we call adaptive weighted loss functions, or aw-loss functions. Using the gradients of the real and fake parts of the loss, we can adaptively choose the weights to train the discriminator in a direction that benefits the GAN's stability. Our method can potentially be applied to any discriminator model whose loss is a sum of real and fake parts. Experiments validate the effectiveness of our loss functions on unconditional image generation tasks, improving baseline results by a significant margin on the CIFAR-10, STL-10, and CIFAR-100 datasets in Inception Score and FID.
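The core idea in the abstract — weighting the real and fake loss parts using their gradients so that the combined update direction does not favor one part at the other's expense — can be illustrated with a small NumPy sketch. The weighting rule below (balancing the two gradient magnitudes) is a hypothetical stand-in for illustration only; the paper derives its own adaptive weights, which may differ.

```python
import numpy as np

def adaptive_weights(g_real, g_fake, eps=1e-12):
    # Illustrative rule (NOT the paper's exact formula): weight each part
    # inversely to its gradient norm, so the combined direction
    # d = w_r * g_real + w_f * g_fake is proportional to the sum of the
    # two unit gradients. Such a direction has a non-negative inner
    # product with BOTH g_real and g_fake, i.e. a small step along d
    # does not increase either loss part to first order.
    nr = np.linalg.norm(g_real) + eps
    nf = np.linalg.norm(g_fake) + eps
    w_r = nf / (nr + nf)
    w_f = nr / (nr + nf)
    return w_r, w_f

# Toy gradients where the real and fake parts partially conflict.
g_real = np.array([1.0, 0.0])
g_fake = np.array([-0.5, 2.0])

w_r, w_f = adaptive_weights(g_real, g_fake)
d = w_r * g_real + w_f * g_fake

# The combined direction opposes neither part's gradient.
assert d @ g_real >= 0.0
assert d @ g_fake >= 0.0
```

An equally weighted sum (w_r = w_f = 0.5) has no such guarantee: when one gradient dominates in magnitude, the update can point against the other part, which is the instability the abstract describes.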
