Paper Title

Accelerated WGAN update strategy with loss change rate balancing

Authors

Xu Ouyang, Gady Agam

Abstract

Optimizing the discriminator in Generative Adversarial Networks (GANs) to completion in the inner training loop is computationally prohibitive, and on finite datasets would result in overfitting. To address this, a common update strategy is to alternate between k optimization steps for the discriminator D and one optimization step for the generator G. This strategy is repeated in various GAN algorithms, where k is selected empirically. In this paper, we show that this update strategy is not optimal in terms of accuracy and convergence speed, and propose a new update strategy for Wasserstein GANs (WGAN) and other GANs using the WGAN loss (e.g. WGAN-GP, DeblurGAN, and Super-resolution GAN). The proposed update strategy is based on a loss change ratio comparison of G and D. We demonstrate that the proposed strategy improves both convergence speed and accuracy.
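To make the idea concrete, below is a minimal sketch of what a loss-change-rate-based update scheduler could look like. All names here are hypothetical, and the selection rule shown (update the network whose loss changed less in the previous step, i.e. the one making slower progress) is an illustrative assumption based on the abstract, not the paper's exact criterion.

```python
# Hypothetical sketch of a loss-change-rate-based update scheduler for
# WGAN-style training. The decision rule is an illustrative assumption:
# train the network whose loss is currently changing more slowly.

def loss_change_rate(prev_loss, curr_loss, eps=1e-8):
    """Relative change of a loss value between two consecutive iterations."""
    return abs(curr_loss - prev_loss) / (abs(prev_loss) + eps)

def choose_update(prev_d, curr_d, prev_g, curr_g):
    """Return 'D' or 'G': the network whose loss changed less last step."""
    r_d = loss_change_rate(prev_d, curr_d)
    r_g = loss_change_rate(prev_g, curr_g)
    return "D" if r_d < r_g else "G"

# Example: D's loss barely moved while G's dropped sharply,
# so under this rule D would be trained next.
print(choose_update(prev_d=1.00, curr_d=0.99, prev_g=2.00, curr_g=1.50))
```

In a training loop, this replaces the fixed "k D steps per G step" schedule: at each iteration the scheduler picks which network to optimize based on the two most recent loss values.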
