Paper Title

Making Method of Moments Great Again? -- How can GANs learn distributions

Paper Authors

Yuanzhi Li, Zehao Dou

Paper Abstract

Generative Adversarial Networks (GANs) are widely used models for learning complex real-world distributions. In GANs, the training of the generator usually stops when the discriminator can no longer distinguish the generator's output from the set of training examples. A central question about GANs is whether, when training stops, the generated distribution is actually close to the target distribution, and how the training process reaches such configurations efficiently. In this paper, we establish a theoretical result towards understanding this generator-discriminator training process. We empirically observe that during the early stage of GAN training, the discriminator tries to force the generator to match the low-degree moments of the generator's output and the target distribution. Moreover, we prove that by matching these empirical moments over polynomially many training examples alone, the generator can already learn a notable class of distributions, including those that can be generated by two-layer neural networks.
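To make the moment-matching idea concrete, the following is a minimal, hypothetical sketch (not the authors' algorithm): it draws samples from a two-layer ReLU generator and from a "target" generator of the same form, then measures the gap between their first- and second-order empirical moments. All variable names, dimensions, and the specific moment gap used below are illustrative assumptions.

```python
# Hypothetical sketch of low-degree moment matching between a generator's
# output and a target distribution produced by a two-layer network.
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_hidden, d_out, n = 4, 16, 3, 10_000  # illustrative sizes

# "Target" two-layer network and a candidate generator to compare against it.
W1_star = rng.normal(size=(d_latent, d_hidden))
W2_star = rng.normal(size=(d_hidden, d_out))
W1_hat = rng.normal(size=(d_latent, d_hidden))
W2_hat = rng.normal(size=(d_hidden, d_out))

def two_layer_generator(W1, W2, n_samples):
    """Push latent Gaussian noise through a two-layer ReLU network."""
    z = rng.normal(size=(n_samples, W1.shape[0]))
    return np.maximum(z @ W1, 0.0) @ W2

def low_degree_moment_gap(x, y):
    """Gap between empirical first and second moments of two sample sets."""
    gap1 = np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))
    gap2 = np.linalg.norm((x.T @ x) / len(x) - (y.T @ y) / len(y))
    return gap1 + gap2

target_samples = two_layer_generator(W1_star, W2_star, n)
generated_samples = two_layer_generator(W1_hat, W2_hat, n)
print("low-degree moment gap:", low_degree_moment_gap(generated_samples, target_samples))
```

In the paper's empirical observation, early GAN training implicitly drives this kind of low-degree moment gap toward zero; the sketch only illustrates how such a gap could be estimated from finitely many samples.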
