Paper Title
Balancing reconstruction error and Kullback-Leibler divergence in Variational Autoencoders
Paper Authors
Paper Abstract
In the loss function of Variational Autoencoders there is a well-known tension between two components: the reconstruction loss, which improves the quality of the resulting images, and the Kullback-Leibler divergence, which acts as a regularizer of the latent space. Correctly balancing these two components is a delicate issue, easily resulting in poor generative behaviour. In a recent work, Dai and Wipf obtained a noticeable improvement by allowing the network to learn the balancing factor during training, according to a suitable loss function. In this article, we show that learning can be replaced by a simple deterministic computation, which helps to explain the underlying mechanism and results in faster and more accurate behaviour. On typical datasets such as CIFAR and CelebA, our technique noticeably outperforms all previous VAE architectures.
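To make the balancing mechanism described in the abstract concrete, below is a minimal sketch in PyTorch of a loss of this form: a reconstruction term rescaled by a balancing factor, plus the KL divergence, where the factor is updated deterministically from the observed reconstruction error rather than learned. This is not the authors' implementation; the function names `balanced_vae_loss` and `update_gamma` and the exponential moving-average update rule are assumptions made for illustration.

```python
# A minimal sketch, NOT the paper's code, of a VAE loss whose balancing
# factor gamma is computed deterministically from the reconstruction error
# instead of being a trainable parameter.

import torch

def balanced_vae_loss(x, x_rec, mu, logvar, gamma):
    """VAE loss: reconstruction error rescaled by gamma, plus KL divergence."""
    # Reconstruction term (mean squared error), divided by the balancing factor.
    mse = torch.mean((x - x_rec) ** 2)
    # KL divergence between the encoder's Gaussian N(mu, sigma^2) and N(0, I).
    kl = -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())
    return mse / gamma + kl, mse

def update_gamma(gamma, current_mse, momentum=0.99):
    """Deterministic update: gamma tracks a running estimate of the MSE
    (the moving-average form here is an illustrative assumption)."""
    return momentum * gamma + (1.0 - momentum) * float(current_mse)

# Toy usage with random tensors standing in for a real encoder/decoder.
x = torch.rand(16, 3, 32, 32)      # a batch of 32x32 RGB images
x_rec = torch.rand(16, 3, 32, 32)  # "reconstructions"
mu = torch.zeros(16, 64)           # latent means
logvar = torch.zeros(16, 64)       # latent log-variances
gamma = 1.0
loss, mse = balanced_vae_loss(x, x_rec, mu, logvar, gamma)
gamma = update_gamma(gamma, mse)   # gamma adapts as reconstructions improve
```

As the reconstruction error shrinks during training, such a gamma shrinks with it, which progressively increases the relative weight of the reconstruction term against the KL regularizer, the trade-off the abstract refers to.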