Paper Title

Benefiting Deep Latent Variable Models via Learning the Prior and Removing Latent Regularization

Paper Authors

Rogan Morrow, Wei-Chen Chiu

Paper Abstract

There exist many forms of deep latent variable models, such as the variational autoencoder and adversarial autoencoder. Regardless of the specific class of model, there exists an implicit consensus that the latent distribution should be regularized towards the prior, even in the case where the prior distribution is learned. Upon investigating the effect of latent regularization on image generation, our results indicate that in the case where a sufficiently expressive prior is learned, latent regularization is not necessary and may in fact be harmful insofar as image quality is concerned. We additionally investigate the benefit of learned priors on two common problems in computer vision: latent variable disentanglement, and diversity in image-to-image translation.
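To make the "latent regularization" in question concrete, the sketch below writes a VAE-style objective as a reconstruction term plus a weighted KL divergence between the approximate posterior and a (possibly learned) prior, both taken as diagonal Gaussians. This is a minimal illustration, not the paper's implementation; the function names and the `beta` weight are assumptions introduced here. Setting `beta=0` corresponds to removing the latent regularization term entirely, which is the regime the abstract argues can improve image quality when the prior is expressive enough.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL(q || p) between two diagonal Gaussians, summed over latent dims:
    # 0.5 * sum[ log(var_p/var_q) + (var_q + (mu_q - mu_p)^2)/var_p - 1 ]
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def vae_objective(recon_nll, mu_q, logvar_q, mu_p, logvar_p, beta=1.0):
    # Reconstruction loss plus beta-weighted latent regularization toward
    # the prior (which may itself be learned). With beta=0 the latent
    # regularization is removed and only reconstruction remains.
    return recon_nll + beta * kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
```

For example, with a standard-normal prior and a matching posterior the KL term vanishes, and with `beta=0.0` the objective reduces to the reconstruction loss regardless of the prior.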
