Paper Title

VMI-VAE: Variational Mutual Information Maximization Framework for VAE With Discrete and Continuous Priors

Paper Authors

Andriy Serdega, Dae-Shik Kim

Paper Abstract

The Variational Autoencoder is a scalable method for learning latent variable models of complex data. It employs a clear objective that can be easily optimized. However, it does not explicitly measure the quality of learned representations. We propose a Variational Mutual Information Maximization Framework for VAE to address this issue. It provides an objective that maximizes the mutual information between latent codes and observations. The objective acts as a regularizer that forces the VAE not to ignore the latent code and allows one to select particular components of it to be most informative with respect to the observations. On top of that, the proposed framework provides a way to evaluate the mutual information between latent codes and observations for a fixed VAE model.
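
To make the idea concrete, below is a minimal PyTorch sketch of the kind of regularizer the abstract describes: an auxiliary network is trained to recover the latent code from a decoded observation, and its log-likelihood serves as a variational (Barber-Agakov style) lower bound on the mutual information between codes and observations. The networks `VAE` and `AuxPosterior`, the loss `vmi_vae_loss`, the weight `lam`, and all dimensions are illustrative assumptions for continuous Gaussian codes, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """A small fully connected VAE with a continuous N(0, I) prior."""
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar, z

class AuxPosterior(nn.Module):
    """q_aux(z | x_hat): predicts the latent code from a decoded observation.

    Under a unit-variance Gaussian assumption, its negative log-likelihood
    reduces to mean-squared error up to an additive constant.
    """
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, z_dim)
        )

    def forward(self, x_hat):
        return self.net(x_hat)

def vmi_vae_loss(vae, aux, x, beta=1.0, lam=1.0):
    x_logits, mu, logvar, _ = vae(x)
    n = x.size(0)
    # Standard negative ELBO: reconstruction term plus KL to the N(0, I) prior.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum") / n
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / n
    # MI regularizer: sample codes from the prior, decode them, and ask the
    # auxiliary network to recover them. Since H(z) is constant under the
    # prior, minimizing this MSE maximizes the variational lower bound
    # I(z; x_hat) >= H(z) + E[log q_aux(z | x_hat)], which discourages the
    # decoder from ignoring the latent code.
    z_prior = torch.randn(n, vae.mu.out_features, device=x.device)
    x_gen = torch.sigmoid(vae.dec(z_prior))
    mi_nll = F.mse_loss(aux(x_gen), z_prior, reduction="sum") / n
    return recon + beta * kl + lam * mi_nll
```

One optimizer over the joint parameters of `vae` and `aux` trains both networks. The same bound also serves as the evaluation tool the abstract mentions: for a fixed VAE, freeze its weights, fit only `aux` to convergence, and report the resulting bound as an estimate of the mutual information. For a discrete prior, `AuxPosterior` would output logits and the MSE would become a cross-entropy.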
