Paper Title
Geometry-Aware Hamiltonian Variational Auto-Encoder
Paper Authors
Abstract
Variational auto-encoders (VAEs) have proven to be a well-suited tool for dimensionality reduction, extracting latent variables that lie in a space of potentially much smaller dimension than the data. Their ability to capture meaningful information from the data is readily apparent in their capability to generate new realistic samples or perform potentially meaningful interpolations in a much smaller space. However, such generative models may perform poorly when trained on the small data sets that are common in many real-life fields such as medicine. This may come, among other causes, from the lack of structure of the latent space, whose geometry is often under-considered. We thus propose in this paper to view the latent space as a Riemannian manifold endowed with a parametrized metric learned at the same time as the encoder and decoder networks. This metric is then used in what we call the Riemannian Hamiltonian VAE, which extends the Hamiltonian VAE introduced in arXiv:1805.11328 to better exploit the underlying geometry of the latent space. We argue that such latent space modelling provides useful information about its underlying structure, leading to far more meaningful interpolations, more realistic data generation, and more reliable clustering.
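To illustrate why endowing the latent space with a Riemannian metric changes how interpolations are measured, the sketch below computes the discretized length of a latent path under a metric. This is a minimal illustration only: the paper learns the metric jointly with the encoder and decoder, whereas here `toy_metric` is a hypothetical, hand-picked metric chosen purely to show the mechanics; the function names and the metric itself are assumptions, not the paper's method.

```python
import numpy as np

def toy_metric(z):
    """Hypothetical diagonal metric G(z) that inflates distances
    far from the origin (illustrative assumption, not learned)."""
    scale = 1.0 + np.sum(z ** 2)
    return scale * np.eye(len(z))

def riemannian_length(path, metric):
    """Discretized length of a path [z_0, ..., z_T]:
    sum over segments of sqrt(dz^T G(midpoint) dz)."""
    length = 0.0
    for a, b in zip(path[:-1], path[1:]):
        dz = b - a
        g = metric((a + b) / 2.0)
        length += np.sqrt(dz @ g @ dz)
    return length

z_start = np.array([0.0, 0.0])
z_end = np.array([1.0, 1.0])
# Straight-line interpolation with 50 segments.
path = [z_start + t * (z_end - z_start) for t in np.linspace(0.0, 1.0, 51)]

# Under the identity metric this reduces to the Euclidean distance sqrt(2);
# under the toy metric the same straight line is longer, so a geodesic
# would bend toward regions where the metric is small.
euclidean = riemannian_length(path, lambda z: np.eye(len(z)))
curved = riemannian_length(path, toy_metric)
```

In the paper's setting the metric is parametrized and learned, so path lengths of this kind reflect where the data actually lives, which is what makes metric-aware interpolations more meaningful than straight lines in the raw latent coordinates.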