Paper Title
Epitomic Variational Graph Autoencoder
Paper Authors
Paper Abstract
The variational autoencoder (VAE) is a widely used generative model for learning latent representations. Burda et al. showed in their seminal paper that the learning capacity of the VAE is limited by over-pruning: a phenomenon in which a significant number of latent variables fail to capture any information about the input data, and the corresponding hidden units become inactive. This adversely affects learning diverse and interpretable latent representations. As the variational graph autoencoder (VGAE) extends the VAE to graph-structured data, it inherits the over-pruning problem. In this paper, we adopt a model-based approach and propose the epitomic VGAE (EVGAE), a generative variational framework for graph datasets that successfully mitigates the over-pruning problem and also boosts the generative ability of VGAE. We consider EVGAE to consist of multiple sparse VGAE models, called epitomes, which are groups of latent variables sharing the latent space. This approach helps increase the number of active units, as epitomes compete to learn better representations of the graph data. We verify our claims via experiments on three benchmark datasets. Our experiments show that EVGAE has better generative ability than VGAE. Moreover, EVGAE outperforms VGAE on the link prediction task in citation networks.
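Over-pruning is commonly diagnosed by counting active latent units. Following Burda et al., a unit is typically deemed active when the variance of its posterior mean across the dataset exceeds a small threshold (around 1e-2). The snippet below is a minimal illustrative sketch in PyTorch; the `encoder` interface, `data_loader`, and threshold value are assumptions for illustration, not code from the paper.

```python
import torch

def count_active_units(encoder, data_loader, threshold=1e-2):
    """Estimate the number of active latent units.

    Following Burda et al., latent unit u is considered active when
    Cov_x( E_q[z_u | x] ) exceeds a small threshold (here 1e-2).
    `encoder` is assumed to return the posterior means (mu) for a
    batch; this interface is hypothetical, for illustration only.
    """
    means = []
    with torch.no_grad():
        for x in data_loader:
            mu = encoder(x)          # shape: (batch_size, latent_dim)
            means.append(mu)
    means = torch.cat(means, dim=0)  # posterior means over the dataset
    activity = means.var(dim=0)      # per-unit variance across the data
    return int((activity > threshold).sum().item())
```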
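The abstract describes epitomes as groups of latent variables that share a single latent space and compete to represent the data. One way to realize such sparse groups is with fixed binary masks over the latent dimensions, penalizing only the dimensions of the epitome assigned to a given sample. The sketch below is a hedged reading of that idea, not the paper's implementation; the mask layout (non-overlapping contiguous groups) and the KL-based competition are assumptions.

```python
import torch

def make_epitome_masks(latent_dim=16, num_epitomes=4):
    """Build binary masks; each epitome activates one contiguous,
    non-overlapping group of latent dimensions (layout is an
    assumption; overlapping groups are equally possible)."""
    span = latent_dim // num_epitomes
    masks = torch.zeros(num_epitomes, latent_dim)
    for k in range(num_epitomes):
        masks[k, k * span:(k + 1) * span] = 1.0
    return masks

def masked_kl(mu, logvar, masks):
    """Per-epitome KL term: only the dimensions an epitome activates
    contribute to its penalty, so dimensions unused by one epitome
    remain free for another (the competition described above)."""
    # per-dimension KL against a standard normal prior: (batch, latent_dim)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)
    # restrict KL to each epitome's dimensions: (batch, num_epitomes)
    return kl @ masks.t()
```

A training step could then assign each sample to the epitome with the smallest masked KL (or sample an assignment) and penalize only that epitome's dimensions, which encourages different epitomes to stay active on different parts of the data rather than letting most units collapse to the prior.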