Title
Stochastic Graph Recurrent Neural Network
Authors
Abstract
Representation learning over graph-structured data has been widely studied owing to its broad application prospects. However, previous methods mainly focus on static graphs, while many real-world graphs evolve over time. Modeling such evolution is important for predicting the properties of unseen networks. To address this challenge, we propose SGRNN, a novel neural architecture that applies stochastic latent variables to simultaneously capture the evolution of node attributes and topology. Specifically, deterministic states are separated from stochastic states in the iterative process to suppress mutual interference. With semi-implicit variational inference integrated into SGRNN, a non-Gaussian variational distribution is proposed to further improve performance. In addition, to alleviate the KL-vanishing problem in SGRNN, a simple and interpretable structure is proposed based on a lower bound of the KL divergence. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed model. Code is available at https://github.com/StochasticGRNN/SGRNN.
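The abstract's key architectural idea, keeping the deterministic recurrent state separate from the stochastic latent state so the two do not interfere, can be illustrated with a minimal toy cell. This is a hedged sketch under assumed layer shapes and names (`ToyStochasticCell`, `step`), not the SGRNN architecture from the paper: the deterministic state `h` is updated only from the input and the previous `h`, while the stochastic state `z` is sampled from a Gaussian parameterized by `h` and never fed back into the deterministic recurrence.

```python
import numpy as np

class ToyStochasticCell:
    """Toy recurrent cell separating deterministic and stochastic states.

    Illustrative assumption only -- the real SGRNN operates on graphs and
    uses a different parameterization. Here the point is structural: the
    sampled z does not enter the deterministic update, so randomness in z
    cannot perturb the deterministic trajectory.
    """

    def __init__(self, x_dim, h_dim, z_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Deterministic transition parameters.
        self.W_h = rng.normal(scale=0.1, size=(h_dim, x_dim + h_dim))
        self.b_h = np.zeros(h_dim)
        # Gaussian parameterization of the stochastic state.
        self.W_mu = rng.normal(scale=0.1, size=(z_dim, h_dim))
        self.W_logvar = rng.normal(scale=0.1, size=(z_dim, h_dim))

    def step(self, x, h_prev, rng):
        # Deterministic recurrence: depends only on x and h_prev.
        h = np.tanh(self.W_h @ np.concatenate([x, h_prev]) + self.b_h)
        # Stochastic state: reparameterized sample from N(mu, diag(var)).
        mu = self.W_mu @ h
        logvar = self.W_logvar @ h
        z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
        return h, z

# Run the same input sequence with two different noise streams: the
# deterministic states coincide, while the stochastic states differ.
cell = ToyStochasticCell(x_dim=4, h_dim=8, z_dim=3)
xs = [np.ones(4) * t for t in range(3)]

def rollout(seed):
    rng = np.random.default_rng(seed)
    h = np.zeros(8)
    hs, zs = [], []
    for x in xs:
        h, z = cell.step(x, h, rng)
        hs.append(h)
        zs.append(z)
    return hs, zs

hs_a, zs_a = rollout(seed=1)
hs_b, zs_b = rollout(seed=2)
```

Because `z` is excluded from the recurrence, `hs_a` and `hs_b` are identical across the two rollouts even though `zs_a` and `zs_b` differ, which is the "suppress mutual interference" property the abstract describes.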