Paper Title

Scalable Deep Generative Modeling for Sparse Graphs

Paper Authors

Dai, Hanjun, Nazi, Azade, Li, Yujia, Dai, Bo, Schuurmans, Dale

Paper Abstract

Learning graph generative models is a challenging task for deep learning and has wide applicability to a range of domains such as chemistry, biology, and social science. However, current deep neural methods suffer from limited scalability: for a graph with $n$ nodes and $m$ edges, existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix. On the other hand, many real-world graphs are actually sparse in the sense that $m \ll n^2$. Based on this, we develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix, and importantly reduces the graph generation time complexity to $O((n + m)\log n)$. Furthermore, during training this autoregressive model can be parallelized with $O(\log n)$ synchronization stages, which makes it much more efficient than other autoregressive models that require $\Omega(n)$. Experiments on several benchmarks show that the proposed approach not only scales to orders of magnitude larger graphs than previously possible with deep autoregressive graph generative models, but also yields better graph generation quality.
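
To give intuition for where the $O(\log n)$ per-edge factor in the abstract can come from, below is a minimal sketch, not the authors' BiGG implementation, of sampling one edge endpoint through a sequence of binary decisions over the node index range $[0, n)$ instead of an $O(n)$ softmax over all nodes. The `branch_prob` function is a hypothetical stand-in for a learned neural module; here it returns a constant purely for illustration.

```python
import random

def branch_prob(lo, hi):
    """Hypothetical stand-in for a learned model that returns the
    probability that the endpoint falls in the left half of [lo, hi).
    A constant 0.5 is used here purely for illustration."""
    return 0.5

def sample_endpoint(n):
    """Sample one edge endpoint in [0, n) using O(log n) binary
    decisions over a recursively halved index interval."""
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if random.random() < branch_prob(lo, hi):
            hi = mid  # descend into the left half
        else:
            lo = mid  # descend into the right half
    return lo

# Each call makes about log2(n) decisions, so generating all m edges
# of a sparse graph costs O(m log n) rather than O(n^2).
print(sample_endpoint(1024))
```

Under this (assumed) decision-tree view, generating each of the $m$ edges touches only a logarithmic number of intervals, which is consistent with the $O((n + m)\log n)$ generation complexity claimed in the abstract.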
