Paper Title
Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings
Paper Authors
Paper Abstract
In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embeddings. The key rationale of IDGL is to learn a better graph structure based on better node embeddings, and vice versa (i.e., better node embeddings based on a better graph structure). Our iterative method dynamically stops when the learned graph structure is sufficiently close to the graph optimized for the downstream prediction task. In addition, we cast the graph learning problem as a similarity metric learning problem and leverage adaptive graph regularization to control the quality of the learned graph. Finally, by combining an anchor-based approximation technique, we further propose a scalable version of IDGL, namely IDGL-Anch, which significantly reduces the time and space complexity of IDGL without compromising performance. Our extensive experiments on nine benchmarks show that our proposed IDGL models can consistently outperform or match state-of-the-art baselines. Furthermore, IDGL is more robust to adversarial graphs and can cope with both transductive and inductive learning.
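The iterative scheme the abstract describes can be sketched as follows. This is a minimal, illustrative toy implementation, not the paper's exact formulation: the function names, the cosine-similarity metric with a hard threshold, the parameter-free propagation step, and the stopping criterion based on the change in the learned adjacency are all simplifying assumptions; IDGL itself uses learned multi-head similarity, graph regularization, and trained GNN layers.

```python
import numpy as np

def learn_graph(embeddings, eps=0.3):
    # Similarity-metric graph learning (simplified): cosine similarity
    # between node embeddings, sparsified by a threshold eps.
    z = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-8)
    sim = z @ z.T
    adj = np.where(sim > eps, sim, 0.0)
    np.fill_diagonal(adj, 0.0)
    return adj

def update_embeddings(adj, features):
    # One parameter-free propagation step standing in for a GNN layer:
    # degree-normalized aggregation of neighbor features plus self-loop.
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    return (adj @ features + features) / deg

def idgl_sketch(features, max_iters=10, tol=1e-3):
    # Alternate between (1) refining embeddings on the current graph and
    # (2) re-learning the graph from the refined embeddings, stopping
    # dynamically once the learned structure changes little.
    emb = features
    adj_prev = learn_graph(emb)
    for _ in range(max_iters):
        emb = update_embeddings(adj_prev, features)
        adj = learn_graph(emb)
        if np.linalg.norm(adj - adj_prev) / (np.linalg.norm(adj_prev) + 1e-8) < tol:
            break
        adj_prev = adj
    return adj_prev, emb

def anchor_similarity(embeddings, anchors):
    # IDGL-Anch idea (sketch): compute an n-by-s node-anchor similarity
    # matrix instead of the full n-by-n graph, reducing the cost of graph
    # learning from O(n^2) to O(ns) for s anchors.
    zn = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-8)
    za = anchors / (np.linalg.norm(anchors, axis=1, keepdims=True) + 1e-8)
    return zn @ za.T
```

In this toy loop the learned adjacency is symmetric with a zero diagonal by construction, and the dynamic stopping condition plays the role of the paper's check that the learned structure is close enough to the task-optimized graph.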