Paper Title

Topology and Content Co-Alignment Graph Convolutional Learning

Paper Authors

Min Shi, Yufei Tang, Xingquan Zhu

Abstract

In traditional Graph Neural Networks (GNNs), graph convolutional learning carries out network representation learning through topology-driven recursive node content aggregation. In reality, network topology and node content are not always consistent, owing to irrelevant or missing links between nodes. A purely topology-driven feature aggregation approach between unaligned neighborhoods deteriorates learning for nodes with poor structure-content consistency, and incorrect messages can propagate over the whole network as a result. In this paper, we advocate co-alignment graph convolutional learning (CoGL), which aligns the topology and content networks to maximize consistency. Our theme is to force the topology network to respect the underlying content network while simultaneously optimizing the content network to respect the topology, for optimized representation learning. Given a network, CoGL first reconstructs a content network from node features and then co-aligns the content network and the original network through a unified optimization goal with (1) minimized content loss, (2) minimized classification loss, and (3) minimized adversarial loss. Experiments on six benchmarks demonstrate that CoGL significantly outperforms existing state-of-the-art GNN models.
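The abstract describes two concrete steps: reconstructing a content network from node features and combining three losses into one objective. A minimal sketch of both is below; the abstract does not specify the reconstruction method or loss weights, so the cosine-similarity top-k graph and the weights `a`, `b`, `c` are assumptions for illustration only, not the paper's actual implementation.

```python
import numpy as np

def build_content_graph(features, k=5):
    """Illustrative sketch: derive a content adjacency matrix from node
    features via cosine similarity, keeping each node's top-k most
    similar neighbors. CoGL's actual reconstruction may differ."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                  # guard against zero-feature nodes
    unit = features / norms
    sim = unit @ unit.T                      # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)           # exclude self-loops
    adj = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        top = np.argsort(sim[i])[-k:]        # indices of the k most similar nodes
        adj[i, top] = 1.0
    return np.maximum(adj, adj.T)            # symmetrize the graph

def unified_loss(content_loss, cls_loss, adv_loss, a=1.0, b=1.0, c=1.0):
    """Combine the three losses named in the abstract; the weighting
    scheme (a, b, c) is hypothetical."""
    return a * content_loss + b * cls_loss + c * adv_loss
```

In this sketch, the content graph plays the role of the "content network" that is co-aligned with the original topology during training.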
