Paper Title
Geometric Graph Representation Learning via Maximizing Rate Reduction
Paper Authors
Paper Abstract
Learning discriminative node representations benefits various downstream tasks in graph analysis, such as community detection and node classification. Existing graph representation learning methods (e.g., those based on random walks or contrastive learning) are limited to maximizing the local similarity of connected nodes. Such pair-wise learning schemes can fail to capture the global distribution of representations, since they impose no explicit constraints on the global geometric properties of the representation space. To this end, we propose Geometric Graph Representation Learning (G2R), which learns node representations in an unsupervised manner by maximizing rate reduction. In this way, G2R maps nodes in distinct groups (implicitly stored in the adjacency matrix) into different subspaces, such that each subspace is compact and different subspaces are dispersedly distributed. G2R adopts a graph neural network as the encoder and maximizes the rate reduction with the adjacency matrix. Furthermore, we theoretically and empirically demonstrate that maximizing rate reduction is equivalent to maximizing the principal angles between different subspaces. Experiments on real-world datasets show that G2R outperforms various baselines on node classification and community detection tasks.
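The rate reduction objective referenced in the abstract follows the maximal coding rate reduction (MCR²) principle: the coding rate of all representations minus the summed coding rates of the individual groups. Below is a minimal PyTorch sketch of that quantity under stated assumptions; the function name `rate_reduction`, the `eps` distortion parameter, and the `memberships` argument (group assignments derived from the adjacency matrix) are illustrative choices, not the authors' implementation.

```python
import torch

def rate_reduction(Z, memberships, eps=0.05):
    """Coding rate reduction: Delta R = R(Z) - R_c(Z | Pi).

    Z           : (d, n) matrix whose columns are node representations
                  (typically the output of a GNN encoder, transposed).
    memberships : (k, n) nonnegative weights assigning the n nodes to k groups,
                  e.g. derived from the adjacency matrix (assumption).
    eps         : distortion parameter of the coding rate.
    """
    d, n = Z.shape
    I = torch.eye(d, dtype=Z.dtype, device=Z.device)

    # Coding rate of the whole set of representations.
    R = 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * Z @ Z.T)

    # Sum of coding rates of the individual groups.
    Rc = Z.new_zeros(())
    for pi in memberships:          # pi: (n,) membership weights of one group
        tr = pi.sum()
        if tr < 1e-8:               # skip empty groups
            continue
        ZPiZt = (Z * pi) @ Z.T      # equals Z diag(pi) Z^T
        Rc = Rc + (tr / (2 * n)) * torch.logdet(
            I + (d / (tr * eps ** 2)) * ZPiZt)

    return R - Rc                   # the quantity to be maximized
```

In training, one would minimize the negative of this value as the loss for the GNN encoder, e.g. `loss = -rate_reduction(encoder(X, A).T, memberships)`; how G2R constructs the group memberships from the adjacency matrix is not specified in the abstract, so the argument here is purely illustrative.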