Title
Two-level Graph Neural Network
Authors
Abstract
Graph Neural Networks (GNNs) are recently proposed neural network structures for processing graph-structured data. Because of the neighbor-aggregation strategy they employ, existing GNNs focus on capturing node-level information and neglect higher-level information. Existing GNNs therefore suffer from representational limitations caused by the Local Permutation Invariance (LPI) problem. To overcome these limitations and enrich the features captured by GNNs, we propose a novel GNN framework, referred to as the Two-level GNN (TL-GNN), which merges subgraph-level information with node-level information. Moreover, we provide a mathematical analysis of the LPI problem which demonstrates that subgraph-level information is beneficial for overcoming the problems associated with LPI. A subgraph counting method based on dynamic programming is also proposed; its time complexity is O(n^3), where n is the number of nodes in the graph. Experiments show that TL-GNN outperforms existing GNNs and achieves state-of-the-art performance.