Paper Title
Understanding the Message Passing in Graph Neural Networks via Power Iteration Clustering
Paper Authors
Paper Abstract
The mechanism of message passing in graph neural networks (GNNs) is still mysterious. Apart from convolutional neural networks, no theoretical origin for GNNs has been proposed. To our surprise, message passing can be best understood in terms of power iteration. By fully or partly removing activation functions and layer weights of GNNs, we propose subspace power iteration clustering (SPIC) models that iteratively learn with only one aggregator. Experiments show that our models extend GNNs and enhance their capability to process random featured networks. Moreover, we demonstrate the redundancy of some state-of-the-art GNNs in design and define a lower limit for model evaluation by a random aggregator of message passing. Our findings push the boundaries of the theoretical understanding of neural networks.
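The abstract's central claim can be illustrated with a minimal sketch. The NumPy code below (illustrative only, not the authors' implementation; the names normalized_adjacency and spic_features are hypothetical) shows how removing the layer weights and activation from a GCN-style update H^(l+1) = sigma(A_hat H^(l) W^(l)) reduces message passing to repeated multiplication by the normalized adjacency A_hat, i.e. a subspace power iteration whose output could then feed a single linear classifier.

```python
# Illustrative sketch (not the paper's code): stripping the weights W and the
# activation from a GCN layer H^{l+1} = sigma(A_hat H^l W^l) leaves
# H^{l+1} = A_hat H^l, so k layers amount to a subspace power iteration with
# the normalized adjacency as the single aggregator.

import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization with self-loops: A_hat = D^{-1/2}(A + I)D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def spic_features(A, X, k=10):
    """Apply the aggregator k times; the columns of A_hat^k X are dominated by
    the leading eigenvectors of A_hat, as in power iteration."""
    A_hat = normalized_adjacency(A)
    H = X.copy()
    for _ in range(k):
        H = A_hat @ H
        H = H / np.linalg.norm(H, axis=0, keepdims=True)  # keep columns bounded
    return H

# Toy graph: two triangles joined by one edge, with random node features.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))

H = spic_features(A, X, k=20)
# Rows of H for nodes in the same community become similar, so a single
# linear classifier on H suffices, mirroring the weight-free SPIC setup.
print(np.round(H, 3))
```

The sketch only demonstrates the aggregation-as-power-iteration reading; the paper's SPIC models and experiments cover the fully and partly weight-free variants in detail.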