Title
Discovering and Explaining the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions
Authors
Abstract
Graph neural networks (GNNs) mainly rely on the message-passing paradigm to propagate node features and build interactions, and different graph learning tasks require different ranges of node interactions. In this work, we explore the capacity of GNNs to capture interactions between nodes under contexts of different complexities. We discover that GNNs are usually unable to capture the most informative kinds of interaction for diverse graph learning tasks, and we name this phenomenon the representation bottleneck of GNNs. In response, we demonstrate that the inductive bias introduced by existing graph construction mechanisms can prevent GNNs from learning interactions of the most appropriate complexity, i.e., it causes the representation bottleneck. To address this limitation, we propose a novel graph rewiring approach based on the interaction patterns learned by GNNs, which dynamically adjusts the receptive field of each node. Extensive experiments on both real-world and synthetic datasets demonstrate that our algorithm effectively alleviates the representation bottleneck and outperforms state-of-the-art graph rewiring baselines in enhancing GNN performance.
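To make the message-passing paradigm and the notion of a node's receptive field concrete, the following is a minimal sketch (not the paper's model): a single unparameterized round of mean aggregation over neighbors, where each additional round lets information travel one hop further. All names (`message_passing_step`, the toy graph) are illustrative assumptions.

```python
# Hypothetical sketch of one round of message passing with plain mean
# aggregation (no learned weights); not the authors' exact architecture.

def message_passing_step(features, adjacency):
    """Update each node's feature vector by averaging its own feature
    with those of its direct neighbors."""
    updated = {}
    for node, feat in features.items():
        neighborhood = [feat] + [features[n] for n in adjacency.get(node, [])]
        dim = len(feat)
        updated[node] = [
            sum(vec[d] for vec in neighborhood) / len(neighborhood)
            for d in range(dim)
        ]
    return updated

# Toy path graph 0 - 1 - 2: each round widens every node's receptive field
# by one hop, so node 0's signal needs two rounds to reach node 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: [1.0], 1: [0.0], 2: [0.0]}
step1 = message_passing_step(feats, adj)   # node 2 still unaffected by node 0
step2 = message_passing_step(step1, adj)   # node 0's signal now reaches node 2
```

Rewiring approaches such as the one proposed above change `adjacency` itself, so the number of rounds needed for two nodes to interact (and hence the complexity of interactions the model can express) is no longer fixed by the input graph alone.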