Paper Title
Rewiring with Positional Encodings for Graph Neural Networks
Paper Authors
Paper Abstract
Several recent works use positional encodings to extend the receptive fields of graph neural network (GNN) layers equipped with attention mechanisms. These techniques, however, either extend receptive fields to the complete graph, at substantial computational cost and at the risk of changing the inductive biases of conventional GNNs, or require complex architecture adjustments. As a conservative alternative, we use positional encodings to expand receptive fields to $r$-hop neighborhoods. More specifically, our method augments the input graph with additional nodes/edges and uses positional encodings as node and/or edge features. We thus modify graphs before inputting them to a downstream GNN model, instead of modifying the model itself. This makes our method model-agnostic, i.e., compatible with any existing GNN architecture. We also provide examples of positional encodings that are lossless, with a one-to-one map between the original and the modified graphs. We demonstrate that extending receptive fields via positional encodings and a virtual fully-connected node significantly improves GNN performance and alleviates over-squashing using a small $r$. We obtain improvements on a variety of models and datasets, and reach competitive performance using traditional GNNs or graph Transformers.
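The rewiring idea described in the abstract can be sketched in a few lines. The following is a minimal, illustrative Python sketch, assuming shortest-path hop distance as the positional encoding for the added edges; the function name, the choice of encoding, and the `virtual_node` option are assumptions for illustration, not the paper's actual implementation.

```python
from collections import deque

def rewire_r_hop(edges, num_nodes, r=2, virtual_node=True):
    """Augment an undirected graph so every node is connected to its
    r-hop neighborhood, tagging each added edge with the hop distance
    (a simple positional encoding). Optionally adds one virtual node
    connected to all nodes, giving every pair a 2-hop path.

    Returns a dict mapping directed edge (u, v) -> hop-distance feature.
    """
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    new_edges = {}
    for s in range(num_nodes):
        # Truncated BFS: stop expanding once depth r is reached.
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if dist[u] == r:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for t, d in dist.items():
            if t != s:
                new_edges[(s, t)] = d  # hop distance as edge feature

    if virtual_node:
        # Virtual node gets index num_nodes; tag its edges with 0
        # to distinguish them from real hop distances.
        vid = num_nodes
        for t in range(num_nodes):
            new_edges[(vid, t)] = 0
            new_edges[(t, vid)] = 0
    return new_edges
```

On a path graph 0–1–2–3 with `r=2`, node 0 gains a distance-2 edge to node 2 but remains disconnected from node 3 unless the virtual node is enabled; a downstream GNN then sees the enlarged neighborhood without any change to its own architecture.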