Paper Title

Generalization and Representational Limits of Graph Neural Networks

Authors

Garg, Vikas K., Jegelka, Stefanie, Jaakkola, Tommi

Abstract

We address two fundamental questions about graph neural networks (GNNs). First, we prove that several important graph properties cannot be computed by GNNs that rely entirely on local information. Such GNNs include the standard message passing models, and more powerful spatial variants that exploit local graph structure (e.g., via relative orientation of messages, or local port ordering) to distinguish neighbors of each node. Our treatment includes a novel graph-theoretic formalism. Second, we provide the first data dependent generalization bounds for message passing GNNs. This analysis explicitly accounts for the local permutation invariance of GNNs. Our bounds are much tighter than existing VC-dimension based guarantees for GNNs, and are comparable to Rademacher bounds for recurrent neural networks.
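To make the abstract's notion of "local permutation invariance" concrete, the following is a minimal sketch of one message-passing layer, not the paper's exact model: each node aggregates its neighbors' features with a sum, so the update is unchanged under any reordering of the neighbor list. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def message_passing_layer(X, adj, W_self, W_neigh):
    """One illustrative message-passing update.

    X: (n, d) node feature matrix.
    adj: list of neighbor index lists, one per node.
    W_self, W_neigh: (d, k) weight matrices for the node itself
    and for the aggregated neighbor message.
    """
    n, d = X.shape
    out = np.zeros((n, W_self.shape[1]))
    for v in range(n):
        # Sum aggregation: invariant to the order of neighbors,
        # i.e., locally permutation invariant.
        msg = sum((X[u] for u in adj[v]), np.zeros(d))
        out[v] = np.tanh(X[v] @ W_self + msg @ W_neigh)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
adj = [[1, 2], [0], [0, 3], [2]]          # a small example graph
W_self = rng.normal(size=(3, 2))
W_neigh = rng.normal(size=(3, 2))

H = message_passing_layer(X, adj, W_self, W_neigh)

# Permuting each node's neighbor list leaves the output unchanged.
adj_perm = [[2, 1], [0], [3, 0], [2]]
H_perm = message_passing_layer(X, adj_perm, W_self, W_neigh)
assert np.allclose(H, H_perm)
```

Because such layers see only locally aggregated information, this is also the setting in which the paper proves that certain global graph properties cannot be computed.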
