Paper Title

Deep Graph Neural Networks with Shallow Subgraph Samplers

Paper Authors

Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen

Paper Abstract

While Graph Neural Networks (GNNs) are powerful models for learning representations on graphs, most state-of-the-art models do not achieve significant accuracy gains beyond two to three layers. Deep GNNs fundamentally need to address: 1) the expressivity challenge due to oversmoothing, and 2) the computation challenge due to neighborhood explosion. We propose a simple "deep GNN, shallow sampler" design principle to improve both GNN accuracy and efficiency: to generate the representation of a target node, we use a deep GNN to pass messages only within a shallow, localized subgraph. A properly sampled subgraph may exclude irrelevant or even noisy nodes, while still preserving the critical neighbor features and graph structures. The deep GNN then smooths the informative local signals to enhance feature learning, rather than oversmoothing the global graph signals into just "white noise". We theoretically justify why the combination of deep GNNs with shallow samplers yields the best learning performance. We then propose various sampling algorithms and neural architecture extensions to achieve good empirical results. On the largest public graph dataset, ogbn-papers100M, we achieve state-of-the-art accuracy with an order of magnitude reduction in hardware cost.
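
As a concrete illustration of the design principle, below is a minimal sketch in PyTorch Geometric: an L-layer GNN that passes messages only inside the k-hop subgraph (with L > k) around each target node. The class name `DeepGNNShallowSampler` and parameters such as `num_layers` and `num_hops` are illustrative assumptions for this sketch, not the authors' released implementation, and the full k-hop extraction stands in for the smarter samplers proposed in the paper.

```python
# Minimal sketch of the "deep GNN, shallow sampler" principle using
# PyTorch Geometric. Names and hyperparameters are illustrative, not
# the authors' released code.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph


class DeepGNNShallowSampler(torch.nn.Module):
    """An L-layer GNN that message-passes only inside a shallow
    k-hop subgraph around each target node, with L > k."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=8, num_hops=2):
        super().__init__()
        assert num_layers > num_hops, "deep GNN, shallow sampler: L > k"
        self.num_hops = num_hops
        self.convs = torch.nn.ModuleList()
        self.convs.append(GCNConv(in_dim, hidden_dim))
        for _ in range(num_layers - 2):
            self.convs.append(GCNConv(hidden_dim, hidden_dim))
        self.convs.append(GCNConv(hidden_dim, out_dim))

    def forward(self, target_node, x, edge_index):
        # 1) Shallow sampler: restrict computation to the k-hop
        #    neighborhood of the target node. (Full k-hop extraction
        #    here; the paper proposes samplers that also drop
        #    irrelevant or noisy nodes.)
        subset, sub_edge_index, mapping, _ = k_hop_subgraph(
            int(target_node), self.num_hops, edge_index,
            relabel_nodes=True, num_nodes=x.size(0))
        h = x[subset]
        # 2) Deep GNN: run many message-passing layers, but only
        #    within the shallow subgraph, avoiding neighborhood
        #    explosion and keeping the smoothing local.
        for conv in self.convs[:-1]:
            h = F.relu(conv(h, sub_edge_index))
        h = self.convs[-1](h, sub_edge_index)
        # Return the representation of the target node only.
        return h[mapping]
```

Because message passing never leaves the k-hop scope, the receptive field stays capped regardless of depth, which is what lets the deep GNN repeatedly smooth informative local signals without oversmoothing the global graph.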
