Paper Title
Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks
Paper Authors
Paper Abstract
Recent years have seen a surge in research on dynamic graph representation learning, which aims to model temporal graphs that are dynamic and constantly evolving over time. However, current work typically models graph dynamics with recurrent neural networks (RNNs), which suffer severely from computation and memory overheads on large temporal graphs. To date, the scalability of dynamic graph representation learning on large temporal graphs remains one of the major challenges. In this paper, we present a scalable framework, SpikeNet, to efficiently capture the temporal and structural patterns of temporal graphs. We explore a new direction: capturing the evolving dynamics of temporal graphs with spiking neural networks (SNNs) instead of RNNs. As a low-power alternative to RNNs, SNNs explicitly model graph dynamics as spike trains of neuron populations and enable efficient spike-based propagation. Experiments on three large real-world temporal graph datasets demonstrate that SpikeNet outperforms strong baselines on the temporal node classification task with lower computational costs. In particular, SpikeNet generalizes to a large temporal graph (2.7M nodes and 13.9M edges) with significantly fewer parameters and lower computation overhead. Our code is publicly available at \url{https://github.com/EdisonLeeeee/SpikeNet}.
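To make the abstract's core idea concrete, the sketch below illustrates (in PyTorch) how leaky integrate-and-fire (LIF) neurons can stand in for an RNN state update over temporal graph snapshots. This is a minimal illustration, not the authors' implementation: the function and parameter names (`lif_spikenet_sketch`, `tau`, `v_threshold`) are hypothetical, and it assumes the temporal graph is given as a list of per-step sparse adjacency matrices.

```python
# Illustrative sketch only (not the SpikeNet codebase): LIF neurons replace the
# RNN state update, turning per-snapshot neighbor aggregation into spike trains.
import torch

def lif_spikenet_sketch(snapshots, features, w, tau=0.9, v_threshold=1.0):
    """snapshots: list of sparse [N, N] adjacency matrices (one per time step),
    features: [N, d] node features, w: [d, h] projection weights."""
    n = features.size(0)
    v = torch.zeros(n, w.size(1))            # membrane potential per node/neuron
    spike_train = []
    for adj in snapshots:                     # iterate over temporal snapshots
        # structural aggregation of neighbor features acts as the input current
        msg = torch.sparse.mm(adj, features) @ w
        v = tau * v + msg                     # leaky integration of the input
        spikes = (v >= v_threshold).float()   # fire a binary spike at threshold
        v = v * (1.0 - spikes)                # hard reset of neurons that fired
        spike_train.append(spikes)
    # temporal pooling of the spike trains yields node representations
    return torch.stack(spike_train).mean(0)
```

In this sketch, the leaky integration plays the role the RNN hidden-state update would otherwise play, while the binary spike trains keep propagated messages sparse, which reflects the efficiency argument the abstract makes for spike-based propagation.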